Patent 11860715
Throughout the drawings and the detailed description, unless otherwise described, the same drawing reference numerals will be understood to refer to the same elements, features, and structures. The relative size and depiction of these elements may be exaggerated or adjusted for clarity, illustration, and/or convenience. DETAILED DESCRIPTION In the following description, specific details are set forth in order to provide a thorough understanding of the various example embodiments. It should be appreciated that various modifications to the embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the disclosure. Moreover, in the following description, numerous details are set forth for the purpose of explanation. However, one of ordinary skill in the art should understand that embodiments may be practiced without the use of these specific details. In other instances, well-known structures and processes are not shown or described in order not to obscure the description with unnecessary detail. Thus, the present disclosure is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein. HTTP-based applications, such as those using SAP Fiori, send OData requests to the backend system as HTTP requests. When an error occurs that still allows the request to be processed, but something goes wrong along the way, the OData service has no way to inform the HTTP-based application of the issue. For example, a user may have an invalid input in one field, but a valid input in another field. Thus, the valid input can be processed successfully, but the invalid input cannot be processed successfully. In this scenario, the OData service is able to process part of the request successfully, while unsuccessfully processing the other part of the request. However, prior to the example embodiments, the OData service was unable to notify the application of the unsuccessful part of the request or of the reason for the failure. In the example embodiments, HTTP messages are how data is exchanged between a client application and an OData service (or a host application of the OData service) from which the client application desires to access data. There are two types of messages: requests sent by the client application to trigger an action by the OData service, and responses, the answer from the OData service. HTTP messages are composed of textual information encoded in ASCII, and span multiple lines. In HTTP/1.1, and earlier versions of the protocol, these messages were openly sent across the connection. In HTTP/2, the once human-readable message is now divided up into HTTP frames, providing optimization and performance improvements. Web developers, or webmasters, may craft these textual HTTP messages themselves. As another example, software such as a Web browser, a proxy, or a Web server may generate HTTP messages. For example, these entities may provide HTTP messages through config files (e.g., for proxies or servers), APIs (e.g., for browsers), or other interfaces.
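For illustration only (not taken from the disclosure), the following minimal Python sketch sends an OData-style HTTP request and reads back the response; the host name, resource path, and payload are hypothetical placeholders.

# Illustrative sketch only; host, path, and payload are hypothetical placeholders.
import http.client
import json

conn = http.client.HTTPSConnection("backend.example.com")

body = json.dumps({"Name": "Split Air Conditioning Unit"})
headers = {
    "Content-Type": "application/json",   # describes the body of the request
    "Accept": "application/json",         # format expected in the response
}

# The start-line (method + path), headers, blank line, and body are assembled
# by http.client into a single textual HTTP message.
conn.request("POST", "/odata/v4/Products", body=body, headers=headers)

resp = conn.getresponse()
print(resp.status, resp.reason)      # status line of the HTTP response
print(dict(resp.getheaders()))       # HTTP response headers
print(resp.read().decode("utf-8"))   # response body (payload)
conn.close()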
HTTP requests and responses share a similar structure and may include (i) a start-line (e.g., a single line in the message) describing the request to be implemented, or its status (success or failure), (ii) an optional set of HTTP headers specifying the request, or describing the body included in the message, (iii) a blank line indicating all meta-information for the request has been sent, (iv) an optional body containing data associated with the request (like content of an HTML form), or the document associated with a response, and the like. The presence of the body and its size is specified by the start-line and HTTP headers. In some embodiments, the start-line and HTTP headers of the HTTP message are collectively known as the head of the request, whereas its payload is known as the body. In the example embodiments, a new type of message is designed in which an HTTP response from the OData service includes an HTTP response header that specifies the field where the error is located (i.e., the target field), and a human-readable error message with text content to be displayed by the application. Thus, the HTTP-based application can display additional information to the user that could not previously be provided. The HTTP response message from the OData service provides a "target" of the error. Although the examples herein refer to "error" messages being transmitted from an OData service to an HTTP-based application, it should also be appreciated that the HTTP messages according to various embodiments may carry other types of content besides error indications. For example, an OData service may send a warning message to the application if a problem or inconsistency has arisen, a success message when an action has been performed without errors, an information message if non-critical information is being provided to the application, a confirmation message which prompts a user of the application to confirm an action before it is executed, and the like. Cloud services and other support packages and suites offered by host platforms may provide a generic way to significantly shorten the service development time by providing out-of-the-box solutions for software applications running on the host platform and interacting with the underlying services, such as OData. This allows developers to focus on the specific business logic of the application, thus reducing the overall time for development. Examples of out-of-the-box solutions for recurrent tasks include automatic servicing of CRUD requests for a particular data model. CRUD includes create, read, update, and delete operations. These capabilities allow an application to fetch/run services quickly by simply composing model entity definitions and running a simple command line interface at a terminal that automatically creates all of these services, performs the deployment, and starts a server to handle the requests. Meanwhile, OData is an open protocol designed to work with all types of platforms. OData provides predefined messaging for HTTP requests and HTTP responses that are to be adhered to by applications when interacting with an OData service. However, OData does not currently define a messaging protocol for when an HTTP request was processed successfully but with some errors or other issues. For example, suppose that a user enters some data into a first field but leaves a mandatory field blank, and then tries to store/save the data to the backend.
In that scenario, the data in the first field will be saved but the null value (blank) in the mandatory field will create an issue. The example embodiments provide a messaging protocol that makes it possible for the OData service to communicate both the success and the errors to the application, along with an identifier of the target of the error and a reason (description) for the error. In response, the application can identify the user interface element (text box, checkbox, button, etc.) that is associated with the error and display a visual identifier of the error on the user interface in association with the user interface element. REST-based OData services enable applications to share data with a wide range of devices, technologies, and platforms in a way that is easy to understand and consume. REST services provide various advantages, including human-readable results that can be visualized within a web browser, the use of stateless applications, the receipt of related pieces of information, and the use of standard commands such as GET, PUT, POST, DELETE, and QUERY. OData enables the creation of REST-based data services which allow resources, identified using Uniform Resource Locators (URLs) and defined in a data model, to be published and edited by Web clients using simple HTTP messages. The OData specification defines the core semantics and the behavioral aspects of the protocol. OData includes a Web protocol for querying and updating data, applying and building on web technologies such as HTTP, Atom Publishing Protocol (AtomPub), and RSS (Really Simple Syndication) to provide access to information from a variety of applications. It is easy to understand and extensible, and provides consumers with a predictable interface for querying a variety of data sources. OData provides additional features as well, such as feed customization that allows mapping part of the structured content into the standard Atom elements, and the ability to link data entities within an OData service (via " . . . related . . . " links) and beyond (via media link entries). OData is also extensible, like the underlying AtomPub, and thereby allows the addition of features that are required when building easy-to-use applications, both mobile and browser-based. The OData Protocol is different from other REST-based web service approaches in that it provides a uniform way to describe both the data and the data model. This improves semantic interoperability between systems and allows an ecosystem to emerge. Towards that end, the OData Protocol follows design principles including mechanisms that work on a variety of data sources, do not assume a relational data model, support extended functionality without breaking clients unaware of those extensions, and follow REST principles. An OData metadata document is a representation of a service's data model exposed for client consumption. In various host environments, the host may provide its own metadata document for applications to consume. Here, the metadata document may include the data model for use with the OData services offered by the host platform or otherwise available via the host platform. Thus, applications can consume the metadata document and understand the identifiers, codes, types, etc., to use when messaging with the OData services. The central concepts in the metadata document are entities, relationships, entity sets, actions, and functions. Entities are instances of entity types (e.g. Customer, Employee, etc.).
Entity types are named structured types with a key. They define the named properties and relationships of an entity. Entity types may derive by single inheritance from other entity types. The key of an entity type is formed from a subset of the primitive properties (e.g., Customer ID, Order ID, Line ID, etc.) of the entity type. Complex types are keyless named structured types consisting of a set of properties. These are value types whose instances cannot be referenced outside of their containing entity. Complex types are commonly used as property values in an entity or as parameters to operations. Properties declared as part of a structured type's definition are called declared properties. Instances of structured types may contain additional undeclared dynamic properties. A dynamic property cannot have the same name as a declared property. Entity or complex types which allow clients to persist additional undeclared properties are called open types. Relationships from one entity to another are represented as navigation properties. Navigation properties are generally defined as part of an entity type, but can also appear on entity instances as undeclared dynamic navigation properties. Each relationship has a cardinality. Enumeration types are named primitive types whose values are named constants with underlying integer values. Type definitions are named primitive types with fixed facet values such as maximum length or precision. Type definitions can be used in place of primitive typed properties, for example, within property definitions. Entity sets are named collections of entities (e.g. Customers is an entity set containing Customer entities). An entity's key uniquely identifies the entity within an entity set. If multiple entity sets use the same entity type, the same combination of key values can appear in more than one entity set and identifies different entities, one per entity set where this key combination appears. Each of these entities has a different entity-id. Entity sets provide entry points into the data model. Operations allow the execution of custom logic on parts of a data model. Functions are operations that do not have side effects and may support further composition, for example, with additional filter operations, functions or an action. Actions are operations that allow side effects, such as data modification, and cannot be further composed in order to avoid non-deterministic behavior. Actions and functions are either bound to a type, enabling them to be called as members of an instance of that type, or unbound, in which case they are called as static operations. Action imports and function imports enable unbound actions and functions to be called from the service root. Singletons are named entities which can be accessed as direct children of the entity container. A singleton may also be a member of an entity set. An OData resource is anything in the model that can be addressed (an entity set, entity, property, or operation). Every OData service may have a different set of entities and a different set of fields for a user interface. According to various embodiments, a software application can discover the OData service information when it is deployed on the host platform. For example, data models may be defined by the runtime environment of the host platform. The data modeling may include predefined names, identifiers, content types, table identifiers, constraints, etc., for various user interface elements (e.g., text boxes, radio buttons, checkboxes, etc.) 
that are displayed via a user interface of the application and used for interacting with data provided by an OData service. When the application is deployed on the host platform, the application may be integrated with metadata of the runtime environment which includes the data models for the user interface elements. For example, a metadata document including the data modeling may be compiled with the application. Thus, the application is able to communicate with the OData service about the user interface elements (input fields) that are receiving inputs via the user interface and the values that are being provided as the inputs (or in some cases, null). The targets will be different for each service. The example embodiments provide a way to express these targets in a generic way that can be used with every OData service even though different OData services may have different structures. That is, by reading in the metadata of the OData service information (UI element information) at deployment time, the application will have an understanding of the unique identifiers of UI elements that are used by the OData service, thereby enabling the application to send identifiers of such UI elements to the OData service and also receive information about such UI elements in response from the OData service. Thus, the OData service can identify a target of an error by its identifier, along with a human-readable description of the error. This information can be added to the HTTP response (e.g., in a separate field newly provided by the example embodiments). The HTTP response includes the target that needs the error message and the error message content to be displayed. The backend knows which field has an invalid value, and identifies this field as the problem in the response. FIG.1Aillustrates a computing environment100for OData services in accordance with an example embodiment. Referring toFIG.1A, an application120(e.g., a software application, service, program, etc.) may be deployed on a host platform110where it is able to interact with data stored in a data store140, such as a database, via one or more OData services. In the example ofFIG.1A, an application130hosts an OData service132for accessing data associated with the application120from the data store140. The data store140may include tables142of data in row/column format, views144which define how the data is to be visualized, virtual tables146, and the like. A user may interact with a user interface122output by the application120. For example, the user may enter data into one or more input fields of the user interface122and attempt to submit the data to the data store140by pressing on a button or some other mechanism on the user interface122. In response, the application120may generate an HTTP request124with identifiers of the one or more input fields and data values to be stored in the one or more input fields. Here, the data values may include express values that are input (e.g., text, button selections, etc.). As another example, the data values may include a null value indicating that the text box, radio button, etc. has been left empty or unchecked. The application120may send the HTTP request124to the OData service132. Here, the OData service132may process the HTTP request124against the data stored in the data store140. For example, any of the CRUD operations (create, read, update, delete) may be performed.
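As a hedged sketch of how the application might assemble such a request, the snippet below sends the identifiers of the input fields together with their values, including a null for an empty mandatory field. The endpoint, the field identifiers other than CategoryID0123, and the payload shape are illustrative assumptions rather than anything defined by the disclosure or by OData.

# Hypothetical sketch of assembling the HTTP request; the endpoint, most
# field IDs, and the payload shape are illustrative assumptions.
import json
import urllib.request

field_values = {
    "ProductID0123": "HT-1000",    # value entered by the user (hypothetical ID)
    "SupplierID0456": "100000046", # another filled-in field (hypothetical ID)
    "CategoryID0123": None,        # mandatory field left empty (null value)
}

request = urllib.request.Request(
    url="https://backend.example.com/odata/v4/Products('HT-1000')",
    data=json.dumps(field_values).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="PUT",
)

with urllib.request.urlopen(request) as response:
    print(response.status, response.read().decode("utf-8"))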
In this case there are three possible scenarios for the data processing of the HTTP request124by the OData service132including successfully processed, unsuccessfully processed, and successfully processed but with errors. OData defines the content of the HTTP response message from the OData service132for the first two outcomes (complete success and completely unsuccessful). But OData does not define how to communicate errors when part of the request was processed successfully but another part was not processed successfully. FIG.1Billustrates the user interface122of the application120that interacts with the OData service132shown inFIG.1A, in accordance with an example embodiment. In this example, the user has navigated to a page of the application which corresponds to a Split Air Conditioning Unit. The user has chosen a general tab on a user interface150and may enter content into any of the one or more of the input fields151,152,153,154,155,156,157, and158. The input fields may refer to text box inputs, checkboxes, drop-down menus, radio buttons, menus, and the like. In some embodiments, the input fields may be referred to herein as UI elements or UI controls. In the example ofFIG.1B, mandatory fields are noted with an asterisk. In this example the Product ID input field154and Category input field157are mandatory fields which must be filled-in for the record to be valid with respect to the business logic of the application120. Here, a user has entered data into some of the fields, but the user has left the Category input field157empty. Here, the user goes to save the inputs by pressing on a save button159. In this case, by pressing on the save button159instead of a post button, or the like, the data will be saved to the data store140even though there may be errors. FIG.1Cillustrates an HTTP response header160from the OData service132to the application120in accordance with an example embodiment. Referring toFIG.1C, the OData service132may generate an HTTP response with an HTTP response header160that includes notifications of both a successfully processed part162and an unsuccessfully processed part164. Here, the HTTP response header160also includes metadata166that provides information about an error that occurred. However, the metadata166is not legible to a human. In this example, the HTTP response may be extended using a @common.numericSeverity instance annotation value that can be added to the HTTP response header160. The @common.numericSeverity instance annotation is a value that is used to identify a severity of the message and/or a type of message. For example, a value of ‘1’ may indicate that the message is a success message/notification, a value of ‘2’ may indicate that the message is an information message/notification, a value of ‘3’ may indicate that the message is a warning message/notification, and a value of ‘4’ may indicate that the message is an error message/notification. When the application120receives the HTTP response including the HTTP response header160from the OData service132, the application120will not know how to address the unsuccessfully processed part164of the HTTP response. That is, OData protocol does not provide a mechanism for interpreting an HTTP response that includes both successes and errors. Therefore, the application120will not provide any information about the unsuccessfully processed part164of the HTTP request124to the user. FIG.2illustrates a process200of deploying an application210on a host platform220in accordance with an example embodiment. 
Referring toFIG.2, the host platform220may include a cloud platform, a web server, a database, an on-premises server, or the like. Here, the application210may be developed using a programming language and then compiled and deployed on the host platform220. In this example, the compiling may be performed by a developer or it may be performed by the host platform220. In the example ofFIG.2, the compiling is performed by a controller222of the host platform. During the compiling, the controller222may integrate files of user interface metadata224for OData services into an instance of the application (application instance226) that is installed and then launched/deployed and running on the host platform220. In the example embodiments, the user interface metadata224may include data model information for the OData services such as described herein, including entities, relationships, entity sets, actions, and functions of the OData services. The application instance226may follow the same data model. In addition, the user interface metadata224may include identifiers of user interface elements (e.g., text box input fields, drop-down menus, radio buttons, checkboxes, etc.) of the user interface for communication between the application instance226and the OData service. According to various embodiments, the deployed application (such as application instance226shown inFIG.2) can communicate with OData services and provide a user with information that has not previously been provided by other OData technologies. In the example embodiments, requests sent from an application to an OData service that are processed successfully in part, but also with errors or other issues, may be handled in a new way. In particular, an identifier of the user interface element that is associated with the error can be provided along with a human-readable description of a reason for the error. This information can be provided to the application, thereby enabling the application to identify a corresponding element on the screen/user interface and display the reason in association therewith. FIGS.3A and3Billustrate examples of HTTP messages with error target data in accordance with an example embodiment. Referring toFIG.3A, an HTTP request message includes an HTTP request header310. For example, the HTTP request header310may be included within an HTTP request from the application120to the OData service132shown inFIG.1A. As an example, the request may be a PUT request to store data from the user interface150shown inFIG.1Bto the data store140via the OData service132. In this example, the HTTP request header310includes a unique identifier312of the user interface element where the value is to be added. In this example, the user interface element includes an ID of "CategoryID0123". However, it should be appreciated that the unique identifier may be a globally unique identifier (GUID), such as a 128-bit GUID or the like, which is commonly used in databases. The HTTP request header310may also include a value field314that includes the value to be added in the field of the target user interface element. Here, the value is null because there is nothing in the text input field of the Category input field157shown inFIG.1B. InFIG.3A, an item identifier312of the Category input field157ofFIG.1Bis shown along with a value field314for the item. However, it should be appreciated that the HTTP request may also include identifiers of other user interface elements (e.g., any of fields151-156and158shown inFIG.1B, etc.)
and values stored therein, as well. For simplicity, these other lines have been removed. The HTTP request may be sent from the application120to the OData service132. In response, the OData service132may process the data request. In this case, the OData service may be able to perform some of the requests, for example, for input fields151-156, and158. However, the OData service132may not be able to perform the request for the Category input field157which results in an error because this field is mandatory and has been left blank. In this case, the OData service may generate an HTTP response that includes a response header320such as shown inFIG.3Band which includes a target identifier324of user interface element where the error occurred, and a description322of the error. The target identifier324may identify a user interface element among other elements on a page within the application, and it may also identify a page location among a plurality of pages. Examples of a target identifier324include a GUID, but embodiments are not limited thereto. The target identifier324may identify a page location of the UI element from among a plurality of application pages, and a UI element from among other UI elements within a single page of the application. Furthermore, the description322may include human-readable text description of the reason for the error. Here, the OData service can forward the HTTP response to the application, including the target identifier324and the description322. Because of the additional information, the application can provide the user with information that was not possible before. For example,FIG.3Cillustrates a user interface330displaying an error notification, in accordance with an example embodiment. Referring toFIG.3C, the user interface330corresponds to the user interface150shown inFIG.1B. However, in this example, a box332has been displayed around the Category input field157. Here, the box332highlights the Category input field157using a different color or shading. The user interface330also includes an error description334describing the reason for the error and including the human-readable text from the description322in the response header320ofFIG.3B. FIG.4is a diagram of a server node400according to some embodiments. The server node400may include a general-purpose computing apparatus and may execute program code to perform any of the functions described herein. The server node400may comprise an implementation of a host platform, in some embodiments. It should also be appreciated that the server node400may include other unshown elements according to some embodiments and may not include all of the elements shown inFIG.4. Server node400includes processing unit(s)410(i.e., processors) operatively coupled to communication device420, data storage device430, input device(s)440, output device(s)450, and memory460. Communication device420may facilitate communication with external devices, such as an external network or a data storage device. Input device(s)440may comprise, for example, a keyboard, a keypad, a mouse or other pointing device, a microphone, knob or a switch, an infra-red (IR) port, a docking station, and/or a touch screen. Input device(s)440may be used, for example, to enter information into the server node400. Output device(s)450may comprise, for example, a display (e.g., a display screen) a speaker, and/or a printer. 
Data storage device430may comprise any appropriate persistent storage device, including combinations of magnetic storage devices (e.g., magnetic tape, hard disk drives and flash memory), optical storage devices, Read Only Memory (ROM) devices, etc., while memory460may comprise Random Access Memory (RAM). In some embodiments, the data storage device430may store user interface elements in tabular form. For example, one or more columns and one or more rows of user interface elements may be displayed in a two-dimensional spreadsheet, table, document, digital structure, or the like. Application server431and query processor432may each comprise program code executed by processing unit(s)410to cause server node400to perform any one or more of the processes described herein. Such processes may include estimating selectivities of queries on tables434based on statistics433. Embodiments are not limited to execution of these processes by a single computing device. Data storage device430may also store data and other program code for providing additional functionality and/or which are necessary for operation of server node400, such as device drivers, operating system files, etc. FIG.5illustrates a method500of displaying an error that occurs during processing of an OData request, in accordance with an example embodiment. As an example, the method500may be performed by a web server, a cloud platform, an on-premises server, a database node included within a distributed database system, a user device, and the like. Referring toFIG.5, in510, the method may include transmitting, via an application, a hypertext transfer protocol (HTTP) request to an Open Data Protocol (OData) service, the HTTP request comprising identifiers of one or more input fields displayed on a user interface and one or more values for the one or more input fields. The HTTP request may be submitted from an HTTP-based application that is interacting with the OData service via the Internet. In520, the method may include receiving, from the OData service, an HTTP response indicating that the HTTP request was processed successfully with one or more errors. In530, the method may include identifying an input field from among the one or more input fields which is a target of the error and a reason for the error from a field in the HTTP response indicating that the HTTP request was processed successfully with one or more errors. In540, the method may include rendering, via the application, a visual identifier of the error in association with a display of the input field on the user interface. In some embodiments, the transmitting may include identifying names of the one or more input fields from a data model of the OData service, and transmitting the names in the HTTP request. In some embodiments, the HTTP response may include a target identifier field in an HTTP response header, and the target identifier field includes an identifier of the input field and a description of the error. In some embodiments, the rendering may include displaying a box around the input field with one or more of a distinguishing color and a distinguishing shading with respect to an application background. In some embodiments, the rendering may include displaying a message that describes the reasons for the error adjacent to the input field on the user interface. In some embodiments, the HTTP response may include a format that is compatible with OData Version 4.0 or higher. 
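A minimal client-side sketch of method 500 is shown below, assuming the response has already been parsed as JSON; the field names (the severity annotation, target, and message), the severity mapping, and the highlight_field() helper are hypothetical stand-ins for whatever the application framework actually provides, not parts of the disclosed protocol.

# Hypothetical sketch of method 500 (client side). Parsed field names,
# the severity mapping, and highlight_field() are illustrative assumptions.
SEVERITY = {1: "success", 2: "information", 3: "warning", 4: "error"}

def highlight_field(target_id: str, text: str) -> None:
    # Stand-in for the UI framework call that draws a box around the
    # input field and shows the human-readable description next to it.
    print(f"[{target_id}] {text}")

def handle_odata_response(response: dict) -> None:
    # Steps 520-540: inspect messages returned with a partially
    # successful request and render each error at its target field.
    for message in response.get("messages", []):
        severity = SEVERITY.get(message.get("numericSeverity"), "information")
        if severity == "error":
            highlight_field(message["target"], message["message"])

# Example response resembling FIG. 3B (the shape is assumed, not normative).
handle_odata_response({
    "messages": [
        {"numericSeverity": 4,
         "target": "CategoryID0123",
         "message": "Category is mandatory and must not be empty."}
    ]
})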
In some embodiments, the field in the HTTP response may include a relative resource path to the input field in the application, and the rendering is performed based on the relative resource path. In some embodiments, the field of the HTTP response may include an i18N name of the input field and a human-readable text description of the error. FIG.6illustrates a method600of generating an HTTP response with error target information in accordance with an example embodiment. As an example, the method600may be performed by a web server, a cloud platform, an on-premises server, a database node included within a distributed database system, a user device, and the like. Referring toFIG.6, in610, the method may include receiving, via an Open Data Protocol (OData) service, a hypertext transfer protocol (HTTP) request from an application, the HTTP request comprising identifiers of one or more input fields displayed on a user interface and one or more values for the one or more input fields. For example, the HTTP request may identify an input field by its unique identifier defined by the metadata document of the runtime and a value for the input field. In620, the method may include generating, via the OData service, an HTTP response indicating that the HTTP request was processed successfully with one or more errors. In630, the method may include storing a target identifier in the HTTP response which identifies an input field from among the one or more input fields which is a target of the error and a reason for the error. In640, the method may include transmitting the HTTP response to the application which includes information indicating the HTTP request was processed successfully with one or more errors and which includes the target identifier. In some embodiments, the receiving may include receiving an identifier of the input field and a data value for the input field via the HTTP request. In some embodiments, the storing may include storing an identifier of the input field and a description of the error in a target identifier field of the HTTP response header. In some embodiments, the HTTP response may include a format that is compatible with OData Version 4.0 or higher. As will be appreciated based on the foregoing specification, the above-described examples of the disclosure may be implemented using computer programming or engineering techniques including computer software, firmware, hardware or any combination or subset thereof. Any such resulting program, having computer-readable code, may be embodied or provided within one or more non transitory computer-readable media, thereby making a computer program product, i.e., an article of manufacture, according to the discussed examples of the disclosure. For example, the non-transitory computer-readable media may be, but is not limited to, a fixed drive, diskette, optical disk, magnetic tape, flash memory, external drive, semiconductor memory such as read-only memory (ROM), random-access memory (RAM), and/or any other non-transitory transmitting and/or receiving medium such as the Internet, cloud storage, the Internet of Things (IoT), or other communication network or link. The article of manufacture containing the computer code may be made and/or used by executing the code directly from one medium, by copying the code from one medium to another medium, or by transmitting the code over a network. 
The computer programs (also referred to as programs, software, software applications, “apps”, or code) may include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, apparatus, cloud storage, internet of things, and/or device (e.g., magnetic discs, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The “machine-readable medium” and “computer-readable medium,” however, do not include transitory signals. The term “machine-readable signal” refers to any signal that may be used to provide machine instructions and/or any other kind of data to a programmable processor. The above descriptions and illustrations of processes herein should not be considered to imply a fixed order for performing the process steps. Rather, the process steps may be performed in any order that is practicable, including simultaneous performance of at least some steps. Although the disclosure has been described in connection with specific examples, it should be understood that various changes, substitutions, and alterations apparent to those skilled in the art can be made to the disclosed embodiments without departing from the spirit and scope of the disclosure as set forth in the appended claims.
Patent 11860716
DETAILED DESCRIPTION An information processing apparatus according to an embodiment includes a processing circuit. The processing circuit calculates a first input/output error related to normal data and a second input/output error related to pseudo abnormal data different from the normal data for each of a plurality of autoencoders having different network structures. The processing circuit outputs relational data indicating a relation between the network structure and the first input/output error and the second input/output error. Hereinafter, an information processing apparatus, an information processing method, and a storage medium according to the present embodiment will be described with reference to the drawings. First Embodiment FIG.1is a diagram illustrating a configuration example of an information processing apparatus100according to a first embodiment. As illustrated inFIG.1, the information processing apparatus100is a computer including a processing circuit1, a storage device2, an input device3, a communication device4, and a display device5. Data communication between the processing circuit1, the storage device2, the input device3, the communication device4, and the display device5is performed via a bus. The processing circuit1includes a processor such as a central processing unit (CPU) and a memory such as a random access memory (RAM). The processing circuit1includes a normal data acquisition unit11, a model training unit12, a pseudo abnormal data acquisition unit13, a performance index calculation unit14, a recording unit15, an output control unit16, and an abnormality detection unit17. The processing circuit1realizes functions of the respective units11to17by executing a design support program related to design support of a network structure of an autoencoder. The design support program is stored in a non-transitory computer-readable recording medium such as the storage device2. The design support program may be mounted as a single program that describes all the functions of the respective units11to17described above, or may be mounted as a plurality of modules divided into several functional units. Each of the units11to17may be mounted by an integrated circuit such as an application specific integrated circuit (ASIC). In this case, each of the units11to17may be mounted on a single integrated circuit or may be individually mounted on a plurality of integrated circuits. The normal data acquisition unit11acquires normal data. The normal data is a type of input data input to the autoencoder, and is data when an inspection target is normal. For example, in a case where the inspection target is a factory machine, the normal data is data output by the factory machine or an inspection device thereof when the factory machine normally operates. In addition, when the inspection target is an article such as a semiconductor product, the normal data is data output by an inspection device of the article when the article is normal. The model training unit12trains a plurality of autoencoders having different network structures. Typically, the model training unit12trains a plurality of autoencoders based on normal data. The pseudo abnormal data acquisition unit13acquires pseudo abnormal data. The pseudo abnormal data is abnormal data generated in a pseudo manner. The abnormal data is data different from the normal data. That is, the abnormal data is data that is not used for training of the autoencoder and cannot be reproduced by the autoencoder. 
For example, in a case where the inspection target is a factory machine, the abnormal data is data output by the factory machine or an inspection device thereof when the factory machine operates abnormally. When the inspection target is an article such as a semiconductor product, the abnormal data is data output by an inspection device of the article when the article is abnormal. In many cases, it is difficult to reproduce the abnormality of the inspection target in advance. For this reason, it is difficult to prepare the abnormal data at the time of training of the autoencoder. The pseudo abnormal data is abnormal data generated in a pseudo manner in order to evaluate abnormality detection performance of each autoencoder. The pseudo abnormal data may be data obtained by performing data augmentation on the normal data, or may be data in another domain different from the normal data. The performance index calculation unit14calculates, for each of the plurality of autoencoders, a first input/output error related to the normal data and a second input/output error related to the pseudo abnormal data different from the normal data. The input/output error is also referred to as a reconstruction error. Further, the performance index calculation unit14may calculate a difference between the first input/output error and the second input/output error. The first input/output error is an error between the normal data and output data of the autoencoder when the normal data is input. The second input/output error is an error between the pseudo abnormal data and output data of the autoencoder when the pseudo abnormal data is input. The first input/output error, the second input/output error, the difference between the first input/output error and the second input/output error, and the like are examples of indices (hereinafter, referred to as performance indices) for evaluating the performance of each autoencoder. The recording unit15records the performance index in the storage device2or the like for each network structure of the autoencoder, and generates relational data (hereinafter, referred to as structure/performance relational data) between the network structure and the performance index. The output control unit16outputs the structure/performance relational data. The structure/performance relational data may be displayed on the display device5, may be output to an external device such as a computer via the communication device4, or may be stored in the storage device2. The abnormality detection unit17performs abnormality detection using an autoencoder. For example, the abnormality detection unit17performs abnormality detection using an autoencoder selected by a user or the like via the input device3or the like among a plurality of autoencoders trained by the model training unit12. The storage device2includes a read only memory (ROM), a hard disk drive (HDD), a solid state drive (SSD), an integrated circuit storage device, and the like. The storage device2stores normal data, pseudo abnormal data, a performance index, structure/performance relational data, a setting program, and the like. The input device3inputs various commands from the user. As the input device3, a keyboard, a mouse, various switches, a touch pad, a touch panel display, and the like can be used. An output signal from the input device3is supplied to the processing circuit1. Note that the input device3may be an input device of a computer connected to the processing circuit1in a wired or wireless manner.
The communication device4is an interface for performing data communication with an external device connected to the information processing apparatus100via a network. The display device5displays various types of information. For example, the display device5displays the structure/performance relational data under the control of the output control unit16. As the display device5, a cathode-ray tube (CRT) display, a liquid crystal display, an organic electroluminescence (EL) display, a light-emitting diode (LED) display, a plasma display, or any other display known in the art can be appropriately used. Further, the display device5may be a projector. Hereinafter, the information processing apparatus100according to the first embodiment will be described in detail. First, a processing example of abnormality detection using an autoencoder executed by the abnormality detection unit17will be briefly described.FIG.2is a schematic diagram of an autoencoder20. As illustrated inFIG.2, the autoencoder20is a neural network including an input layer21, a hidden layer22, and an output layer23. Input data is input to the input layer21. The hidden layer22performs encoding and decoding on the input data in series, and converts the input data into output data. The output layer23outputs the output data. As illustrated inFIG.2, the hidden layer22includes a plurality of hidden layers. The plurality of hidden layers22may include a fully connected layer, a convolution layer, or any other layer. The plurality of hidden layers22includes a bottleneck layer24. The bottleneck layer24is a layer having the smallest width among the plurality of hidden layers22. The "width" corresponds to the number of nodes in the fully connected layer or the number of channels in the convolution layer. A position of the bottleneck layer24is not particularly limited as long as it is between the input layer21and the output layer23. Note that the "position" corresponds to the number of hidden layers22from the input layer21or the output layer23to the bottleneck layer24. The "position" is also referred to as a depth. The width and the position are among the parameters defining the network structure. The input data is propagated from the input layer21to the bottleneck layer24and reduced to a feature map, and the feature map is propagated from the bottleneck layer24to the output layer23and restored to output data having the same resolution as the input data. The autoencoder20is also referred to as an encoder/decoder network. FIG.3is a diagram illustrating a processing example of the abnormality detection using the autoencoder. As illustrated inFIG.3, the abnormality detection unit17inputs input data to the autoencoder and generates output data corresponding to the input data. The input data is assumed to be image data, waveform data, or the like, but may be data in any format. In a case of an ideal autoencoder, when the input data is normal data, output data obtained by reproducing the input data is output from the autoencoder, and when the input data is abnormal data, output data different from the input data is output from the autoencoder. As illustrated inFIG.3, the abnormality detection unit17calculates an input/output error between the input data and the output data (step S301). The input/output error is an index based on the difference between the input data value and the output data value at the same sampling points. When the input data is normal data, the input/output error is substantially zero.
When the input data is abnormal data, the input/output error does not become substantially zero. When step S301is performed, the abnormality detection unit17determines the presence or absence of an abnormality based on the input/output error (step S302). For example, when the input/output error is equal to or larger than a threshold value, the abnormality is determined, and when the input/output error is less than the threshold value, the normality is determined. That is, in a case where the input data is normal data, since the input/output error is less than the threshold value, the normality is determined. In a case where the input data is abnormal data, since the input/output error is equal to or larger than the threshold value, the abnormality is determined. In this way, the abnormality detection is performed using the autoencoder. It is assumed that the abnormality detection performance according to the present embodiment is the performance of the autoencoder, and is the ability to correctly reproduce input data to be normal data and not to correctly reproduce input data to be abnormal data. The abnormality detection performance depends on the network structure such as the width or the position of the bottleneck layer24. FIG.4is a diagram illustrating a transition of output data for each number of nodes (hereinafter, referred to as the number of bottleneck nodes) in the bottleneck layer. As illustrated inFIG.4, it is assumed that the input data is image data in which numerals from “0” to “9” have been drawn. It is assumed that, in the autoencoder, image data of “0”, “1”, and “3” to “9” is trained and image data of “2” is not trained. That is, “0”, “1”, and “3” to “9” are normal data, and “2” is abnormal data. As illustrated inFIG.4, in a case where the number of bottleneck nodes is four or less, the autoencoder cannot reproduce not only the abnormal data but also the normal data. In a case where the number of bottleneck nodes is 512 or more, the autoencoder can reproduce not only the normal data but also the abnormal data, and identity mapping learning is performed. In these cases, it can be said that the autoencoder cannot exhibit good abnormality detection performance. On the other hand, in a case where the number of bottleneck nodes is in a range of 8 to 64, it can be said that the autoencoder can reproduce the normal data, but cannot reproduce the abnormal data, and exhibits good abnormality detection performance. FIG.5is a graph representing a relation between an input/output error (LOSS) and AUROC for each number of bottleneck nodes (NUMBER OF NODES). InFIG.5, a horizontal axis represents the number of bottleneck nodes, a left vertical axis represents the input/output error, and a right vertical axis represents AUROC. The input/output error inFIG.5is an input/output error reflecting both an input/output error related to normal data and an input/output error related to abnormal data. AUROC is an AUC (area under the curve) of an ROC curve. AUROC is a ratio between a true positive rate, which is a ratio at which the abnormal data is not correctly reproduced, and a true negative rate, which is a ratio at which the normal data is correctly reproduced, and is an example of a performance index for evaluating the abnormality detection performance of the autoencoder. In supervised learning, it is possible to experimentally determine an optimum learning parameter depending on the magnitude of the input/output error. 
However, as illustrated inFIG.5, in unsupervised learning performed by the autoencoder, AUROC is not necessarily improved even if the input/output error is lowered. Therefore, it is not possible to determine the optimum learning parameter of the autoencoder only by minimizing the input/output error. This is because, as illustrated inFIG.4, when the number of bottleneck nodes is increased to reduce the input/output error, identity mapping occurs, and as a result, the abnormality detection performance is deteriorated. Note that the learning parameter is a parameter such as a weighting coefficient or bias trained by machine learning. The information processing apparatus100according to the first embodiment supports provision of a network structure of an autoencoder having good abnormality detection performance. FIG.6is a diagram illustrating a typical flow of network structure design support processing by the information processing apparatus100according to the first embodiment. The processing circuit1starts the network structure design support processing by reading and executing a design support program from the storage device2in accordance with a start instruction input by the user via the input device3or a predetermined trigger set in advance. It is assumed that the normal data is already acquired by the normal data acquisition unit11at a start time point ofFIG.6and stored in the storage device2. The normal data is not particularly limited, but is assumed to be image data in which Arabic numerals are drawn as illustrated inFIG.4. One numeral is drawn in each piece of image data. The drawn Arabic numeral may be one digit or two or more digits, and the same numeral may be drawn in two or more pieces of image data. As illustrated inFIG.6, the model training unit12trains a plurality of autoencoders having different network structures based on the normal data (step S601). In step S601, the model training unit12individually performs unsupervised learning on the plurality of autoencoders based on common normal data. As a result, learning parameters such as weights or biases of the respective autoencoders are determined. Examples of the network structure set differently in the plurality of autoencoders include a width and a position related to the bottleneck layer. As described above, the width means the number of nodes or the number of channels. The position means the depth of the bottleneck layer from the input layer or the output layer. In the present embodiment, it is assumed that the width of the bottleneck layer, more specifically, the number of bottleneck nodes is different in the plurality of autoencoders. FIG.7is a diagram schematically illustrating a plurality of autoencoders20ntrained in step S601. Note that “n” represents a number of the autoencoder, and 2≤n≤N is satisfied. “N” is the total number of autoencoders, and N≥2 is satisfied. As illustrated inFIG.7, N untrained autoencoders20nare prepared. Each autoencoder20nis designed such that the number of nodes (the number of bottleneck nodes) in a bottleneck layer24nwhich is an example of the network structure is different. It is assumed that the width of each layer is the same, except for the bottleneck layer24n. In addition, it is assumed that the position of the bottleneck layer24nis the same. The number of autoencoders to be trained is not particularly limited. In addition, the lower limit and the upper limit of the number of bottleneck nodes are not particularly limited. 
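A minimal sketch of step S601 is given below, assuming PyTorch and flattened image inputs of dimension 784; the layer sizes, the swept bottleneck widths, and the training loop are illustrative assumptions rather than the configuration used in the embodiment.

# Illustrative sketch of step S601 with PyTorch; sizes and widths are assumptions.
import torch
from torch import nn

def make_autoencoder(n_bottleneck: int, n_in: int = 784) -> nn.Sequential:
    # Symmetric encoder/decoder; only the bottleneck width differs per model.
    return nn.Sequential(
        nn.Linear(n_in, 256), nn.ReLU(),
        nn.Linear(256, n_bottleneck), nn.ReLU(),   # bottleneck layer
        nn.Linear(n_bottleneck, 256), nn.ReLU(),
        nn.Linear(256, n_in),
    )

def train(model: nn.Module, normal_data: torch.Tensor, epochs: int = 10) -> None:
    # Unsupervised learning: the target of the reconstruction is the input itself.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(normal_data), normal_data)
        loss.backward()
        optimizer.step()

bottleneck_widths = [4, 8, 16, 32, 64, 128, 256, 512]   # assumed sweep
normal_data = torch.rand(1000, 784)                     # placeholder normal data
models = {k: make_autoencoder(k) for k in bottleneck_widths}
for k, model in models.items():
    train(model, normal_data)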
When step S601is performed, the pseudo abnormal data acquisition unit13generates pseudo abnormal data based on the normal data (step S602). FIG.8is a diagram schematically illustrating pseudo abnormal data generation processing. As illustrated inFIG.8, the pseudo abnormal data acquisition unit13generates pseudo abnormal data by performing data augmentation on the normal data used for training of the autoencoder in step S601. The data augmentation is, for example, a parallel shift (translation), such as shifting the image several pixels horizontally and/or vertically. In addition, the data augmentation may be other modifications such as inversion and rotation. Note that it is assumed that the data augmentation does not change the image size. That is, image sizes of the normal data and the pseudo abnormal data are the same. When step S602is performed, the performance index calculation unit14calculates an input/output error related to the normal data and an input/output error related to the pseudo abnormal data, for each of the plurality of autoencoders (step S603). FIG.9is a diagram schematically illustrating calculation processing of an input/output error related to each of the normal data and the pseudo abnormal data for each autoencoder20n. As illustrated inFIG.9, the performance index calculation unit14first inputs the normal data (input normal data) to the autoencoder20nand calculates output data (output normal data) corresponding to the normal data. In addition, the performance index calculation unit14calculates an error between the input normal data and the output normal data as an input/output error. Similarly, the performance index calculation unit14first inputs the pseudo abnormal data (input pseudo abnormal data) to the autoencoder20n, and calculates output data (hereinafter, referred to as output pseudo abnormal data) corresponding to the pseudo abnormal data. In addition, the performance index calculation unit14calculates an error between the input pseudo abnormal data and the output pseudo abnormal data as an input/output error. The performance index calculation unit14calculates input/output errors for both the normal data and the pseudo abnormal data, for each of the plurality of autoencoders20n. The input/output error is an index for evaluating an error between the input data and the output data. As the input/output error, for example, an error average may be used. The error average is defined as an average of differences between the input data and the output data for each pixel. Here, the normal data and the pseudo abnormal data are denoted x_i^n and x_i^pa, respectively, where i indicates the i-th data sample. The outputs of the autoencoder having k bottleneck nodes when the inputs are x_i^n and x_i^pa are denoted y_i^n(k) and y_i^pa(k), respectively. An input/output error average Ln(k) related to the normal data is calculated by the following formula (1), and an input/output error average Lpa(k) related to the pseudo abnormal data is calculated by the following formula (2). In formula (1), N_n indicates the number of samples in the normal data, and in formula (2), N_pa indicates the number of samples in the pseudo abnormal data.

$$L^{n}(k) = \frac{1}{N_n}\sum_{j=1}^{N_n}\left\|y_j^{n}(k) - x_j^{n}\right\|^{2} \qquad (1)$$

$$L^{pa}(k) = \frac{1}{N_{pa}}\sum_{j=1}^{N_{pa}}\left\|y_j^{pa}(k) - x_j^{pa}\right\|^{2} \qquad (2)$$

When step S603is performed, the recording unit15records an input/output error for each network structure (step S604).
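Before step S604 is described in more detail, step S602's augmentation might be sketched as follows; the 28x28 image shape, the shift amounts, and the wrap-around behavior of torch.roll are illustrative assumptions.

# Illustrative sketch of step S602: pseudo abnormal data via translation.
# Image shape (28x28) and shift amounts are assumptions for illustration.
import torch

def make_pseudo_abnormal(normal_images: torch.Tensor,
                         shift_x: int = 4, shift_y: int = 4) -> torch.Tensor:
    # normal_images: (N, H, W). Shift several pixels horizontally and
    # vertically; the image size itself is unchanged.
    return torch.roll(normal_images, shifts=(shift_y, shift_x), dims=(1, 2))

normal_images = torch.rand(1000, 28, 28)            # placeholder normal data
pseudo_abnormal = make_pseudo_abnormal(normal_images)
print(pseudo_abnormal.shape)                        # same size as the normal data
pseudo_abnormal = pseudo_abnormal.reshape(-1, 28 * 28)  # flatten for the models above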
More specifically, an input/output error related to the normal data and an input/output error related to the pseudo abnormal data are recorded for each network structure. The structure/performance relational data indicating the relation between the input/output error related to the normal data and the input/output error related to the pseudo abnormal data for each network structure is referred to as structure/error relational data. For example, when the network structure is the number of bottleneck nodes, the recording unit15records the number k of bottleneck nodes and the error averages Ln(k) and Lpa(k) at the number of bottleneck nodes as the structure/error relational data. Further, the recording unit15may record, as the structure/error relational data, the number k of bottleneck nodes and a difference (hereinafter, referred to as an error average difference) Lpa(k)−Ln(k) between the error average Lpa(k) and the error average Ln(k) at the number of bottleneck nodes. The error average difference is an example of the performance index related to the network structure, and is calculated by the performance index calculation unit14. When step S604is performed, the output control unit16outputs relational data (structure/error relational data) between the network structure and the input/output error (step S605). In step S605, the output control unit16displays a graph representing the structure/error relational data on the display device5as the structure/error relational data. As the graph representing the structure/error relational data, a graph representing a relation between the number of bottleneck nodes and the error average or a graph representing a relation between the number of bottleneck nodes and the error average difference is displayed. FIG.10is a diagram illustrating an example of a graph representing a relation between the number of bottleneck nodes and the error average. As illustrated inFIG.10, in the graph, a vertical axis represents the error average (LOSS), and a horizontal axis represents the number of bottleneck nodes (NODES).FIG.10illustrates a curve101representing the error average related to the normal data and a curve102representing the error average related to the pseudo abnormal data.FIG.11is a diagram illustrating an example of a graph representing a relation between the number of bottleneck nodes and the error average difference. As illustrated inFIG.11, in the graph, a vertical axis represents the error average difference (DIFF), and a horizontal axis represents the number of bottleneck nodes (NODES).FIG.11illustrates a curve111representing the error average difference. It can be seen whether the normal data can be reproduced with the error average Ln(k) related to the normal data. It can be seen whether identity mapping occurs in the error average Lpa(k) related to the pseudo abnormal data. The user determines the optimum number of bottleneck nodes by observing the graphs inFIGS.10and11. The user may determine the number of bottleneck nodes having a small error average Ln(k) and a large error average Lpa(k) as an optimum network structure. Alternatively, the number of bottleneck nodes when the error average difference Lpa(k)−Ln(k) is large is determined as the optimum network structure. 
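A minimal sketch of the output of step S605, assuming L_n and L_pa are dictionaries mapping the number of bottleneck nodes to the recorded error averages, could plot the structure/error relational data in the manner of FIGS. 10 and 11; the log-scaled horizontal axis and figure size are assumptions.

import matplotlib.pyplot as plt

def plot_structure_error(L_n, L_pa):
    ks = sorted(L_n)
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.plot(ks, [L_n[k] for k in ks], label="normal data")            # cf. curve 101
    ax1.plot(ks, [L_pa[k] for k in ks], label="pseudo abnormal data")  # cf. curve 102
    ax1.set_xscale("log")
    ax1.set_xlabel("NODES")
    ax1.set_ylabel("LOSS")
    ax1.legend()
    ax2.plot(ks, [L_pa[k] - L_n[k] for k in ks])                       # cf. curve 111
    ax2.set_xscale("log")
    ax2.set_xlabel("NODES")
    ax2.set_ylabel("DIFF")
    plt.show()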
In the cases ofFIGS.10and11, it can be seen that poor reproduction of the normal data is eliminated when the number of bottleneck nodes is approximately 8 or more, and identity mapping of the pseudo abnormal data occurs when the number of bottleneck nodes is approximately 100 or more. In this case, since an autoencoder having the number of bottleneck nodes of 8 or more and less than 100 is expected to have relatively high accuracy, an autoencoder having any number of bottleneck nodes in this range may be determined as an autoencoder having the optimum number of bottleneck nodes. Whether to display the graphs of both the error average Ln(k) and the error average Lpa(k) or the graph of the error average difference Lpa(k)−Ln(k) can be arbitrarily designated by the user via the input device3or the like. Further, the graph of either the error average Ln(k) or the error average Lpa(k) may be displayed. The optimum network structure may be determined from those trained in step S601, or a network structure not trained in step S601may be determined. When the network structure not trained in step S601is determined, an autoencoder having the network structure may be trained by the model training unit12. At this time, the model training unit12may perform unsupervised learning of the autoencoder having the network structure based on the normal data. Thereafter, abnormality detection may be performed by the abnormality detection unit17using the autoencoder determined by the user. When step S605is performed, the design support processing illustrated inFIG.6ends. Note that the first embodiment can be variously modified. As an example, the information processing apparatus100may not include the model training unit12. In this case, the information processing apparatus100may acquire a plurality of autoencoders having different network structures trained by the external device according to step S601. As another example, the information processing apparatus100may not include the abnormality detection unit17. According to the above embodiment, the information processing apparatus100includes the performance index calculation unit14and the output control unit16. The performance index calculation unit14calculates a first input/output error related to normal data and a second input/output error related to pseudo abnormal data different from the normal data, for each of a plurality of autoencoders having different network structures. The output control unit16outputs relational data indicating a relation between the network structure and the first input/output error and the second input/output error. The first input/output error functions as an index for measuring the degree of reproduction of the normal data, and the second input/output error functions as an index for measuring the degree of identity mapping of the pseudo abnormal data. The different network structures such as the width and the position of the bottleneck layer greatly affect the abnormality detection accuracy by the autoencoder. As described above, by outputting the relational data indicating the relation between the network structure and the first input/output error and the second input/output error, it is possible to support the design of the optimum network structure of the autoencoder. Therefore, it is possible to obtain an autoencoder having good abnormality detection performance. 
Further, according to the present embodiment, a plurality of autoencoders may be trained by unsupervised learning instead of supervised learning using actual abnormal data. Accordingly, it is possible to achieve good abnormality detection performance without using the actual abnormal data. Second Embodiment An information processing apparatus according to a second embodiment infers an optimum network structure. Hereinafter, the second embodiment will be described. Note that, in the following description, components having substantially the same functions as those of the first embodiment will be denoted by the same reference numerals, and duplicate explanations will be given only when necessary. FIG.12is a diagram illustrating a configuration example of an information processing apparatus200according to the second embodiment. As illustrated inFIG.12, a processing circuit1of the information processing apparatus200includes an inference unit18in addition to a normal data acquisition unit11, a model training unit12, a pseudo abnormal data acquisition unit13, a performance index calculation unit14, a recording unit15, an output control unit16, and an abnormality detection unit17. The inference unit18infers a recommended range or an optimum value of a network structure based on a structure/error relational data indicating a relation between an input/output error related to normal data and an input/output error related to pseudo abnormal data. FIG.13is a diagram illustrating a typical flow of network structure design support processing by the information processing apparatus200according to the second embodiment. The processing circuit1starts the network structure design support processing by reading and executing a design support program from the storage device2in accordance with a start instruction input by the user via the input device3or a predetermined trigger set in advance. Since steps S1301to S1304illustrated inFIG.13are similar to steps S601to S604illustrated inFIG.6, the description thereof is omitted here. In addition, it is assumed that the network structure is the number of bottleneck nodes. When step S1304is performed, the inference unit18infers the recommended range of the network structure (the number of bottleneck nodes) based on the input/output error recorded in step S1304(step S1305). More specifically, the inference unit18infers the recommended range of the number of bottleneck nodes based on the difference between the input/output error related to the normal data and the input/output error related to the pseudo abnormal data. FIG.14is a diagram illustrating an example of a graph representing a relation between the number of bottleneck nodes and an error average on which a recommended range141is superimposed.FIG.15is a diagram illustrating an example of a graph representing a relation between the number of bottleneck nodes and an error average difference on which the recommended range141is superimposed. Note that the error average is an example of the input/output error, and the error average difference is an example of a difference between the input/output error related to the normal data and the input/output error related to the pseudo abnormal data. As illustrated inFIGS.14and15, the recommended range141is set to a range of the number of bottleneck nodes in which the error average difference is equal to or larger than a threshold value. In the cases ofFIGS.14and15, the threshold value is set to about 0.06. 
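A minimal sketch of the inference in step S1305, under the criterion just stated (error average difference equal to or larger than a threshold value), might look as follows. The function name and the assumption that the qualifying node counts form one contiguous range are illustrative.

def recommended_range(L_n, L_pa, threshold=0.06):
    # Recommend the numbers of bottleneck nodes whose error average difference
    # L_pa(k) - L_n(k) is at or above the threshold value.
    candidates = [k for k in sorted(L_n) if L_pa[k] - L_n[k] >= threshold]
    if not candidates:
        return None
    return min(candidates), max(candidates)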
The recommended range141means a range in which the degree of reproduction of the normal data is a first level or more and the degree of identity mapping of the pseudo abnormal data is a second level or less. The degree of reproduction of the normal data is evaluated by the input/output error of the normal data. A smaller input/output error means a higher degree of reproduction. In the case ofFIG.14, the first level is set to about 0.02. The degree of identity mapping of the pseudo abnormal data is evaluated by the input/output error of the pseudo abnormal data. A smaller input/output error means a higher degree of identity mapping. In the case ofFIG.14, the second level is set to about 0.07. It can be said that an autoencoder having the number of bottleneck nodes satisfying the recommended range141defined as described above has a high degree of reproduction of the normal data and a low degree of reproduction of the abnormal data, and has high abnormality detection accuracy. When step S1305is performed, the output control unit16outputs the recommended range inferred in step S1305(step S1306). In step S1306, the output control unit16displays the recommended range on a display device5. For example, the output control unit16may display the recommended range141to be superimposed on the graph representing the relation between the number of bottleneck nodes and the error average as illustrated inFIG.14, or may display the recommended range141to be superimposed on the graph representing the relation between the number of bottleneck nodes and the error average difference as illustrated inFIG.15. By displaying the recommended range as described above, the user can easily confirm the recommended range141. In addition, the reliability of the recommended range141can be estimated by displaying the recommended range141to be superimposed on a graph representing a relation between the input/output error and the number of bottleneck nodes. When step S1306is performed, the design support processing illustrated inFIG.13ends. The abnormality detection may be performed by the abnormality detection unit17using the autoencoder determined by the user. Note that the abnormality detection unit17may not be included in the information processing apparatus200. Note that the design support processing illustrated inFIG.13can be variously modified. In the above embodiment, the inference unit18infers the recommended range of the network structure based on the difference between the input/output error related to the normal data and the input/output error related to the pseudo abnormal data. However, a method for inferring the recommended range is not limited thereto. For example, the inference unit18may infer the recommended range based on a change in the input/output error of the normal data, the input/output error of the pseudo abnormal data, and/or the input/output error difference accompanying a change in the network structure. Referring toFIG.14, a range from the number of bottleneck nodes at which the input/output error of the normal data rapidly decreases to the number of bottleneck nodes at which the input/output error of the pseudo abnormal data rapidly decreases may be set to the recommended range. Specifically, the inference unit18calculates a differential value at the number of bottleneck nodes for each number of bottleneck nodes, for each of the input/output error of the normal data and the input/output error of the pseudo abnormal data. 
In addition, the inference unit18specifies the minimum number of nodes in the recommended range based on each differential value of the input/output error of the normal data. For example, the number of bottleneck nodes at which the differential value takes the minimum value, the number of bottleneck nodes immediately before the differential value converges to a range in which the differential value is smaller than the threshold value and the absolute value is relatively small, and the like may be specified as the minimum number of nodes. Further, the inference unit18specifies the maximum number of nodes in the recommended range based on the differential value of the input/output error of each piece of pseudo abnormal data. For example, the number of bottleneck nodes at which the differential value takes the minimum value, the number of bottleneck nodes immediately before the differential value converges to a range in which the differential value is smaller than the threshold value and the absolute value is relatively small, and the like may be specified as the maximum number of nodes. A range from the minimum number of nodes to the maximum number of nodes is set as the recommended range. The minimum number of nodes at which the input/output error of the normal data rapidly decreases and the maximum number of nodes at which the input/output error of the pseudo abnormal data rapidly decreases may be specified based on a curve shape of the input/output error of the normal data and a curve shape of the pseudo abnormal data. A range from the minimum number of nodes to the maximum number of nodes is set as the recommended range. Referring toFIG.15, a range from the number of bottleneck nodes where the input/output error difference rapidly increases to the number of bottleneck nodes where the input/output error difference rapidly decreases may be set to the recommended range. Specifically, the inference unit18calculates a differential value of the input/output error difference at the number of bottleneck nodes for each number of bottleneck nodes. In addition, the inference unit18specifies the minimum number of nodes and the maximum number of nodes in the recommended range based on each differential value of the input/output error difference. For example, the number of bottleneck nodes at which the differential value takes the minimum value, the number of bottleneck nodes immediately before the differential value converges to a range in which the differential value is larger than the threshold value and the absolute value is relatively small, and the like may be specified as the minimum number of nodes. Further, the number of bottleneck nodes at which the differential value takes the minimum value, the number of bottleneck nodes immediately after the differential value converges to a range in which the differential value is smaller than the threshold value and the absolute value is relatively small, and the like may be specified as the maximum number of nodes. The minimum number of nodes at which the input/output error difference rapidly increases and the maximum number of nodes at which the input/output error difference rapidly decreases may be specified based on the curve shape of the input/output error difference, and a range from the minimum number of nodes to the maximum number of nodes may be specified as the recommended range. A predetermined range such as 90% of the maximum value of the input/output error difference may be specified as the recommended range. 
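The derivative-based variants described above may be sketched, under the simplifying assumption that "rapidly decreases" is read as the most negative discrete derivative, as follows; np.gradient approximates the differential value at each number of bottleneck nodes.

import numpy as np

def infer_range_from_derivatives(L_n, L_pa):
    ks = np.array(sorted(L_n))
    d_n = np.gradient([L_n[k] for k in ks], ks)    # differential of the normal-data error
    d_pa = np.gradient([L_pa[k] for k in ks], ks)  # differential of the pseudo-abnormal error
    k_min = int(ks[np.argmin(d_n)])   # steepest decrease of the normal-data error
    k_max = int(ks[np.argmin(d_pa)])  # steepest decrease of the pseudo-abnormal error
    return k_min, k_max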
In the above embodiment, the inference unit 18 infers the recommended range. However, the inference unit 18 may instead infer an optimum value of the network structure based on the relational data (structure/error relational data) between the network structure and the input/output error of the normal data and the input/output error of the pseudo abnormal data. For example, the inference unit 18 specifies, as the optimum value, the network structure at which the difference between the input/output error of the normal data and the input/output error of the pseudo abnormal data takes its maximum value. Further, the inference unit 18 may specify, as the optimum value, a network structure that satisfies another condition, such as calculation cost, among the network structures included in the recommended range. The output control unit 16 may display the optimum value on the display device 5. The display form is not particularly limited: the numerical value of the network structure corresponding to the optimum value may simply be displayed, or the optimum value may be displayed superimposed on a graph representing the relation between the number of bottleneck nodes and the error average or on a graph representing the relation between the number of bottleneck nodes and the error average difference, similarly to the recommended range. The optimum value may also be displayed together with the recommended range. Note that the optimum value may be selected from the network structures trained in step S601 or step S1301, or an untrained network structure may be selected.

In the above embodiment, the model training unit 12 comprehensively trains autoencoders of a plurality of network structures over a relatively wide range at one time in step S601 or S1301. However, the model training unit 12 may instead train autoencoders of a plurality of network structures over a wide range hierarchically. Hereinafter, this embodiment will be described following the example of FIG. 6. First, in step S601, the model training unit 12 trains autoencoders of a small number of network structures discretely set over a wide range. For example, in the case of the example of FIG. 10 or 11, training is performed for five autoencoders whose numbers of bottleneck nodes are 10^0, 10^1, 10^2, 10^3, and 10^4. Then, steps S602 to S605 are executed, and a graph representing the relation between each of the five numbers of bottleneck nodes and the input/output error is displayed on the display device 5. The user observes the graph and designates, via the input device 3, a range of the number of bottleneck nodes to be examined in detail. The model training unit 12 then trains autoencoders of a plurality of network structures discretely set over the designated range. For example, in a case where a range of 10^1 to 10^2 is designated, training is performed for five autoencoders whose numbers of bottleneck nodes are 10, 25, 50, 75, and 100. Then, steps S602 to S605 are executed, and a graph representing the relation between each of these five numbers of bottleneck nodes and the input/output error is displayed on the display device 5. As a result, the optimum number of bottleneck nodes can be examined in detail for the designated range. By performing training hierarchically as described above, the number of autoencoders to be trained can be reduced, and an optimum network structure can be searched for efficiently.

Modification

In the above embodiment, data generated by performing data augmentation on normal data has been described as a specific example of pseudo abnormal data.
The pseudo abnormal data according to the modification may be data of another domain different from that of the normal data. Hereinafter, the modification will be described following the example of FIG. 6. First, in step S601, the model training unit 12 trains a plurality of autoencoders having different network structures based on the normal data. It is assumed that the normal data is image data in which numerals have been drawn, as illustrated in FIG. 4 and the like. In step S602, the pseudo abnormal data acquisition unit 13 acquires image data of a domain different from that of the normal data from the storage device 2, an external device, or the like. In the case of image data, the domain means a drawing target, a photographing condition, an image parameter, and the like. For example, it is assumed that the pseudo abnormal data is image data in which clothes, shoes, and the like have been drawn. Thereafter, similarly to the above embodiment, steps S603 to S605 are performed, and a graph or the like representing the relation between the network structure and the input/output error is displayed on the display device 5.

At this time, the output control unit 16 may output, for each network structure, a list of the output data of each of the plurality of autoencoders when the pseudo abnormal data is input. This list is referred to as an output data list. The output data list will now be described. Note that it is assumed that the network structure is, for example, the number of bottleneck nodes. FIG. 16 is a diagram illustrating an example of an output data list 161. As illustrated in FIG. 16, the output data list 161 arranges the output image data of each of the plurality of autoencoders for each number of bottleneck nodes. In other words, the output data list 161 represents the change in the output image data of each of the plurality of autoencoders accompanying a change in the number of bottleneck nodes. In the uppermost row, image data in which 10 types of clothes or shoes have been drawn is arranged as the input image data (pseudo abnormal data) to the autoencoders. The output image data from the autoencoder of each number of bottleneck nodes at the time of inputting each piece of pseudo abnormal data is arranged along the vertical axis. The output data list illustrated in FIG. 16 is displayed on the display device 5, for example.

By referring to the output data list 161, the user can visually grasp the change in the output image data accompanying a change in the number of bottleneck nodes. For example, it can generally be seen that, when the number of bottleneck nodes is 1, the output image data does not reproduce even the normal data and the abnormality detection accuracy is poor; when the number of bottleneck nodes is in a range from 10 to 96, the output image data does not reproduce the pseudo abnormal data and the abnormality detection accuracy is relatively good; and when the number of bottleneck nodes is in a range from 256 to 1024, the output image data reproduces the pseudo abnormal data and identity mapping occurs. In the above example, the output data list is output when data of another domain is used as the pseudo abnormal data, but the present embodiment is not limited thereto. The output data list may also be output when data obtained by performing data augmentation on the normal data is used as the pseudo abnormal data. Further, the output data list may be not only displayed on the display device 5 but also stored in the storage device 2 or displayed on an external device via the communication device 4.
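An illustrative sketch of assembling an output data list like the one of FIG. 16 is given below. It assumes that models maps each number of bottleneck nodes to a callable that reconstructs a batch of flattened images and that the images are 28x28; all names and sizes are assumptions.

import numpy as np
import matplotlib.pyplot as plt

def show_output_data_list(models, pseudo_abnormal, node_counts, img_shape=(28, 28)):
    n_rows, n_cols = len(node_counts) + 1, len(pseudo_abnormal)
    fig, axes = plt.subplots(n_rows, n_cols, figsize=(n_cols, n_rows))
    for j, x in enumerate(pseudo_abnormal):              # uppermost row: input image data
        axes[0, j].imshow(x.reshape(img_shape), cmap="gray")
        axes[0, j].axis("off")
    flat = pseudo_abnormal.reshape(len(pseudo_abnormal), -1)
    for i, k in enumerate(node_counts, start=1):         # one row per number of bottleneck nodes
        recon = np.asarray(models[k](flat))
        for j in range(n_cols):
            axes[i, j].imshow(recon[j].reshape(img_shape), cmap="gray")
            axes[i, j].axis("off")
    plt.show()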
In the above example, the case where the network structure is the width or depth of the bottleneck layer has been described as an example, but the present embodiment is not limited thereto. The network structure may be a combination of the width and the depth of the bottleneck layer. In this way, it is possible to support creation of an autoencoder having high abnormality detection performance. While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.
46,180
11860717
DETAILED DESCRIPTION The subject matter of embodiments of the invention is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, the inventor has contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms “step” and/or “block” may be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described. For purposes of this disclosure, the word “including” has the same broad meaning as the word “comprising.” In addition, words such as “a” and “an,” unless otherwise indicated to the contrary, include the plural as well as the singular. Thus, for example, the requirement of “a feature” is satisfied where one or more features are present. Also, the term “or” includes the conjunctive, the disjunctive and both (a or b thus includes either a or b, as well as a and b). Incomplete purchases can be monitored in an effort to gain valuable insight as to monetary loss related to an application running on a client device. In this regard, conventional systems can track items placed in an electronic shopping cart but not ultimately purchased by the consumer. In some cases, however, an application may crash before an item is ultimately purchased and, due to the application crash, data is not captured. Application errors occurring during an attempt to complete a transaction can result in a monetary impact to the provider of the application. For example, a consumer may lose items previously included in his or her electronic shopping cart. Rather than the consumer subsequently replacing the items in the shopping cart, however, the consumer may decide to forego the purchase resulting in a lost monetary opportunity for the provider of the mobile application. As another example, repetitive incomplete money transfers associated with a banking application may result in a consumer utilizing another banking application. As such, these missed monetary opportunities due to application interruptions, such errors and crashes, can greatly impact monetary gain to the entity providing the application, particularly when an error(s) occurs frequently. Embodiments of the present invention provide methods and systems for tracking incomplete purchases in correlation with application performance, such as application errors or crashes. In this regard, aspects of the invention facilitate monitoring transaction events and application errors at a mobile device to capture data associated therewith. The captured data can be analyzed to correlate incomplete transactions with application errors. In this regard, transaction data, such as a monetary amount or items, associated with an incomplete purchase can be identified in connection with an application error(s) and provided to an application developer. The application developer can then use the information related to incomplete transactions corresponding with application errors to, for example, prioritize debugging errors associated with the application. 
For instance, the application developer may choose to dedicate resources to debugging an error associated with a largest number of incomplete transactions. Analysis to correlate incomplete transactions with application errors might be performed by an analytics service, such as the SPLUNK® ENTERPRISE system produced by Splunk Inc. of San Francisco, California, or any other backend service. The SPLUNK® ENTERPRISE system generally uses an event-based system to store and process data. The SPLUNK® ENTERPRISE system is the leading platform for providing real-time operational intelligence that enables organizations to collect, index, and harness machine-generated data from various websites, applications, servers, networks, and mobile devices that power their businesses. The SPLUNK® ENTERPRISE system is particularly useful for analyzing unstructured performance data, which is commonly found in system log files. Although an event-based system is generally referred to, the techniques are also applicable to other types of systems. In the SPLUNK® ENTERPRISE system, performance data is stored as “events,” wherein each event comprises a collection of performance data and/or diagnostic information that is generated by a computer system and is correlated with a specific point in time. Events can be derived from “time series data,” wherein time series data comprises a sequence of data points (e.g., performance measurements from a computer system) that are associated with successive points in time and are typically spaced at uniform time intervals. Events can also be derived from “structured” or “unstructured” data. Structured data has a predefined format, wherein specific data items with specific data formats reside at predefined locations in the data. For example, structured data can include data items stored in fields in a database table. In contrast, unstructured data does not have a predefined format. This means that unstructured data can comprise various data items having different data types that can reside at different locations. For example, when the data source is an operating system log, an event can include one or more lines from the operating system log containing raw data that includes different types of performance and diagnostic information associated with a specific point in time. Examples of data sources from which an event may be derived include, but are not limited to: web servers; application servers; databases; firewalls; routers; operating systems; and software applications that execute on computer systems, mobile devices, and sensors. The data generated by such data sources can be produced in various forms including, for example and without limitation, server log files, activity log files, configuration files, messages, network packet data, performance measurements, and sensor measurements. An event typically includes a timestamp that may be derived from the raw data in the event, or may be determined through interpolation between temporally proximate events having known timestamps. The SPLUNK® ENTERPRISE system also facilitates using a flexible schema to specify how to extract information from the event data, wherein the flexible schema may be developed and redefined as needed. Note that a flexible schema may be applied to event data “on the fly,” when it is needed (e.g., at search time), rather than at ingestion time of the data as in traditional database systems. 
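As a purely illustrative sketch of the notion of an event described above (the log format, field names, and helper name are assumptions, not the format of any particular product), an event can be pictured as raw machine data paired with a timestamp derived from that data, with the rest of the raw line left untouched so that a schema can still be applied later at search time.

from datetime import datetime

def to_event(raw_line, host, source, sourcetype):
    # Only the timestamp is derived up front; the raw line is kept intact.
    date_part, time_part, _ = raw_line.split(" ", 2)
    return {
        "_time": datetime.strptime(date_part + " " + time_part, "%Y-%m-%d %H:%M:%S"),
        "host": host, "source": source, "sourcetype": sourcetype,
        "_raw": raw_line,
    }

event = to_event("2023-04-01 12:30:05 ERROR checkout failed cart=42 amount=19.99",
                 host="device-1", source="app.log", sourcetype="mobile_app")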
Because the schema is not applied to event data until it is needed (e.g., at search time), it is referred to as a “late-binding schema.” During operation, the SPLUNK® ENTERPRISE system starts with raw data, which can include unstructured data, machine data, performance measurements, or other time-series data, such as data obtained from weblogs, syslogs, or sensor readings. It divides this raw data into “portions,” and optionally transforms the data to produce timestamped events. The system stores the timestamped events in a data store, and enables a user to run queries against the data store to retrieve events that meet specified criteria, such as containing certain keywords or having specific values in defined fields. Note that the term “field” refers to a location in the event data containing a value for a specific data item. As noted above, the SPLUNK® ENTERPRISE system facilitates using a late-binding schema while performing queries on events. A late-binding schema specifies “extraction rules” that are applied to data in the events to extract values for specific fields. More specifically, the extraction rules for a field can include one or more instructions that specify how to extract a value for the field from the event data. An extraction rule can generally include any type of instruction for extracting values from data in events. In some cases, an extraction rule comprises a regular expression, in which case the rule is referred to as a “regex rule.” In contrast to a conventional schema for a database system, a late-binding schema is not defined at data ingestion time. Instead, the late-binding schema can be developed on an ongoing basis until the time a query is actually executed. This means that extraction rules for the fields in a query may be provided in the query itself, or may be located during execution of the query. Hence, as an analyst learns more about the data in the events, the analyst can continue to refine the late-binding schema by adding new fields, deleting fields, or changing the field extraction rules until the next time the schema is used by a query. Because the SPLUNK® ENTERPRISE system maintains the underlying raw data and provides a late-binding schema for searching the raw data, it enables an analyst to investigate questions that arise as the analyst learns more about the events. In the SPLUNK® ENTERPRISE system, a field extractor may be configured to automatically generate extraction rules for certain fields in the events when the events are being created, indexed, or stored, or possibly at a later time. Alternatively, a user may manually define extraction rules for fields using a variety of techniques. Also, a number of “default fields” that specify metadata about the events rather than data in the events themselves can be created automatically. For example, such default fields can specify: a timestamp for the event data; a host from which the event data originated; a source of the event data; and a source type for the event data. These default fields may be determined automatically when the events are created, indexed or stored. In some embodiments, a common field name may be used to reference two or more fields containing equivalent data items, even though the fields may be associated with different types of events that possibly have different data formats and different extraction rules. 
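The idea of a late-binding schema can be illustrated with a small sketch in which each extraction rule is a regular expression applied to the raw event text only when a field is requested. The field names and patterns are assumptions; a common field name such as "amount" could likewise be mapped to equivalent fields from other source types, in the spirit of the common information model discussed next.

import re

# Search-time extraction rules ("regex rules"); they can be added or changed
# without re-ingesting the underlying raw data.
extraction_rules = {
    "amount": re.compile(r"amount=(?P<amount>\d+(?:\.\d+)?)"),
    "cart": re.compile(r"cart=(?P<cart>\d+)"),
}

def extract(event, field):
    match = extraction_rules[field].search(event["_raw"])
    return match.group(field) if match else None

# Applied at query time, e.g. to the event built in the previous sketch:
# extract(event, "amount")  -> "19.99"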
By enabling a common field name to be used to identify equivalent fields from different types of events generated by different data sources, the system facilitates use of a “common information model” (CIM) across the different data sources. In accordance with embodiments of the present invention, transaction events and/or application errors are monitored at a mobile device.FIG.1illustrates an exemplary computing environment100for developing an application that includes monitoring functionality, in accordance with embodiments of the present invention. The environment100is an example of one suitable environment. The environment100should not be interpreted as having any dependency or requirement related to any single module/component or combination of modules/components illustrated therein. Each may comprise a single device or multiple devices cooperating in a distributed environment. For instance, components may comprise multiple devices arranged in a distributed environment that collectively provide the functionality described herein. Additionally, other components not shown may also be included within the infrastructure. The environment100may include developer computing device102, SDK provider104, and network106. The network106may include, without limitation, one or more local area networks (LANs) and/or wide area networks (WANs). Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet. Accordingly, the network106is not further described herein. Any number of developer computing devices and SDK providers may be employed in the environment100within the scope of embodiments of the present invention. Each may comprise a single device/interface or multiple devices/interfaces cooperating in a distributed environment. For instance, the developer computing device102may comprise multiple devices and/or modules arranged in a distributed environment that collectively provide the functionality of providing content to client devices. Additionally, other components/modules not shown also may be included within the environment100. It should be understood that this and other arrangements described herein are set forth only as examples. Other arrangements and elements (e.g., machines, interfaces, functions, orders, and groupings of functions) can be used in addition to or instead of those shown, and some elements may be omitted all together. In particular, other arrangements may support scalable implementations of embodiments of the present invention. The infrastructure100can be scalable to support the ability to be enlarged to accommodate growth. Further, many of the elements described herein are functional entities that may be implemented as discrete or distributed components or in conjunction with other components, and in any suitable combination and location. Various functions described herein as being performed by one or more entities may be carried out by hardware, firmware, and/or software. For instance, various functions may be carried out by a processor executing instructions stored in memory. The developer computing device102is configured for use in developing or generating an application(s) that can be distributed to various client devices, such as client device208ofFIG.2. In this manner, a developer can utilize the developer computing device102to develop application110. An application refers to a computer program that enables content to be provided to a user. 
An application may be a mobile application or any other application that is executable via a client device. An application can provide or be related to any type of content, such as, for example, email, stock market, sports, social networking, games, merchandise, banking, investing, shopping, or any other content including a potential transaction that might be initiated, etc. In accordance with embodiments described herein, the application110incorporates or includes a monitoring component112. At a high level, the monitoring component112is a software component that facilitates monitoring events, such as transaction events and/or performance events, when the application is installed on a client device. In embodiments, the monitoring component112is a plug-in, extension, or add-on (generally referred to herein as a plug-in). A plug-in refers to a software component that adds a specific feature or set of features to an existing software application or a software application being developed. An application that supports plug-ins can enable customization of the application. In some embodiments, the monitoring component112can operate using a set of custom monitoring classes that facilitate event monitoring. A set of custom monitoring classes are executable code of the monitoring component that support monitoring events. A class refers to an extensible program-code-template for creating objects, providing initial values for state, and implementations of behavior. In this way, a class is a template or pattern in code that defines methods to generate a particular object and behaviors thereof. A custom class refers to a class that is customized or generated to perform a specific functionality that may be unique from a default or native class. A default or native class refers to a class originally developed for implementation. A default or native class may be designed by, for instance, a platform on which an application may run, such as Android™, iOS®, and Windows Phone®, to name a few. A custom monitoring class is a class that is customized or generated to facilitate performance of a monitoring aspect as described herein. It is contemplated that custom monitoring classes can be incorporated in the application110as a monitoring component or plug-in via a software development kit (SDK), for instance. In such a case, a software development kit (SDK)114can be downloaded, for example, from SDK provider104via network106. AlthoughFIG.1illustrates a SDK provider104to provide SDK114to the developer computing device102, it can be appreciated that any other means can be used to obtain an SDK at the developer computing device102. The SDK114can include the custom monitoring classes that can be incorporated into the application110as the monitoring component112. In this manner, a custom class or set of custom classes can be provided (e.g., via a library) through a software development kit (SDK) environment in which the developer of the application110writes the corresponding code. When the code is converted to an executable application, the set of custom classes can become part of the application itself. In some cases, a custom class or set of custom classes are generated and offered (e.g., via an SDK) by a developer or provider of an analytics service, such as a service provided by analytics server218ofFIG.2. 
In such cases, the analytics service provider can use custom classes to facilitate or control monitoring of events to identify transaction and/or performance events that can be used to analyze transaction impacts based on application errors or crashes. In embodiments, the custom monitoring classes can be incorporated into the application110via code that calls the monitoring component112. In this regard, custom monitoring classes can become accessible via a library, for example. As such, using the developer computing device102, a developer of the application110can add code into the application code that is used to trigger the monitoring component112(e.g., set of custom monitoring classes) during execution of the application. Code that triggers the monitoring component may be referred to herein as a monitor trigger. Code triggering the monitoring component112may be added or incorporated into any aspect of the application code. For instance, a monitor trigger may be included at or near the beginning of the executable code of the application110such that the monitoring component112is initiated or triggered as the application is launched or started. By way of example only, and without limitation, a developer using the developer computing device102to develop an application may download an SDK that facilitates monitoring of events in association with the application. The SDK may be downloaded from a provider of an analytics service, such as a provider associated with analytics server218ofFIG.2that analyzes data to report transaction impact associated with application performance (e.g., errors or crashes). The SDK may include monitoring operations, including a set of custom monitoring classes that, in implementation, facilitate monitoring of events at client devices. To call the monitoring operations or functionality during execution of the application at a client device, the developer may provide a monitor trigger within the code of the application. For example, a monitor trigger may be provided at the initialization of the application such that the monitoring component is called as the application is opened, activated, logged into, or initialized at a client device. The developer operating the developer computing device102can then make the application executable such that the application, along with the monitoring component associated therewith, can be downloaded and executed at client devices. U.S. application Ser. No. 14/524,748, filed, Oct. 27, 2014, titled “Utilizing Packet Headers to Monitor Network Traffic in Association with a Client Device,” describes utilization of custom classes provided via a SDK, which is herein incorporated by reference. As can be appreciated, the application110incorporating monitoring operations can be generated or developed in any manner and is not intended be limited to embodiments described herein (e.g., use of custom classes provided via a SDK). For example, although an SDK might create and send collected data, embodiments are not limited to any specific SDK. As another example, an SDK, or portion thereof, is not needed to capture and/or send data to an analysis service. For example, in some cases, a developer using the developer computing device102can independently generate the application with monitoring operations using any programming means (e.g., without use of an SDK). 
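As an illustrative, non-limiting sketch (it is not the API of any particular SDK, and all names are assumptions), a custom monitoring class and the single-line monitor trigger placed at application initialization might take the following shape.

import time
import uuid

class MonitoringPlugin:
    # Illustrative stand-in for a set of custom monitoring classes.
    def __init__(self):
        self.session_id = None
        self.events = []

    def start_session(self):
        # A unique session identifier groups the series of events that follow.
        self.session_id = str(uuid.uuid4())

    def log_event(self, kind, **fields):
        self.events.append({"session_id": self.session_id,
                            "time": time.time(), "kind": kind, **fields})

def application_main():
    monitor = MonitoringPlugin()
    monitor.start_session()          # the "monitor trigger" at app initialization
    # ... remainder of the application logic, which logs events via monitor ...
    return monitor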
Upon developing the application110that incorporates monitoring operations, via monitoring component112, the application can be distributed to various client devices, such as client device208ofFIG.2. Applications can be distributed to client devices in any manner. In some cases, the application may be distributed to a client device directly from the developer computing device102. In other cases, the application may be distributed to a client device via an application marketplace or other application distribution system. For instance, an application marketplace or other application distribution system might distribute the application to a client device based on a request from the client device to download the application. Turning now toFIG.2, an example analytics infrastructure (“infrastructure”)200, in which embodiments of the present invention may be performed, is shown. The infrastructure200is an example of one suitable infrastructure. The infrastructure200should not be interpreted as having any dependency or requirement related to any single module/component or combination of modules/components illustrated therein. Each may comprise a single device or multiple devices cooperating in a distributed environment. For instance, components may comprise multiple devices arranged in a distributed environment that collectively provide the functionality described herein. Additionally, other components not shown may also be included within the infrastructure. The infrastructure200may include client device208, application server216, analytics server218, and network206. The network206may include, without limitation, one or more local area networks (LANs) and/or wide area networks (WANs). Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet. Accordingly, the network206is not further described herein. Any number of client devices, application servers, and analytics servers may be employed in the infrastructure200within the scope of embodiments of the present invention. Each may comprise a single device/interface or multiple devices/interfaces cooperating in a distributed environment. For instance, the application server216may comprise multiple devices and/or modules arranged in a distributed environment that collectively provide the functionality of providing content to client devices. Similarly, the analytics server218may comprise multiple devices and/or modules arranged in a distributed environment that collectively provide the functionality of performing analytics. Additionally, other components/modules not shown also may be included within the infrastructure200. It should be understood that this and other arrangements described herein are set forth only as examples. Other arrangements and elements (e.g., machines, interfaces, functions, orders, and groupings of functions) can be used in addition to or instead of those shown, and some elements may be omitted all together. In particular, other arrangements may support scalable implementations of embodiments of the present invention. The infrastructure200can be scalable to support the ability to be enlarged to accommodate growth. Further, many of the elements described herein are functional entities that may be implemented as discrete or distributed components or in conjunction with other components, and in any suitable combination and location. Various functions described herein as being performed by one or more entities may be carried out by hardware, firmware, and/or software. 
For instance, various functions may be carried out by a processor executing instructions stored in memory. The client device208may include any type of computing device, such as computing device700described below with reference toFIG.7, for example. The client device208might take on a variety of forms, such as a personal computer (PC), a laptop computer, a mobile phone, a smartphone, a smartwatch, a tablet computer, a wearable computer, a personal digital assistant (PDA), a server, an MP3 player, a global positioning system (GPS) device, a video player, a handheld communications device, a workstation, any combination of these delineated devices, or any other suitable device. Generally, the client device208can provide access to different content, for instance, content provided via various applications, such as application210. In this regard, content can be provided to the client device208from an application server (e.g., application server214) accessible via a network (e.g., network206) such as the Internet or a private local area network. As illustrated inFIG.2, the client device208has application210installed thereon. The application210can be obtained from an application developer, such as developer computing device102ofFIG.1. In other cases, the application may be obtained via an application marketplace or other application distribution system. For instance, the client device208might download application210from an application marketplace or other application distribution system. The application, via the client device208, communicates with the application server216to exchange information (e.g., content, data, etc.) therebetween. The application server216is generally configured to store, process, and deliver content to client devices, such as client device208. The communication between client device208and the application server216may take place using, for example, the Hypertext Transfer Protocol (HTTP) or Hypertext Transfer Protocol Secure (HTTPS). Content delivered from the application server216to the client device208might be HTML documents, which may include images, style sheets and scripts in addition to text content. Any other type of content (e.g., audio, video, etc.) is contemplated within the scope of embodiments of the present invention, and HTML documents are provided only as an example. As can be appreciated, in addition to delivering content to client devices, various request and response packets can also be communicated between client devices and the application server. For example, generally, an application residing on a client device initiates communication with a corresponding application server by making a request for a specific resource (e.g., using HTTP), and the application server responds with the appropriate content. In accordance with embodiments described herein, generally, content delivered from the application server216includes content associated with a transaction or purchase of an item, such as a monetary transaction. For example, in some cases, content may include indications of items that may be purchased or content enabling a money transfer (e.g., via a banking or investing application). A transaction generally refers to an exchange or interaction, generally associated with payment. A transaction may be a purchase of an item or another exchange of money, for example. An incomplete transaction is a transaction that has been initiated but is not completed or finished. 
With reference to the application210running on the client device208, the application210includes a monitoring component212, such as a plug-in, that is used to facilitate monitoring events, such as transaction events and/or performance events, in association with the application210. In this regard, the monitoring component212may utilize classes, such as custom monitoring classes, to implement monitoring of events associated with the application210. The monitoring component212may be incorporated into the application210in any number of ways. The method employed to incorporate or provide the network monitor described herein is not intended to limit the scope of embodiments of the present invention. Further, although the monitoring component212is generally described herein as being incorporated into the application210, as can be appreciated, such monitoring may operate independent of an application or set of applications for which events are being monitored. For example, the monitor described herein may be implemented as a stand-alone application (e.g., on the client device or remote therefrom) in communication with the application210. At a high-level, the monitoring component212facilitates monitoring events associated with the application210running on the client device208. In particular, transaction events and/or performance events are monitored and identified. Transaction events refer to events related to a transaction or purchase. Transaction events may include, for example, adding an item into a shopping cart, selecting a purchase icon, receiving transaction input (e.g., credit card information), presentation of shipping costs, presentation of taxes, presentation of total costs, etc. Performance events refer to events related to performance of the application. By way of example, and not limitation, a performance event may be an application error, an application crash, an application version, etc. Monitoring events enables application data to be gathered corresponding with performance and/or transactions associated with an application or set of applications. Upon obtaining application data associated with the application, the application data can be provided to the analytics server218for analyzing incomplete transactions correlated with the application performance, as described in more detail below. In one implementation, monitoring component212is invoked upon launching the application210. The application210may be launched or invoked in any manner, such as, for example, upon user selection of the application210via an interface of the client device208or completion of user login to the application. As another example, application210might be remotely launched (e.g., via a remote computer) by a developer or an administrator to cause monitoring. In accordance with initiating execution or login of the application210, the monitoring component212may be triggered or initiated. By way of example, and without limitation, monitor trigger code may be included in the application210to call the monitoring component212. In this regard, when the instructions in the application program call or reference the monitoring component212, such as a monitoring plug-in, during run time of the application210at the client device208, the monitoring component212is triggered or initiated. For instance, a monitoring plug-in may be initiated at start up, login, or initialization of the application210via a single line of code invoking such a plug-in. 
In some cases, the monitoring component212may run in parallel or concurrent with other functionality provided by the application210in which the monitoring component212is incorporated. In accordance with initiating the monitoring component212, such as a monitoring plug-in, a unique session identifier can be created for a session including a series of events. In some embodiments, the session identifier can be used to correlate performance data (e.g., an error or crash of the application) with transaction data (e.g., data associated with an incomplete purchase). A session may terminate in any number of instances, and is not intended to be limited to sessions described herein. For example, in some cases, a session may terminate upon an error or crash occurring in connection with the application. In this regard, application data created between the session initialization (e.g., start initialization of the plug-in) and an application crash are associated with the same designated session identifier. Other examples of sessions terminating include closing the application, logging out of the application, a new initialization of the application, or as otherwise designated by an application developer. In accordance with the monitoring component212being initiated at the client device208, the monitoring component212can listen for or detect events, such as transaction events and/or performance events. As described, a transaction event refers to any functionality designated as being associated with or corresponding to a transaction. Such a transaction may be a monetary purchase, a monetary exchange, or other exchange for which data is desired to be captured. A transaction event can be detected, for example, in connection with a user selection or interaction with a transaction indicator, such as selection of a purchase icon, selection to add an item to an electronic shopping cart, selection to begin a transaction (e.g., purchase or electronic monetary transfer), selection related to shipping, input of credit card or other payment information, or the like. Other examples of transaction events can include presentation of shipping costs, presentation of total costs, presentation of taxes, presentation of discounts/coupons, etc. In embodiments, any number of lines of code within the application (e.g., via SDK or otherwise) can be used to listen for or detect various events. For example, a provider of one application may wish to detect a first and second transaction event, while a provider of another application may wish to detect a third and fourth transaction event. As such, each provider can implement such event listeners in the code as appropriate. For instance, an extra line of code may be included to recognize that when a user selects a “add item to shopping cart” button, an event is logged. In such a case, upon detecting a user selected a “add item to shopping cart” button, an indication of the transaction event can be logged, e.g., as Mint.purchaseBegin(itemID, cost). Other code can be included to recognize that when a user selects a “purchase” button, an event is logged. In this scenario, upon detecting a user selection of a “purchase” button, an indication of the transaction event can be logged, e.g., as Mint.purchaseComplete(itemID, cost). As can be appreciated, the event log may contain other events, including events occurring before, between, and/or after these example events described. 
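The event listeners described above, which the text illustrates with calls such as Mint.purchaseBegin(itemID, cost) and Mint.purchaseComplete(itemID, cost), might be sketched as follows. The function and field names here are hypothetical stand-ins building on the earlier MonitoringPlugin sketch, not actual SDK calls.

def on_add_to_cart(monitor, item_id, cost):
    # Logged when the user selects the "add item to shopping cart" button.
    monitor.log_event("purchaseBegin", item_id=item_id, cost=cost)

def on_purchase(monitor, item_id, cost):
    # Logged when the user selects the "purchase" button.
    monitor.log_event("purchaseComplete", item_id=item_id, cost=cost)

# monitor = application_main()
# on_add_to_cart(monitor, item_id="sku-123", cost=19.99)
# ... a crash before on_purchase() is reached leaves an incomplete transaction ...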
In accordance with detecting a transaction event, data associated with the transaction event, namely, transaction data, can be identified and/or captured. Transaction data refers to any data associated with a transaction or potential transaction (e.g., transaction in progress). By way of example only, and not limitation, transaction data may be an item identifier that identifies an item desired to be purchased, a monetary amount associated with an item desired to be purchased, a number of items desired to be purchased, a total monetary amount associated with items desired to be purchased, shipping costs, time of event, or the like. In some embodiments, the monitoring component212can additionally or alternatively listen for or detect a performance event(s). A performance event refers to an event associated with performance of the application. As described herein, a performance event is generally related to an error or crash associated with an application. In this regard, when the application being monitored incurs an error or a crash, a performance event is detected. In accordance with detecting a performance event, data associated with the performance event, namely, performance data, can be identified and/or captured. Performance data refers to any data associated with performance of the application. As examples, and without limitation, performance data may be an error or crash identifier (generally referred to herein as an error identifier) that identifies the type of error that occurred in connection with the application, a version identifier that identifies the version of the application, an application identifier that identifies the specific application, a time the error occurred, a geographical location of the client device at the time of the error, or the like. In this regard, an occurrence of an error may result in or correspond with, for example, information about the user, the application, and/or the user device. The application data, such as transaction data and/or the performance data, can be captured, for example, in an event log that logs data associated with the transaction and/or performance events. As can be appreciated, a single event log can be used to capture both the transaction data and the performance data. In other embodiments, separate event logs can be used to capture the transaction data and the performance data. A session identifier identifying a session can also be included in the event log in connection with the transaction data and/or performance data. Upon obtaining application data in association with the application210, the application data can be stored (e.g., via an event log) at the client device208and/or communicated to the analytics server218for processing. Alternatively or additionally, data processing may occur at the client device208. As can be appreciated, the application data can be stored at the client device208and communicated to an analytics server218at any time, such as immediately, upon a time delay, in accordance with an occurrence of an event, etc. 
For instance, application data, such as transaction data and/or purchase data (e.g., in the event log), may be sent at the end or completion of a session, upon the occurrence of a performance event (e.g., an application crash), upon initialization of the application after an occurrence of a performance event (e.g., application crash), upon collecting a particular amount of data (e.g., 150 kilobytes of data), upon initialization of the application (e.g., communicate data collected in previous session), in real-time in accordance with collecting the data, or the like. In some cases, collected information may be transformed to JSON objects at the client device208and/or the analytics server218for use in processing the data. As can be appreciated, although the monitoring component212is described herein as detecting transaction and performance data and collecting data associated therewith, it can be appreciated that such functionality may be performed by multiple components or separate components. For instance, a first monitoring component might monitor transaction events and capture data associated therewith, while a second monitoring component might monitor performance data and capture data associated therewith. Such data might be aggregated or communicated separately to an analytics server for processing. The analytics server218facilitates the discovery and communication of meaningful patterns in data. In particular, the analytics server218may operate with additional components and modules in the infrastructure200to perform analytics based on monitored application data. In accordance with aspects of the present invention, the analytics server218can obtain various types of data, including application data identified and captured at various client devices. Such data can be analyzed and used to provide, determine, or calculate transaction impact or attributes based on application performance (e.g., errors). To this end, various application data captured at client devices can be communicated to the analytics server218at which transaction attributes can be calculated therefrom. A transaction attribute refers to a characteristic, quality, or feature used to indicate or measure some component of data related to a transaction or set of transactions (e.g., an incomplete transaction or purchase) corresponding or associated with application performance (e.g., an application error). A transaction refers to an exchange or interaction, generally associated with payment. An incomplete transaction is a transaction that has been initiated but not completed. As described in more detail below, transaction attributes are determined to provide information related to an incomplete transaction or set of incomplete transactions corresponding with an application error(s). In embodiments, a transaction attribute indicates a transaction impact or purchase impact associated with an error of an application. 
By way of example, and without limitation, a transaction attribute may be an item identifier(s) identifying a specific item(s) associated with an incomplete purchase, an amount of money associated with an item or set of items for an incomplete purchase, a number of items associated with an incomplete purchase, an error identifier identifying an error associated with an incomplete transaction, a quantity of errors (e.g., a specific type of error) associated with incomplete transactions within a time period, a location(s) associated with incomplete transaction(s), a version of an application associated with incomplete transaction(s), a type of device associated with incomplete transaction(s), etc. In embodiments, to determine transaction attributes, the analytics server218may initially receive application data. For example, application data associated with an application installed on a mobile device can be provided to the analytics server218, as described above. Such data may include transaction data, performance data, and/or a session identifier. Application data can be stored in association with the analytics server218such that the data is accessible by the analytics server218for analyzing the data. As can be appreciated, application data can be provided from any number of client devices and in any number of manners. For instance, a first set of application data might be provided from a first mobile device upon an application crash occurring at the first mobile device, and a second set of application data might be provided from a second mobile device upon an application crash at the second mobile device. The application data can be referenced and used to determine that an application error correlates with an incomplete transaction. That is, the application data can be used to identify that an error that occurred in association with the application, such as application210installed on the device208, correlates with an incomplete transaction (e.g., monetary transaction) initiated via the application. Application data used to make such a determination can be performance data, transaction data, and/or a session identifier. In embodiments, a session identifier can be used to associate performance data (e.g., an error) with transaction data (e.g., incomplete transaction). A determination of whether an application error correlates with an incomplete transaction can be performed in a number of manners and is not intended to be limited to examples provided herein. In one implementation, a correlation can be recognized by initially identifying an incomplete transaction(s). One exemplary method used to identify an incomplete transaction is to identify an initialization of a transaction event that is not associated with a completion of the transaction. For instance, assume that an event log is used to capture transaction events identified by code running on a client device. Such events being monitored and captured may include, e.g., selection of an “add to shopping cart” button, selection of a “purchase” button, presentation of shipping costs, etc. The event log, or portion thereof, can be searched to determine if a transaction event initiated, as indicated by Mint.purchaseBegin(itemID, cost), for example, is followed by a completion of the transaction, as indicated by Mint.purchaseComplete(itemID, cost). In this manner, an incomplete transaction can be identified when an item is added to an electronic shopping cart without a subsequent purchase of the item (e.g., payment for the item). 
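Assuming event records shaped like those in the earlier sketch, identifying incomplete transactions could be as simple as scanning the log for a purchase-begin event that has no matching purchase-complete event for the same session and item; the function below is an illustrative sketch, not the actual analytics logic.

```python
# Illustrative sketch: flag sessions whose purchase was initiated but never
# completed, given event records shaped like the sketch above.
def find_incomplete_transactions(event_log):
    begun = {}
    completed = set()
    for rec in event_log:
        key = (rec["session_id"], rec.get("item_id"))
        if rec["event"] == "purchaseBegin":
            begun[key] = rec
        elif rec["event"] == "purchaseComplete":
            completed.add(key)
    # A purchase-begin with no matching purchase-complete marks an incomplete
    # transaction for that session and item.
    return [rec for key, rec in begun.items() if key not in completed]
```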
For instance, the event log can be searched to determine that the transaction event was initiated (e.g., item added to shopping cart) as indicated by an event in the event log but not completed as indicated by a lack of subsequent event in the log (e.g., purchase item) and, as such, determine an incomplete transaction exists. Such indicators are examples only and not intended to limit the scope of embodiments herein. When completion of the transaction is not identified, the transaction can be designated as an incomplete transaction. Upon determining an incomplete transaction, that is, a transaction that was not completed, a determination can be made as to whether the incomplete transaction is associated with an error or crash occurring in connection with a mobile application. In some cases, a session identifier is used to make the determination as to whether the incomplete transaction is associated with the application error. For instance, a session identifier associated with the incomplete transaction can be referenced and used to search an error log to determine if the session identifier corresponds with an application error. If the incomplete transaction is associated with an application error, a correlation between the application error and the incomplete transaction can be designated. In another implementation, an error or crash associated with an application is initially identified. In some cases, a specific error identifier might be searched for in an event log to identify a particular error. Upon determining an error associated with an application, a determination can be made as to whether the error corresponds with an incomplete transaction. In some cases, a session identifier is used to make the determination as to whether the application error is associated with the incomplete transaction. For example, a session identifier associated with the application error can be referenced and used to search a transaction log to determine if the session identifier corresponds with an incomplete transaction. If the application error is associated with an incomplete transaction, a correlation between the application error and the incomplete transaction can be designated. Upon determining an application error correlates with the incomplete transaction, the transaction data and/performance data associated therewith can be used to determine a transaction attribute(s), such as a transaction impact associated with the error. In some cases, the analytics server218might provide application data, such as transaction data and/or performance data, as a transaction attribute to an application developer or provider (e.g., at the developer computing device102ofFIG.1). In other cases, application data might be used to determine or calculate a transaction attribute(s). Although the analytics server218is generally discussed as generating transaction attributes, any component can determine transaction attributes, including the client device208. As previously described, a transaction attribute refers to a characteristic, quality, or feature used to indicate or measure some component of data related to a transaction or set of transactions (e.g., an incomplete transaction or purchase). Transaction attributes are generally determined in order to provide information related to an incomplete transaction or set of incomplete transactions corresponding with application performance, such as an application error(s). 
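Continuing the same assumed data layout, designating correlations between application errors and incomplete transactions might amount to a join on the session identifier, roughly as in the following sketch (field names such as error_id are invented for illustration):

```python
# Illustrative sketch: join incomplete transactions against an error log by
# session identifier; error records are assumed to carry "session_id" and
# "error_id" fields (invented field names).
def correlate_errors_with_incomplete(incomplete_transactions, error_log):
    errors_by_session = {}
    for err in error_log:
        errors_by_session.setdefault(err["session_id"], []).append(err)
    correlations = []
    for txn in incomplete_transactions:
        for err in errors_by_session.get(txn["session_id"], []):
            correlations.append({
                "error_id": err["error_id"],
                "item_id": txn.get("item_id"),
                "cost": txn.get("cost", 0.0),
            })
    return correlations
```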
As such, an application developer can utilize the information in a manner deemed suitable to the developer. For instance, the developer may recognize a monetary amount lost due to a particular type of application error and, in response, proceed with immediate modifications to prevent or reduce the subsequent occurrence of the error. Examples of transaction attributes include an item identifier(s) identifying a specific item(s) associated with an incomplete purchase, an amount of money associated with an item or set of items for an incomplete purchase, a number of items associated with an incomplete purchase, an error identifier identifying an error associated with an incomplete transaction, a quantity of errors (e.g., a specific type of error) associated with incomplete transactions within a time period, a location of a client device at the time of an application error, etc. Another example of a transaction attribute may be an indication of abandonment of a transaction (e.g., a purchase transaction) based on presentation of a particular cost, such as a shipping cost. Such a transaction attribute can be determined, for instance, based on a time duration occurring between a presentation of a shipping cost and abandonment of a shopping cart. Abandonment of the shopping cart may be determined in any number of manners, such as termination of a session, removal of the item from the shopping cart, etc. In some cases, application data can be aggregated together to generate a transaction attribute. In this regard, application data provided via any number of mobile devices can be aggregated and used to generate a transaction attribute. For example, for sessions associated with a particular application (or set of applications) and occurring within a particular time period, a quantity of matching error identifiers can be calculated to identify a number of occurrences of each error that resulted in an incomplete transaction. By way of example, and with reference toFIG.3,FIG.3illustrates a set of errors300correlating with incomplete transactions in association with an application. As shown inFIG.3, the set of errors300includes the most frequent errors occurring in the last seven days. For each error identifier302, a total number of error occurrences304is provided, as well as the trend306associated with the error during the seven day duration and the monetary amount lost308associated with the error during the seven day duration (e.g., items dropped or lost from shopping cart upon error). Although not illustrated, it is contemplated that any amount of type of data associated with the errors could also be calculated and provided to the user. For example, the uncollected monetary amount, number of unpurchased items, indications of the unpurchased items, or the like could be provided in connection with the errors and/or used to rank the errors. As further examples, a geographical location of the mobile devices, a version of the application, or other device or application data could also be presented in connection with the errors to illustrate other information or trends related to the application errors. As another example, for sessions associated with a particular application (or set of applications) and occurring within a particular time period, transaction data associated with incomplete purchases can be aggregated. For instance, item identifiers, monetary amount associated with unpurchased items, etc. can be aggregated to identify a transaction impact associated with an error or set of errors. 
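An aggregation in the spirit of the per-error report described above might look like the following sketch, which assumes the correlation records produced by the previous sketch and simply counts occurrences and sums the associated monetary amounts per error identifier:

```python
# Illustrative sketch: count occurrences and sum lost monetary amounts per
# error identifier, then rank the errors by money lost (as in the report above).
from collections import defaultdict


def aggregate_transaction_attributes(correlations):
    summary = defaultdict(lambda: {"occurrences": 0, "money_lost": 0.0})
    for c in correlations:
        summary[c["error_id"]]["occurrences"] += 1
        summary[c["error_id"]]["money_lost"] += c.get("cost", 0.0)
    return sorted(summary.items(),
                  key=lambda item: item[1]["money_lost"],
                  reverse=True)
```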
As can be appreciated, in some cases, transaction data associated with a particular error identifier can be aggregated together to generate a transaction attribute. That is, incomplete purchases associated with a particular error identifier can be aggregated to calculate, for instance, the amount of money lost due to a specific type of application crash. Upon generating a set of one or more transaction attributes, a set of transaction attributes can then be communicated to an application developer or provider so that the application developer can utilize such information to optimize and/or resolve issues associated with an application(s). For example, the analytics server218may obtain application data indicating transactions and/or performance in relation to a particular mobile application running on any number of mobile devices. The captured application data can be analyzed to generate transaction attributes that indicate impacts of incomplete purchases based on application errors. Such transaction attributes can be reported to the application developer or provider (e.g., via developer computing device102ofFIG.1or other device accessible by a developer) such that the application developer is informed in relation to transactions and/or performance associated with the application. In various implementations, the analytics server218may report transaction attributes related to a specific application, a set of applications (e.g., applications provided by a specific application developer), client devices having a particular application installed thereon, etc. As can be appreciated, transaction attributes can be provided to the application developer in any manner. In one embodiment, for example, the transaction attributes can be accessed via a web browser or application operating at a computing device of an application developer. In this regard, a dashboard can be presented to the developer including analytics data that indicate errors associated with incomplete transactions. In some cases, a notification can be provided to a developer upon detection of a transaction attribute or set of transaction attributes. For example, a developer may be notified of a single error and transaction attribute(s) associated therewith. As another example, a developer may be notified of a transaction attribute(s) when the attribute exceeds a threshold, such as, for instance, a monetary amount, quantity, etc. Transaction attributes, or indications thereof, may additionally or alternatively be provided to a user of the client device. For example, upon detection of a transaction attribute, a user may be notified of the error occurrence. In some cases, the user may be provided with a notification communicated to the client device, which includes a link to refill an electronic shopping cart with an item(s) previously included in a shopping cart but lost due to the occurrence of the error. Embodiments of the present invention may be implemented via a cloud computing platform. In particular, a scalable implementation can be implemented using a cloud computing platform that comprises components described herein in a distributed manner. A cloud computing platform may span wide geographic locations, including countries and continents. The cloud computing platform may include nodes (e.g., computing devices, processing units, or blades in a server rack) that are allocated to run one or more portions of the components of the present invention. 
Components may be partitioned into virtual machines or physical machines that concurrently support functional portions such that each functional portion is able to run on a separate virtual machine. With reference now toFIG.4a flow diagram is provided that illustrates a method400for facilitating tracking incomplete purchases in correlation with application errors in accordance with an embodiment of the present invention. Such an implementation may be performed at an analytics server, such as analytics server218ofFIG.2. Initially, at block402, application data associated with an application installed on a mobile device is received. Application data can include, for instance, performance data, transaction data, session identifiers, or the like. At block404, the application data is used to determine that an error that occurred in association with the application installed on the mobile device correlates with an incomplete monetary transaction initiated via the application. Based on the correlation between the incomplete monetary transaction and the error, a transaction attribute associated with the error is determined, as indicated at block406. The transaction attribute may be any characteristic or measure of information related to a transaction or set of transactions (e.g., incomplete transactions). Examples of transaction attributes include an item identifier(s) identifying a specific item(s) associated with an incomplete purchase, an amount of money associated with an item or set of items for an incomplete purchase, a number of items associated with an incomplete purchase, an error identifier identifying an error associated with an incomplete transaction, a quantity of errors (e.g., a specific type of error) associated with incomplete transactions within a time period, a location of a client device at the time of an application error, etc. With reference now toFIG.5, a flow diagram is provided that illustrates a method500for facilitating tracking incomplete purchases in correlation with application errors in accordance with an embodiment of the present invention. Initially, at block502, a monitor running in connection with an application on a mobile device is initiated. A monitor may be automatically initiated upon initialization of the application. At block504, a session identifier is generated. Subsequently, at block506, a purchase begin event is detected and logged in an event log in association with the session identifier. At block508, an error occurs in association with the application on the mobile device. Data associated with the session are communicated to an analytics server, as indicated at block510. At block512, incomplete transactions are identified. Incomplete transactions can be identified at any time and in association with any collected data. For instance, such an identification may occur upon receiving the data, upon a lapse of a predetermined time period, etc. for data collected within a predetermined time period. Session identifiers associated with the incomplete transactions are referenced and used to search if associated with an application error, as indicated at block514. In some cases, a specific error may be searched for, while in other cases, any errors occurring in connection with the application may be identified. At block516, an amount of money lost per application error is identified and summed together to calculate a total amount of money lost in connection with an application error(s). 
Turning now toFIG.6, a flow diagram is provided that illustrates a method600for facilitating tracking incomplete purchases in correlation with application errors in accordance with an embodiment of the present invention. Initially, at block602, a monitor running in connection with an application on a mobile device is initiated. A monitor may be automatically initiated upon initialization of the application. At block604, a session identifier is generated. Subsequently, at block606, an event associated with a purchase is detected and logged in an event log in association with the session identifier. At block608, an error occurs in association with the application on the mobile device. Data associated with the session are communicated to an analytics server, as indicated at block610. At block612, application errors are identified. In some cases, a specific error may be searched for, while in other cases, any errors occurring in connection with the application may be identified. Application errors can be identified at any time and in association with any collected data. For instance, such an identification may occur upon receiving the data, upon a lapse of a predetermined time period, etc. for data collected within a predetermined time period. Session identifiers associated with the application errors are referenced and used to search if associated with an incomplete transaction, as indicated at block614. At block616, a quantity of incomplete transactions per application error is identified. The quantity of incomplete transactions on a per error basis can be provided to the developer, for example, as illustrated inFIG.3. Having described an overview of embodiments of the present invention, an exemplary operating environment in which embodiments of the present invention may be implemented is described below in order to provide a general context for various aspects of the present invention. Referring toFIG.7in particular, an exemplary operating environment for implementing embodiments of the present invention is shown and designated generally as computing device700. The computing device700is but one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of embodiments of the invention. Neither should the computing device700be interpreted as having any dependency or requirement relating to any one component nor any combination of components illustrated. Embodiments of the invention may be described in the general context of computer code or machine-useable instructions, including computer-useable or computer-executable instructions such as program modules, being executed by a computer or other machine, such as a personal data assistant or other handheld device. Generally, program modules include routines, programs, objects, components, data structures, and the like, and/or refer to code that performs particular tasks or implements particular abstract data types. Embodiments of the invention may be practiced in a variety of system configurations, including, but not limited to, hand-held devices, consumer electronics, general-purpose computers, more specialty computing devices, and the like. Embodiments of the invention may also be practiced in distributed computing environments where tasks are performed by remote-processing devices that are linked through a communications network. 
With continued reference toFIG.7, the computing device700includes a bus710that directly or indirectly couples the following devices: a memory712, one or more processors714, one or more presentation components716, one or more input/output (I/O) ports718, one or more I/O components720, and an illustrative power supply722. The bus710represents what may be one or more busses (such as an address bus, data bus, or combination thereof). Although the various blocks ofFIG.7are shown with lines for the sake of clarity, in reality, these blocks represent logical, not necessarily actual, components. For example, one may consider a presentation component such as a display device to be an I/O component. Also, processors have memory. The inventors hereof recognize that such is the nature of the art, and reiterate that the diagram ofFIG.7is merely illustrative of an exemplary computing device that can be used in connection with one or more embodiments of the present invention. Distinction is not made between such categories as “workstation,” “server,” “laptop,” “hand-held device,” etc., as all are contemplated within the scope ofFIG.7and reference to “computing device.” Computing device700typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by computing device700and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device700. Computer storage media excludes signals per se. Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media. Memory712includes computer storage media in the form of volatile and/or nonvolatile memory. The memory may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid-state memory, hard drives, optical-disc drives, etc. Computing device700includes one or more processors714that read data from various entities such as memory712or I/O components720. Presentation component(s)716present data indications to a user or other device. Exemplary presentation components include a display device, speaker, printing component, vibrating component, etc. 
I/O ports 718 allow computing device 700 to be logically coupled to other devices including I/O components 720, some of which may be built in. Illustrative I/O components 720 include a microphone, joystick, game pad, scanner, hard/soft button, touch screen display, etc. As described above, functionality described in association with an analytics server can be performed by an event-based system, such as the SPLUNK® ENTERPRISE system produced by Splunk Inc. of San Francisco, California. FIG. 8 presents a block diagram of an exemplary event-processing system 800, similar to the SPLUNK® ENTERPRISE system. System 800 includes one or more forwarders 801 that collect data obtained from a variety of different data sources 805, and one or more indexers 802 that store, process, and/or perform operations on this data, wherein each indexer operates on data contained in a specific data store 803. These forwarders and indexers can comprise separate computer systems in a data center, or may alternatively comprise separate processes executing on various computer systems in a data center. During operation, the forwarders 801 identify which indexers 802 will receive the collected data and then forward the data to the identified indexers. Forwarders 801 can also perform operations to strip out extraneous data and detect timestamps in the data. The forwarders next determine which indexers 802 will receive each data item and then forward the data items to the determined indexers 802. Note that distributing data across different indexers facilitates parallel processing. This parallel processing can take place at data ingestion time, because multiple indexers can process the incoming data in parallel. The parallel processing can also take place at search time, because multiple indexers can search through the data in parallel. System 800 is further described in "Exploring Splunk Search Processing Language (SPL) Primer and Cookbook" by David Carasso, CITO Research, 2012, and in "Optimizing Data Analysis With a Semi-Structured Time Series Database" by Ledion Bitincka, Archana Ganapathi, Stephen Sorkin, and Steve Zhang, SLAML, 2010, each of which is hereby incorporated herein by reference in its entirety for all purposes. Although system 800 is described as one implementation for performing analytics functionality, any type of system can be implemented and embodiments are not limited herein. Embodiments of the present invention have been described in relation to particular embodiments which are intended in all respects to be illustrative rather than restrictive. Alternative embodiments will become apparent to those of ordinary skill in the art to which the present invention pertains without departing from its scope. From the foregoing, it will be seen that this invention is one well adapted to attain all the ends and objects hereinabove set forth together with other advantages which are obvious and which are inherent to the structure. It will be understood that certain features and sub-combinations are of utility and may be employed without reference to other features or sub-combinations. This is contemplated by and is within the scope of the claims.
11860718
DETAILED DESCRIPTION In order to make the objects, the technical solutions and the advantages of the embodiments of the present application clearer, the technical solutions according to the embodiments of the present application will be clearly and completely described below with reference to the drawings according to the embodiments of the present application. Apparently, the described embodiments are merely certain embodiments of the present application, rather than all of the embodiments. All of the other embodiments that a person skilled in the art obtains on the basis of the embodiments of the present application without paying creative work fall within the protection scope of the present application. In the related art, the BMC interacts with the PECI bus directly connected to the CPU, and acquires the data of the CPU registers by using the single-instruction mode. Such a mode prevents the relying on the stability of the ME. However, because for different CPU types, different reading modes are employed, the data of MSR or CSR registers that are acquired by simply relying on the single instruction are limited. Furthermore, crashing events with different causes result in the problem that the data cannot be read by using the single instruction. In view of the above technical problems, the present embodiment provides a register reading method. After a server is crashed, a CPU-register collecting request is triggered. Different types of CPUs correspond to different types and quantities of registers that require data collection. Therefore, by firstly determining the register required to be read corresponding to the CPU type, and determining the reading mode of the register, the disadvantage that the reading mode that can merely use a single instruction cannot satisfy the demand on field crashing analysis is prevented. Subsequently, by using a PECI bus, the register data of a plurality of registers are read. By collecting the registers of the CPU directly by using the PECI bus, the problem that the performance excessively relies on the stability of the ME due to the intermediate transfer via the ME is prevented, which greatly increases the reading success rate of the registers. In some embodiments, referring toFIG.1,FIG.1is a flow chart of a register reading method according to an embodiment of the present application. The method includes: S101: after a server is crashed, by a BMC, receiving a CPU-register collecting request. The present application mainly aims at the reading of the data in the registers after the server is crashed. After the server is crashed, a CPU-register collecting request for a BMC to collect the register data of a CPU is triggered. The CPU-register collecting request includes the CPU type and the request information. S102: according to the CPU-register collecting request, determining a plurality of registers corresponding to a CPU type. Different types of CPUs correspond to different types and quantities of registers that require data collection, and include cores of different quantities. Therefore, it is required to determine, according to the type of the used CPU detected by the BIOS (Basic Input Output System), which CSR/MSR registers are required to be read. Therefore, firstly, the registers required to be read corresponding to the CPU type are determined. S103: according to a predetermined condition, determining a register-reading mode. The predetermined condition is not limited in the present embodiment, and may be selected according to demands. 
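As a rough, non-authoritative sketch of the overall flow of steps S101 to S104, the skeleton below uses invented placeholder names; real BMC firmware would differ substantially, and the determination of the reading mode in S103 is elaborated in the text that follows.

```python
# Non-authoritative skeleton of S101-S104; every name below is an invented
# placeholder, not actual BMC firmware.
def registers_for_cpu_type(cpu_type):
    # S102: which CSR/MSR registers to read depends on the CPU type detected
    # by the BIOS (model, core count, hyper-threading, ...). Placeholder table:
    return {"example-cpu": ["MSR_0x1A2", "CSR_0x04"]}.get(cpu_type, [])


def determine_reading_mode(request):
    # S103: placeholder; the possible selection strategies are discussed below.
    return request.get("configured_mode", "single-instruction")


def read_registers_via_peci(registers, mode):
    # S104: stand-in for direct PECI access (no Management Engine in the path).
    return {reg: None for reg in registers}


def handle_register_collect_request(request):
    cpu_type = request["cpu_type"]               # S101: request arrives after a crash
    registers = registers_for_cpu_type(cpu_type)
    mode = determine_reading_mode(request)
    return read_registers_via_peci(registers, mode)
```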
In some embodiments of the present application, the operation of determining the register-reading mode according to the predetermined condition includes:
according to the CPU type, determining the register-reading mode; or
according to the reading success rates of the different reading modes, determining the register-reading mode; or
according to mapping relations between the crashing scenes, the reading modes and the reading success rates, determining the register-reading mode.
The register-reading mode by which the PECI reads the CPU registers may be the register-reading mode determined according to the CPU type; in other words, the reading is performed according to a predetermined reading mode. The predetermined reading mode includes a single-instruction mode, a sequence-instruction-sequence mode and a crash mode, which may be configured by the user or according to the different CPU types when leaving the factory. The configuring may be performed by using an external IPMI command, and the configuration is written into an EEPROM (Electrically Erasable Programmable Read-Only Memory) chip on the mainboard, and does not vary with the updating of the firmware of the BMC. The register-reading mode may also be determined according to the reading success rates of the different reading modes. According to the success rates of the register reading of the different reading modes and the field diagnosis situation, any one of the single-instruction reading mode, the sequence reading mode and the crash reading mode is employed, which may be configured by the user by using the IPMI command. Selecting the mode based on such statistical experience may greatly reduce reading failures. Certainly, the register-reading mode is also configurable to be automatically selected, and when the register-reading mode is configured to be automatically selected, the BMC counts up relations of crashing scenes with the register-reading modes and reading success rates of the crashing scenes, and selects, from the three register-reading modes, the register-reading mode that has the highest success rate for the current crashing scene and the current register for the reading. In some embodiments, such a mode is a mode of automatic learning by the BMC. Field servers have various types of failures or crashes, and the failures or crashes happen multiple times. When a crash happens, the BMC reads by using all of the three modes, and subsequently selects the registers that were read by using the mode having the highest reading success rate for the analysis, to locate the failure cause. At this point, there is a correspondence relation between the failure cause, the reading mode and the reading success rate. After failures or crashes have happened multiple times, there are many data on such correspondence relations. At this point, the BMC counts up, with respect to the server where it is located, which of the reading modes has the highest success rate, and when a failure happens again, it uses that reading mode. When, at a certain time, failure location based on the registers read by using that reading mode fails, the BMC re-learns, to select an optimum reading mode from the three types (the single-instruction reading mode, the sequence reading mode and the crash reading mode). In this operation, by determining the register-reading mode according to the predetermined condition, the disadvantage that a reading mode using merely a single instruction cannot satisfy the demand of field crash analysis is avoided.
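The automatic selection just described, in which the BMC tracks the per-scene success rate of each of the three modes and chooses the best one for the next crash of the same scene, can be sketched as follows (an illustrative sketch with invented data structures, not BMC code):

```python
# Illustrative sketch of success-rate tracking per crashing scene; the data
# layout is invented and is not the BMC's actual bookkeeping.
from collections import defaultdict

MODES = ("single-instruction", "sequence", "crash")


class ModeSelector:
    def __init__(self):
        # stats[scene][mode] = [successes, attempts]
        self.stats = defaultdict(lambda: {m: [0, 0] for m in MODES})

    def record(self, scene, mode, success):
        counters = self.stats[scene][mode]
        counters[0] += int(success)
        counters[1] += 1

    def best_mode(self, scene):
        rates = {m: (s[0] / s[1] if s[1] else 0.0)
                 for m, s in self.stats[scene].items()}
        return max(rates, key=rates.get)
```

When failure location based on the chosen mode later fails, the recorded statistics can simply be re-consulted or reset so that the selection re-learns, as described above.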
S104: according to the register-reading mode, by using a PECI bus, reading register data of the plurality of registers. By reading the register data by using the PECI bus according to the determined register-reading mode, the problem that the performance excessively relies on the stability of the ME due to the intermediate transfer via the ME is prevented. On the basis of the above technical solution, in the present embodiment, after a server is crashed, a CPU-register collecting request is triggered. Different types of CPUs correspond to different types and quantities of registers that require data collection. Therefore, by firstly determining the registers required to be read corresponding to the CPU type, and determining the reading mode of the registers, the disadvantage that a reading mode that can merely use a single instruction cannot satisfy the demand of field crash analysis is prevented. Subsequently, by using a PECI bus, the register data of a plurality of registers are read. By collecting the registers of the CPU directly by using the PECI bus, the problem that the performance excessively relies on the stability of the ME due to the intermediate transfer via the ME is prevented, which greatly increases the reading success rate of the registers. In some embodiments of the present application, the register reading method further includes:
setting a preset duration of a timer; and
when the preset duration is reached, stopping reading the register data.
In the present embodiment, the preset duration is set in the timer mainly for the following reasons. Firstly, after the server is crashed, the BIOS maintains the crashed state of the server by using a certain mechanism. However, there is a time limit, and when the time limit is exceeded, the server automatically restarts. After the restarting, the data of the CPU registers are automatically reset, and the register data that the BMC subsequently collects are useless for the analysis. Secondly, when the BMC is collecting the register data and the data cannot be collected, there is a retry mechanism, which also requires a time limit, and the retry cannot be performed infinitely. It can be seen that, in the present embodiment, by using the preset duration to prevent the reading of invalid register data, the efficiency of the reading is increased. Further, by providing the to-be-read queue and the failing-polling region, and polling the to-be-read registers, it is ensured to the greatest extent that each of the to-be-read registers may be read one time, and the to-be-read registers in the failing-polling region are read again, which increases the success rate of the reading. Referring to FIG. 6, FIG. 6 is a schematic flow chart of the reading of a registering region according to an embodiment of the present application. The flow includes: S1041: placing the plurality of registers into a to-be-read queue. Based on the model and the core quantity of the CPU, whether hyper-threading is turned on, and the types and the quantities of the supported registers in the configuration of the server detected by the BIOS, it is determined which CSR/MSR registers are required to be read, and those registers are placed into the to-be-read queue. The quantity of the registers is not limited in the present embodiment, and may be determined according to practical situations by the user.
S1042: reading one of the registers from the to-be-read queue, and according to the register-reading mode, by using the PECI bus, reading to-be-read register data of the register. One of the to-be-read registers is read from the to-be-read queue in sequence, and the to-be-read register data are read in the determined register-reading mode. S1043: when the reading fails, placing the register into a queue tail of a failing-polling region. The failing-polling region includes all of the to-be-read registers whose reading fails, which are arranged in the sequence of the times of the reading failures. They are placed into the queue tail after the reading fails. In this operation, the main reason why the to-be-read registers whose reading fails are placed into the failing-polling region is the setting of the preset duration. In case that one to-be-read register is always read without limitation, within the duration the other to-be-read registers, which may be read, do not have the chance of being read. S1044: when the reading succeeds, in response to the to-be-read queue being not empty, reading a next register in the to-be-read queue, or, in response to the to-be-read queue being empty, reading the register in the failing-polling region, and when the reading of the register in the failing-polling region fails, placing the register whose reading fails in the failing-polling region into the queue tail of the failing-polling region. When the reading succeeds, the next to-be-read register in the to-be-read queue is read, till the to-be-read queue is empty. At this point, the failing to-be-read registers in the failing-polling region are read. At this point, they are also read in sequence, and when the reading of the register in the failing-polling region fails, the register whose reading fails in the failing-polling region is placed into the queue tail of the failing-polling region. In this operation, merely when the to-be-read registers are empty, the failing to-be-read registers in the failing-polling region are read, which is mainly in order to ensure that all of the to-be-read registers are read one time, and subsequently the failing to-be-read registers are emphatically read. S1045: when the predetermined condition is reached, stopping the operation of the reading of the registers. In this operation, after the predetermined condition is reached, the reading of the invalid data is prevented, which increases the accuracy of the reading. The predetermined condition is not limited in the present embodiment, and may be set according to practical demands by the user, as long as the purpose of the present embodiment may be realized. In some embodiments, when the reading duration reaches the preset duration, the reading is stopped, or simultaneously when all of the registers are read completely, the reading succeeds. In some embodiments, when the maximum value of the time quantity of the register reading exceeds a preset time quantity, the reading is stopped. On the basis of the above technical solution, in the present embodiment, by providing the to-be-read queue and the failing-polling region, and polling the to-be-read registers, it is ensured to the greatest extent that each of the to-be-read registers may be read one time, and the to-be-read registering region of the failing-polling region is read again, which increases the success rate of the reading. 
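The two-stage polling of S1041 to S1045 can be sketched as below, with a placeholder read_one() standing in for an actual PECI read and a monotonic-clock deadline standing in for the preset-duration timer described above; the structure, one pass over the to-be-read queue followed by retries over the failing-polling region until the budget runs out, mirrors the description above.

```python
# Illustrative sketch of the two-stage polling; read_one() is a placeholder for
# an actual PECI read, and budget_seconds stands in for the timer's preset
# duration (the concrete value is an assumption for this sketch).
import random
import time
from collections import deque


def read_one(register, mode):
    return random.random() < 0.7  # placeholder: simulated read with ~70% success


def poll_registers(registers, mode, budget_seconds=180):
    deadline = time.monotonic() + budget_seconds
    to_read = deque(registers)
    failing = deque()
    results = {}
    # First pass: every register in the to-be-read queue gets one attempt.
    while to_read and time.monotonic() < deadline:
        reg = to_read.popleft()
        if read_one(reg, mode):
            results[reg] = "ok"
        else:
            failing.append(reg)          # queue tail of the failing-polling region
    # Second pass: retry the failures until the timer budget is exhausted.
    while failing and time.monotonic() < deadline:
        reg = failing.popleft()
        if read_one(reg, mode):
            results[reg] = "ok"
        else:
            failing.append(reg)
    return results, list(failing)
```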
In some embodiments of the present application, in order to prevent the reading of invalid data and to increase the accuracy of the reading, the register reading method further includes:
setting a preset duration of a timer; and
when any one of the conditions that a reading duration reaches the preset duration, that all of the registers are completely read and that a time quantity of the reading of the register in the failing-polling region reaches a corresponding preset time quantity is satisfied, stopping reading the register data.
In some embodiments, when any one of the sub-condition A that the reading duration reaches the preset duration, the sub-condition B that all of the registers are completely read and the sub-condition C that the time quantity of the reading of the register in the failing-polling region reaches a corresponding preset time quantity is satisfied, the reading of the register data is stopped. That all of the registers are completely read means that all of the registers are successfully read. The preset time quantity of the registers in the failing-polling region may be set according to practical demands, which may be that each of the registers corresponds to one preset time quantity, or that the target registers correspond to a preset time quantity. On the basis of the above technical solution, in the present embodiment, when any one of the conditions that a reading duration reaches the preset duration, that all of the registers are completely read and that a time quantity of the reading of the register in the failing-polling region reaches a corresponding preset time quantity is satisfied, the reading of the register data is stopped, which may prevent the reading of invalid data and increases the accuracy of the reading. In some embodiments of the present application, the register reading method further includes:
according to importances of the registers in fault diagnosis, determining reading priorities of the plurality of registers, wherein the importances refer to importances of action scopes in a fault-diagnosis logic, application frequencies and application diagnosis logic branches in the whole fault-diagnosis logic of the registers; and
according to the priorities of the registers, determining the time quantity of the reading of the register in the failing-polling region, wherein a reading time quantity of a register of a higher priority is not lower than a reading time quantity of a register of a lower priority.
In the present embodiment, according to importances of the registers in fault diagnosis, the reading priorities of the registers are determined. Classification and priority division are performed on the CSR and MSR registers, and the priorities of the CSR and MSR registers are determined according to their importances of action scopes in a fault-diagnosis logic, application frequencies and application diagnosis logic branches in the whole fault-diagnosis logic. In some embodiments, the priorities of the registers are classified into high-priority reading, medium-priority reading and low-priority reading.
The definition of the priorities is limited to the second stage of the reading of the registers by the PECI (the register reading in the failing-polling region), i.e., the stage of retrying the reading of the registers whose first reading fails, and it is prescribed that the reading with the high priority may be retried a first time quantity, the reading with the medium priority may be retried a second time quantity, and the reading with the low priority may be retried a third time quantity, where the first time quantity≥the second time quantity≥the third time quantity, and the third time quantity>0. By setting different weights for the retry of the reading of the registers of the different priorities, it is ensured that the reading success rate of the registers of the high priority is increased, thereby providing high-quality register analysis data for the whole fault-diagnosis logic. In some embodiments of the present application, the register reading method further includes:
determining whether a previous CPU-register collecting request is completed; and
when the previous CPU-register collecting request is completed, executing the operation of, according to the register-reading mode, by using the PECI bus, reading the register data of the plurality of registers.
In the present embodiment, after the previous CPU-register collecting request is completed, the step S104 is executed. When it is not completed, the previous CPU-register collecting request is firstly completed, and after it is completed, the step S104 is subsequently executed. This ensures that the previous CPU-register collecting request completely collects its data. In some embodiments of the present application, the method further includes:
counting up a quantity of successful reading and a quantity of failing reading of the plurality of registers, and recording a configuration situation of the server.
In the present embodiment, the counted-up quantity of the successful reading is the final quantity of the successful reading, and the current configuration situation of the server is recorded, to facilitate viewing by the technician. In some embodiments of the present application, the method further includes:
by using the level information of a CPU pin, determining whether the server is crashed.
After the server is crashed, it notifies the BMC via a special pin of the CPU. When the BMC detects that the level of the pin changes, that indicates that a crashing event has happened, and it is required to collect the corresponding CPU registers and perform fault diagnosis and location with a certain rule. On the basis of any one of the above embodiments, the present embodiment provides the process of a particular method for increasing the reading success rate of the CPU registers of the server by using the PECI. In some embodiments, referring to FIG. 2, FIG. 2 is a schematic flow chart of a particular register reading according to an embodiment of the present application.
The flow includes:
S1: after the server is crashed, triggering the BMC to collect the register data of the CPU.
S2: determining whether the previous register-data collecting task is completed; and when it is completed, then turning to the step S3, and when it is not completed, then waiting for 2 s, and subsequently repeating the step S2.
S3: setting the timer; and setting the preset duration of the timer to be 180 s (that duration is the total duration of the whole process of the collection of the CPU registers by the BMC; when, in the collection of the CPU registers, it is detected that the time is reached, then the collection by the PECI is directly exited; and, in addition, the value of that duration may be flexibly set according to demands by commands according to the configurations and the application scenes of the field server).
S4: according to the current configuration of the server, placing the CSR and MSR registers of the to-be-read CPU into the to-be-read queue.
S5: taking out one of the to-be-read registers from the CSR/MSR to-be-read queue, and subsequently reading the data of the corresponding registers by using the predetermined reading mode. The predetermined reading mode includes the single-instruction reading mode, the sequence reading mode and the crash reading mode, which may be configured by commands according to the field environment and scene, and may also be configured as auto. When it is configured as auto, the BMC counts up the relations of the crashing scenes with the reading modes and the success rates of the crashing scenes, and selects from the three reading modes one mode that has the highest success rate for the reading.
S6: determining whether the reading succeeds; and when the reading fails, then placing the corresponding register into the failing-polling region, and when the reading succeeds, then turning to the step S7.
S7: determining whether the time of the timer is reached; and when it is reached, then terminating the reading process of the PECI, and when it is not reached, then turning to the step S8.
S8: determining whether the current CSR/MSR reading queue is empty; when it is empty, then turning to the step S9, and when it is not empty, then taking out one of the registers therefrom, and repeating the steps S5-S8.
S9: determining whether the failing-polling region is empty; and when it is empty, then terminating the collecting process of the CPU registers.
S10: when it is not empty, then taking out one of the CSR/MSR registers from the failing-polling region, and reading.
S11: when the reading fails, then placing it to the queue tail of the failing-polling region.
S12: determining whether the time of the timer is reached; and when the time is reached, then exiting the process of the register reading by the PECI, and when the time is not reached, then repeating the steps S9-S11.
S13: counting up the quantity of the registers that are successfully read currently, the quantity of the registers whose reading fails, and the current configuration situation of the server, and recording into the log.
It can be seen that the present embodiment is not limited to one PECI reading mode, wherein the present embodiment may include, according to the setting, selecting to read the registers of the CPU by using the single-instruction mode, the sequence mode or the crash mode, and may also include, according to the reading success rates of the modes of the BMC in the previous stage, adaptively learning the PECI reading mode that is most suitable for the failure.
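The priority-weighted retry budget described earlier, with high >= medium >= low > 0 retries in the failing-polling stage, might be captured as simply as the following sketch; the specific cap values are invented for illustration:

```python
# Illustrative sketch of per-priority retry caps; the numeric values are
# invented examples satisfying first >= second >= third > 0.
RETRY_CAP = {"high": 5, "medium": 3, "low": 1}


def retry_budget(priority):
    return RETRY_CAP.get(priority, 1)


def should_retry(priority, attempts_so_far):
    # A register keeps its place in the failing-polling region only while its
    # priority-specific retry budget has not been used up.
    return attempts_so_far < retry_budget(priority)
```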
The present embodiment further provides a timer mechanism, in which the numerical value of the timer may be set according to the actual application scene of the user, and the timer is used to control the overall execution duration of the whole process. Moreover, by firstly polling the CSR/MSR registers one time each, and subsequently polling according to the duration of the timer, it is ensured that each of the registers is read at least one time.
A register reading apparatus according to the embodiments of the present application will be described below, and the apparatus described below and the method described above may correspondingly refer to each other. Referring to FIG. 3, FIG. 3 is a schematic structural diagram of a register reading apparatus according to an embodiment of the present application. The apparatus includes:
a request receiving module 301 configured for, after a server is crashed, by a BMC, receiving a CPU-register collecting request;
a register determining module 302 configured for, according to the CPU-register collecting request, determining a plurality of registers corresponding to a CPU type;
a register-reading-mode determining module 303 configured for, according to a predetermined condition, determining a register-reading mode; and
a reading module 304 configured for, according to the register-reading mode, by using a PECI bus, reading register data of the plurality of registers.
In some embodiments of the present application, the register-reading-mode determining module 303 includes:
a to-be-read-queue placing-into unit configured for placing the plurality of registers into a to-be-read queue;
a first reading unit configured for reading one of the registers from the to-be-read queue, and, according to the register-reading mode, by using the PECI bus, reading to-be-read register data of the register;
a failing-polling-region placing-into unit configured for, when the reading fails, placing the register into a queue tail of a failing-polling region;
a second reading unit configured for, when the reading succeeds, in response to the to-be-read queue being not empty, reading a next register in the to-be-read queue, or, in response to the to-be-read queue being empty, reading the register in the failing-polling region, and, when the reading of the register in the failing-polling region fails, placing the register whose reading fails into the queue tail of the failing-polling region; and
a stopping unit configured for, when the predetermined condition is reached, stopping the operation of the reading of the registers.
In some embodiments of the present application, the apparatus further includes:
a duration setting module configured for setting a preset duration of a timer; and
correspondingly, the stopping unit is configured for, when any one of the conditions that a reading duration reaches the preset duration, that all of the registers are completely read, and that a time quantity of the reading of the register in the failing-polling region reaches a corresponding preset time quantity is satisfied, stopping reading the register data.
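The stopping conditions and the priority-weighted retry quantities described above can be combined as in the minimal sketch below. The concrete retry values and the function names are assumptions for illustration, not values prescribed by the embodiments.

```python
import time

# Hypothetical example values; only the ordering first >= second >= third > 0 is required.
RETRY_QUANTITY = {"high": 5, "medium": 3, "low": 1}

def retry_limit_for(register_priority):
    """Map a register's fault-diagnosis priority to its retry quantity in the
    failing-polling region."""
    return RETRY_QUANTITY[register_priority]

def should_stop(start_time, preset_duration, pending, retries_done, retry_limits):
    """Stop when any one of the three conditions is satisfied:
    1) the reading duration reaches the preset duration of the timer,
    2) all registers are completely read (nothing pending),
    3) every pending register has reached its priority-specific retry quantity."""
    if time.monotonic() - start_time >= preset_duration:
        return True
    if not pending:
        return True
    return all(retries_done[reg] >= retry_limits[reg] for reg in pending)
```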
In some embodiments of the present application, the apparatus further includes:a priority determining module configured for, according to importances of the registers in fault diagnosis, determining reading priorities of the plurality of registers, wherein the importances refer to importances of action scopes in a fault-diagnosis logic, application frequencies and application diagnosis logic branches in the whole fault-diagnosis logic of the registers; anda priority-time-quantity determining module configured for, according to the priorities of the registers, determining the time quantity of the reading of the register in the failing-polling region, wherein a reading time quantity of a register of a higher priority is not lower than a reading time quantity of a register of a lower priority. In some embodiments of the present application, the register-reading-mode determining module303further includes:a determining unit configured for determining whether a previous CPU-register collecting request is completed; andan executing unit configured for, when the previous CPU-register collecting request is completed, executing the operation of, according to the register-reading mode, by using the PECI bus, reading the register data of the plurality of registers. In some embodiments of the present application, the apparatus further includes:a counting-up and recording module configured for counting up a quantity of successful reading and a quantity of failing reading of the plurality of registers, and recording a configuration situation of the server. In some embodiments of the present application, the register-reading-mode determining module303is configured for:according to the CPU type, determining the register-reading mode; oraccording to the reading success rates of the different reading modes, determining the register-reading mode; oraccording to mapping relations between the crashing scenes and the reading modes and the reading success rates, determining the register-reading mode. Because the embodiments of the apparatus and the embodiments of the method correspond to each other, the embodiments of the apparatus may refer to the description on the embodiments of the method, and are not discussed further herein. An electronic device according to the embodiments of the present application will be described below, and the electronic device described below and the method described above may correspondingly refer to each other. Referring toFIG.4,FIG.4is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device includes:a memory100configured for storing a computer program; anda processor200configured for, when executing the computer program, implementing the operations of the method stated above. The memory100includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer-readable instruction. The internal memory provides the environment for the running of the operating system and the computer-readable instruction in the non-volatile storage medium. 
The processor200provides the capacities of calculating and controlling to the electronic device, and, when executing the computer program stored in the memory100, may implement the following operations:after the server is crashed, by a BMC, receiving a CPU-register collecting request;according to the CPU-register collecting request, determining a plurality of registers corresponding to a CPU type; according to a predetermined condition, determining a register-reading mode; andaccording to the register-reading mode, by using a PECI bus, reading register data of the plurality of registers. It can be seen that, in the present application, after the server is crashed, a CPU-register collecting request is triggered. Different types of CPUs correspond to different types and quantities of registers that require data collection. Therefore, by firstly determining the register required to be read corresponding to the CPU type, and determining the reading mode of the register, the disadvantage that the reading mode that can merely use a single instruction cannot satisfy the demand on field crashing analysis is prevented. Subsequently, by using a PECI bus, the register data of a plurality of registers are read. By collecting the registers of the CPU directly by using the PECI bus, the problem that the performance excessively relies on the stability of the ME due to the intermediate transfer via the ME is prevented, which greatly increases the reading success rate of the registers. In some embodiments, the processor200, when executing a computer subprogram stored in the memory100, may implement the following operations:placing the plurality of registers into a to-be-read queue;reading one of the registers from the to-be-read queue, and according to the register-reading mode, by using the PECI bus, reading to-be-read register data of the register;when the reading fails, placing the register into a queue tail of a failing-polling region;when the reading succeeds, in response to the to-be-read queue being not empty, reading a next register in the to-be-read queue, or, in response to the to-be-read queue being empty, reading the register in the failing-polling region, and when the reading of the register in the failing-polling region fails, placing the register whose reading fails in the failing-polling region into the queue tail of the failing-polling region; andwhen the predetermined condition is reached, stopping the operation of the reading of the registers. In some embodiments, the processor200, when executing a computer subprogram stored in the memory100, may implement the following operations:setting a preset duration of a timer; andwhen any one of conditions that a reading duration reaches the preset duration, that all of the registers are completely read and that a time quantity of the reading of the register in the failing-polling region reaches a corresponding preset time quantity is satisfied, stopping reading the register data. 
In some embodiments, the processor200, when executing a computer subprogram stored in the memory100, may implement the following operations:according to importances of the registers in fault diagnosis, determining reading priorities of the plurality of registers, wherein the importances refer to importances of action scopes in a fault-diagnosis logic, application frequencies and application diagnosis logic branches in the whole fault-diagnosis logic of the registers; andaccording to the priorities of the registers, determining the time quantity of the reading of the register in the failing-polling region, wherein a reading time quantity of a register of a higher priority is not lower than a reading time quantity of a register of a lower priority. In some embodiments, the processor200, when executing a computer subprogram stored in the memory100, may implement the following operations:determining whether a previous CPU-register collecting request is completed; andwhen the previous CPU-register collecting request is completed, executing the operation of, according to the register-reading mode, by using the PECI bus, reading the register data of the plurality of registers. In some embodiments, the processor200, when executing a computer subprogram stored in the memory100, may implement the following operations:counting up a quantity of successful reading and a quantity of failing reading of the plurality of registers, and recording a configuration situation of the server. In some embodiments, the processor200, when executing a computer subprogram stored in the memory100, may implement the following operations:according to the CPU type, determining the register-reading mode; oraccording to the reading success rates of the different reading modes, determining the register-reading mode; oraccording to mapping relations between the crashing scenes and the reading modes and the reading success rates, determining the register-reading mode. On the basis of the above embodiments, as an embodiment, referring toFIG.5,FIG.5is a structural diagram of another electronic device according to an embodiment of the present application. The electronic device further includes:an input interface300, connected to the processor200, configured for acquiring computer programs, parameters and instructions imported externally, and, under the controlling by the processor200, saving into the memory100. The input interface300may be connected to an inputting device, and receive parameters or instructions inputted manually by the user. The inputting device may be a touch layer covering the display screen, may also be a key, a trackball or a touch-controlling board provided at the housing of a terminal, and may also be a keyboard, a touch-controlling board, a mouse and so on. A displaying unit400, connected to the processor200, configured for displaying the data sent by the processor200. The displaying unit400may be a display screen, a liquid-crystal display screen or an electronic-ink display screen in a PC (Personal Computer). A network port500, connected to the processor200, configured for making communicative connection with external terminal devices. 
The communication technique employed by the communicative connection may be a wired-communication technique or a wireless-communication technique, such as Mobile High-Definition Link (MHL), Universal Serial Bus (USB), High-Definition Multimedia Interface (HDMI), Wireless Fidelity (WiFi), Bluetooth communication, low-power-consumption Bluetooth communication, and communication based on IEEE802.11s. Because the embodiments of the electronic device and the embodiments of the method correspond to each other, the embodiments of the electronic device may refer to the description on the embodiments of the method, and are not discussed further herein. A non-transitory computer-readable storage medium according to the embodiments of the present application will be described below, and the non-transitory computer-readable storage medium described below and the method described above may correspondingly refer to each other. The present application discloses a non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, implements the operations of the register reading method stated above. Because the embodiments of the non-transitory computer-readable storage medium and the embodiments of the method correspond to each other, the embodiments of the non-transitory computer-readable storage medium may refer to the description on the embodiments of the method, and are not discussed further herein. The embodiments of the description are described in the mode of progression, each of the embodiments emphatically describes the differences from the other embodiments, and the same or similar parts of the embodiments may refer to each other. Regarding the devices according to the embodiments, because they correspond to the methods according to the embodiments, they are described simply, and the relevant parts may refer to the description on the methods. A person skilled in the art may further understand that the units and the algorithm steps of the examples described with reference to the embodiments disclosed herein may be implemented by using electronic hardware, computer software or a combination thereof. In order to clearly explain the interchangeability between the hardware and the software, the above description has described generally the configurations and the steps of the examples according to the functions. Whether those functions are executed by hardware or software depends on the particular applications and the design constraints of the technical solutions. A person skilled in the art may employ different methods to implement the described functions with respect to each of the particular applications, but the implementations should not be considered as extending beyond the scope of the present application. The steps of the method or algorithm described with reference to the embodiments disclosed herein may be implemented directly by using hardware, a software module executed by a processor or a combination thereof. The software module may be embedded in a Random Access Memory (RAM), an internal memory, a read-only memory (ROM), an electrically programmable ROM, an electrically erasable programmable ROM, a register, a hard disk, a removable disk, a CD-ROM, or a storage medium in any other form well known in the art. The register reading method and apparatus, the device and the medium according to the present application have been described in detail above. 
The principle and the embodiments of the present application are described herein with reference to particular examples, and the description of the above embodiments is merely intended to facilitate comprehension of the method according to the present application and its core concept. It should be noted that a person skilled in the art may make improvements and modifications to the present application without departing from the principle of the present application, and all of such improvements and modifications fall within the protection scope of the claims of the present application.
11860719
DESCRIPTION OF EMBODIMENTS
The following describes technical solutions in embodiments of the present disclosure with reference to the accompanying drawings. First, a storage system applicable to the embodiments of the present disclosure is described. As shown in FIG. 2, the storage system in this embodiment of the present disclosure may be a storage array (for example, an Oceanstor® 18000 series of Huawei®, an Oceanstor® Dorado® series of Huawei®, or the like). The storage system includes a front-end interface card, a storage controller A, a storage controller B, and a hard disk enclosure. The front-end interface card is separately connected to an interface CA1 of the storage controller A and an interface CB1 of the storage controller B, and the connection may be implemented through a Peripheral Component Interconnect Express (PCIe) bus. In this case, the interface CA1 of the storage controller A, the interface CB1 of the storage controller B, and the interfaces of the front-end interface card that are connected to the storage controller A and the storage controller B are PCIe interfaces. The storage controller A and the storage controller B are separately connected to the hard disk enclosure. The hard disk enclosure includes a plurality of hard disks, and the hard disks include solid-state disks (SSDs) and/or mechanical hard disks. Each of the storage controller A and the storage controller B may be connected to the hard disk enclosure through the PCIe bus, Fibre Channel (FC), Ethernet, InfiniBand (IB), or a Serial Attached SCSI (SAS) protocol. The Ethernet connection may be remote direct memory access (RDMA) over Converged Ethernet (RoCE). Based on the foregoing connection manner, the storage controller A and the storage controller B may use a SAS protocol, a non-volatile memory express (NVMe) standard protocol, a proprietary protocol, or the like. The front-end interface card may be connected to an HBA of a host by using a switch A and a switch B, or the front-end interface card may be directly connected to the HBA of the host. An interface used by the front-end interface card to communicate with the host may be an FC interface, an Ethernet interface, or an IB interface. In this embodiment of the present disclosure, the front-end interface card is a shared interface card, that is, a plurality of storage controllers are connected to the front-end interface card and communicate with the host by using the front-end interface card. A structure of the storage controller in this embodiment of the present disclosure is shown in FIG. 3. The storage controller includes a central processing unit (CPU) 301, a memory 302, and an interface 303. The memory 302 may be configured to cache data and access instructions of the storage controller. In addition, in this embodiment of the present disclosure, the memory 302 is further configured to store a driver of the front-end interface card. The interface 303 is configured to communicate with the front-end interface card, and the interface 303 may be a port supporting a PCIe protocol. The storage controller further includes an interface configured to communicate with the hard disk enclosure, which may be the PCIe interface, the FC interface, the Ethernet interface, the IB interface, or the like. A structure of the front-end interface card is shown in FIG. 4. The front-end interface card includes a processor 401, a first interface 402, and a second interface 403.
The first interface 402 may be the FC interface, the Ethernet interface, or the IB interface, and the first interface 402 is configured to communicate with the HBA of the host. The second interface 403 may be the PCIe interface, and is configured to communicate with the storage controller A and the storage controller B. In addition, the processor 401 may be a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a CPU, or other hardware. Alternatively, an FPGA or other hardware and a CPU may together serve as the processor. The processor 401 is configured to implement the functions of the front-end interface card in this embodiment of the present disclosure. In this embodiment of the present disclosure, there may be one or more first interfaces 402 and one or more second interfaces 403. One first interface 402 communicates with one HBA. In an implementation, a plurality of HBAs or switches may also reuse one first interface 402, namely, the plurality of HBAs communicate with one first interface 402. This is not limited in this embodiment of the present disclosure. The driver of the front-end interface card in this embodiment of the present disclosure runs in the storage controller shown in FIG. 3. The front-end interface card in this embodiment of the present disclosure is used as a network interface card of the storage controller to communicate with the host. In an implementation of the front-end interface card in this embodiment of the present disclosure, the second interface 403 may directly communicate with the storage controller, or may be connected to the storage controller by using a switching device. For example, when the second interface 403 is the PCIe interface and directly communicates with the storage controller, a PCIe switching device is not required for communication between the second interface 403 and the storage controller. In this embodiment of the present disclosure, in an implementation, the front-end interface card does not require a dynamic random-access memory (RAM), and does not need to cache access request-level data, that is, the front-end interface card does not need to cache data carried in a write request or data requested by a read request. In this embodiment of the present disclosure, as shown in FIG. 5, the steps for implementing communication between the host and the storage system are as follows.
Step 501: The front-end interface card establishes a physical link to the HBA of the host. That the front-end interface card establishes a physical link to the HBA of the host includes operations such as rate coordination and link initialization.
Step 502: The HBA of the host sends a connection establishment request to the front-end interface card. After the front-end interface card establishes the physical link to the HBA of the host, the HBA of the host sends the connection establishment request to the front-end interface card, where the request is used to communicate with the storage system.
Step 503: The front-end interface card selects, for the host, one storage controller from the plurality of storage controllers as a primary storage controller. The primary storage controller manages the connection established between the storage system and the host, and synchronizes the connection to another storage controller of the storage system. With reference to the embodiment shown in FIG. 2, for example, the storage controller A is selected as the primary storage controller.
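To make the connection handling of steps 501-503 concrete, the sketch below models a front-end interface card that is shared by several storage controllers and that picks a primary controller when the host's HBA asks to connect. It is an illustration only; the class and method names, and the least-loaded selection policy, are assumptions rather than something prescribed by this embodiment.

```python
class FrontEndInterfaceCard:
    """Illustrative model of the shared front-end interface card of FIG. 4."""

    def __init__(self, storage_controllers):
        self.controllers = list(storage_controllers)   # e.g. ["A", "B"]
        self.connections = {}                          # host HBA -> primary controller

    def establish_physical_link(self, host_hba):
        # Step 501: rate coordination and link initialization would happen here.
        return True

    def handle_connection_request(self, host_hba):
        # Steps 502/503: on a connection establishment request, select one
        # controller as the primary for this host and synchronize the
        # connection state to the other controllers.
        primary = self.select_primary_controller()
        self.connections[host_hba] = primary
        for controller in self.controllers:
            if controller != primary:
                self.sync_connection(controller, host_hba, primary)
        return primary

    def select_primary_controller(self):
        # Assumed policy: pick the controller currently handling the fewest connections.
        loads = {c: sum(1 for p in self.connections.values() if p == c)
                 for c in self.controllers}
        return min(self.controllers, key=lambda c: loads[c])

    def sync_connection(self, controller, host_hba, primary):
        # Placeholder for propagating the session state to a non-primary controller.
        pass
```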
In an implementation, the storage controller of the storage system establishes a connection to the HBA of the host, namely, a session is established. In this case, the storage controller directly performs a session with the HBA of the host. In another implementation, the front-end interface card establishes a connection to the HBA of the host, namely, a session is established. The front-end interface card establishes a virtual connection to each storage controller, that is, the front-end interface card that represents the storage controller establishes a connection to the HBA of the host. When the front-end interface card receives a session notification sent by the HBA of the host, the front-end interface card notifies the storage controller of the storage system that the storage system has established the connection to the host, namely, the front-end interface card that represents the storage system establishes a connection to the host. In this embodiment of the present disclosure, the front-end interface card receives an access request sent by the host, determines, based on address information carried in the access request, a storage controller that processes the access request, and sends the access request to the corresponding storage controller for processing. In this embodiment of the present disclosure, the storage controller is the storage controller A. The access request may be a read request, a write request, or a management request. The management request in this embodiment of the present disclosure is used by the host to send a management command to the storage system. In this embodiment of the present disclosure, in an implementation, the front-end interface card distributes the access requests to the storage system at a granularity level of an access request. That the front-end interface card distributes the access requests to the storage system at a granularity level of an access request means that the front-end interface card selects the corresponding storage controller for the access request sent by the host. In a specific implementation, the storage controller that processes the access request may be determined based on the address information carried in the access request. The corresponding storage controller that processes the access request is also referred to as a home storage controller. In this way, the front-end interface card may directly send the access request to the corresponding storage controller, and there is no need to forward the access request between the storage controllers. This improves access request processing efficiency, and reduces an access request processing delay. In this implementation, the front-end interface card stores a correspondence between an address information of an access request and a storage controller. In this implementation, the front-end interface card may send, to different storage controllers for parallel processing, the access request delivered by the host. This improves the access request processing efficiency, and reduces the access request processing delay. In another implementation, the front-end interface card distributes the access requests to the storage system at a granularity level of a link. That the front-end interface card distributes the access requests to the storage system at a granularity level of a link means that the front-end interface card distributes access requests from a same link of the host to a same storage controller for processing. 
That is, each time a link is established, the front-end interface card specifies one storage controller to process all access requests from the link. As shown inFIG.6, in an embodiment of the present disclosure, a front-end interface card performs the following operations to maintain storage service continuity. Step601: The front-end interface card detects a status of a storage controller. The front-end interface card detects whether the storage controller is in a normal state. The storage controller changes from the normal state to an abnormal state due to various reasons. For example, the storage controller changes to an offline state due to a fault, a power failure, or an upgrade, or the storage controller does not respond due to software issues. Step602: When the front-end interface card detects that the storage controller is in the abnormal state, the front-end interface card selects a new storage controller from a plurality of storage controllers for a host. For example, when a first storage controller is in the abnormal state, the front-end interface card selects a new storage controller from the plurality of storage controllers for the host to process an access request of the host. In this embodiment of the present disclosure, the plurality of storage controllers share one front-end interface card. When a storage controller of the front-end interface card changes from the normal state to the abnormal state, and when a storage system switches the storage controller, the host does not know that the storage system switches the storage controller. Therefore, the host does not need to stop sending the access request to the storage system. This improves service continuity. This embodiment of the present disclosure can also avoid a problem that switching of a storage controller fails because a link cannot be established between a host and the storage controller due to compatibility of host software. Based on the embodiment shown inFIG.6, further, the front-end interface card maintains a storage controller status table that is used to record a status of a storage controller in the storage system. Further, the front-end interface card may further maintain a storage controller switching table that is used to record a backup storage controller of a current storage controller in case the current storage controller changes to the abnormal state. The backup storage controller is a storage controller that may replace the current storage controller. The front-end interface card may determine, based on performance, load, and the like of each storage controller, the backup storage controller for the current storage controller, or set the backup storage controller and record a backup relationship in the storage controller switching table. Therefore, in step602, the front-end interface card may select the new storage controller, namely, the backup storage controller, based on the storage controller switching table. An embodiment of the present disclosure provides a plurality of implementations that are used to process an access request sent by a host in a storage controller switching process. Various implementations are described with reference toFIG.2in this embodiment of the present disclosure. An implementation, as shown inFIG.7, includes the following steps. Step701: A host sends an access request. The access request may be a read request, a write request, or a management request. In this embodiment of the present disclosure, the read request is used as an example for description. 
Step 702: The host starts a timer to determine whether waiting is timed out. The host starts the timer to determine whether the storage system returns a response within a predetermined time period.
Step 703: The front-end interface card records access request information. In this embodiment of the present disclosure, the access request information includes one or more of a type of the access request, a source address for sending the access request, or a destination address of the access request.
Step 704: The front-end interface card sends the access request to the storage controller A.
Step 705: The storage controller A reads data from a cache or a hard disk, and sends description information of the data to the front-end interface card. In this embodiment of the present disclosure, the cache may be located in the memory 302 shown in FIG. 3. The hard disk is shown in FIG. 2. The description information of the data includes an address of the data in the cache and a data length.
Step 706: The front-end interface card obtains the data through direct memory access (DMA) based on the description information of the data, and sends the data to the host.
Step 707: The front-end interface card detects that the storage controller A changes to an abnormal state, and selects the storage controller B to replace the storage controller A, that is, the front-end interface card switches from the storage controller A to the storage controller B.
Step 708: The front-end interface card sends an access request error message to the host based on the recorded access request information. The access request error message is used to report an error status of the access request. In this embodiment of the present disclosure, the access request error message is a read request error message.
Step 709: The host resends the access request in response to the access request error message.
Step 710: The front-end interface card receives the access request resent by the host, and sends the access request to the storage controller B.
Step 711: The storage controller B reads the data from the cache or the hard disk based on the access request, and sends the description information of the data to the front-end interface card.
Step 712: The front-end interface card obtains the data through the DMA based on the description information of the data, and sends the data to the host.
Step 713: The front-end interface card returns a normal completion response for the access request to the host.
In this embodiment of the present disclosure, in an implementation, the front-end interface card does not need a dynamic RAM, and does not need to cache access request-level data, that is, the front-end interface card does not need to cache data carried in the write request or data requested by the read request. In another scenario, the storage controller A receives the access request sent by the front-end interface card, but step 705 has not yet been performed. That is, the storage controller A has not processed the access request, or has read the data from the cache or the hard disk but has not sent the description information of the data to the front-end interface card. In this case, the storage controller A is faulty. For a subsequent implementation, refer to step 707 to step 713 in the implementation in FIG. 7. Details are not described herein again.
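As a rough illustration of the failover behavior in steps 707 to 710, the sketch below shows a front-end interface card that records per-request information, reports an error to the host when the home controller becomes abnormal, and routes the host's retried request to the backup controller. Names such as send_error_to_host and the dictionary-based status and switching tables are hypothetical; the embodiment does not prescribe a particular implementation.

```python
class FailoverRouter:
    """Illustrative sketch of steps 707-710: report an error for in-flight
    requests when a controller turns abnormal, then route the host's retry
    to the backup controller recorded in the switching table."""

    def __init__(self, status_table, switching_table):
        self.status_table = status_table        # controller -> "normal" / "abnormal"
        self.switching_table = switching_table  # controller -> backup controller
        self.inflight = {}                      # request id -> recorded request info

    def record_request(self, request_id, info):
        # Step 703: keep the request type, source/destination addresses, and
        # the home controller that is processing the request.
        self.inflight[request_id] = info

    def on_controller_abnormal(self, controller, send_error_to_host):
        # Steps 707/708: switch to the backup and send an access request error
        # message for every recorded request handled by the failed controller.
        backup = self.switching_table[controller]
        self.status_table[controller] = "abnormal"
        for request_id, info in self.inflight.items():
            if info["controller"] == controller:
                send_error_to_host(request_id, info)
                info["controller"] = backup
        return backup

    def route_retry(self, request_id):
        # Steps 709/710: the request resent by the host is forwarded to the
        # controller now recorded for it (the backup controller).
        return self.inflight[request_id]["controller"]
```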
For another implementation, refer to step 701 to step 711 in the implementation in FIG. 7. In this implementation, different from step 712, when the front-end interface card has not completely obtained the result of the access request of the host that was being processed by the storage controller A, the front-end interface card obtains, from the storage controller B, a processing result of the access request of the host, and selects, from the processing result obtained by the storage controller B, the part of the result that was not completely obtained from the storage controller A. The front-end interface card then sends that remaining part to the host. Because the storage controller A changes to the abnormal state, the front-end interface card has not completely obtained the result of the access request that was being processed by the storage controller A. The data of the read request is used as an example. The front-end interface card has obtained only some data of the read request, and has sent only that partial data to the host. Therefore, in this implementation, with reference to step 701 to step 711, the front-end interface card obtains the data in the DMA manner based on the description information of the data. Based on the data of the read request that has already been sent to the host, the front-end interface card selects, from the processing result obtained by the storage controller B, the part of the processing result that has not yet been sent to the host, and supplements the data of the access request that has not been completely sent to the host. After supplementing that data, the front-end interface card returns the normal completion response for the access request to the host. For example, the data of the read request of the host is 000011110001, and the front-end interface card obtains 000011 from the storage controller A and sends 000011 to the host. The front-end interface card obtains, from the storage controller B, the data 000011110001 of the read request of the host. Because the front-end interface card has already sent 000011 to the host, the front-end interface card only needs to select 110001 from the data 000011110001 obtained from the storage controller B, and send 110001 to the host. In this embodiment of the present disclosure, the front-end interface card detects a status of a storage controller, and when detecting that the storage controller A recovers from the abnormal state to the normal state, the front-end interface card may switch from the storage controller B back to the storage controller A. Another embodiment of the present disclosure provides a front-end interface card of a storage system. As shown in FIG. 8, the front-end interface card includes a first interface 801, a second interface 802, a detection unit 803, and a selection unit 804. The first interface 801 is configured to communicate with a host, and the second interface 802 is configured to communicate with a plurality of storage controllers of the storage system.
The detection unit803is configured to detect a status of a first storage controller, where the first storage controller is a storage controller that is in the plurality of storage controllers and that is configured to process an access request of the host. The selection unit804is configured to, when the first storage controller is in an abnormal state, select a second storage controller from the plurality of storage controllers for the host to process the access request of the host, where the second storage controller is a storage controller that is in the plurality of storage controllers and that is different from the first storage controller. The selection unit804is configured to select the second storage controller based on performance of the plurality of storage controllers. Further, the front-end interface card further includes a sending unit and a receiving unit. The sending unit is configured to send an access request error message to the host when the first storage controller is in the abnormal state, where the access request error message is used to report an error status of the access request. The receiving unit is configured to receive the access request that is resent by the host in response to the access request error message. Based on the front-end interface card shown inFIG.8, the front-end interface card further includes an obtaining unit. The obtaining unit is configured to, when the first storage controller is in the abnormal state and the front-end interface card has not completely obtained a result of the access request that is of the host and that is processed by the first storage controller, obtain, from the second storage controller, a processing result of the access request of the host. The selection unit804is further configured to select, from the processing result of the access request of the host, the not completely obtained result of the access request that is of the host and that is processed by the first storage controller. Based on the front-end interface card shown inFIG.8, the front-end interface card further includes a switching unit. The switching unit is configured to, when the first storage controller changes from the abnormal state to a normal state, switch a storage controller that processes a service request of the host from the second storage controller to the first storage controller. In the front-end interface card shown inFIG.8, the first interface801is an Ethernet interface or an FC interface. In another implementation, the second interface802is a PCIe interface. For a specific implementation of the front-end interface card shown inFIG.8, refer to the implementation of the front-end interface card described in the foregoing embodiment of the present disclosure. Details are not described herein again. The present disclosure provides a chip, and the chip includes a first interface and a second interface. The first interface is configured to communicate with a host, and the second interface is configured to communicate with a plurality of storage controllers of a storage system. The chip further includes a processor. The processor is configured to implement various implementations of the front-end interface card in the embodiments of the present disclosure. The processor may be an FPGA, an ASIC, a CPU, or other hardware. Alternatively, an FPGA or other hardware and a CPU together serve as a processor. 
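Before turning to the chip implementation, a compact sketch of the result-supplementation behavior handled by the obtaining unit and the selection unit (the 000011110001 example in the preceding implementation) may help. It is illustrative only: the function name and the string representation of the data are assumptions; the embodiment only requires that data already sent to the host not be re-sent.

```python
def supplement_partial_result(sent_to_host: str, result_from_backup: str) -> str:
    """Return only the portion of the read-request data that the host has not
    received yet, given the partial data already sent (obtained from the failed
    controller) and the full result obtained from the backup controller.

    Example from the description:
        supplement_partial_result("000011", "000011110001") -> "110001"
    """
    if not result_from_backup.startswith(sent_to_host):
        # Defensive check; the already-sent prefix is expected to match.
        raise ValueError("backup result does not match the data already sent")
    return result_from_backup[len(sent_to_host):]
```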
In an implementation of the chip provided in this embodiment of the present disclosure, the first interface, the second interface, and the processor may be presented in a product form of an intellectual property core, and are respectively corresponding to different intellectual property cores. The present disclosure provides a computer-readable storage medium. The computer-readable storage medium stores an instruction, and when executing the instruction, a processor is configured to perform functions of the front-end interface card in the embodiments of the present disclosure. The present disclosure provides a computer program product including an instruction. When executing the instruction in the computer program product, a processor is configured to perform functions of the front-end interface card in the embodiments of the present disclosure. The host provided in the embodiments of the present disclosure may be a physical host, or may be a virtual machine (VM). The front-end interface card provided in the embodiments of the present disclosure may directly communicate with an HBA of the host, or may communicate with the HBA of the host by using a switch. This is not limited in the embodiments of the present disclosure. It may be understood that the memory mentioned in the embodiments of the present disclosure may be a volatile memory or a nonvolatile memory, or may include a volatile memory and a nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically EPROM (EEPROM), or a flash memory. The volatile memory may be a RAM that is used as an external cache. Many forms of RAMs may be used, for example, a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate (DDR) SDRAM, an enhanced synchronous DRAM (ESDRAM), a synchlink DRAM (SLDRAM), and a direct rambus (DR) RAM. Those are examples rather than limitative descriptions. It should be noted that when the processor is a general-purpose processor, a digital signal processor (DSP), an ASIC, an FPGA, or another programmable logic device, discrete gate, transistor logic device, or discrete hardware component, the memory (storage module) may be integrated into the processor. It should be noted that the memory described in this specification is intended to include but is not limited to these and any memory of another proper type. A person of ordinary skill in the art may be aware that, in combination with the examples described in the embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on particular applications and design constraint conditions of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of the present disclosure. It may be clearly understood by a person skilled in the art that, for the purpose of convenient and brief description, for a detailed working process of the foregoing system, apparatus, and unit, refer to a corresponding process in the foregoing method embodiments, and details are not described herein again. 
In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiment is merely an example. For example, division into units is merely logical function division and may be other division in an actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected based on actual requirements to achieve the objectives of the solutions of the embodiments. In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units are integrated into one unit. When the functions are implemented in the form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present disclosure essentially, or the part contributing to other approaches, or some of the technical solutions may be implemented in a form of a software product. The computer software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or some of the steps of the methods described in the embodiments of the present disclosure. The foregoing storage medium includes any medium that can store program code, such as a Universal Serial Bus (USB) flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or a compact disc.
11860720
DETAILED DESCRIPTION In an embodiment, ML techniques can be used to address some, or all, of these problems. But it is challenging to address these problems using ML techniques, because of a variety of characteristics of MSA systems. For example, the complex dependencies among microservices are typically non-linear. As discussed further below, this can require non-linear causal modeling. As another example, each microservice can have multiple time-varying features associated to it (e.g., metrics, or logs). When a fault occurs in a microservice, only some of the other services will be affected. This means that each data snapshot associated with the fault is likely to only include metrics or log information for a subset of the services, and therefore is likely to present an incomplete view of the full system. Rather than performing causal modeling from each dataset in isolation, it may be more effective to perform joint estimation, borrowing strength across datasets. Further, due to the complexity of the dependencies and the difficulty of root cause localization, it may be effective to leverage prior information on the topology underlying the microservices. In an embodiment, as discussed further below, one or more of these problems can be addressed using an ML architecture incorporating non-linear causal modeling among nodes or entities (e.g. among microservices). For example, each node can be permitted to have multiple feature metrics associated to it, and the feature metrics need not be consistent across nodes. Further, the architecture can include a mechanism to incorporate prior information on node topology by, for example, scaling the penalty parameters corresponding to the causal relationship between node pairs according to their degree of relatedness in the topology. As a further example, the architecture can estimate causality from multiple datasets, each with partially observed data, while enforcing consistency in the relationships that are common across datasets. While one or more of the techniques discussed below are described in the context of microservices and a MSA, this is merely one example. One or more of these techniques can be used to identify causal relationships between any suitable collection of nodes or entities, including devices in an information technology (IT) infrastructure or any other suitable aspect of IT operations management. FIG.1Aillustrates an ML architecture100for predicting MSA features using non-linear causal modeling from diverse data sources, according to one embodiment. In an embodiment, the architecture100includes numerous microservice nodes110,120,130and possibly more, generally N of them. Each microservice node110,120, and130includes a respective collection of features relating to the microservice node. For example, the microservice node110includes the features112A-K (e.g., historical feature data). In an embodiment, each feature112A-K represents an event happening at a given time t, for a given service i (e.g., the microservice node110). For example, the features112A-K can be represented by a vector of numbers: xi(t). As one example, the features112A-K could reflect an active or inactive status of the microservice node110at time t, the values of various properties of the microservice110at time t, an error or debugging status for the microservice110at time t, or any other suitable feature. This is merely one example, and any other suitable features can be used. 
The microservice 120 similarly includes the features 122A-L, and the microservice 130 includes the features 132A-M. In an embodiment, the features 112A-K, 122A-L, and 132A-M are generally different because they relate to different microservice nodes, and the nodes can have different numbers of features. In some embodiments there can be the same number of features across the different microservice nodes, with the same or analogous semantics (e.g., common features that are present in each of the multiple microservices). In an embodiment, the features 112A-K, 122A-L, and 132A-M are provided to ML models 150A-C, one for each of the microservice nodes, to infer predicted future feature values 152A-C. For example, as illustrated in FIG. 1, the architecture 100 includes multiple ML models 150A-C. As discussed further below, in an embodiment each ML model 150A-C is trained to infer a different collection of predicted future feature values 152A-C, using the same collection of input data: the features 112A-K, 122A-L, and 132A-M. For example, the ML model 150A can receive all of the features 112A-K, 122A-L, and 132A-M as input. In an embodiment, the features can be concatenated to get a vector of size n. This vector can be expressed as x(t). The ML model 150A can be trained to use this data to predict the future feature values 152A. For example, the predicted future feature values 152A could be feature values for the microservice node 110. In this example, the ML model 150A is trained to use feature data across multiple microservice nodes (e.g., across microservice nodes 110, 120, and 130) to infer predicted future feature values for one microservice node (e.g., for microservice node 110). In mathematical terms, for x(t) as input, a given ML model will produce a prediction x̂i(t+1) for xi(t+1). In this example, xi(t+1) is the event that the service i (e.g., the microservice node 110) generates at the next time t+1, and x̂i(t+1) is a vector of numbers representing the predicted features. For input x(t), the output of a given ML model (e.g., the ML model 150A) is x̂i(t+1). As discussed further below, in an embodiment each ML model (e.g., the ML model 150A) is trained and structured to predict features based on partially observed data. For example, the ML model can estimate causality from multiple datasets (e.g., the features 112A-K, the features 122A-L, and the features 132A-M), each with partially observed data (e.g., data of available features for only a subset of relevant microservices). Similarly, each of the remaining ML models 150B-C can use features from multiple microservices (e.g., all of the features 112A-K, 122A-L, and 132A-M from respective microservice nodes 110, 120, and 130) to predict future feature values 152B-C (e.g., each predicting feature values for a respective microservice). In an embodiment, the ML models 150A-C are structured the same and, as discussed further below, are trained to predict different features. Alternatively, some or all of the ML models 150A-C can be structured differently. For example, it may be advantageous to structure the ML models 150A-C somewhat differently to improve prediction of different future feature values 152A-C. FIG. 1B illustrates an example of one of the ML models in the ML architecture of FIG. 1A, according to one embodiment. This example illustrates a microservice application with a number of associated, heterogeneous features. This example relates to an example microservice system that includes four services: service 0, service 1, service 2, and service 3.
Numerous features from the microservice nodes110,120,130, and140are used by the ML model150A to predict the predicted features152A: here, the values of “service 3-inactive,” “service 3-error,” and “service 3-http.” FIG.2is a block diagram illustrating an MSA predictor200for predicting MSA features using non-linear causal modeling from diverse data sources, according to one embodiment. The MSA predictor200includes a processor202, a memory210, and network components220. The processor202generally retrieves and executes programming instructions stored in the memory210. The processor202is representative of a single central processing unit (CPU), multiple CPUs, a single CPU having multiple processing cores, graphics processing units (GPUs) having multiple execution paths, and the like. The network components220include the components necessary for the MSA predictor to interface with a suitable communication network (e.g., the Internet, a local area network (LAN) or a wide area network (WAN)). For example, the network components220can include wired, WiFi, or cellular network interface components and associated software. Although the memory210is shown as a single entity, the memory210may include one or more memory devices having blocks of memory associated with physical addresses, such as random access memory (RAM), read only memory (ROM), flash memory, or other types of volatile and/or non-volatile memory. The memory210generally includes program code for performing various functions related to use of the MSA predictor200. The program code is generally described as various functional “applications” or “modules” within the memory210, although alternate implementations may have different functions and/or combinations of functions. Within the memory210, the training service212facilitates training the ML model214. Training is discussed further below with regard toFIGS.3-4. In an embodiment, the ML model214corresponds with any of the ML models150A-C illustrated inFIG.1. Further, in an embodiment the ML model214can be any suitable ML model, including a suitable non-linear neural network. For example, the ML model214can be a multi-variate recurrent neural network. This is merely one example, and any state-preserving neural network may be particularly suitable, including a long short-term memory (LSTM) based network or a multi-layer perceptron network. The prediction service216uses the ML model214(e.g., after training) to infer predicted features. For example, as discussed above in relation toFIG.1and below in relation toFIGS.5-6, the prediction service216can use feature data from multiple microservices to infer predicted values for a given microservice. FIG.3is a flowchart300illustrating training an ML model for predicting MSA features using non-linear causal modeling from diverse data sources, according to one embodiment. At block302a training service (e.g., the training service212illustrated inFIG.2) receives training data. As discussed below with regard toFIG.3, the training data can include collected MSA data, which can be pre-processed prior to training. Further, the training data can include topology graph data relating to the MSA. In addition, as discussed below with regard toFIG.4, in an embodiment the pre-processing and training can be done as batch training (e.g., all data is provided at once) or as streaming training (e.g., where input data is streaming and the model is constantly updated). At block304, the training service performs model fitting with regularization. 
In an embodiment, the training service trains an ML model (e.g., the ML model214illustrated inFIG.2) using a feature selection mechanism by penalizing input network parameters in a block-wise manner. For example, the training service can perform group-sparse regularization, in which all features of a given node (e.g., a given microservice) are forced to be either contributing or not-contributing (e.g., in an all-in or all-out fashion). As another example, the training service can perform within-group sparse regularization, in which some features from a given node (e.g., a given microservice) contribute while other features from the node are allowed to not contribute. In an embodiment, the loss function that we want to minimize assumes the form: Loss(Win[i], Wrest[i]; X1, . . . , Xn)+Σ(i,j) λij R(Win[i][j]). In this expression, Win[i] represents the matrix formed by the input parameters of the model for the service i. Win[i][j] is the subset of the parameter matrix concerning the potential causal relationship from node j to node i. R(.) is a regularizer that encourages group-sparsity (e.g., R(Win[i][j])=∥Win[i][j]∥2) or within-group sparsity (e.g., R(Win[i][j])=∥Win[i][j]∥1). Further, in this example, the parameter, or term, λij is small if the nodes (i,j) are linked via topology (e.g., in the MSA), and large otherwise. In other words, the regularizer acts to identify which input features are relevant to the output predicted features. Setting parameter λij large will encourage input feature values for a given output prediction node to be set lower, making it less likely these features will be considered relevant to the prediction. Setting parameter λij small will encourage input feature values for a given output prediction node to be set higher, making it more likely these features will be considered relevant to the prediction. In an embodiment, a smaller parameter λij can be used for nodes that are linked in a topology. This encourages input features from nodes that are linked in a topology, with the output node, to be considered more relevant to the prediction for the output node. Further, in an embodiment, group regularization and “co-training” can be used. As discussed above, in an embodiment, when a fault occurs in a microservice only a subset of nodes are likely to be affected. Each data snapshot may, therefore, only include a subset of the services. Rather than performing causal modeling from each data set in isolation, it may be more effective to perform joint estimation (e.g., borrowing strength across datasets). The latter joint estimation can be performed using an approach known as “co-training”, as described in detail below. For example, for a given target node i, the training service can fetch all data sets or experiments where this node is observed. This can be expressed as: Di(1), . . . , Di(ni). In an embodiment, the training service can co-train ni ML models, with common group regularization for weights corresponding to the common candidate source nodes across data sets. This can be expressed as Minimize: Loss(Di(1))+ . . . +Loss(Di(ni))+λΣj∥(Wji(Di(1)), . . . , Wji(Di(ni)))∥2. In this expression, Wji(Di(k)) are ML model parameters corresponding to a node pair (j,i) for a data set Di(k) if j is observed in Di(k). Wji(Di(k)) are null if (j,i) is not observed in the data set Di(k). In an embodiment, this group regularization in co-training enforces consistency of edges across graphs (e.g., across weighted causal graphs as discussed below with regard to block306). 
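To make the block-wise penalty concrete, the following is a hedged sketch (not the patented implementation; PyTorch is used here only as one convenient choice) of a group-sparse regularizer over per-source-node blocks of the input weight matrix, with topology-aware λij terms:

import torch

def group_sparse_penalty(W_in_i, group_slices, lambdas, within_group=False):
    """W_in_i: input weight matrix of the model for target service i.
    group_slices[j]: column slice of W_in_i holding the features of source node j.
    lambdas[j]: small if nodes (i, j) are linked in the MSA topology, large otherwise."""
    penalty = torch.zeros((), dtype=W_in_i.dtype)
    for j, cols in enumerate(group_slices):
        block = W_in_i[:, cols]
        if within_group:
            # L1 norm of the block: some features of node j may contribute, others not.
            penalty = penalty + lambdas[j] * block.abs().sum()
        else:
            # L2 (Frobenius) norm of the block: node j contributes all-in or all-out.
            penalty = penalty + lambdas[j] * torch.linalg.norm(block)
    return penalty

def regularized_loss(prediction_loss, W_in_i, group_slices, lambdas):
    # Loss(...) + sum_j lambda_ij * R(W_in[i][j]), as in the expression above.
    return prediction_loss + group_sparse_penalty(W_in_i, group_slices, lambdas)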
For example, if services i and j are causally related in the ML model for data set D(k), then services i and j should also be considered causally related in the ML model for the data set D(l). At block306, the training service generates a weighted causal graph for all microservices. In an embodiment, the training service uses a feature selection mechanism; for example, the weighted causal graph can be induced by the sparsity pattern of the input parameter matrices. Further, in an embodiment, the weighted causal graph includes an associated causal strength for each causal relationship (e.g., between input features and output predictions) and reflects individual feature strengths for each causal relationship. For example, the weighted causal graph can reflect more than a simple binary indication as to whether a given input feature is relevant to the prediction. The weighted causal graph can reflect the strength of the causal relationship (e.g., between the values in the input matrix Win[i] and the prediction for the service i). For example, the causal strength from service j to service i could be based on the norm of the input model coefficients corresponding to each node, e.g., the causal strength from node j to node i could be set to ∥Win[i][j]∥2/maxk,l∥Win[k][l]∥2, which would be a value between 0 and 1, where 1 would indicate the strongest causal relationship and 0 would indicate the absence of a causal relationship from node j to node i (a brief illustrative sketch of this computation appears below). At block308, the training service generates a predictive model for each microservice. For example, as discussed above in relation toFIG.1, in an embodiment multiple ML models150A-C are trained to predict features, one for each node (e.g., for each microservice). In an embodiment, each trained predictive model can predict the future values of its event features based on the past values of the event features at its causing nodes (e.g., as reflected in the weighted causal graph generated at block306). Further, as discussed below with regard toFIG.5, the predictive models can also be used for anomaly detection. FIG.4illustrates training an ML model for predicting MSA features using non-linear causal modeling from diverse data sources, according to one embodiment. At block402, a training service (e.g., the training service212illustrated inFIG.2), or any other suitable software service, collects historical microservices feature data. For example, the service can gather historical log data for microservice nodes (e.g., microservice nodes110,120, and130illustrated inFIG.1) over time. As another example, the service can gather historical metric data for microservice nodes over time. At block404, the service pre-processes the microservices data. For example, the service can create feature vectors reflecting the values of various features, for each node's events, over time. In an embodiment, the pre-processing and training can be done as batch training. In this embodiment, all data is pre-processed at once, and provided to the training service. Alternatively, the pre-processing and training can be done in a streaming manner. In this embodiment, the data is streaming, and is continuously pre-processed and provided to the training service. For example, it can be desirable to take a streaming approach for scalability. The set of training data may be very large, so it may be desirable to pre-process the data, and provide it to the training service, in a streaming manner (e.g., to avoid computation and storage limitations). 
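The normalization just described can be sketched as follows (hypothetical weight values; the slicing assumes every per-target model shares the same input feature layout):

import numpy as np

def causal_strengths(W_in, group_slices):
    """W_in[i]: learnt input weight matrix for target service i.
    group_slices[j]: column slice holding the features of source service j.
    Returns strengths[i][j] = ||W_in[i][j]||_2 normalized by the maximum block
    norm over all (k, l) pairs, so values fall in [0, 1]."""
    norms = {i: np.array([np.linalg.norm(W[:, cols]) for cols in group_slices])
             for i, W in W_in.items()}
    global_max = max(n.max() for n in norms.values()) or 1.0
    return {i: n / global_max for i, n in norms.items()}

# Hypothetical example: 2 services, each contributing 2 input features.
slices = [slice(0, 2), slice(2, 4)]
W = {0: np.array([[0.9, 0.1, 0.0, 0.0], [0.8, 0.2, 0.0, 0.1]]),
     1: np.array([[0.0, 0.0, 0.5, 0.4], [0.1, 0.0, 0.6, 0.3]])}
print(causal_strengths(W, slices))   # e.g. strong 0->0 self-influence, weak 1->0 influence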
At block406, the service provides topology graph data to the training service. In an embodiment, the training service can optionally use topology graph data in training (e.g., to assist with regularization as discussed above in relation toFIG.3). For example, a weighted matrix or topology graph reflecting the topology of the MSA can be provided to the training service. At block408, the training service receives the data. For example, the training service receives the pre-processed microservices data. Further, the training service can optionally receive the topology graph data. As discussed above in relation toFIG.3, the training service uses the data to generate the trained ML model214. For example, the training service can generate a trained ML model for each microservice (e.g., to predict feature values for each respective microservice). FIG.5illustrates root cause analysis and anomaly detection for MSAs using a trained ML model, according to one embodiment. At block502, a prediction service (e.g., the prediction service216illustrated inFIG.2) provides MSA inference data to predictive model structures510for a given microservice. As discussed above, in an embodiment data reflecting features across multiple different microservices is used to predict values for a given microservice, using a trained ML model. For example, as illustrated inFIG.1, the features112A-K,122A-L, and132A-M are all used by the trained ML model150A to determine predicted future feature values152A. In an embodiment, the feature data can be concatenated together to generate a suitable feature vector. As described above, however, the MSA inference data at block502will likely not reflect feature data for all microservices in the MSA. In an embodiment, when a fault occurs in a microservice (e.g., triggering prediction of feature data for the microservice) only a subset of nodes (e.g., a subset of microservices) are likely to be affected. The MSA inference data at block502, therefore, may only include a subset of the microservices. In an embodiment, the predictive model structures510include both a causal graph for the microservices (e.g., for all microservice nodes) and a predictive model for the given microservice. For example, as discussed above in relation to block306inFIG.3, in an embodiment a training service can generate a weighted causal graph for all microservices. In an embodiment, the weighted causal graph includes an associated causality strength for each causal relationship (e.g., between input features and output predictions) and reflects individual feature strengths for each causal relationship. As discussed above in relation to block308inFIG.3, in an embodiment the training service can generate a predictive ML model for each microservice (e.g., to predict feature values for that microservice). In an embodiment, the prediction service uses one, or both, of the predictive model structures510to perform root cause analysis at block522, anomaly detection at block524, or both. In an embodiment, anomaly detection at block524determines when a fault has occurred. This is discussed further below with regard toFIG.6. Root cause analysis at block522identifies the microservice that is likely the root cause of a given fault, either determined by the anomaly detection at block524, or possibly given from an external module performing a separate fault detection process. For example, the causal graph512can be used to estimate causal relationships between microservice nodes and identify the likely candidates for the root cause of a fault, as in the sketch that follows and as elaborated in the next paragraph. 
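As a hedged sketch of how the causal graph could support the root cause identification just introduced (anticipating the path-tracing and strength-combining approach elaborated in the next paragraph; combining strengths by multiplication along a path is one possible choice, and the edge strengths and service names here are invented):

def rank_root_causes(causal_graph, faulty_service):
    """causal_graph[(j, i)] = strength of the causal edge j -> i, in [0, 1].
    Traces back causal ancestors of the faulty service and scores each
    candidate by the combined strength along its path to the fault."""
    parents = {}
    for (j, i), s in causal_graph.items():
        parents.setdefault(i, []).append((j, s))

    scores = {}
    def trace(node, path_strength, visited):
        for j, s in parents.get(node, []):
            if j in visited:
                continue
            combined = path_strength * s          # combine strengths along the causal path
            scores[j] = max(scores.get(j, 0.0), combined)
            trace(j, combined, visited | {j})

    trace(faulty_service, 1.0, {faulty_service})
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example: service "svc3" is faulty; candidates are ranked by causal strength.
graph = {("svc1", "svc3"): 0.9, ("svc0", "svc1"): 0.6, ("svc2", "svc3"): 0.2}
print(rank_root_causes(graph, "svc3"))   # [('svc1', 0.9), ('svc0', 0.54), ('svc2', 0.2)]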
In an embodiment, the root cause analysis at block522can proceed by first identifying the set of services that may be an immediate cause of a given fault of interest and outputting them as likely root causes. In an embodiment, the root cause analysis at block522can also trace back paths in the causal graph and identify the set of direct or indirect causes and output them as likely root causes. In an embodiment, the root cause analysis at block522can further generate a ranking of suspected root cause microservice nodes, for example, by using the causal strength of the causal relationship for the root cause candidate in question, or by combining the causal strength of all the causal relationships in the causal path leading to the root cause candidate in question. In an embodiment, the root cause analysis at block522may also use the predictive model at block514through so-called counterfactual reasoning, as described in (Judea Pearl, "Causality: Models, Reasoning and Inference," Cambridge University Press, 2013), to perform root cause analysis. For example, for each candidate, using the predictive model at block514, the degree of its causal association to the fault may be determined by comparing two predictions of the faulty service's feature values at the time of the fault: one prediction made by the predictive model on the actual observed values of all relevant features prior to the fault, and a second prediction made by the same model on the same observed values except that the feature values of the candidate in question are replaced by the normal values predicted by the same model using the data immediately prior to the time of the candidate. FIG.6is a flowchart600illustrating anomaly detection for MSAs using a trained ML model, according to one embodiment. At block602, a prediction service (e.g., the prediction service216illustrated inFIG.2) identifies a trained model for a given microservice. Further, in an embodiment, the prediction service also identifies a weighted causal graph for all microservices (e.g., as discussed above in relation toFIG.5). At block604, the prediction service concatenates features across microservices. For example, the prediction service can generate a feature vector reflecting feature values across multiple microservice nodes. As discussed above, the prediction service may have access to feature data for only a subset of microservices in the MSA, rather than all microservices. At block606, the prediction service uses a trained ML model to predict features for each microservice. As discussed above in relation toFIG.1, a given ML model (e.g., the ML model150A) can use a feature vector reflecting feature data across multiple microservices (e.g., the features112A-K,122A-L, and132A-M) to generate predicted future feature values for a given microservice (e.g., the predicted features152A). In an embodiment, at block606the prediction service uses the trained ML models to predict features for all microservices in the MSA. Alternatively, the prediction service uses the trained ML models to predict features for a subset of the microservices (e.g., a trained ML model for each respective microservice). At block608, the prediction service computes a measure of the prediction error (for example, the mean of the norms of the difference between the predicted and the actual next event representations over all microservices) and uses that measure to detect anomalies. 
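One way the prediction-error check of block608could look in code (a hedged sketch; the anomaly threshold is an assumption for illustration and is not specified in the text):

import numpy as np

def anomaly_score(predicted, actual):
    """Mean of per-microservice prediction-error norms, as in block 608:
    ||x_hat_i(t+1) - x_i(t+1)||_2 averaged over all observed microservices."""
    norms = [np.linalg.norm(np.asarray(p) - np.asarray(a)) for p, a in zip(predicted, actual)]
    return float(np.mean(norms))

THRESHOLD = 0.5   # illustrative assumption; the text does not fix a threshold

predicted = [[0.1, 0.0, 0.9], [0.2, 0.7]]   # x_hat_i(t+1) per microservice
actual    = [[0.1, 0.8, 0.9], [0.2, 0.7]]   # logged x_i(t+1) per microservice
score = anomaly_score(predicted, actual)
print(score, "anomaly" if score > THRESHOLD else "normal")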
For example, the prediction service can compute the prediction error using the expression ∥xi+(t+1)−xi(t+1)∥2 as the difference norm for each individual microservice i. The prediction service can then use the prediction error to identify likely abnormal microservice logs. The ML model predicts feature values for the next event at each microservice. The prediction service can then compare the predicted values with actual logged values (e.g., using the mean of norms). The prediction error, such as the magnitude of difference between the predicted feature values and the actual feature values, reflects a likelihood that an anomaly has occurred. FIG.7illustrates use of an SRU architecture, including the computation stages inside an SRU, according to one embodiment. In an embodiment, an SRU is a suitable neural architecture for one or more of the techniques discussed above (e.g., for the ML model214illustrated inFIG.2). An SRU is a lightweight, recurrent neural network architecture that is used for the inference of Granger causality and its strength between multiple services. In particular, the SRU model for the causal inference for a service i maintains a hidden state vector ui at all times during training, which is essentially an instance of summary statistics. If we assume that, at current time t, we input a vector xt of n elements having as components the concatenation of the numerical representations of the currently emitted events at all services, then the SRU outputs xi,t+1+, which is the predicted numerical representation of the event emitted at service i at next time step, t+1. The SRU model learns weight matrices and biases, represented by W's and b's in the sequel, by minimizing a loss function containing distances of predictions xi,t+1+ to known values xi,t+1 and regularization terms involving weight matrix elements. One of the learnt W's, Win(i) (see below), is postprocessed for inferring the strength of the causal influence of all services to service i. More specifically, the SRU first computes a feedback vector ri,t using the hidden state of the previous time step: ri,t=h(Wr(i)ui,t-1+br(i)) where typically h(.) is an element-wise Rectified Linear Unit (ReLU) operator, h(z)=max(z,0). The feedback vector is subsequently combined with the input vector to produce a recurrent statistics vector: ϕi,t=h(Win(i)xt+Wf(i)ri,t+bin(i)). Recurrent statistics will then generate exponentially weighted moving summary statistics for multiple time scales using a set of fixed weights αj∈[0,1]: ui,tαj=(1−αj)ui,t-1αj+αjϕi,t. These multiple-time-scaled summary statistics are concatenated: ui,t=[(ui,tα1)T(ui,tα2)T. . . (ui,tαm)T]T and serve as the input for computing the causal feature vector: oi,t=h(Wo(i)ui,t+bo(i)). Finally, the SRU projects causal features to next time step event prediction at service i: xi,t+1+=(wy(i))T oi,t+by(i). We allow the representation xi,t of a current event at service i to be a vector of ni≥1 scalar features. Also, events at different services can have a different number of such scalar features. In the case of p services, Σl=1pnl=n, the strength of causal influence of service j to service i can be expressed as an l2 norm computed over the submatrix consisting of nj columns of learnt Win(i) (with column indices in the range [Σl=1j-1nl+1,Σl=1jnl]), optionally normalized over the max of such norms. In an embodiment,FIG.7summarizes computation stages inside an SRU with p=5, n=15, nl=3 ∀l∈[1,5]. 
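The SRU computation stages listed above can be sketched in a few lines (a hedged illustration with hypothetical parameter names and shapes; it is not the patented implementation and omits training):

import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def sru_step(x_t, u_prev_scales, params, alphas):
    """One SRU forward step for a single target service i. u_prev_scales holds the
    previous summary statistics u_{i,t-1}^{alpha_j}, one vector per time scale;
    the shapes of the parameter matrices in params must be chosen consistently."""
    u_prev = np.concatenate(u_prev_scales)
    r_t = relu(params["W_r"] @ u_prev + params["b_r"])                         # feedback vector
    phi_t = relu(params["W_in"] @ x_t + params["W_f"] @ r_t + params["b_in"])  # recurrent statistics
    u_scales = [(1 - a) * u_prev_j + a * phi_t                                 # multi-time-scale summaries
                for a, u_prev_j in zip(alphas, u_prev_scales)]
    u_t = np.concatenate(u_scales)
    o_t = relu(params["W_o"] @ u_t + params["b_o"])                            # causal feature vector
    x_next_pred = params["W_y"].T @ o_t + params["b_y"]                        # predicted event at t+1
    return x_next_pred, u_scales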
FIG.8is a graph demonstrating the improved accuracy attained by leveraging features in learning a causal graph, according to one embodiment.FIG.8illustrates using an example micro-service application that includes 41 micro-services. In this example application, faults are injected in 16 different services, and 16 corresponding log datasets are generated. The x-axis illustrates these 16 different fault nodes. The y-axis illustrates the F1 score, which is used to measure accuracy. The F1 score balances precision and recall. An F1 score of 1 indicates perfect estimation of the causal graph, and the lowest possible F1 score is 0. Existing approaches that do not consider features use one time series per service corresponding to the absence or presence of an alert. The results of this approach are shown with a line810. As discussed above, in an embodiment the improved techniques discussed herein leverage features. In this example, the improved system uses 3 features per service: “http,” “error,” and “inactive,” and uses an SRU architecture. The results of the improved system are shown with a line820. As can be seen by comparing the line810(e.g., an example of a prior solution that does not consider features) with the line820(e.g., an improved system leveraging features), the feature-based approach consistently outperforms its feature-less counterpart. FIG.9is a graph demonstrating improved accuracy attained by using improved techniques discussed herein as compared to existing methods in the prior art, according to one embodiment.FIG.9again illustrates using an example micro-service application that includes 41 micro-services, in which faults are injected in 16 different services and 16 corresponding log datasets are generated. The x-axis illustrates these 16 different fault nodes. The y-axis illustrates the F1 score, which is used to measure accuracy. The F1 score balances precision and recall. An F1 score of 1 indicates perfect estimation of the causal graph, and the lowest possible F1 score is 0. Line960illustrates use of improved techniques discussed herein, along with use of an SRU architecture. Lines910and950represent two examples of existing techniques based on conditional-independence testing. Line940represents an existing approach using a forward and backward search algorithm to estimate the structure of a proximal graphical event model (PGEM). Lines920and930relate to existing regression-based approaches. As can be seen inFIG.9by comparing the line960, which illustrates use of improved techniques discussed herein along with use of an SRU architecture, with the lines910-950, which illustrate prior approaches, the improved techniques outperform the comparison approaches for most datasets and are competitive in the few other datasets. FIG.10is a table demonstrating improved accuracy attained by using the co-training embodiment in learning a causal graph, according to one embodiment.FIG.10demonstrates the effectiveness of co-training (e.g., as discussed above in relation toFIG.3) for two datasets using an example microservices application: “service_0” and “service_1.” The table inFIG.10exhibits the accuracy of co-training the service_0 and service_1 datasets jointly, contrasted against the accuracy when training on each dataset separately. As can be seen in the table, using co-training improves the accuracy of causal estimation. 
The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. In the preceding, reference is made to embodiments presented in this disclosure. However, the scope of the present disclosure is not limited to specific described embodiments. Instead, any combination of the features and elements, whether related to different embodiments or not, is contemplated to implement and practice contemplated embodiments. Furthermore, although embodiments disclosed herein may achieve advantages over other possible solutions or over the prior art, whether or not a particular advantage is achieved by a given embodiment is not limiting of the scope of the present disclosure. Thus, the aspects, features, embodiments and advantages discussed herein are merely illustrative and are not considered elements or limitations of the appended claims except where explicitly recited in a claim(s). Likewise, reference to “the invention” shall not be construed as a generalization of any inventive subject matter disclosed herein and shall not be considered to be an element or limitation of the appended claims except where explicitly recited in a claim(s). Aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. 
A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. 
These computer readable program instructions may be provided to a processor of a computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be accomplished as one step, executed concurrently, substantially concurrently, in a partially or wholly temporally overlapping manner, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. Embodiments of the invention may be provided to end users through a cloud computing infrastructure. Cloud computing generally refers to the provision of scalable computing resources as a service over a network. More formally, cloud computing may be defined as a computing capability that provides an abstraction between the computing resource and its underlying technical architecture (e.g., servers, storage, networks), enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. Thus, cloud computing allows a user to access virtual computing resources (e.g., storage, data, applications, and even complete virtualized computing systems) in “the cloud,” without regard for the underlying physical systems (or locations of those systems) used to provide the computing resources. 
Typically, cloud computing resources are provided to a user on a pay-per-use basis, where users are charged only for the computing resources actually used (e.g. an amount of storage space consumed by a user or a number of virtualized systems instantiated by the user). A user can access any of the resources that reside in the cloud at any time, and from anywhere across the Internet. In context of the present invention, a user may access applications (e.g., training service212, ML model214, or prediction service216illustrated inFIG.2) or related data available in the cloud. For example, the training service212or prediction service216could execute on a computing system in the cloud and train the ML model214, or use the ML model214to predict feature values. In such a case, the training service212or prediction service216could store the trained ML model214, or relevant prediction data (e.g., a prediction feature vector) at a storage location in the cloud. Doing so allows a user to access this information from any computing system attached to a network connected to the cloud (e.g., the Internet). While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
DETAILED DESCRIPTION The following detailed description of example implementations refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Entities (e.g., businesses) are adopting full-spectrum DevOps where software products (e.g., applications, microservices, and/or the like) are provided across different cloud environments to capitalize on benefits of the cloud and build a solution that is highly available. With increasing complexity in creating and deploying software products, providing software products across cloud environments helps entities to be more agile and more productive. Managing the software products for customers is vital to ensure that the software products continuously operate without any interruption or failure. In complex architectures built with numerous software products, intensive monitoring is required to trace communications, correlate events, identify root causes, and ensure that a software product is sustainable across any environment. Manually monitoring error logs for software products is laborious, time consuming, and increases down times for the software products. Currently there is no automated monitoring and alerting system for software products that provides personalized recommendations specific to entity needs. Existing solutions utilize models that are trained with generic data that is not specific to and/or not relevant for an entity. The generic training data may include error logs of every software product or service that is mandated for a market, whereas an entity may not require the mandated software product or service. The relevance of the trained models to the entity is questionable, and the accuracies of the trained models cannot be determined. Therefore, current techniques for monitoring and managing software products consume computing resources (e.g., processing resources, memory resources, communication resources, and/or the like), networking resources, and/or the like associated with utilizing generic models to generate non-entity specific software product errors that are not prioritized and are sub-optimal, utilizing a software product that is inoperable, providing incorrect recommendations associated with a software product, losing opportunities for the business based on the inoperable software product, correcting the inoperable software product, and/or the like. Some implementations described herein relate to a monitoring system that utilizes automatic labelling, prioritizing, and root cause analysis machine learning models to determine recommendations for software products. For example, the monitoring system may receive historical software data identifying events and logs associated with software products utilized by an entity and may process the historical software data, with a data labelling model, to generate historical health scores, historical sentiment scores, and historical dissimilarity scores for the software products. The monitoring system may combine the historical health scores, the historical sentiment scores, and the historical dissimilarity scores to determine historical error severity scores for the software products and may train a machine learning model, with the historical software data and the historical error severity scores, to generate a trained machine learning model. 
The monitoring system may receive software data identifying current logs and events associated with software products utilized by the entity and may process the software data, with the trained machine learning model, to generate error severity scores for the software products. The monitoring system may process the error severity scores, with a prioritization model, to generate prioritized error scores and may process the error severity scores and the prioritized error scores, with a root cause analysis model, to generate root cause data identifying root causes associated with the error severity scores. The monitoring system may perform one or more actions based on the root cause data. In this way, the monitoring system utilizes automatic labelling, prioritizing, and root cause analysis machine learning models to determine recommendations for software products. The monitoring system may personalize automatic data labelling, error classification, and prioritization of errors for software products of an entity. The monitoring system may analyze interactions between the software products, may intelligently identify a most critical software product for the entity at a particular time, and may utilize this information for prioritization of classified errors. The monitoring system may address important issues first, which may help downstream software products to function correctly. This may enable the monitoring system to correctly prioritize an error based on a severity impact on the entity, identify a root cause for the error, and reduce recovery time for a software product experiencing the error. This, in turn, conserves computing resources, networking resources, and/or the like that would otherwise have been consumed in utilizing generic models to generate non-entity specific software product errors that are not prioritized and are sub-optimal, utilizing a software product that is inoperable, providing incorrect recommendations associated with a software product, losing opportunities for the business based on the inoperable software product, correcting the inoperable software product, and/or the like. FIGS.1A-1Fare diagrams of an example100associated with utilizing automatic labelling, prioritizing, and root cause analysis machine learning models to determine recommendations for software products. As shown inFIGS.1A-1F, example100includes user devices and a monitoring system. Each of the user devices may include a desktop computer, a tablet computer, a laptop computer, and/or the like. The monitoring system may include a system that utilizes automatic labelling, prioritizing, and root cause analysis machine learning models to determine recommendations for software products. Further details of the user devices and the monitoring system are provided elsewhere herein. As shown inFIG.1A, and by reference number105, the monitoring system may receive, from the user devices, historical software data identifying events and logs associated with software products utilized by an entity (e.g., and specific to the entity). The user devices may be associated with developers or programmers of software products. 
The historical software data may include historical data identifying defects associated with the software products, rework effort to fix defects in the software products, emergency tickets associated with the software products, log failures or errors associated with the software products, performance issues associated with the software products, events (e.g., error messages, errors, and/or the like) associated with the software products, clusters associated with the events and/or the logs, containers associated with the events and/or the logs, cloud environments associated with the events and/or the logs, time periods associated with the events and/or the logs, computing resources associated with the events and/or the logs, operational issues associated with the software products, metrics associated with the software products, and/or the like. The monitoring system may continuously receive the historical software data, may periodically receive the historical software data, may receive the historical software data upon request by the monitoring system, and/or the like. In some implementations, the historical software data may be provided to the monitoring system via data shippers (e.g., tools that enable the historical software data, such as log files and metrics, to be easily and reliably transferred to the monitoring system). In some implementations, the monitoring system may store the historical software data in a data structure (e.g., a database, a table, a list, and/or the like) associated with the monitoring system. The data structure may include a defined size limit and may provide high throughput for insert and retrieve operations. The logs of each software product may be stored in individual collections and may be processed based on insertion order. In this way, any quantity of data load may be handled without any need for increased memory size in the data structure. In some implementations, the monitoring system may check the historical software data for new software products. When a new software product is detected, the monitoring system may cause software data associated with the new software product to be received by the monitoring system along with the historical software data. This may ensure that the software data associated with the new software product is continuously monitored. In some implementations, the different software products may execute on different nodes in a cluster. A list of unique software products executing in a cluster may be generated by the monitoring system. The monitoring system may monitor the list of software products and may receive logs associated with the list of software products. When a new software product is added to the cluster, the monitoring system may provide details of the new software product to the data structure, where a new entry may be inserted in a collection that includes the list of unique software products. In this way, all software products are continuously monitored in parallel and in real time. As shown inFIG.1B, and by reference number110, the monitoring system may process the historical software data, with a data labelling model, to generate historical health scores, historical sentiment scores, and historical dissimilarity scores for the software products. In some implementations, when processing the historical software data, with the data labelling model, to generate the historical health scores, the monitoring system generates the historical health scores based on whether the software products are operational. 
In some implementations, when processing the historical software data, with the data labelling model, to generate the historical sentiment scores, the monitoring system preprocesses the historical software data to generate preprocessed historical software data and performs a sentiment analysis on the preprocessed historical software data to generate the historical sentiment scores. In some implementations, when processing the historical software data, with the data labelling model, to generate the historical dissimilarity scores, the monitoring system generates the historical dissimilarity scores based on comparing the logs associated with the software products. When preprocessing the historical software data to generate the preprocessed historical software data, the monitoring system may perform one or more of tokenization, stop word removal, lemmatization, lowercasing, regular expression, and/or the like on the historical software data to generate the preprocessed historical software data. Tokenization may include breaking down logs into simple units. Stop word removal may include removing words from the logs that do not add value for training a machine learning model. For example, for a log error of “from server (Bad Request): container gitlab is not available,” the words “from” and “is” do not add value for training a machine learning model and may be removed. Lemmatization may include reducing a given word to a root form of the word. Lemmatization may reduce noise and may increase process execution on the historical software data. For example, words such as “logging” or “logged” may be reduced to “log” which is the root form of the words. Lowercasing may include formatting words of the logs to lowercase. Lowercasing may prevent the machine learning model from differently treating a same word with different cases and may prevent sparsity issues. Regular expression may include defining words that indicate a potential failure as important expressions, with a weight assigned to each expression based on a sentiment of the word. Regular expression may aid in matching or discovering sets of logs that contain such expressions. A brief illustrative sketch of these preprocessing steps is provided below. The health of a software product may ensure that the software product is resilient. When processing the historical software data, with the data labelling model, to generate the historical health scores, the monitoring system may determine whether the software products are operational or nonoperational. If a software product is nonoperational, the monitoring system may rate the software product at a value (e.g., one) and may calculate a weight for the software product based on the value (e.g., a weight of 0.5*1=0.5). The weight for the software product may be high since it is important to ensure that the software products are always operational. When processing the historical software data, with the data labelling model, to generate the historical sentiment scores, the monitoring system may utilize a sentiment analysis to identify sentiments of logs as either positive, negative, or neutral. Prior to identifying the sentiments of the logs, the monitoring system may preprocess the logs to remove noise and transform the logs to avoid inconsistent results, as described above. In some implementations, the monitoring system utilizes natural language processing to determine the sentiments of the logs and weights for the logs. 
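A hedged sketch of those preprocessing steps (the stop-word list, the suffix-stripping stand-in for a real lemmatizer, and the failure-expression weights are all illustrative assumptions, not taken from the implementations described here):

import re

STOP_WORDS = {"from", "is", "the", "a", "an", "to", "of"}        # illustrative subset
FAILURE_PATTERNS = {r"\berror\b": 0.8, r"\bfail(ed|ure)?\b": 0.9,
                    r"\bnot available\b": 0.7}                   # hypothetical expression weights

def preprocess_log(line):
    """Lowercase, tokenize, drop stop words, and crudely reduce words to a root
    form (a stand-in for a real lemmatizer)."""
    tokens = re.findall(r"[a-z0-9]+", line.lower())
    tokens = [t for t in tokens if t not in STOP_WORDS]
    tokens = [re.sub(r"(ging|ged|ing|ed|s)$", "", t) or t for t in tokens]
    return tokens

def failure_weight(line):
    """Sum of weights of failure-indicating regular expressions found in the log."""
    return sum(w for pat, w in FAILURE_PATTERNS.items() if re.search(pat, line.lower()))

log = "from server (Bad Request): container gitlab is not available"
print(preprocess_log(log))   # ['server', 'bad', 'request', 'container', 'gitlab', 'not', 'available']
print(failure_weight(log))   # 0.7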
For example, if a log is a highly negative sentiment log, then the monitoring system may assign a larger weight to the log. In some implementations, the monitoring system utilizes subject matter expert inputs to determine weights assigned to the logs. When processing the historical software data, with the data labelling model, to generate the historical dissimilarity scores, the monitoring system may compare the logs with pure logs (e.g., logs that do not include errors) to determine whether the logs are similar to or dissimilar from the pure logs. Based on a degree of dissimilarity, the monitoring system may assign a weight to each log. For example, the monitoring system may assign a larger weight (e.g., a value close to one) to a log with a large measure of dissimilarity. In some implementations, the data labelling model may include supervised learning capabilities that can understand and learn from the historical software data. The data labelling model may initiate multiple classifier processes for each software product, and each classifier process may wait for collection of respective logs and may process the respective logs. The data labelling model may determine sentiments of the logs, may determine how similar or dissimilar the logs are relative to other normal logs, and may determine the health of the software products. Using this information, the data labelling model may classify the logs as normal or erroneous. For erroneous logs, the data labelling model may determine software product dependency and may prioritize and provide insights about a software product based on the software product dependency. In some implementations, the data labelling model assigns actions to digital assistants to perform quick self-healing measures for the software product. As further shown inFIG.1B, and by reference number115, the monitoring system may combine the historical health scores, the historical sentiment scores, and the historical dissimilarity scores to determine historical error severity scores for the software products. In some implementations, the monitoring system combines the historical health scores, the historical sentiment scores, and the historical dissimilarity scores to determine the historical error severity scores by adding the historical health scores, the historical sentiment scores, and the historical dissimilarity scores to determine the historical error severity scores. In some implementations, the monitoring system combines the historical health scores, the historical sentiment scores, and the historical dissimilarity scores to determine the historical error severity scores by assigning weights to the historical health scores, the historical sentiment scores, and the historical dissimilarity scores to generate weighted scores, and combining the weighted scores to determine the historical error severity scores. In some implementations, each of the historical error severity scores is included in one of a first threshold severity range (e.g., a low range of greater than 0.1 and less than or equal to 0.2), a second threshold severity range (e.g., a medium range of greater than 0.2 and less than or equal to 0.5) that is greater than the first threshold severity range, or a third threshold severity range (e.g., a high range of greater than 0.5 and less than or equal to 1.0) that is greater than the second threshold severity range. 
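A hedged sketch of the score combination and the threshold bands described above (the relative weights are illustrative assumptions; the text only gives 0.5 as an example health weight):

def error_severity(health, sentiment, dissimilarity, weights=(0.5, 0.25, 0.25)):
    """Weighted combination of the three per-log scores into one severity score.
    The specific weights here are illustrative, not taken from the text."""
    w_h, w_s, w_d = weights
    return w_h * health + w_s * sentiment + w_d * dissimilarity

def severity_band(score):
    """Map a severity score onto the threshold ranges described above."""
    if 0.1 < score <= 0.2:
        return "low"
    if 0.2 < score <= 0.5:
        return "medium"
    if 0.5 < score <= 1.0:
        return "high"
    return "none"

score = error_severity(health=1.0, sentiment=0.6, dissimilarity=0.8)
print(round(score, 2), severity_band(score))   # 0.85 high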
As shown inFIG.1C, and by reference number120, the monitoring system may train a machine learning model, with the historical software data and the historical error severity scores, to generate a trained machine learning model. For example, the monitoring system may utilize the historical software data and the historical error severity scores as training data for training the machine learning model. In some implementations, once the historical error severity scores are determined, the historical error severity scores may be provided to a subject matter expert for verification. This may increase the accuracy of the training data utilized to train the machine learning model. Once there is sufficient data, the monitoring system may utilize the actual logs, the preprocessed logs, and the historical error severity scores (e.g., after subject matter expert verification) as the training data for training the machine learning model. The machine learning model may undergo a supervised learning and may understand patterns of the historical software data and the historical error severity scores through input-output pairs. Based on the supervised learning, the machine learning model may predict an output based on a new input (e.g., a new log). The monitoring system may continuously evaluate the predictions made by the machine learning model and may implement a feedback loop to train the machine learning model until machine learning model makes accurate predictions. Once the predictions made by the machine learning model satisfy a threshold level of accuracy, the monitoring system may not utilize the data labelling model for new software data. Rather, the monitoring system may process the new software data with the machine learning model. In some implementations, the monitoring system may receive feedback associated with training the machine learning model, with the historical software data and the historical error severity scores, to generate the trained machine learning model and may retrain the trained machine learning model based on the feedback. As shown inFIG.1D, and by reference number125, the monitoring system may receive software data identifying current logs and events associated with software products and may process the software data, with the trained machine learning model, to generate error severity scores for the software products. For example, the monitoring system may receive the software data from the user devices in a manner similar to the manner the historical software data is received. The software data may include data identifying defects associated with the software products, rework effort to fix defects in the software products, emergency tickets associated with the software products, log failures or errors associated with the software products, performance issues associated with the software products, events (e.g., error messages, errors, and/or the like) associated with the software products, clusters associated with the events and/or the logs, containers associated with the events and/or the logs, cloud environments associated with the events and/or the logs, time periods associated with the events and/or the logs, computing resources associated with the events and/or the logs, operational issues associated with the software products, metrics associated with the software products, and/or the like. 
The monitoring system may continuously receive the software data, may periodically receive the software data, may receive the software data upon request by the monitoring system, and/or the like. In some implementations, the software data may be provided to the monitoring system via the data shippers. In some implementations, the monitoring system may store the software data in the data structure associated with the monitoring system. When processing the software data, with the trained machine learning model, to generate the error severity scores for the software products, the monitoring system may preprocess the software data to generate preprocessed software data, may perform feature extraction on the preprocessed software data to generate values associated with error words in the preprocessed software data, and may process the values associated with the error words, with a decision tree classifier model (e.g., the machine learning model), to generate the error severity scores for the software products. The values associated with the error words may be based on quantities of times the error words occur in the logs. An illustrative sketch of this classification step is provided below. When preprocessing the software data to generate the preprocessed software data, the monitoring system may perform one or more of tokenization, stop word removal, lemmatization, lowercasing, regular expression, and/or the like on the software data to generate the preprocessed software data. Tokenization, stop word removal, lemmatization, lowercasing, and regular expression are described above in connection withFIG.1B. As shown inFIG.1E, and by reference number130, the monitoring system may process the error severity scores, with a prioritization model, to generate prioritized error scores and may process the error severity scores and the prioritized error scores, with a root cause analysis model, to generate root cause data identifying root causes associated with the error severity scores. When processing the error severity scores, with the prioritization model, to generate the prioritized error scores, the monitoring system may generate knowledge graphs for the software products and may determine the prioritized error scores based on the knowledge graphs. Generating the prioritized error scores may enable the monitoring system to identify the most important errors from the logs and the software products associated with the most important errors (e.g., and that need to be serviced). For example, if multiple error severity scores are classified as high, prioritizing such scores among the high class may enable the entity to address the most important errors first. In some implementations, the monitoring system may generate prioritized error scores by identifying the most critical software products using a service dependency graph that displays interactions between different software products through knowledge graphs. In a knowledge graph, each software product may be depicted as a node and directed edges may depict interactions and interlinks between the different software products. Based on a quantity of dependent software products, the prioritization model may generate a centrality score (e.g., between zero and one). An overall centrality score and a community-wise subgraph may be created to generate a community-based centrality score that will be different from the overall centrality score of a node. 
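A hedged sketch of the classification step described above, using error-word counts as features for a decision tree classifier (scikit-learn is one possible tool; the training logs and labels are invented for illustration):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier

train_logs = ["container gitlab not available", "request served successfully",
              "out of memory error exception", "connection timed out error"]
train_labels = ["high", "none", "high", "medium"]        # e.g., from the labelling model / SME review

vectorizer = CountVectorizer()                           # values based on how often error words occur
X_train = vectorizer.fit_transform(train_logs)
classifier = DecisionTreeClassifier(random_state=0).fit(X_train, train_labels)

new_logs = ["error: container not available"]
X_new = vectorizer.transform(new_logs)
print(classifier.predict(X_new))                         # predicted error severity class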
If a centrality score of a software product is greater than the centrality scores of the other software products in a cluster, then the software product may be considered a more influential and critical software product. The centrality score may be determined after multiple iterations of executing the prioritization model until the prioritization model arrives at an approximate solution that converges. An illustrative sketch of such a centrality computation is provided below. The prioritized error scores may be customized by assigning weights to the software products that are important based on a business requirement. When processing the error severity scores and the prioritized error scores, with the root cause analysis model, to generate the root cause data, the monitoring system may process training data with the root cause analysis model. The monitoring system may generate the root cause data based on processing the training data, the error severity scores, and the prioritized error scores with the root cause analysis model. Once the error severity scores are prioritized, the monitoring system may identify the root causes for the error severity scores through a probabilistic approach. The root cause analysis model may be generated using domain knowledge and training data derived from different metrics in a cluster. In some implementations, the root cause analysis model may be structured with domain knowledge that catalogues root causes of the error severity scores as being one or more of an infrastructure error (e.g., any resource metric related to CPU or memory of any cluster component) associated with one of the software products, a dependency error (e.g., any error arising due to abnormal functioning of other software products) associated with one of the software products, an internal error (e.g., any error caused due to misconfiguration, internal bugs, or exceptions) associated with one of the software products, and/or the like. When an event occurs, the monitoring system may identify a single error or multiple errors that have occurred at a point in time. For example, if there is an error (e.g., an “out of memory error exception”), the monitoring system may check for a bottleneck of system resources. In some implementations, the monitoring system may utilize community detection models to identify different communities associated with different software products. The monitoring system may utilize network centrality models to identify a most important software product in a particular community. Such models may identify errors occurring within a cluster and may predict errors that might occur in downstream software products that are dependent on a failing software product. In this way, the monitoring system may identify a root cause of an error, as well as predict future errors. As shown inFIG.1F, and by reference number135, the monitoring system may perform one or more actions based on the root cause data. In some implementations, the one or more actions include the monitoring system generating and providing for display one or more inferences associated with one of the software products. For example, the monitoring system may generate the one or more inferences based on the root cause data, such as an inference narrative indicating that a software product will fail based on the root cause data, an inference narrative indicating that the software product will be successful based on the root cause data, and/or the like. 
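A hedged sketch of the dependency knowledge graph and an iteratively computed centrality score (networkx and PageRank are illustrative choices; the text does not mandate a specific centrality measure, and the service names are invented):

import networkx as nx

G = nx.DiGraph()
# Directed edges depict interactions between software products (hypothetical services).
G.add_edges_from([("checkout", "payments"), ("checkout", "inventory"),
                  ("frontend", "checkout"), ("reporting", "payments")])

# PageRank is one centrality measure computed iteratively until it converges,
# yielding a score between zero and one for each node.
centrality = nx.pagerank(G)
most_critical = max(centrality, key=centrality.get)
print(sorted(centrality.items(), key=lambda kv: kv[1], reverse=True))
print("most critical service:", most_critical)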
The monitoring system may provide the one or more inferences to a user device, and the user device may display the one or more inferences to a user of the user device. In this way, the monitoring system conserves computing resources, networking resources, and/or the like that would otherwise have been consumed in utilizing generic models to generate non-entity specific software product errors that are not prioritized and are sub-optimal, utilizing a software product that is inoperable, providing incorrect recommendations associated with a software product, losing opportunities for the business based on the inoperable software product, correcting the inoperable software product, and/or the like. In some implementations, the one or more actions include the monitoring system generating and providing for display one or more recommendations associated with one of the software products. For example, the monitoring system may determine one or more recommendations based on the root cause data, such as a recommendation indicating that a software product should be modified, a recommendation indicating that the software product should be disabled, and/or the like. The monitoring system may provide the one or more recommendations to a user device, and the user device may display the one or more recommendations to a user of the user device. In this way, the monitoring system conserves computing resources, networking resources, and/or the like that would otherwise have been consumed in utilizing generic models to generate non-entity specific software product errors that are not prioritized and are sub-optimal, utilizing a software product that is inoperable, and/or the like. In some implementations, the one or more actions include the monitoring system modifying one of the software products based on the root cause data. For example, the monitoring system may determine one or more modifications to a software product based on the root cause data and may implement the one or more modifications to generate a modified software product. The monitoring system may cause the modified software product to be implemented. In this way, the monitoring system conserves computing resources, networking resources, and/or the like that would otherwise have been consumed in utilizing a software product that is inoperable, losing opportunities for the business based on the inoperable software product, and/or the like. In some implementations, the one or more actions include the monitoring system causing one of the software products to be disabled based on the root cause data. For example, the monitoring system may determine that the root cause data indicates that a software product is affecting other more important software products and should be disabled. Based on this determination, the monitoring system may cause the software product to be disabled. In this way, the monitoring system conserves computing resources, networking resources, and/or the like that would otherwise have been consumed in affecting the other more important software products, utilizing a software product that is inoperable, losing opportunities for the business based on the inoperable software product, and/or the like. In some implementations, the one or more actions include the monitoring system causing one or more programmers to modify one of the software products based on the root cause data. 
For example, the monitoring system may determine one or more modifications to a software product based on the root cause data and may provide the one or more modifications to one or more programmers via user devices. The one or more programmers may utilize the user devices to implement the one or more modifications and to generate a modified software product. The monitoring system may cause the modified software product to be implemented. In this way, the monitoring system conserves computing resources, networking resources, and/or the like that would otherwise have been consumed in utilizing a software product that is inoperable, losing opportunities for the business based on the inoperable software product, and/or the like. In some implementations, the one or more actions include the monitoring system retraining one or more of the data labelling model, the machine learning model, the prioritization model, or the root cause analysis model based on the root cause data. The monitoring system may utilize the root cause data as additional training data for retraining the one or more of the data labelling model, the machine learning model, the prioritization model, or the root cause analysis model, thereby increasing the quantity of training data available for training the one or more of the data labelling model, the machine learning model, the prioritization model, or the root cause analysis model. Accordingly, the monitoring system may conserve computing resources associated with identifying, obtaining, and/or generating historical data for training the one or more of the data labelling model, the machine learning model, the prioritization model, or the root cause analysis model relative to other systems for identifying, obtaining, and/or generating historical data for training machine learning models. The monitoring system may be utilized across any cloud provider, on-premise environment, or multi-cloud environments, with zero dependency on any tools and/or services. The monitoring system is a smart and user-friendly system that may provide real-time updates on a centralized dashboard with visual insights to gain a comprehensive understanding of software products and an environment of the software products. The monitoring system may personalize automatic data labelling, error classification, and prioritization of errors. Additionally, the monitoring system may analyze interactions of the software products, may intelligently identify a most critical software product for a client at any point in time, and may accordingly use this information for prioritization of errors. This may enable an entity to address important issues first, which may enable downstream software products to function correctly and may enable correct prioritization of events based on severity impacts on the entities. This may ease identification of a root cause for an issue and may decrease software product recovery time. The monitoring system is a cognitive, machine learning-powered system that maintains an integrity of an environment by enhancing software product availability. The monitoring system may efficiently monitor events in a cluster, may correlate behavior of different software products at the time of the event, may train the machine learning model, may provide proactive and reactive insights, may identify hidden issues, may customize automatic labelling of issues, may provide accurate and precise recommendations to quickly troubleshoot and keep the environment operational, and/or the like. 
The monitoring system may automatically label data without any manual flagging to identify severities of logs. In this way, the monitoring system optimizes time and effort spent for manual labelling and saves cost by eliminating redundant activities. The monitoring system may automatically prioritize an error by identifying a most critical software product in an environment using a context built through service discovery. The monitoring system may intuitively make contextually aware dependency identification within a cluster and may automatically detect any new software product in the environment without any manual intervention. The monitoring system may monitor the software products in an environment and may identify patterns or behavior (e.g., how frequently the software product is accessed or an impact the software product has on performance of other software products within a cluster) to identify a critical software product. The monitoring system may analyze logs of any software product operating in any domain that works on container technology in the cloud or on-premise. The monitoring system may be utilized with any project landscape that operates at any scale. The monitoring system may provide a personalized machine learning model that is trained with high-quality and relevant data according to client business requirements. The monitoring system utilizes logs specific to the software products executing in a cluster and trains the machine learning model that is exclusively used for a specific entity. The monitoring system may proactively and automatically remediate issues, thereby prolonging availability of the software products, reducing inefficiencies in the environment, saving time and effort of support engineers by increasing their productivity, and/or the like. In this way, the monitoring system utilizes automatic labelling, prioritizing, and root cause analysis machine learning models to determine recommendations for software products. The monitoring system may personalize automatic data labelling, error classification, and prioritization of errors for software products of an entity. The monitoring system may analyze interactions between the software products, may intelligently identify a most critical software product for the entity at a particular time, and may utilize this information for prioritization of classified errors. The monitoring system may address important issues first, which may help downstream software products to function correctly. This may enable the monitoring system to correctly prioritize an error based on a severity impact on the entity, identify a root cause for the error, and reduce recovery time for a software product experiencing the error. This, in turn, conserves computing resources, networking resources, and/or the like that would otherwise have been consumed in utilizing generic models to generate non-entity specific software product errors that are not prioritized and are sub-optimal, utilizing a software product that is inoperable, providing incorrect recommendations associated with a software product, losing opportunities for the business based on the inoperable software product, correcting the inoperable software product, and/or the like. As indicated above,FIGS.1A-1Fare provided as an example. Other examples may differ from what is described with regard toFIGS.1A-1F. The number and arrangement of devices shown inFIGS.1A-1Fare provided as an example. 
In practice, there may be additional devices, fewer devices, different devices, or differently arranged devices than those shown inFIGS.1A-1F. Furthermore, two or more devices shown inFIGS.1A-1Fmay be implemented within a single device, or a single device shown inFIGS.1A-1Fmay be implemented as multiple, distributed devices. Additionally, or alternatively, a set of devices (e.g., one or more devices) shown inFIGS.1A-1Fmay perform one or more functions described as being performed by another set of devices shown inFIGS.1A-1F. FIG.2is a diagram illustrating an example200of training and using a machine learning model in connection with determining recommendations for software products. The machine learning model training and usage described herein may be performed using a machine learning system. The machine learning system may include or may be included in a computing device, a server, a cloud computing environment, and/or the like, such as the monitoring system described in more detail elsewhere herein. As shown by reference number205, a machine learning model may be trained using a set of observations. The set of observations may be obtained from historical data, such as data gathered during one or more processes described herein. In some implementations, the machine learning system may receive the set of observations (e.g., as input) from the monitoring system, as described elsewhere herein. As shown by reference number210, the set of observations includes a feature set. The feature set may include a set of variables, and a variable may be referred to as a feature. A specific observation may include a set of variable values (or feature values) corresponding to the set of variables. In some implementations, the machine learning system may determine variables for a set of observations and/or variable values for a specific observation based on input received from the monitoring system. For example, the machine learning system may identify a feature set (e.g., one or more features and/or feature values) by extracting the feature set from structured data, by performing natural language processing to extract the feature set from unstructured data, by receiving input from an operator, and/or the like. As an example, a feature set for a set of observations may include a first feature of historical software data, a second feature of historical error severity scores, a third feature of feedback, and so on. As shown, for a first observation, the first feature may have a value of historical software data1, the second feature may have a value of historical error severity scores1, the third feature may have a value of feedback1, and so on. These features and feature values are provided as examples and may differ in other examples. As shown by reference number215, the set of observations may be associated with a target variable. The target variable may represent a variable having a numeric value, may represent a variable having a numeric value that falls within a range of values or has some discrete possible values, may represent a variable that is selectable from one of multiple options (e.g., one of multiple classes, classifications, labels, and/or the like), may represent a variable having a Boolean value, and/or the like. A target variable may be associated with a target variable value, and a target variable value may be specific to an observation. In example200, the target variable is predicted error severity scores, which has a value of predicted error severity scores1for the first observation. 
The target variable may represent a value that a machine learning model is being trained to predict, and the feature set may represent the variables that are input to a trained machine learning model to predict a value for the target variable. The set of observations may include target variable values so that the machine learning model can be trained to recognize patterns in the feature set that lead to a target variable value. A machine learning model that is trained to predict a target variable value may be referred to as a supervised learning model. In some implementations, the machine learning model may be trained on a set of observations that do not include a target variable. This may be referred to as an unsupervised learning model. In this case, the machine learning model may learn patterns from the set of observations without labeling or supervision, and may provide output that indicates such patterns, such as by using clustering and/or association to identify related groups of items within the set of observations. As shown by reference number220, the machine learning system may train a machine learning model using the set of observations and using one or more machine learning algorithms, such as a regression algorithm, a decision tree algorithm, a neural network algorithm, a k-nearest neighbor algorithm, a support vector machine algorithm, and/or the like. After training, the machine learning system may store the machine learning model as a trained machine learning model225to be used to analyze new observations. As shown by reference number230, the machine learning system may apply the trained machine learning model225to a new observation, such as by receiving a new observation and inputting the new observation to the trained machine learning model225. As shown, the new observation may include a first feature of historical software data X, a second feature of historical error severity scores Y, a third feature of feedback Z, and so on, as an example. The machine learning system may apply the trained machine learning model225to the new observation to generate an output (e.g., a result). The type of output may depend on the type of machine learning model and/or the type of machine learning task being performed. For example, the output may include a predicted value of a target variable, such as when supervised learning is employed. Additionally, or alternatively, the output may include information that identifies a cluster to which the new observation belongs, information that indicates a degree of similarity between the new observation and one or more other observations, and/or the like, such as when unsupervised learning is employed. As an example, the trained machine learning model225may predict a value of predicted error severity scores A for the target variable of the predicted error severity scores for the new observation, as shown by reference number235. Based on this prediction, the machine learning system may provide a first recommendation, may provide output for determination of a first recommendation, may perform a first automated action, may cause a first automated action to be performed (e.g., by instructing another device to perform the automated action), and/or the like. In some implementations, the trained machine learning model225may classify (e.g., cluster) the new observation in a cluster, as shown by reference number240. The observations within a cluster may have a threshold degree of similarity. 
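By way of illustration and not limitation, the following sketch approximates the flow around reference numbers 220, 230, and 235, together with the threshold-based recommendations and automated actions discussed below: a decision tree is trained on encoded observations and its prediction is mapped to a recommendation or automated action; the feature encodings, severity classes, threshold, and action text are assumptions made for the example.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical numeric encodings of the feature set from FIG. 2 (historical software data,
# historical error severity scores, feedback); the values are placeholders.
observations = [
    [0.2, 1.0, 0.9],
    [0.5, 2.0, 0.6],
    [0.9, 3.0, 0.1],
]
targets = [1, 2, 3]  # target variable: error severity class per observation

model = DecisionTreeClassifier(random_state=0)
model.fit(observations, targets)                     # reference number 220: train on the observation set

new_observation = [[0.8, 2.0, 0.2]]                  # reference number 230: a new observation
predicted = int(model.predict(new_observation)[0])   # reference number 235: predicted severity

# Map the prediction to a recommendation or automated action based on a threshold
# (the threshold and actions are assumptions, not values from the disclosure).
if predicted >= 3:
    print("cause an automated action: disable or restart the affected software product")
elif predicted == 2:
    print("provide a recommendation: review the software product's recent changes")
else:
    print("no action required; continue monitoring")
```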
As an example, if the machine learning system classifies the new observation in a first cluster (e.g., a historical software data cluster), then the machine learning system may provide a first recommendation. Additionally, or alternatively, the machine learning system may perform a first automated action and/or may cause a first automated action to be performed (e.g., by instructing another device to perform the automated action) based on classifying the new observation in the first cluster. As another example, if the machine learning system were to classify the new observation in a second cluster (e.g., a historical error severity scores cluster), then the machine learning system may provide a second (e.g., different) recommendation and/or may perform or cause performance of a second (e.g., different) automated action. In some implementations, the recommendation and/or the automated action associated with the new observation may be based on a target variable value having a particular label (e.g., classification, categorization, and/or the like), may be based on whether a target variable value satisfies one or more thresholds (e.g., whether the target variable value is greater than a threshold, is less than a threshold, is equal to a threshold, falls within a range of threshold values, and/or the like), may be based on a cluster in which the new observation is classified, and/or the like. In this way, the machine learning system may apply a rigorous and automated process to determine recommendations for software products. The machine learning system enables recognition and/or identification of tens, hundreds, thousands, or millions of features and/or feature values for tens, hundreds, thousands, or millions of observations, thereby increasing accuracy and consistency and reducing delay associated with determining recommendations for software products relative to requiring computing resources to be allocated for tens, hundreds, or thousands of operators to manually determine recommendations for software products. As indicated above,FIG.2is provided as an example. Other examples may differ from what is described in connection withFIG.2. FIG.3is a diagram of an example environment300in which systems and/or methods described herein may be implemented. As shown inFIG.3, environment300may include a monitoring system301, which may include one or more elements of and/or may execute within a cloud computing system302. The cloud computing system302may include one or more elements303-313, as described in more detail below. As further shown inFIG.3, environment300may include a network320and/or a user device330. Devices and/or elements of the environment300may interconnect via wired connections and/or wireless connections. The cloud computing system302includes computing hardware303, a resource management component304, a host operating system (OS)305, and/or one or more virtual computing systems306. The resource management component304may perform virtualization (e.g., abstraction) of the computing hardware303to create the one or more virtual computing systems306. Using virtualization, the resource management component304enables a single computing device (e.g., a computer, a server, and/or the like) to operate like multiple computing devices, such as by creating multiple isolated virtual computing systems306from the computing hardware303of the single computing device. 
In this way, the computing hardware303can operate more efficiently, with lower power consumption, higher reliability, higher availability, higher utilization, greater flexibility, and lower cost than using separate computing devices. The computing hardware303includes hardware and corresponding resources from one or more computing devices. For example, the computing hardware303may include hardware from a single computing device (e.g., a single server) or from multiple computing devices (e.g., multiple servers), such as multiple computing devices in one or more data centers. As shown, the computing hardware303may include one or more processors307, one or more memories308, one or more storage components309, and/or one or more networking components310. Examples of a processor, a memory, a storage component, and a networking component (e.g., a communication component) are described elsewhere herein. The resource management component304includes a virtualization application (e.g., executing on hardware, such as the computing hardware303) capable of virtualizing the computing hardware303to start, stop, and/or manage the one or more virtual computing systems306. For example, the resource management component304may include a hypervisor (e.g., a bare-metal or Type 1 hypervisor, a hosted or Type 2 hypervisor, and/or the like) or a virtual machine monitor, such as when the virtual computing systems306are virtual machines311. Additionally, or alternatively, the resource management component304may include a container manager, such as when the virtual computing systems306are containers312. In some implementations, the resource management component304executes within and/or in coordination with a host operating system305. A virtual computing system306includes a virtual environment that enables cloud-based execution of operations and/or processes described herein using computing hardware303. As shown, a virtual computing system306may include a virtual machine311, a container312, a hybrid environment313that includes a virtual machine and a container, and/or the like. A virtual computing system306may execute one or more applications using a file system that includes binary files, software libraries, and/or other resources required to execute applications on a guest operating system (e.g., within the virtual computing system306) or the host operating system305. Although the monitoring system301may include one or more elements303-313of the cloud computing system302, may execute within the cloud computing system302, and/or may be hosted within the cloud computing system302, in some implementations, the monitoring system301may not be cloud-based (e.g., may be implemented outside of a cloud computing system) or may be partially cloud-based. For example, the monitoring system301may include one or more devices that are not part of the cloud computing system302, such as device400ofFIG.4, which may include a standalone server or another type of computing device. The monitoring system301may perform one or more operations and/or processes described in more detail elsewhere herein. The network320includes one or more wired and/or wireless networks. For example, the network320may include a cellular network, a public land mobile network (PLMN), a local area network (LAN), a wide area network (WAN), a private network, the Internet, and/or the like, and/or a combination of these or other types of networks. The network320enables communication among the devices of environment300. 
The user device330includes one or more devices capable of receiving, generating, storing, processing, and/or providing information, as described elsewhere herein. The user device330may include a communication device and/or a computing device. For example, the user device330may include a wireless communication device, a user equipment (UE), a mobile phone (e.g., a smart phone or a cell phone, among other examples), a laptop computer, a tablet computer, a handheld computer, a desktop computer, a gaming device, a wearable communication device (e.g., a smart wristwatch or a pair of smart eyeglasses, among other examples), an Internet of Things (IoT) device, or a similar type of device. The user device330may communicate with one or more other devices of the environment300, as described elsewhere herein. The number and arrangement of devices and networks shown inFIG.3are provided as an example. In practice, there may be additional devices and/or networks, fewer devices and/or networks, different devices and/or networks, or differently arranged devices and/or networks than those shown inFIG.3. Furthermore, two or more devices shown inFIG.3may be implemented within a single device, or a single device shown inFIG.3may be implemented as multiple, distributed devices. Additionally, or alternatively, a set of devices (e.g., one or more devices) of the environment300may perform one or more functions described as being performed by another set of devices of environment300. FIG.4is a diagram of example components of a device400, which may correspond to the monitoring system301and/or the user device330. In some implementations, the monitoring system301and/or the user device330may include one or more devices400and/or one or more components of the device400. As shown inFIG.4, the device400may include a bus410, a processor420, a memory430, a storage component440, an input component450, an output component460, and a communication component470. The bus410includes a component that enables wired and/or wireless communication among the components of device400. The processor420includes a central processing unit, a graphics processing unit, a microprocessor, a controller, a microcontroller, a digital signal processor, a field-programmable gate array, an application-specific integrated circuit, and/or another type of processing component. The processor420is implemented in hardware, firmware, or a combination of hardware and software. In some implementations, the processor420includes one or more processors capable of being programmed to perform a function. The memory430includes a random-access memory, a read only memory, and/or another type of memory (e.g., a flash memory, a magnetic memory, and/or an optical memory). The storage component440stores information and/or software related to the operation of the device400. For example, the storage component440may include a hard disk drive, a magnetic disk drive, an optical disk drive, a solid-state disk drive, a compact disc, a digital versatile disc, and/or another type of non-transitory computer-readable medium. The input component450enables the device400to receive input, such as user input and/or sensed inputs. For example, the input component450may include a touch screen, a keyboard, a keypad, a mouse, a button, a microphone, a switch, a sensor, a global positioning system component, an accelerometer, a gyroscope, an actuator, and/or the like. 
The output component460enables the device400to provide output, such as via a display, a speaker, and/or one or more light-emitting diodes. The communication component470enables the device400to communicate with other devices, such as via a wired connection and/or a wireless connection. For example, the communication component470may include a receiver, a transmitter, a transceiver, a modem, a network interface card, an antenna, and/or the like. The device400may perform one or more processes described herein. For example, a non-transitory computer-readable medium (e.g., the memory430and/or the storage component440) may store a set of instructions (e.g., one or more instructions, code, software code, program code, and/or the like) for execution by the processor420. The processor420may execute the set of instructions to perform one or more processes described herein. In some implementations, execution of the set of instructions, by one or more processors420, causes the one or more processors420and/or the device400to perform one or more processes described herein. In some implementations, hardwired circuitry may be used instead of or in combination with the instructions to perform one or more processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software. The number and arrangement of components shown inFIG.4are provided as an example. The device400may include additional components, fewer components, different components, or differently arranged components than those shown inFIG.4. Additionally, or alternatively, a set of components (e.g., one or more components) of the device400may perform one or more functions described as being performed by another set of components of the device400. FIG.5is a flowchart of an example process500for utilizing automatic labelling, prioritizing, and root cause analysis machine learning models to determine recommendations for software products. In some implementations, one or more process blocks ofFIG.5may be performed by a device (e.g., the monitoring system301). In some implementations, one or more process blocks ofFIG.5may be performed by another device or a group of devices separate from or including the device, such as a user device (e.g., the user device330). Additionally, or alternatively, one or more process blocks ofFIG.5may be performed by one or more components of the device400, such as the processor420, the memory430, the storage component440, the input component450, the output component460, and/or the communication component470. As shown inFIG.5, process500may include receiving historical software data identifying events and logs associated with software products utilized by an entity (block510). For example, the device may receive historical software data identifying events and logs associated with software products utilized by an entity, as described above. As further shown inFIG.5, process500may include processing the historical software data, with a data labelling model, to generate historical health scores, historical sentiment scores, and historical dissimilarity scores for the software products (block520). For example, the device may process the historical software data, with a data labelling model, to generate historical health scores, historical sentiment scores, and historical dissimilarity scores for the software products, as described above. 
In some implementations, processing the historical software data, with the data labelling model, to generate the historical health scores, the historical sentiment scores, and the historical dissimilarity scores includes generating the historical health scores based on whether the software products are operational, preprocessing the historical software data to generate preprocessed historical software data, performing a sentiment analysis on the preprocessed historical software data to generate the historical sentiment scores, and generating the historical dissimilarity scores based on comparing the logs associated with the software products. In some implementations, preprocessing the historical software data to generate the preprocessed historical software data includes one or more of performing tokenization on the historical software data to generate the preprocessed historical software data, performing stop word removal on the historical software data to generate the preprocessed historical software data, performing lemmatization on the historical software data to generate the preprocessed historical software data, performing lowercasing on the historical software data to generate the preprocessed historical software data, or performing regular expression on the historical software data to generate the preprocessed historical software data. As further shown inFIG.5, process500may include combining the historical health scores, the historical sentiment scores, and the historical dissimilarity scores to determine historical error severity scores for the software products (block530). For example, the device may combine the historical health scores, the historical sentiment scores, and the historical dissimilarity scores to determine historical error severity scores for the software products, as described above. In some implementations, combining the historical health scores, the historical sentiment scores, and the historical dissimilarity scores to determine the historical error severity scores includes adding the historical health scores, the historical sentiment scores, and the historical dissimilarity scores to determine the historical error severity scores. In some implementations, combining the historical health scores, the historical sentiment scores, and the historical dissimilarity scores to determine the historical error severity scores includes assigning weights to the historical health scores, the historical sentiment scores, and the historical dissimilarity scores to generate weighted scores, and combining the weighted scores to determine the historical error severity scores. In some implementations, each of the historical error severity scores is included in one of a first threshold severity range, a second threshold severity range that is greater than the first threshold severity range, or a third threshold severity range that is greater than the second threshold severity range. As further shown inFIG.5, process500may include training a machine learning model, with the historical software data and the historical error severity scores, to generate a trained machine learning model (block540). For example, the device may train a machine learning model, with the historical software data and the historical error severity scores, to generate a trained machine learning model, as described above. As further shown inFIG.5, process500may include receiving software data identifying current logs and events associated with software products utilized by the entity (block550). 
For example, the device may receive software data identifying current logs and events associated with software products utilized by the entity, as described above. As further shown inFIG.5, process500may include processing the software data, with the trained machine learning model, to generate error severity scores for the software products (block560). For example, the device may process the software data, with the trained machine learning model, to generate error severity scores for the software products, as described above. In some implementations, processing the software data, with the trained machine learning model, to generate the error severity scores for the software products includes preprocessing the software data to generate preprocessed software data, performing feature extraction on the preprocessed software data to generate values associated with error words in the preprocessed software data, and processing the values associated with the error words, with a decision tree classifier model, to generate the error severity scores for the software products. In some implementations, preprocessing the software data to generate the preprocessed software data includes one or more of performing tokenization on the software data to generate the preprocessed software data, performing stop word removal on the software data to generate the preprocessed software data, performing lemmatization on the software data to generate the preprocessed software data, performing lowercasing on the software data to generate the preprocessed software data, or performing regular expression on the software data to generate the preprocessed software data. As further shown inFIG.5, process500may include processing the error severity scores, with a prioritization model, to generate prioritized error scores (block570). For example, the device may process the error severity scores, with a prioritization model, to generate prioritized error scores, as described above. In some implementations, processing the error severity scores, with the prioritization model, to generate the prioritized error scores includes generating knowledge graphs for the software products, and determining the prioritized error scores based on the knowledge graphs. As further shown inFIG.5, process500may include processing the error severity scores and the prioritized error scores, with a root cause analysis model, to generate root cause data identifying root causes associated with the error severity scores (block580). For example, the device may process the error severity scores and the prioritized error scores, with a root cause analysis model, to generate root cause data identifying root causes associated with the error severity scores, as described above. In some implementations, processing the error severity scores and the prioritized error scores, with the root cause analysis model, to generate root cause data includes processing training data with the root cause analysis model, wherein the training data includes a knowledge graph and metrics associated with a root cause cluster, and generating the root cause data based on processing the training data, the error severity scores, and the prioritized error scores with the root cause analysis model. In some implementations, each of the root causes includes one or more of an infrastructure error associated with one of the software products, a dependency error associated with one of the software products, or an internal error associated with one of the software products. 
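By way of illustration and not limitation, the following sketch shows one naive way the cataloguing of root causes into infrastructure, dependency, and internal errors might be approximated from simple cluster signals; the signals, thresholds, and scoring are assumptions made for the example and are not the disclosed root cause analysis model.

```python
# A naive sketch (not the disclosed model) of cataloguing a root cause as an
# infrastructure, dependency, or internal error from simple cluster signals.
def catalogue_root_cause(cpu_util, mem_util, upstream_error_rate, exception_count):
    """All threshold values and scoring rules are illustrative assumptions."""
    scores = {
        "infrastructure error": max(cpu_util, mem_util),        # resource pressure on a cluster component
        "dependency error": upstream_error_rate,                 # abnormal functioning of other products
        "internal error": min(1.0, exception_count / 10.0),      # misconfiguration, bugs, exceptions
    }
    return max(scores, key=scores.get)

# An "out of memory error exception" with high memory utilization points to an infrastructure error.
print(catalogue_root_cause(cpu_util=0.55, mem_util=0.97, upstream_error_rate=0.02, exception_count=3))
```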
As further shown inFIG.5, process500may include performing one or more actions based on the root cause data (block590). For example, the device may perform one or more actions based on the root cause data, as described above. In some implementations, performing the one or more actions based on the root cause data includes one or more of generating and providing for display one or more inferences associated with one of the software products, generating and providing for display one or more recommendations associated with one of the software products, modifying one of the software products based on the root cause data, causing one of the software products to be disabled based on the root cause data, causing one or more programmers to modify one of the software products based on the root cause data, or retraining one or more of the machine learning model, the prioritization model, or the root cause analysis model based on the root cause data. Process500may include additional implementations, such as any single implementation or any combination of implementations described below and/or in connection with one or more other processes described elsewhere herein. In some implementations, process500includes receiving feedback associated with training the machine learning model, with the historical software data and the historical error severity scores, to generate the trained machine learning model, and retraining the trained machine learning model based on the feedback. AlthoughFIG.5shows example blocks of process500, in some implementations, process500may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.5. Additionally, or alternatively, two or more of the blocks of process500may be performed in parallel. The foregoing disclosure provides illustration and description but is not intended to be exhaustive or to limit the implementations to the precise form disclosed. Modifications may be made in light of the above disclosure or may be acquired from practice of the implementations. As used herein, the term “component” is intended to be broadly construed as hardware, firmware, or a combination of hardware and software. It will be apparent that systems and/or methods described herein may be implemented in different forms of hardware, firmware, and/or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. Thus, the operation and behavior of the systems and/or methods are described herein without reference to specific software code—it being understood that software and hardware can be used to implement the systems and/or methods based on the description herein. As used herein, satisfying a threshold may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, and/or the like, depending on the context. Although particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of various implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. 
Although each dependent claim listed below may directly depend on only one claim, the disclosure of various implementations includes each dependent claim in combination with every other claim in the claim set. No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items and may be used interchangeably with “one or more.” Further, as used herein, the article “the” is intended to include one or more items referenced in connection with the article “the” and may be used interchangeably with “the one or more.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, a combination of related and unrelated items, and/or the like), and may be used interchangeably with “one or more.” Where only one item is intended, the phrase “only one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Also, as used herein, the term “or” is intended to be inclusive when used in a series and may be used interchangeably with “and/or,” unless explicitly stated otherwise (e.g., if used in combination with “either” or “only one of”). In the preceding specification, various example embodiments have been described with reference to the accompanying drawings. It will, however, be evident that various modifications and changes may be made thereto, and additional embodiments may be implemented, without departing from the broader scope of the invention as set forth in the claims that follow. The specification and drawings are accordingly to be regarded in an illustrative rather than restrictive sense.
70,756
11860722
DETAILED DESCRIPTION Various detailed embodiments of the present disclosure, taken in conjunction with the accompanying figures, are disclosed herein; however, it is to be understood that the disclosed embodiments are merely illustrative. In addition, each of the examples given in connection with the various embodiments of the present disclosure is intended to be illustrative, and not restrictive. Throughout the specification, the following terms take the meanings explicitly associated herein, unless the context clearly dictates otherwise. The phrases “in one embodiment” and “in some embodiments” as used herein do not necessarily refer to the same embodiment(s), though it may. Furthermore, the phrases “in another embodiment” and “in some other embodiments” as used herein do not necessarily refer to a different embodiment, although it may. Thus, as described below, various embodiments may be readily combined, without departing from the scope or spirit of the present disclosure. In addition, the term “based on” is not exclusive and allows for being based on additional factors not described, unless the context clearly dictates otherwise. In addition, throughout the specification, the meaning of “a,” “an,” and “the” include plural references. The meaning of “in” includes “in” and “on.” It is understood that at least one aspect/functionality of various embodiments described herein can be performed in real-time and/or dynamically. As used herein, the term “real-time” is directed to an event/action that can occur instantaneously or almost instantaneously in time when another event/action has occurred. For example, the “real-time processing,” “real-time computation,” and “real-time execution” all pertain to the performance of a computation during the actual time that the related physical process (e.g., a user interacting with an application on a mobile device) occurs, in order that results of the computation can be used in guiding the physical process. As used herein, the term “dynamically” and term “automatically,” and their logical and/or linguistic relatives and/or derivatives, mean that certain events and/or actions can be triggered and/or occur without any human intervention. In some embodiments, events and/or actions in accordance with the present disclosure can be in real-time and/or based on a predetermined periodicity of at least one of: nanosecond, several nanoseconds, millisecond, several milliseconds, second, several seconds, minute, several minutes, hourly, several hours, daily, several days, weekly, monthly, etc. As used herein, the term “runtime” corresponds to any behavior that is dynamically determined during an execution of a software application or at least a portion of software application. In some embodiments, exemplary inventive, specially programmed computing systems/platforms with associated devices are configured to operate in the distributed network environment, communicating with one another over one or more suitable data communication networks (e.g., the Internet, satellite, etc.) and utilizing one or more suitable data communication protocols/modes such as, without limitation, IPX/SPX, X.25, AX.25, AppleTalk™, TCP/IP (e.g., HTTP), Bluetooth™, near-field wireless communication (NFC), RFID, Narrow Band Internet of Things (NBIOT), 3G, 4G, 5G, GSM, GPRS, WiFi, WiMax, CDMA, satellite, ZigBee, and other suitable communication modes. Described herein are methods, systems, computer readable media, etc. 
for performing root cause analysis of computing incidents using machine learning. Various embodiments described herein include technical aspects that perform root cause analysis of incidents in ways that may reduce the number of and/or severity of actual incidents of downtime or interrupted service of computing applications, devices, and/or services; may reduce the number of computing devices used to perform a root cause analysis of an incident; and/or may reduce the time it takes to perform a root cause analysis of a computing incident. The various embodiments described herein further include technical aspects that reduce the number of clicks, actions, or interactions that may be taken by users of client devices in performing root cause analysis of an incident. The methods, systems, computer readable media, etc. for performing root cause analysis of computing incidents using machine learning described herein may include using incident data associated with historical incidents of downtime or interrupted service of computing applications, devices, and/or services. Such historical incident data may include information about each historical incident, such as when an incident took place; what applications, devices, and/or services were affected; how many users and/or locations were impacted by the incident; when the incident was corrected; and/or any other data related to the historical incident. Root cause analysis of computing incidents using machine learning described herein may further include using root cause data of historical incidents. Root cause data may include information about the root cause of each of various historical incidents. A root cause of a historical incident may indicate a condition, problem, and/or other aspect of computing applications, devices, and/or services that caused a historical incident. Furthermore, root cause analysis of computing incidents using machine learning described herein may further include using action data related to historical incidents. Action data may include information related to what action or actions were taken to correct a particular historical incident. The action or actions taken may be specific actions taken to remedy downtime or interrupted service, and/or may be specific actions taken to prevent an incident similar to a given historical incident from occurring again. The historical incident data, the root cause data, and/or the action data may be input into an untrained machine learning algorithm to train the machine learning algorithm. The machine learning algorithm may be configured to recognize patterns in the various data inputted. With sufficient input data related to the historical incidents, the machine learning algorithm may yield a trained model that is able to receive incident data related to a new incident of downtime or interrupted service of a computing application, device, and/or service. Based on the incident data related to the new incident, the trained model may predict and/or determine various data related to the new incident, such as root cause data and/or action data for the new incident. In this way, the trained model may use historical data related to historical incidents to output data about new incidents, and such output data may include root cause information about what caused an incident and/or action data about actions to take to correct the incident and/or prevent similar incidents from happening again. 
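By way of illustration and not limitation, the following sketch shows one possible realization of such a trained model: two simple text classifiers are trained on hypothetical historical incident descriptions paired with their known root causes and corrective actions, and then applied to a new incident. The incident text, labels, and choice of classifier are assumptions made for the example, not the disclosed model itself.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical historical incidents with their known root causes and the actions that were taken.
incidents = [
    "checkout latency spike after deployment, CPU saturated on payment nodes",
    "database connection pool exhausted, order service returning 500s",
    "expired TLS certificate caused login failures for mobile clients",
]
root_causes = ["infrastructure", "dependency", "configuration"]
actions = [
    "scale out payment nodes",
    "increase pool size and add circuit breaker",
    "rotate certificate and add expiry alert",
]

# Two simple text classifiers stand in for the trained model described above:
# one predicts root cause data, the other predicts action data for a new incident.
root_cause_model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000)).fit(incidents, root_causes)
action_model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000)).fit(incidents, actions)

new_incident = ["orders failing with 500 errors, connection pool warnings in logs"]
print(root_cause_model.predict(new_incident), action_model.predict(new_incident))
```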
Such methods and systems as described herein may reduce the number of and/or severity of actual incidents of downtime or interrupted service of computing applications, devices, and/or services. For example, the technical aspects of training a machine learning algorithm to receive new incident data and output root cause and/or action data for the new incidents may result in faster determinations of root cause and/or actions to take than if new incidents were analyzed manually. Accordingly, the root causes and/or actions to take for various new incidents may be identified more quickly. This may have several advantages. For example, identifying a root cause for an incident may be useful to determine actions for preventing future incidents. The sooner such a root cause can be identified, the sooner preventative actions related to that root cause may be implemented, thereby preventing additional incidents related to the same or a similar root cause. Because additional incidents may be prevented using the systems and methods described herein, the technical aspects of quickly identifying root causes of incidents provide a technical solution for reducing the number of incidents of downtime and/or interrupted service to computing applications, devices, and/or services. Another advantage includes determining the action data for a new incident more quickly than a manual analysis of a new incident. The action data may include specific actions for correcting the new incident itself and/or for preventing similar future incidents. Because a trained model as described herein may quickly determine actions for addressing and/or preventing an incident, those actions may be more quickly implemented. Therefore, the methods and systems described herein provide technical solutions that may reduce the number of incidents that occur, may reduce the amount of time used to respond to an incident, and/or may reduce the impact of an incident. For example, in various embodiments, incident data may be input into a trained model as described herein while a new incident is ongoing or after the new incident has been corrected. If new incident data is input while the incident is ongoing, the action data output by the trained model may be used to take action and correct the incident while the new incident is ongoing. Such actions may reduce the severity of an ongoing incident by quickly addressing and correcting the incident. Even if the new incident data is only input into the trained model after the incident has been corrected, the action data may include actions to take that will prevent a future incident. Implementing such actions to prevent future incidents may also be time sensitive. In other words, computing applications, devices, and/or systems may still have improved functionality based on the systems and methods herein because the occurrence of new incidents may be reduced by more quickly identifying actions that will prevent future incidents and implementing those actions. As such, the systems and methods described herein provide technical solutions to the technical problems presented by downtime and/or interrupted service incidents by reducing the number and/or severity of such incidents. The systems and methods described herein may also reduce the amount of time it takes to perform a root cause analysis and may increase the accuracy of root cause analyses performed. 
For example, an automated root cause analysis as described herein may occur more quickly than the same root cause analysis performed manually by one or more human users. Furthermore, the automated root cause analysis described herein may be more accurate than those performed by human users. For example, in a large organization, many different individuals may be involved with performing root cause analyses for varying incidents. Based on the level of expertise of those individuals in performing root cause analysis and/or other human factors, different individuals could have different outcomes even if performing a root cause analysis on the same incident. The systems and methods described herein advantageously provide for technical solutions to such imprecise root cause analysis, as the automated systems and methods described herein may perform root cause analysis more predictably and/or uniformly than if root cause analyses were performed by multiple different people for different incidents. This may increase the accuracy of such root cause analyses and thereby improve the functioning of the various computing applications, devices, and/or services for which root cause analysis of incidents is performed. The systems and methods described herein may also reduce the number of computing devices used to perform a root cause analysis of an incident. For example, in a manual root cause analysis, multiple individuals using multiple client devices may be used to perform a root cause analysis, review and revise the root cause analysis, finalize the root cause analysis, disseminate the root cause analysis and any action data derived therefrom, etc. Using the methods and systems described herein, the root cause analysis of a particular incident may be performed on one device or a limited number of devices as compared to the number of devices used in a manual root cause analysis process. In this way, various embodiments may use decreased computing resources to perform a root cause analysis and disseminate the results thereof. In other words, various steps of a manual root cause analysis may be consolidated on one or a limited number of devices (e.g., determining action data and root cause data, disseminating action data to applicable persons), and some steps of the manual root cause analysis may be omitted completely (e.g., preparation or editing of multiple drafts or iterations of a root cause analysis by different users). In this way, the systems and methods described herein may improve the functioning of an organization's computing devices, systems, etc. by reducing the number of devices required to perform a root cause analysis on a new incident. The embodiments described herein may therefore also reduce the number of clicks, touches, or other interactions taken by one or more users in an organization for a root cause analysis to be performed. As such, the methods and systems described herein provide improved ease of use for a user. For example, instead of manually reviewing voluminous incident data related to a new incident and manually inputting root cause data and action data, a user may receive a report, document, etc. detailing an incident and including root cause and/or action data automatically determined by a trained model. Such embodiments drastically decrease the number of interactions required from a user in order to perform a root cause analysis. Similarly, disseminating the root cause analysis to interested parties/users may also be performed automatically. 
For example, a trained model may determine automatically who to send a completed root cause analysis to. Such a determination may additionally or alternatively include determining parties to whom actions for preventing or correcting incidents may be assigned. In other words, the system may not only determine an action to take based on a new incident, but may also assign that action to a particular user or group of users and may communicate that action to the user or group of users automatically. Thus, the automatic dissemination of a root cause analysis and/or data related to the root cause analysis may further reduce the number of clicks, touches, or other interactions taken by users. The root cause analysis systems and methods described herein therefore solve technological problems that exist with current root cause analysis systems. The current system of manually performing root cause analyses may limit the effectiveness of such analyses (e.g., if preventative actions are not implemented in time to prevent a new incident, if corrective actions are not taken in a timely manner such that incidents are amplified or propagate). By using the technological solutions necessarily rooted in computer technology described herein to improve root cause analysis, the technical functioning of devices used to perform root cause analysis and devices, applications, and/or services impacted by incidents may improve. Therefore, based at least in part on the problems and solutions described herein, at least some embodiments of the present disclosure result in improved functioning of electronic computing devices, network resources, and/or back end servers (including cloud computing resources). The methods and systems described herein also represent demonstrable technological improvements over prior root cause analysis systems. In other words, the embodiments herein provide for a particular manner of training a machine learning algorithm for performing root cause analysis related to incidents for computing applications, devices, and/or services, rather than using conventional manual methods for performing such analysis. The material disclosed herein may be implemented in software or firmware or a combination of them or as instructions stored on a machine-readable medium, which may be read and executed by one or more processors. A machine-readable medium may include any medium and/or mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a machine-readable medium may include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.), and others. The aforementioned examples are, of course, illustrative and not restrictive. As used herein, the term “user” shall have a meaning of at least one user. In some embodiments, the terms “user,” “subscriber,” “consumer,” or “customer” should be understood to refer to a user of an application or applications as described herein and/or a consumer of data supplied by a data provider. By way of example, and not limitation, the terms “user” or “subscriber” can refer to a person who receives data provided by the data or service provider over the Internet in a browser session, or can refer to an automated software application which receives the data and stores or processes the data. 
FIG.1is a block diagram depicting a computer-based system and platform in accordance with one or more embodiments of the present disclosure. However, not all of these components may be required to practice one or more embodiments, and variations in the arrangement and type of the components may be made without departing from the spirit or scope of various embodiments of the present disclosure. In some embodiments, the exemplary inventive computing devices and/or the exemplary inventive computing components of the exemplary computer-based system/platform100may be configured to manage a large number of members and/or concurrent transactions, as detailed herein. In some embodiments, the exemplary computer-based system/platform100may be based on a scalable computer and/or network architecture that incorporates various strategies for assessing the data, caching, searching, and/or database connection pooling. An example of the scalable architecture is an architecture that is capable of operating multiple servers. In some embodiments, referring toFIG.1, members102-104(e.g., clients) of the exemplary computer-based system/platform100may include virtually any computing device capable of receiving and sending a message over a network (e.g., cloud network), such as network105, to and from another computing device, such as servers106and107, each other, and the like. In some embodiments, the member devices102-104may be personal computers, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, and the like. In some embodiments, one or more member devices within member devices102-104may include computing devices that typically connect using a wireless communications medium such as cell phones, smart phones, pagers, walkie talkies, radio frequency (RF) devices, infrared (IR) devices, CBs, integrated devices combining one or more of the preceding devices, or virtually any mobile computing device, and the like. In some embodiments, one or more member devices within member devices102-104may be devices that are capable of connecting using a wired or wireless communication medium such as a PDA, POCKET PC, wearable computer, a laptop, tablet, desktop computer, a netbook, a video game device, a pager, a smart phone, an ultra-mobile personal computer (UMPC), and/or any other device that is equipped to communicate over a wired and/or wireless communication medium (e.g., NFC, RFID, NBIOT, 3G, 4G, 5G, GSM, GPRS, WiFi, WiMax, CDMA, satellite, ZigBee, etc.). In some embodiments, one or more member devices within member devices102-104may run one or more applications, such as Internet browsers, mobile applications, voice calls, video games, videoconferencing, and email, among others. In some embodiments, one or more member devices within member devices102-104may be configured to receive and to send web pages, and the like. In some embodiments, an exemplary specifically programmed browser application of the present disclosure may be configured to receive and display graphics, text, multimedia, and the like, employing virtually any web-based language, including, but not limited to Standard Generalized Markup Language (SGML), such as HyperText Markup Language (HTML), a wireless application protocol (WAP), a Handheld Device Markup Language (HDML), such as Wireless Markup Language (WML), WMLScript, XML, JavaScript, and the like. 
In some embodiments, a member device within member devices102-104may be specifically programmed with Java, .Net, QT, C, C++, and/or other suitable programming languages. In some embodiments, one or more member devices within member devices102-104may be specifically programmed to include or execute an application to perform a variety of possible tasks, such as, without limitation, messaging functionality, browsing, searching, playing, streaming or displaying various forms of content, including locally stored or uploaded messages, images and/or video, and/or games. In some embodiments, the exemplary network105may provide network access, data transport and/or other services to any computing device coupled to it. In some embodiments, the exemplary network105may include and implement at least one specialized network architecture that may be based at least in part on one or more standards set by, for example, without limitation, Global System for Mobile communication (GSM) Association, the Internet Engineering Task Force (IETF), and the Worldwide Interoperability for Microwave Access (WiMAX) forum. In some embodiments, the exemplary network105may implement one or more of a GSM architecture, a General Packet Radio Service (GPRS) architecture, a Universal Mobile Telecommunications System (UMTS) architecture, and an evolution of UMTS referred to as Long Term Evolution (LTE). In some embodiments, the exemplary network105may include and implement, as an alternative or in conjunction with one or more of the above, a WiMAX architecture defined by the WiMAX forum. In some embodiments and, optionally, in combination with any embodiment described above or below, the exemplary network105may also include, for instance, at least one of a local area network (LAN), a wide area network (WAN), the Internet, a virtual LAN (VLAN), an enterprise LAN, a layer 3 virtual private network (VPN), an enterprise IP network, or any combination thereof. In some embodiments and, optionally, in combination with any embodiment described above or below, at least one computer network communication over the exemplary network105may be transmitted based at least in part on one or more communication modes such as but not limited to: NFC, RFID, Narrow Band Internet of Things (NBIOT), ZigBee, 3G, 4G, 5G, GSM, GPRS, WiFi, WiMax, CDMA, satellite and any combination thereof. In some embodiments, the exemplary network105may also include mass storage, such as network attached storage (NAS), a storage area network (SAN), a content delivery network (CDN) or other forms of computer or machine readable media. In some embodiments, the exemplary server106or the exemplary server107may be a web server (or a series of servers) running a network operating system, examples of which may include but are not limited to Microsoft Windows Server, Novell NetWare, or Linux. In some embodiments, the exemplary server106or the exemplary server107may be used for and/or provide cloud and/or network computing. Although not shown inFIG.1, in some embodiments, the exemplary server106or the exemplary server107may have connections to external systems like email, SMS messaging, text messaging, ad content providers, etc. Any of the features of the exemplary server106may be also implemented in the exemplary server107and vice versa. 
In some embodiments, one or more of the exemplary servers106and107may be specifically programmed to perform, in non-limiting example, as authentication servers, search servers, email servers, social networking services servers, SMS servers, IM servers, MMS servers, exchange servers, photo-sharing services servers, advertisement providing servers, financial/banking-related services servers, travel services servers, or any similarly suitable service-based servers for users of the member computing devices101-104. In some embodiments and, optionally, in combination with any embodiment described above or below, for example, one or more exemplary computing member devices102-104, the exemplary server106, and/or the exemplary server107may include a specifically programmed software module that may be configured to send, process, and receive information using a scripting language, a remote procedure call, an email, a tweet, Short Message Service (SMS), Multimedia Message Service (MMS), instant messaging (IM), internet relay chat (IRC), mIRC, Jabber, an application programming interface, Simple Object Access Protocol (SOAP) methods, Common Object Request Broker Architecture (CORBA), HTTP (Hypertext Transfer Protocol), REST (Representational State Transfer), or any combination thereof.
FIG.2depicts a block diagram of another exemplary computer-based system/platform200in accordance with one or more embodiments of the present disclosure. However, not all of these components may be required to practice one or more embodiments, and variations in the arrangement and type of the components may be made without departing from the spirit or scope of various embodiments of the present disclosure. In some embodiments, the member computing devices202a,202bthrough202nshown each at least includes a computer-readable medium, such as a random-access memory (RAM)208coupled to a processor210or FLASH memory. In some embodiments, the processor210may execute computer-executable program instructions stored in memory208. In some embodiments, the processor210may include a microprocessor, an ASIC, and/or a state machine. In some embodiments, the processor210may include, or may be in communication with, media, for example computer-readable media, which stores instructions that, when executed by the processor210, may cause the processor210to perform one or more steps described herein. In some embodiments, examples of computer-readable media may include, but are not limited to, an electronic, optical, magnetic, or other storage or transmission device capable of providing a processor, such as the processor210of client202a, with computer-readable instructions. In some embodiments, other examples of suitable media may include, but are not limited to, a floppy disk, CD-ROM, DVD, magnetic disk, memory chip, ROM, RAM, an ASIC, a configured processor, all optical media, all magnetic tape or other magnetic media, or any other medium from which a computer processor can read instructions. Also, various other forms of computer-readable media may transmit or carry instructions to a computer, including a router, private or public network, or other transmission device or channel, both wired and wireless. In some embodiments, the instructions may comprise code from any computer-programming language, including, for example, C, C++, Visual Basic, Java, Python, Perl, JavaScript, etc. 
In some embodiments, member computing devices202athrough202nmay also comprise a number of external or internal devices such as a mouse, a CD-ROM, DVD, a physical or virtual keyboard, a display, or other input or output devices. In some embodiments, examples of member computing devices202athrough202n(e.g., clients) may be any type of processor-based platforms that are connected to a network206such as, without limitation, personal computers, digital assistants, personal digital assistants, smart phones, pagers, digital tablets, laptop computers, Internet appliances, and other processor-based devices. In some embodiments, member computing devices202athrough202nmay be specifically programmed with one or more application programs in accordance with one or more principles/methodologies detailed herein. In some embodiments, member computing devices202athrough202nmay operate on any operating system capable of supporting a browser or browser-enabled application, such as Microsoft™ Windows™, and/or Linux. In some embodiments, member computing devices202athrough202nshown may include, for example, personal computers executing a browser application program such as Microsoft Corporation's Internet Explorer™, Apple Computer, Inc.'s Safari™, Mozilla Firefox, and/or Opera. In some embodiments, through the member computing client devices202athrough202n, users212athrough212n, may communicate over the exemplary network206with each other and/or with other systems and/or devices coupled to the network206. As shown inFIG.2, exemplary server devices204and213may be also coupled to the network206. In some embodiments, one or more member computing devices202athrough202nmay be mobile clients. In some embodiments, at least one database of exemplary databases207and215may be any type of database, including a database managed by a database management system (DBMS). In some embodiments, an exemplary DBMS-managed database may be specifically programmed as an engine that controls organization, storage, management, and/or retrieval of data in the respective database. In some embodiments, the exemplary DBMS-managed database may be specifically programmed to provide the ability to query, backup and replicate, enforce rules, provide security, compute, perform change and access logging, and/or automate optimization. In some embodiments, the exemplary DBMS-managed database may be chosen from Oracle database, IBM DB2, Adaptive Server Enterprise, FileMaker, Microsoft Access, Microsoft SQL Server, MySQL, PostgreSQL, and a NoSQL implementation. In some embodiments, the exemplary DBMS-managed database may be specifically programmed to define each respective schema of each database in the exemplary DBMS, according to a particular database model of the present disclosure which may include a hierarchical model, network model, relational model, object model, or some other suitable organization that may result in one or more applicable data structures that may include fields, records, files, and/or objects. In some embodiments, the exemplary DBMS-managed database may be specifically programmed to include metadata about the data that is stored. As also shown inFIGS.2and3, some embodiments of the disclosed technology may also include and/or involve one or more cloud components225, which are shown grouped together in the drawing for sake of illustration, though may be distributed in various ways as known in the art. 
Cloud components225may include one or more cloud services such as software applications (e.g., queue, etc.), one or more cloud platforms (e.g., a Web front-end, etc.), cloud infrastructure (e.g., virtual machines, etc.), and/or cloud storage (e.g., cloud databases, etc.). According to some embodiments shown by way of one example inFIG.4, the exemplary inventive computer-based systems/platforms, the exemplary inventive computer-based devices, components and media, and/or the exemplary inventive computer-implemented methods of the present disclosure may be specifically configured to operate in or with cloud computing/architecture such as, but not limited to: infrastructure as a service (IaaS)410, platform as a service (PaaS)408, and/or software as a service (SaaS)406.FIGS.3and4illustrate schematics of exemplary implementations of the cloud computing/architecture(s) in which the exemplary inventive computer-based systems/platforms, the exemplary inventive computer-implemented methods, and/or the exemplary inventive computer-based devices, components and/or media of the present disclosure may be specifically configured to operate. In various embodiments, different aspects described with respect toFIGS.1-4may be used. For example, incidents may occur on one or more of the client devices102,103,104, or202athrough202n; the networks105or206; the server devices106,107,204, or213; the network databases207or215; and/or the one or more cloud components225. In addition, historical and/or new incidents as described herein may occur related to software applications, hardware components, software services, or any other aspect of the devices or components shown inFIGS.1-4. One or more of the client devices102,103,104, or202athrough202nmay also be used to receive information related to incidents, such as an automated root cause analysis relating to a new incident and performed by a trained model. One or more of the client devices102,103,104, or202athrough202nmay also be used to make manual changes to an automatically generated root cause analysis. Such changes may be used to further train or refine a machine learning algorithm or model, whether that machine learning algorithm is already a fully trained model or not. For example, a trained model may be used to perform an automated root cause analysis. The root cause analysis may be sent to one of the client devices102,103,104, or202athrough202n. The user may input into one of the client devices102,103,104, or202athrough202nchanges for that root cause analysis or any of the data output related to that root cause analysis. Those changes may then be input back into the trained model to further train or refine the machine learning algorithm. In this way, the machine learning algorithm may continuously adapt. Any of the client devices102,103,104, or202athrough202n; the networks105or206; the server devices106,107,204, or213; the network databases207or215; and/or the one or more cloud components225may also be used to store and/or train a machine learning algorithm used to perform root cause analysis for incidents as described herein. For example, an algorithm may be located on a server or cloud server, and historical incident data may also be located on the same or different servers or cloud servers. The historical incident data may then be input into an untrained machine learning algorithm until the machine learning algorithm is trained. 
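By way of a purely illustrative, non-limiting sketch of this training step, the following Python code fits one classifier per manually entered label type over a simple text representation of the automatically gathered incident fields. The field names, the text featurization, and the helper functions (incident_to_text, train_root_cause_models) are hypothetical assumptions for illustration only and are not required by the present disclosure; a scikit-learn-style classifier is assumed as one of many possible algorithm choices.

# Purely illustrative sketch (not a required implementation): fit one classifier
# per manually entered label over a text representation of gathered incident fields.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline

def incident_to_text(incident: dict) -> str:
    """Flatten automatically gathered incident fields into a single text blob."""
    return " ".join([
        str(incident.get("affected_service", "")),       # identification information
        str(incident.get("technology_layer", "")),
        str(incident.get("description", "")),             # written description
        str(incident.get("recent_change_summary", "")),   # version history information
        f"duration_minutes={incident.get('duration_minutes', 0)}",
    ])

def train_root_cause_models(historical_incidents: list) -> dict:
    """Fit a separate classifier for each manually entered target label."""
    texts = [incident_to_text(incident) for incident in historical_incidents]
    models = {}
    for target in ("root_cause_category", "action_category", "responsible_team"):
        labels = [incident[target] for incident in historical_incidents]
        pipeline = Pipeline([
            ("features", TfidfVectorizer()),
            ("classifier", RandomForestClassifier(n_estimators=200, random_state=0)),
        ])
        pipeline.fit(texts, labels)
        models[target] = pipeline
    return models

In this sketch, each manually entered label type (root cause category, action category, responsible team) is fit by its own classifier; any of the other algorithm families discussed herein could be substituted without changing the overall flow.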
The trained model may also be stored on a server or cloud server, and may receive new incident data from any of the client devices102,103,104, or202athrough202n; the networks105or206; the server devices106,107,204, or213; the network databases207or215; and/or the one or more cloud components225. In other words, the client devices shown inFIGS.1-4may be used to implement the various methods and systems described herein.
FIG.5is a flowchart illustrating a process500for performing root cause analysis of computing incidents using machine learning in accordance with one or more embodiments of the present disclosure. The process500includes operations that may be performed, for example, by one or more of the devices or components shown in and discussed above with respect toFIGS.1-4. In the process500, various inputs may be received at an untrained machine learning algorithm504. In particular, the untrained machine learning algorithm504receives historical incident data502, root cause data506, action data508, change causation data510, and responsible person or team information512. Each of these inputs may be used to train the untrained machine learning algorithm504to create a trained model514. The trained model514may be trained to recognize patterns related to computing incidents based on the various data and information input into the untrained machine learning algorithm504. For example, the historical incident data502may be automatically gathered objective data related to a computing incident of downtime or interrupted service for one or more computing devices, applications, or services. Some or all of the root cause data506, the action data508, the change causation data510, or the responsible person or team information512relating to the historical incidents in the historical incident data502may have been generated manually by one or more users performing a manual root cause analysis relating to the historical incidents. Accordingly, the untrained machine learning algorithm504may receive automatically generated data relating to historical incidents (e.g., the historical incident data502) and may receive manually generated data relating to historical incidents (e.g., the root cause data506, the action data508, the change causation data510, the responsible person or team information512). Thus, the untrained machine learning algorithm504may be trained to recognize patterns, correlations, etc. between the historical incident data502and the other types of data input into the untrained machine learning algorithm504. In this way, the trained model514may receive new incident data516including automatically collected objective data relating to new incidents and may generate and output any of root cause data, action data, change causation data, and/or responsible person/team information for the new incident at518. In other words, the trained model514has learned how to generate additional data for a new incident based on all the data inputs received at the untrained machine learning algorithm504. Any data similar to any of the root cause data506, the action data508, the change causation data510, and/or the responsible person or team information512as described herein that is input to train a machine learning algorithm may be generated by a trained model upon input of new incident data. In other words, some or all of the same types of data input to train an algorithm may be output by the algorithm based on new incident data. Further examples of such data types are described below. 
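Continuing the purely illustrative sketch above (and again assuming, rather than requiring, the hypothetical helper functions incident_to_text and train_root_cause_models introduced there), generating the output at518from new incident data516and folding a manual correction back into the training set might be sketched as follows:

# Hypothetical continuation of the previous sketch. analyze_new_incident applies the
# trained classifiers to new incident data (516) to produce suggested root cause,
# action, and responsible-team values (518); refine_with_correction appends a
# user-corrected analysis as a new labeled example and re-fits the classifiers so
# the model can keep adapting. All names are illustrative assumptions.
def analyze_new_incident(models: dict, new_incident: dict) -> dict:
    """Suggest labels for a new incident using the per-target classifiers."""
    text = incident_to_text(new_incident)
    return {target: model.predict([text])[0] for target, model in models.items()}

def refine_with_correction(historical_incidents: list, new_incident: dict,
                           corrected_labels: dict) -> dict:
    """Append the manually corrected analysis as a labeled example and re-fit."""
    historical_incidents.append({**new_incident, **corrected_labels})
    return train_root_cause_models(historical_incidents)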
The data points in each of the data types input into an untrained machine learning algorithm may correlate to one or more of the data points in any category of data type. As such, the machine learning algorithm, once trained, may be able to determine and generate any of the data types input to train the machine learning algorithm. In other words, a trained model may recognize correlations between data of disparate types relating to historical incidents, so that more accurate data may be generated for future incidents. In various embodiments, the historical incident data502may include various other types of data or information. For example, each of the historical incidents may be associated with a unique incident number. The timing information of the incident data may include other types of data such as how long (or the start/stop times when) an overall system was impacted by an incident, how long (or the start/stop times when) an end user or customer was impacted by an incident, when an incident was fully or partially detected, and/or when an incident was fully or partially resolved. Another example of historical incident data may be a type of product or service the incident affected. For example, organizations may develop their own proprietary systems that communicate with one another using proprietary protocols, such as an application programming interface (API) or application service version (ASV). The historical incident data may further indicate a product or service affected, including an API or ASV affected if applicable. In another example, the identification information of the application, device, or service may include information about a technology layer where the incident occurred. Other information included in the incident data may include a written description of an incident, a person, business unit, or group of users affected by the incident, or any other information related to an incident. The incident data may further include contextual information related to an incident. For example, if a new version of an application or service was rolled out around the incident, the procedures and steps (e.g., quality control steps) taken with regard to that new version rollout may be included in the incident data. Such data may correlate to and/or indicate that the incident was caused by the change, if, for example, a quality control step required for a particular rollout was omitted. Further incident data may indicate a state of a historical incident. For example, whether the incident has been resolved or not may be included in the incident data. The state of action items related to an incident may also be included (e.g., whether an action has been completed, when an action was completed). In this way, the machine learning algorithm may be able to determine that recurring incidents with an application, service, or device may be due to action items not being completed. In addition, the machine learning algorithm may learn to give less weight to action items that have been completed but have not yet resolved an incident based on the state information of the incident. In other words, the system may not generate recommendations of actions that appear not to have solved a given issue. Similarly, the incident data may include a manually entered state field for actions indicating whether the actions were helpful in resolving an incident, which may be used by a machine learning algorithm in a similar manner. In various embodiments, the root cause data506may include various other types of data or information. 
For example, root cause data may be categorized into broader categories and sub-categories. As such, the root cause may be determined to be of a certain broad type in a higher division of classification, and a subdivision of that broader classification may indicate a more specific root cause that is of the type indicated by the broader, higher level classification. Similar to the categorization of a root cause, tags or themes may also be applied to an incident or root cause to categorize an incident or its root cause. For example, themes or tags that may be applied to an incident or root cause may be related to automatic scaling of an application, service, or device rollout; third-party vendor involved; internal communication issues; inadequate response to an alert or action; large data transfers; failed monitoring of systems; a recovery time objective (RTO) miss; testing or validation issue; geographical latency issues; etc. As such, a trained model may also be able to assign tags or themes to new incidents. The root cause data506may include other information or data in various embodiments. For example, an impact or severity level rating (e.g., high, medium, low) may be included with a root cause. In various embodiments, the action data508may include various other types of data or information. For example, an action may be categorized as critical or not critical (or may be categorized by a degree of criticality such as high, medium, or low). The action data may further include a due date for any actions, and any particular items, services, applications, devices, etc. that the action may be applicable to. The action data may also include a written description of the action to be performed, and may include a group or person to which a particular action is assigned. In various embodiments, the change causation data510may include various other types of data or information. For example, the change causation data may include a binary (e.g., yes or no) indication of whether an incident was considered to have been caused by or related to a change in a device, application, or service. The change causation data may also indicate which particular change or changes to a device, application, or service is considered to have caused or be related to the incident. The change causation data may also include a category for the change that caused or is related to the incident. In other words, categories or tags may be used to more broadly characterize types of changes, and that data may further be used to train a machine learning algorithm, as it may correlate to one or more other types of data of interest. In various embodiments, the responsible person or team information512may include various other types of data or information. For example, the responsible person or team may be a group of employees within a business, a person or persons in a business, or both (e.g., two responsible people within a responsible business group may be designated). In various embodiments, multiple responsible persons/groups may be assigned to different actions. For example, a corrective action may be assigned to a first person or team and a preventative action may be assigned to a different person or team. In another example, a person or team may be assigned to actually complete a task or action item, while another person or team may be assigned as being responsible for ensuring that the task or action item actually gets completed. 
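As a purely illustrative sketch of how a single labeled historical incident record combining the five input data types discussed above might be represented (the schema, field names, and example values are hypothetical assumptions for illustration and not an exhaustive or required structure), such a record could be defined as follows:

# Hypothetical, non-limiting sketch of one labeled historical-incident record.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class HistoricalIncidentRecord:
    # Incident data (automatically gathered)
    incident_number: str
    affected_service: str                       # identification information
    technology_layer: str
    impact_start: datetime                      # timing information
    impact_end: datetime
    description: str
    impacted_users: int                         # impact data
    recent_change_summary: Optional[str]        # version history information
    # Root cause data (manually entered)
    root_cause_category: str                    # e.g. "testing or validation issue"
    root_cause_subcategory: str
    root_cause_tags: list[str] = field(default_factory=list)
    severity: str = "medium"                    # e.g. high / medium / low
    # Action data (manually entered)
    corrective_actions: list[str] = field(default_factory=list)
    preventative_actions: list[str] = field(default_factory=list)
    action_due_date: Optional[datetime] = None
    action_critical: bool = False
    # Change causation data (manually entered)
    caused_by_change: bool = False
    causing_change_id: Optional[str] = None
    # Responsible person or team information (manually entered)
    responsible_team: Optional[str] = None
    responsible_person: Optional[str] = None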
At an operation520, a user may make manual adjustments to any of the outputted information from the trained model514. For example, a user may adjust who the responsible person or team is for taking a follow-up action related to a new incident. As such, the manual adjustment may be made to the responsible person or team information of the output at518. A user may also manually change any of the other data output at518. That information may be input back into the trained model514, so that the machine learning algorithm may continue to iterate and learn more to generate more accurate or desirable data when new incident data is input into the trained model514. At an operation522, the system may also transmit a message to a responsible person or team based on the information generated at518for the new incident. Such a message may include any of the data generated at518. For example, the message may include action data related to the new incident, which may include one or more specific actions to be taken by the responsible person or team determined at518. In this way, the determination of corrective and/or preventative actions related to a new incident, the determination of who should be responsible for taking those corrective and/or preventative actions, and transmission of a message notifying those responsible for the action(s) may all be automatically performed by the systems and methods described herein.
FIG.6is a flowchart illustrating another process600for performing root cause analysis of computing incidents using machine learning in accordance with one or more embodiments of the present disclosure. The process600includes operations that may be performed, for example, using the various components shown in and discussed above with respect toFIGS.1-4. At an operation602, incident data associated with a plurality of historical incidents of downtime or interrupted service of one or more of a first computing application, a first computing device, or a first computing service is received at one or more computing devices where an untrained machine learning algorithm is stored and/or being trained. In some embodiments, a processor of a computing device may utilize the incident data associated with the plurality of historical incidents to train the machine learning algorithm. The incident data, for each of the plurality of historical incidents, may include identification information of which one or more of the first computing application, the first computing device, or the first computing service were affected by the downtime or the interrupted service. In other words, the incident data includes information related to what computing applications, devices, or services were actually affected for each of the plurality of historical incidents. This may be useful for training the machine learning algorithm because different types of output data (e.g., action data) may be correlated to a type or identity of application, device, or service affected by an incident. The incident data, for each of the plurality of historical incidents, may further include timing information relating to when one or more of the first computing application, the first computing device, or the first computing service were affected by the downtime or the interrupted service. The timing information may include a time of day and date when an incident began, a time of day and date when an incident ended, a duration of an incident, or other time-related metrics related to the incident. 
For example, other time-related metrics may indicate how services, devices, or applications that are not completely unavailable may have delayed service, and how long those delays are for users attempting to use those services, devices, or applications. The incident data, for each of the plurality of historical incidents, may further include version history information indicating one or more changes to the first computing application, the first computing device, or the first computing service which might have caused or were affected by the downtime or the interrupted service. The version history information may be a complete or partial list of version history information for a device, service, or application. The version history information may include a current version number, previous version number, time at which the current version was put into place, etc. The version history information may also include a change log or other information indicating what was changed in a respective version (in either a current version or previous versions). This information may be included in the incident data because some incidents and the data generated relating to the incidents may correlate to whether the computing device, service, or application recently underwent a change, what the change was related to, etc. The incident data may further include, for example, impact data indicating at least one of a number of sites, number of devices, or number of users impacted by each of the plurality of historical incidents. This may indicate the scope of an incident. Such data may further indicate how critical the devices, sites, or users impacted by the incident were. Timing information associated with the devices, sites, or users may also be included. For example, an incident may affect more devices, sites, or users over time, so the impact data may include an indication of when specific devices, sites, and/or users were affected and for how long. Similarly, the impact data may include location or other identification data indicating the identity and/or location of affected devices, sites, and/or users. At an operation604, root cause data indicating a cause of each of the plurality of historical incidents is received at one or more computing devices where an untrained machine learning algorithm is stored and/or being trained. In some embodiments, a processor of a computing device may input the root cause data associated with the plurality of historical incidents into the machine learning algorithm for training. As described herein, the root cause data may indicate what is believed to be one or more actual causes of each of the plurality of historical incidents. This may be input to train the machine learning algorithm so that the machine learning algorithm may learn to determine root causes for future incidents based on new incident data input into a trained model. At an operation606, action data indicating at least one corrective action, at least one preventative action, or both, taken or to be taken in response to each of the plurality of historical incidents is received at one or more computing devices where the untrained machine learning algorithm is stored and/or being trained. In some embodiments, a processor of a computing device may input the action data associated with the plurality of historical incidents into the machine learning algorithm for training. 
As described herein, the action data may indicate one or more preventative or corrective actions taken to address each one of the plurality of historical incidents. This action data may be input to train the machine learning algorithm so that the machine learning algorithm may produce a trained model that is configured to determine recommended preventative or corrective actions for future incidents based on new incident data input. At an operation608, a machine learning algorithm is trained using the incident data, the root cause data, and the action data, such that the training of the machine learning algorithm produces a trained model. Other types of data than incident data, root cause data, and action data may additionally or alternatively be received and used to train the machine learning algorithm as described herein. For example, change causation data indicating whether a historical incident is related to a change to the first computing application, the first computing device, or the first computing service affected may be input and used for training. In other words, a manually entered indication that an incident was related to a change to a device, application, or service may also be input to train the machine learning algorithm (and may also be output by a trained model based on received new incident data). As another example, responsible person or team information indicating who is responsible for taking the corrective action or the preventative action for each of the plurality of historical incidents may be received and used to train the machine learning algorithm. In other words, past manually entered persons or teams assigned to take corrective or preventative actions in response to incidents may be input to train the algorithm. As such, once trained, the trained model may also generate automated recommendations for persons or teams to be assigned specific actions from the action data generated for a new incident. In various examples, different types of machine learning algorithms, artificial intelligence (AI) algorithms, neural networks, etc. may be used as the untrained machine learning algorithm or as part of the untrained machine learning algorithm. Such algorithms may be used to produce a trained model as described herein. Example machine learning algorithms and/or methods including association rules (AR), collaborative filtering (CF), content-based filtering, tree-based algorithms, bagging techniques including random forest and boosting, clustering, classification, etc., or any combination thereof may be used together with embedded artificial neural networks to produce a model for performing root cause analysis as described herein. At an operation610, new incident data associated with a new incident of the downtime or the interrupted service of one or more of a second computing application, a second computing device, or a second computing service is received. In some embodiments, a processor of a computing device may utilize the trained model to analyze the new incident data and generate additional data related to the new incident based on what was learned from the incident data, the root cause data, and the action data associated with the historical incidents. In various embodiments, the new incident data related to the new incident may include data similar to the incident data associated with the historical incidents. 
For example, the new incident data may include new identification information of which one or more of the second computing application, the second computing device, or the second computing service were affected by the downtime or the interrupted service. In this way, the devices, applications, and/or services affected by the new incident may be identified. In addition, the new incident data may include new timing information relating to when one or more of the second computing application, the second computing device, or the second computing service were affected by the downtime or the interrupted service. Thus, the trained model may receive information related to exactly when the new incident occurred, as well as any other collected timing-related information associated with the new incident. The new incident data may further include new version history information indicating one or more changes to the second computing application, the second computing device, or the second computing service which might have caused or were affected by the downtime or the interrupted service. In this way, the trained model may also consider how the version history of a device, application, or service may have impacted or caused the incident. The second computing application, the second computing device, or the second computing service may or may not be the same as the first computing application, the first computing device, or the first computing service related to the historical incidents used to train the machine learning algorithm. In other words, a new incident may be related to a same application, device, or service for which historical incidents have already occurred, or the new incident may be related to a different application, device, or service than those for which historical incident data was used to train the machine learning algorithm. As such, even if a particular application, service, or device has not experienced an incident before, a processor of a computing device may utilize a trained model to generate data (e.g., action data, root cause data, responsible person/team information) relating to the new incident. In particular, at an operation612, a processor of a computing device may utilize the trained model to analyze the new incident data to determine a root cause of the new incident as well as a new corrective action, a new preventative action, or both, for the new incident. As such, machine learning and artificial intelligence may be used to perform root cause analysis for computing incidents. At least some aspects of the present disclosure will now be described with reference to the following numbered clauses.1. 
A method comprising:receiving, by one or more processors, incident data associated with a plurality of historical incidents of downtime or interrupted service of one or more of a first computing application, a first computing device, or a first computing service, wherein the incident data, for each of the plurality of historical incidents, comprises:identification information of which one or more of the first computing application, the first computing device, or the first computing service were affected by the downtime or the interrupted service;timing information relating to when one or more of the first computing application, the first computing device, or the first computing service were affected by the downtime or the interrupted service; andversion history information indicating one or more changes to the first computing application, the first computing device, or the first computing service which might have caused or were affected by the downtime or the interrupted service;receiving, by the one or more processors, root cause data indicating a cause of each of the plurality of historical incidents;receiving, by the one or more processors, action data indicating at least one corrective action, at least one preventative action, or both, taken or to be taken in response to each of the plurality of historical incidents;training, by the one or more processors, a machine learning algorithm using the incident data, the root cause data, and the action data such that the training of the machine learning algorithm creates a trained model;receiving, by the one or more processors, new incident data associated with a new incident of the downtime or the interrupted service of one or more of a second computing application, a second computing device, or a second computing service; andanalyzing, by the one or more processor, using the trained model, the new incident data to determine:a root cause of the new incident, anda new corrective action, a new preventative action, or both, for the new incident.2. The method of clause 1, wherein the new incident data comprises:new identification information of which one or more of the second computing application, the second computing device, or the second computing service were affected by the downtime or the interrupted service;new timing information relating to when one or more of the second computing application, the second computing device, or the second computing service were affected by the downtime or the interrupted service; andnew version history information indicating one or more changes to the second computing application, the second computing device, or the second computing service which might have caused or were affected by the downtime or the interrupted service.3. The method of clause 1, wherein the second computing application, the second computing device, or the second computing service affected by the new incident is one or more of the first computing application, the first computing device, or the first computing service related to the plurality of historical incidents.4. The method of clause 1, wherein the second computing application, the second computing device, or the second computing service affected by the new incident is different from the first computing application, the first computing device, or the first computing service related to the plurality of historical incidents.5. 
The method of clause 1, further comprising:receiving, by the one or more processors, change causation data indicating whether each of the plurality of historical incidents is related to a first change to the first computing application, the first computing device, or the first computing service affected; andwherein the training of the machine learning algorithm further comprises training the machine learning algorithm using the change causation data.6. The method of clause 5, wherein the analyzing of the new incident data using the trained model further comprises determining whether the new incident is related to a second change to the second computing application, the second computing device, or the second computing service.7. The method of clause 6, wherein:the new incident data comprises new version history information indicating one or more changes to the second computing application, the second computing device, or the second computing service which might have caused or were affected by the downtime or the interrupted service; andthe determining whether the new incident is related to the second change is based at least in part on the new version history information.8. The method of clause 1, further comprising:receiving, by the one or more processors, responsible person or team information indicating who is responsible for taking the corrective action or the preventative action for each of the plurality of historical incidents; andwherein the training of the machine learning algorithm further comprises training the machine learning algorithm using the responsible person or team information.9. The method of clause 8, wherein the analyzing of the new incident data using the trained model further comprises determining a new responsible person or team responsible for taking the new corrective or the new preventative action for the new incident.10. 
A system comprising:a memory; andat least one processor coupled to the memory, the processor configured to:receive incident data associated with a plurality of historical incidents of downtime or interrupted service of one or more of a first computing application, a first computing device, or a first computing service, wherein the incident data, for each of the plurality of historical incidents, comprises:identification information of which one or more of the first computing application, the first computing device, or the first computing service were affected by the downtime or the interrupted service;timing information relating to when one or more of the first computing application, the first computing device, or the first computing service were affected by the downtime or the interrupted service; andversion history information indicating one or more changes to the first computing application, the first computing device, or the first computing service which might have caused or were affected by the downtime or the interrupted service;receive root cause data indicating a cause of each of the plurality of historical incidents;receive action data indicating at least one corrective action, at least one preventative action, or both, taken or to be taken in response to each of the plurality of historical incidents; andtrain a machine learning algorithm using the incident data, the root cause data, and the action data such that the training of the machine learning algorithm creates a trained model configured to determine a root cause and a new corrective action, a new preventive action, or both, for a new incident of the downtime or the interrupted service of one or more of a second computing application, a second computing device, or a second computing service.11. The system of clause 10, wherein the second computing application, the second computing device, or the second computing service affected by the new incident is one or more of the first computing application, the first computing device, or the first computing service related to the plurality of historical incidents.12. The system of clause 10, wherein the second computing application, the second computing device, or the second computing service affected by the new incident is different from the first computing application, the first computing device, or the first computing service related to the plurality of historical incidents.13. The system of clause 10, wherein the root cause data and the action data are manually entered by one or more users for each of the plurality of historical incidents.14. The system of clause 13, wherein the incident data is automatically gathered for each of the plurality of historical incidents.15. The system of clause 14, wherein the incident data further comprises for each of the plurality of historical incidents: impact data indicating at least one of a number of sites, number of devices, or number of users impacted by each of the plurality of historical incidents.16. 
A non-transitory computer readable medium having instructions stored thereon that, upon execution by a computing device, cause the computing device to perform operations comprising:receiving new incident data associated with a new incident of downtime or interrupted service of one or more of a first computing application, first computing device, or first computing service; andanalyzing, using a trained model, the new incident data to determine:a root cause of the new incident, anda new corrective action, a new preventative action, or both, for the new incident;wherein the trained model has been trained using:historical incident data associated with a plurality of historical incidents of the downtime or the interrupted service of one or more of a second computing application, a second computing device, or a second computing service;root cause data indicating a cause of each of the plurality of historical incidents; andaction data indicating at least one corrective action, at least one preventative action, or both, taken or to be taken in response to each of the plurality of historical incidents.17. The non-transitory computer readable medium of clause 16, wherein the historical incident data, for each of the plurality of historical incidents, comprises:identification information of which one or more of the second computing application, the second computing device, or the second computing service were affected by the downtime or the interrupted service;timing information relating to when one or more of the second computing application, the second computing device, or the second computing service were affected by the downtime or the interrupted service; andversion history information indicating one or more changes to the second computing application, the second computing device, or the second computing service which might have caused or were affected by the downtime or the interrupted service.18. The non-transitory computer readable medium of clause 17, wherein the analyzing of the new incident data using the trained model further comprises determining whether the new incident is related to a change to the second computing application, the second computing device, or the second computing service based at least in part on the version history information.19. The non-transitory computer readable medium of clause 16, wherein the analyzing of the new incident data using the trained model further comprises determining a person or team responsible for taking the new corrective action or the new preventative action for the new incident.20. The non-transitory computer readable medium of clause 16, wherein the analyzing of the new incident data using the trained model further comprises determining one or more persons or teams impacted by the new incident. As used herein, the terms “computer engine” and “engine” identify at least one software component and/or a combination of at least one software component and at least one hardware component which are designed/programmed/configured to manage/control other software and/or hardware components (such as the libraries, software development kits (SDKs), objects, etc.). 
Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate array (FPGA), logic gates, registers, semiconductor device, chips, microchips, chip sets, and so forth. In some embodiments, the one or more processors may be implemented as a Complex Instruction Set Computer (CISC) or Reduced Instruction Set Computer (RISC) processors; x86 instruction set compatible processors, multi-core, or any other microprocessor or central processing unit (CPU). In various implementations, the one or more processors may be dual-core processor(s), dual-core mobile processor(s), and so forth. Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints. One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as “IP cores” may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that make the logic or processor. Of note, various embodiments described herein may, of course, be implemented using any appropriate hardware and/or computing software languages (e.g., C++, Objective-C, Swift, Java, JavaScript, Python, Perl, QT, etc.). In some embodiments, one or more of exemplary inventive computer-based systems/platforms, exemplary inventive computer-based devices, and/or exemplary inventive computer-based components of the present disclosure may include or be incorporated, partially or entirely into at least one personal computer (PC), laptop computer, ultra-laptop computer, tablet, touch pad, portable computer, handheld computer, palmtop computer, personal digital assistant (PDA), cellular telephone, combination cellular telephone/PDA, television, smart device (e.g., smart phone, smart tablet or smart television), mobile internet device (MID), messaging device, data communication device, and so forth. As used herein, the term “server” should be understood to refer to a service point which provides processing, database, and communication facilities. 
By way of example, and not limitation, the term “server” can refer to a single, physical processor with associated communications and data storage and database facilities, or it can refer to a networked or clustered complex of processors and associated network and storage devices, as well as operating software and one or more database systems and application software that support the services provided by the server. Cloud components (e.g.,FIGS.3and4) and cloud servers are examples. In some embodiments, as detailed herein, one or more of the computer-based systems of the present disclosure may obtain, manipulate, transfer, store, transform, generate, and/or output any digital object and/or data unit (e.g., from inside and/or outside of a particular application) that can be in any suitable form such as, without limitation, a file, a contact, a task, an email, a message, a map, an entire application (e.g., a calculator), data points, and other suitable data. In some embodiments, as detailed herein, one or more of the computer-based systems of the present disclosure may be implemented across one or more of various computer platforms such as, but not limited to: (1) Linux™, (2) Microsoft Windows™, (3) OS X (Mac OS), (4) Solaris™, (5) UNIX™ (6) VMWare™, (7) Android™, (8) Java Platforms™, (9) Open Web Platform, (10) Kubernetes or other suitable computer platforms. In some embodiments, illustrative computer-based systems or platforms of the present disclosure may be configured to utilize hardwired circuitry that may be used in place of or in combination with software instructions to implement features consistent with principles of the disclosure. Thus, implementations consistent with principles of the disclosure are not limited to any specific combination of hardware circuitry and software. For example, various embodiments may be embodied in many different ways as a software component such as, without limitation, a stand-alone software package, a combination of software packages, or it may be a software package incorporated as a “tool” in a larger software product. For example, exemplary software specifically programmed in accordance with one or more principles of the present disclosure may be downloadable from a network, for example, a website, as a stand-alone product or as an add-in package for installation in an existing software application. For example, exemplary software specifically programmed in accordance with one or more principles of the present disclosure may also be available as a client-server software application, or as a web-enabled software application. For example, exemplary software specifically programmed in accordance with one or more principles of the present disclosure may also be embodied as a software package installed on a hardware device. In some embodiments, illustrative computer-based systems or platforms of the present disclosure may be configured to handle numerous concurrent users that may be, but is not limited to, at least 100 (e.g., but not limited to, 100-999), at least 1,000 (e.g., but not limited to, 1,000-9,999), at least 10,000 (e.g., but not limited to, 10,000-99,999), at least 100,000 (e.g., but not limited to, 100,000-999,999), at least 1,000,000 (e.g., but not limited to, 1,000,000-9,999,999), at least 10,000,000 (e.g., but not limited to, 10,000,000-99,999,999), at least 100,000,000 (e.g., but not limited to, 100,000,000-999,999,999), at least 1,000,000,000 (e.g., but not limited to, 1,000,000,000-999,999,999,999), and so on. 
In some embodiments, exemplary inventive computer-based systems/platforms, exemplary inventive computer-based devices, and/or exemplary inventive computer-based components of the present disclosure may be configured to output to distinct, specifically programmed graphical user interface implementations of the present disclosure (e.g., a desktop, a web app., etc.). In various implementations of the present disclosure, a final output may be displayed on a displaying screen which may be, without limitation, a screen of a computer, a screen of a mobile device, or the like. In various implementations, the display may be a holographic display. In various implementations, the display may be a transparent surface that may receive a visual projection. Such projections may convey various forms of information, images, and/or objects. For example, such projections may be a visual overlay for a mobile augmented reality (MAR) application. In some embodiments, exemplary inventive computer-based systems/platforms, exemplary inventive computer-based devices, and/or exemplary inventive computer-based components of the present disclosure may be configured to be utilized in various applications which may include, but not limited to, gaming, mobile-device games, video chats, video conferences, live video streaming, video streaming and/or augmented reality applications, mobile-device messenger applications, and others similarly suitable computer-device applications. As used herein, the term “mobile electronic device,” or the like, may refer to any portable electronic device that may or may not be enabled with location tracking functionality (e.g., MAC address, Internet Protocol (IP) address, or the like). For example, a mobile electronic device can include, but is not limited to, a mobile phone, Personal Digital Assistant (PDA), Blackberry™ Pager, Smartphone, or any other reasonable mobile electronic device. As used herein, the terms “proximity detection,” “locating,” “location data,” “location information,” and “location tracking” refer to any form of location tracking technology or locating method that can be used to provide a location of, for example, a particular computing device/system/platform of the present disclosure and/or any associated computing devices, based at least in part on one or more of the following techniques/devices, without limitation: accelerometer(s), gyroscope(s), Global Positioning Systems (GPS); GPS accessed using Bluetooth™; GPS accessed using any reasonable form of wireless and/or non-wireless communication; WiFi™ server location data; Bluetooth™ based location data; triangulation such as, but not limited to, network based triangulation, WiFi™ server information based triangulation, Bluetooth™ server information based triangulation; Cell Identification based triangulation, Enhanced Cell Identification based triangulation, Uplink-Time difference of arrival (U-TDOA) based triangulation, Time of arrival (TOA) based triangulation, Angle of arrival (AOA) based triangulation; techniques and systems using a geographic coordinate system such as, but not limited to, longitudinal and latitudinal based, geodesic height based, Cartesian coordinates based; Radio Frequency Identification such as, but not limited to, Long range RFID, Short range RFID; using any form of RFID tag such as, but not limited to active RFID tags, passive RFID tags, battery assisted passive RFID tags; or any other reasonable way to determine location. 
For ease, at times the above variations are not listed or are only partially listed; this is in no way meant to be a limitation. In some embodiments, the exemplary inventive computer-based systems/platforms, the exemplary inventive computer-based devices, and/or the exemplary inventive computer-based components of the present disclosure may be configured to securely store and/or transmit data by utilizing one or more encryption techniques (e.g., private/public key pair, Triple Data Encryption Standard (3DES), block cipher algorithms (e.g., IDEA, RC2, RC5, CAST, and Skipjack), cryptographic hash algorithms (e.g., MD5, RIPEMD-160, RTR0, SHA-1, SHA-2, Tiger (TTH), WHIRLPOOL), and random number generators (RNGs)). Publications cited throughout this document are hereby incorporated by reference in their entirety. While one or more embodiments of the present disclosure have been described, it is understood that these embodiments are illustrative only, and not restrictive, and that many modifications may become apparent to those of ordinary skill in the art, including that various embodiments of the inventive methodologies, the inventive systems/platforms, and the inventive devices described herein can be utilized in any combination with each other. Further still, the various steps may be carried out in any desired order (and any desired steps may be added and/or any desired steps may be eliminated).
81,019
11860723
DETAILED DESCRIPTION OF THE DRAWINGS In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the disclosure. It will be appreciated, however, by those having skill in the art, that the disclosure may be practiced without these specific details or with an equivalent arrangement. In other cases, well-known structures and devices are shown in block diagram form to avoid unnecessarily obscuring the disclosure. FIG.1shows an example computing system100for using predicted data to parallelize processing requests. The system100may include a processing system102, a requesting system104, and/or a validation system106. The processing system102may include a communication subsystem112, a machine learning (ML) subsystem114, and/or a notification subsystem116. The communication subsystem112may receive processing requests from the requesting system104. A processing request may comprise any type of electronic communication passed to computers. A processing request may be part of a batch of processing requests. The processing request may include a time stamp indicating a deadline by which the processing request should be validated or performed. The processing requests may be transactions between two entities (e.g., credit card transactions between retail stores and customers, transactions between banks and customers, etc.). The plurality of processing requests received from the requesting system104may be scheduled to be sent in a first batch for batch processing validation at the validation system106. The validation system106may be required to validate processing requests before they are performed or stored in a database, for example, to make sure there are no errors in the requests. However, this may add additional wait time for the validation system106to verify the processing requests. To increase efficiency and parallelization, the processing system102may generate data or modified processing requests that are predicted to pass validation and correct any errors in the processing requests. The processing system102may determine, via a first model (e.g., a machine learning model as described in connection withFIGS.1-3), an error in a first processing request of the plurality of processing requests received from the requesting system104. The error may indicate that an account type does not match a transaction type. The error may indicate that a dollar amount or an account number is incorrect. In some embodiments, the error may be with the batch of processing requests as a whole. For example, the error may indicate that a volume of transactions is above a threshold. The volume of transactions for a time period (e.g., month, quarter, etc.) may be higher than a corresponding time period of a previous year. The processing system102may input the plurality of processing requests into an anomaly detection model and in response to inputting the plurality of processing requests into the anomaly detection model, the processing system102may generate, via the anomaly detection model, output indicating that a volume of processing requests received from the first computing system satisfies a threshold. For example, the volume of transactions between a retail store and customers may satisfy the threshold because the volume is more than twice as high as the same month from a year earlier. 
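The volume check described at the end of the preceding paragraph can be made concrete with a short sketch. The following Python snippet is a minimal, illustrative stand-in for the anomaly detection model, assuming each processing request carries a period label such as a year-month string; the function name, the field name, and the two-times ratio are assumptions chosen for the example rather than details prescribed by the embodiments.

```python
from collections import Counter

def flag_volume_anomaly(requests, period, baseline_period, threshold_ratio=2.0):
    """Flag a batch whose volume in `period` is at least `threshold_ratio` times
    the volume seen in a comparable historical period (for example, the same
    month one year earlier)."""
    counts = Counter(r["period"] for r in requests)
    current = counts.get(period, 0)
    baseline = counts.get(baseline_period, 0)
    # With no baseline traffic, any current traffic is treated as anomalous.
    if baseline == 0:
        return current > 0
    return current / baseline >= threshold_ratio

# Toy usage: the current month is more than twice as busy as the same month a
# year earlier, so the batch is flagged.
requests = [{"period": "2022-03"}] * 40 + [{"period": "2023-03"}] * 90
print(flag_volume_anomaly(requests, "2023-03", "2022-03"))  # True
```

In practice such a threshold test would be only one signal feeding the richer models discussed below.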
The anomaly detection model may be Bayesian network, a hidden Markov model, or a variety of other models including a machine learning model as described below in connection withFIG.3. The anomaly detection model may take as input processing requests and output an indication of whether each processing request is an anomaly or not. The output may comprise a score for each processing request. If the score is greater than a threshold score (e.g., greater than 0.7, 0.8, etc.), then the processing request may be classified as an anomaly. If the score is less than the threshold score, then the processing request may be classified as not an anomaly. The processing system102may send an alert to the requesting system104after an error is detected. For example, the alert may indicate that the volume of processing requests satisfies the threshold. The processing system102may generate a modified processing request, for example, in response to determining the error in the first processing request. The processing system102may use a second model (e.g., a machine learning model as described in connection withFIGS.1-3) to generate the modified processing request. The second model may be the same model or a different model as the first model used to determine that an error is in a processing request. The second model may be a machine learning model that has been trained to generate modified processing requests that correct errors in processing requests. For example, the machine learning model may be a supervised model that has been trained on historical data comprising processing requests, errors detected in the processing requests, and corrections made to the processing requests. In some embodiments, the processing system102may use a rules-based model to generate a modified processing request. The processing system102may determine a rule associated with the error (e.g., via regular expression matching) and generate, based on the rule, a modified processing request (e.g., by replacing an incorrect value with a value indicated by the rule). In some embodiments, the processing system102may determine a historical processing request and may replace data of a first field in the first processing request with data of a second field in the historical processing request to generate the modified processing request. In some embodiments, the processing system102may limit the number of processing requests that may be modified for a given batch of processing requests. The number of processing requests may be limited based on a probability that the validation system106will accept a corresponding batch of processing requests (e.g., that include the modified processing requests). The probability of the validation system106rejecting the batch may increase, for example, as the number of modified processing requests in a batch increase. The processing system102may determine a probability of the validation system106rejecting a batch of processing requests based on historical rejections from the validation system. For example, the processing system102may determine the number of modified processing requests in each batch and calculate a probability of rejection based on the number of modified processing requests. The processing system102may determine to limit the number of modified processing requests in a given batch such that the probability of rejection is below a threshold probability. 
For example, the processing system102may determine that including more than ten modified processing requests in a batch increases the probability of rejection beyond ten percent and thus may determine to include up to nine modified processing requests in a batch. The processing system102may replace the first processing request in the plurality of processing requests with the modified processing request. The processing system102may transmit the plurality of processing requests (e.g., with the modified processing request) to the validation system106for batch processing validation. The validation system106may determine whether there are any errors in the batch and/or whether the processing requests may be performed. In some embodiments, the processing system102may determine that the validation system106does not receive processing requests during a time period. For example, the validation system may be offline during certain hours of the day (e.g., from 6 PM to 8 AM of the following day). In response to determining that the validation system106does not receive processing requests during the time period, the processing system may transmit the modified processing request to the requesting system104before the time period is over, and may transmit the modified processing request (e.g., included in a batch of processing requests) to the validation system106after the time period is over. While waiting to receive a first validation result from the validation system106, the processing system102may transmit the modified processing request to the requesting system104for acceptance. The modified processing request may be the same as the modified processing request that is sent with a batch of processing requests to the validation system106. Alternatively, the modified processing request sent to the requesting system104may be different from the modified processing request sent to the validation system106. For example, the processing system102may send only the modification made (e.g., a change made to the account type) and an ID number of the original processing request so that the requesting system104can approve of the modification. The requesting system104may be able to confirm whether the modification made to the first processing request is correct or not. The requesting system104may be able to confirm that the data generated by a machine learning model is correct. If the modified processing request is incorrect, the requesting system104may send a corrected processing request to the processing system102. The requesting system104may send additional data indicating why the modified processing request was incorrect. For example, the additional data may indicate why one account type should be used over another account type in the modified processing request. The processing system102may use the corrected processing request and/or the additional data to train a machine learning model that was used to generate the modified processing request to improve precision and/or recall for future modified processing requests. The processing system102may receive validation results from the validation system106. The validation results may indicate that the plurality of processing requests in the first batch are error free. Alternatively, the validation results may indicate that errors were found in the batch of processing requests. The validation results may indicate the errors and requirements for fixing the errors. The processing system102may use the validation results to correct the errors. 
For example, the processing system102may input the validation results into a machine learning model to generate corrections for the errors. Additionally or alternatively, the processing system102may send the validation results to the requesting system104so that the requesting system104can correct the errors. In some embodiments, the modified processing request may fail to pass a validation test. The processing system102may receive error data indicating the failure. Based on inputting the error data into a machine learning model (e.g., a machine learning model described below in connection withFIGS.2-3), the processing system102may generate a second modified processing request. The processing system102may send the second modified processing request to the requesting system104and the validation system106for validation. The processing system102may receive, from the requesting system104, an indication that the modified processing request (e.g., the modified processing request that was sent with other processing requests to the validation system106) has been accepted and/or is correct. In response to receiving an indication from the requesting system104that the modified processing request is correct, the processing system102may perform the processing requests. For example, the processing system may finalize one or more transactions between parties indicated in the processing requests. Additionally or alternatively, the processing system102may store the processing requests (e.g., with corrected data) in a database. In some embodiments, the processing system102may receive, from the requesting system104, an indication that the modified processing request is not correct. In response, the processing system102may generate an additional modified processing request with corrected data and send the additional modified processing request to the validation system106. The processing system102may determine, based on information received from the requesting system104and after sending the plurality of processing requests to the validation system106, that the modified processing request is incorrect. In response, the processing system102may generate, based on the information received from the requesting system104, an additional modified processing request. The processing system102may send the additional modified processing request to the validation system106. The processing system102may use a reinforcement learning model to perform one or more actions described above. The machine learning model may take as input any of the data described above (e.g., error data, processing request data, validation data, etc.) and may output an action for the processing system102to perform. The machine learning model may implement a reinforcement learning policy that includes a set of actions, a set of rewards, and/or a state. For example,FIG.2shows an example reinforcement learning policy200that may be implemented or used by the machine learning model. The reinforcement learning policy200may include an action set210that indicates the actions that the machine learning model may use (e.g., the machine learning model may output an action selected from the action set210). 
For example, the actions that the machine learning model may select from may include generating a modified processing request (e.g., to correct a detected error as described above), waiting for the requesting system104to accept a modified processing request (e.g., if a probability or confidence level associated with a modified processing request fails to satisfy a threshold) before sending the modified processing request to the validation system106, sending the modified processing request for batch validation, or a variety of other actions. The reinforcement learning policy200may include a reward set (e.g., value set)220that indicates the rewards that the machine learning model obtains (e.g., as the result of the sequence of multiple actions from the action set210). The reward set220may indicate that a reward is received and the amount of the reward (e.g., 100 points) if the action is completed before the deadline. The reward set220may indicate that the amount of a reward that is received is based on the amount of time that transpires between acceptance of a modified processing request by the requesting system104and validation of a corresponding batch of processing requests by the validation system106. For example, a greater reward may be received if the time between receiving an indication that the requesting system104has accepted a modified processing request and receiving an indication that a corresponding batch has been validated is 30 seconds than if the time between receiving each indication is 12 hours. The machine learning model may implement a loss function that optimizes for the maximum reward based on the reward set220. For example, the machine learning model may be trained to select actions that lead to higher rewards (e.g., that lead to quicker acceptance of modified processing requests by the validation system106and the requesting system104). The reinforcement learning policy200may include a state230that indicates the environment or state that the machine learning model is operating in. The machine learning model may output a selection of an action based on the current state. The state230may be updated at a predetermined frequency (e.g., daily, every 2 hours, or any other frequency). The machine learning model may output an action in response to each update of the state. For example, if the state is updated at the beginning of each day, the machine learning model may output an action to take based on the action set and/or one or more weights that have been trained/adjusted in the machine learning model. The state may include an indication of outstanding processing requests, the number of errors detected in a batch of processing requests, or a variety of other information. One or more machine learning models implemented/used by the ML subsystem114may include a Q-learning network (e.g., a deep Q-learning network) that implements the reinforcement learning policy200. The requesting system104may be any computing device, including, but not limited to, a laptop computer, a tablet computer, a hand-held computer, smartphone, other computer equipment (e.g., a server or virtual server), including “smart,” wireless, wearable, and/or mobile devices. The processing system102may include one or more computing devices described above and/or may include any type of mobile terminal, fixed terminal, or other device. For example, the processing system102may be implemented as a cloud-computing system and may feature one or more component devices. 
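As a rough illustration of the reinforcement learning policy200, the sketch below implements a small tabular Q-learning agent over the action set210together with a reward that decays with the time between acceptance and validation. It is a simplified stand-in for the deep Q-learning network mentioned above; the state encoding (here, simply the number of outstanding errors), the point values, and the hyperparameters are assumptions made for the example.

```python
import random

# Illustrative action set mirroring the policy described above; the names are assumptions.
ACTIONS = ["generate_modified_request", "wait_for_acceptance", "send_for_batch_validation"]

def reward_for(elapsed_seconds, deadline_met):
    """Larger reward for a shorter gap between acceptance and validation,
    plus a bonus when the deadline is met (point values are illustrative)."""
    reward = 100.0 / (1.0 + elapsed_seconds / 60.0)
    if deadline_met:
        reward += 100.0
    return reward

class QLearningPolicy:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.q = {}  # (state, action) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def select_action(self, state):
        if random.random() < self.epsilon:  # explore occasionally
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (reward + self.gamma * best_next - old)

# Toy episode: the state is just the number of outstanding errors in the batch.
policy = QLearningPolicy()
action = policy.select_action(state=3)
policy.update(state=3, action=action, reward=reward_for(30, True), next_state=2)
```

A production policy would, of course, use a far richer state230and learn its action values over many episodes rather than a single hand-built update.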
A person skilled in the art would understand that system100is not limited to the devices shown inFIG.1. Users may, for example, utilize one or more other devices to interact with devices, one or more servers, or other components of system100. A person skilled in the art would also understand that while one or more operations are described herein as being performed by particular components of the system100, those operations may, in some embodiments, be performed by other components of the system100. As an example, while one or more operations are described herein as being performed by components of the processing system102, those operations may be performed by components of the requesting system104, and/or validation system106. In some embodiments, the various computers and systems described herein may include one or more computing devices that are programmed to perform the described functions. One or more components of the processing system102, requesting system104, and/or validation system106, may receive content and/or data via input/output (hereinafter “I/O”) paths. The one or more components of the processing system102, the requesting system104, and/or the validation system106may include processors and/or control circuitry to send and receive commands, requests, and other suitable data using the I/O paths. The control circuitry may include any suitable processing, storage, and/or I/O circuitry. Each of these devices may include a user input interface and/or user output interface (e.g., a display) for use in receiving and displaying data. It should be noted that in some embodiments, the processing system102, the requesting system104, and/or the validation system106may have neither user input interfaces nor displays and may instead receive and display content using another device (e.g., a dedicated display device such as a computer screen and/or a dedicated input device such as a remote control, mouse, voice input, etc.). One or more components and/or devices in the system100may include electronic storages. The electronic storages may include non-transitory storage media that electronically stores information. The electronic storage media of the electronic storages may include one or both of (a) system storage that is provided integrally (e.g., substantially non-removable) with servers or client devices or (ii) removable storage that is removably connectable to the servers or client devices via, for example, a port (e.g., a Universal Serial Bus (USB) port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.). The electronic storages may include one or more of optically readable storage media (e.g., optical discs, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EEPROM, random access memory (RAM), etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media. The electronic storages may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources). The electronic storages may store software algorithms, information determined by the processors, information obtained from servers, information obtained from client devices, or other information that enables the functionality as described herein. FIG.1also includes a network150. 
The network150may be the Internet, a mobile phone network, a mobile voice or data network (e.g., a 5G or LTE network), a cable network, a satellite network, a combination of these networks, or other types of communications networks or combinations of communications networks. The devices inFIG.1(e.g., processing system102, the requesting system104, and/or the validation system106) may communicate (e.g., with each other or other computing systems not shown inFIG.1) via the network150using one or more communications paths, such as a satellite path, a fiber-optic path, a cable path, a path that supports Internet communications (e.g., IPTV), free-space connections (e.g., for broadcast or other wireless signals), or any other suitable wired or wireless communications path or combination of such paths. The devices inFIG.1may include additional communication paths linking hardware, software, and/or firmware components operating together. For example, the processing system102, any component of the processing system (e.g., the communication subsystem112, the ML subsystem114, and/or the notification subsystem116), the requesting system104, and/or the validation system106may be implemented by one or more computing platforms. One or more machine learning models discussed above may be implemented (e.g., in part), for example, as shown inFIGS.1-3. With respect toFIG.3, machine learning model342may take inputs344and provide outputs346. In one use case, outputs346may be fed back to machine learning model342as input to train machine learning model342(e.g., alone or in conjunction with user indications of the accuracy of outputs346, labels associated with the inputs, or with other reference feedback information). In another use case, machine learning model342may update its configurations (e.g., weights, biases, or other parameters) based on its assessment of its prediction (e.g., outputs346) and reference feedback information (e.g., user indication of accuracy, reference labels, or other information). In another example use case, machine learning model342is a neural network and connection weights may be adjusted to reconcile differences between the neural network's prediction and the reference feedback. In a further use case, one or more neurons (or nodes) of the neural network may require that their respective errors are sent backward through the neural network to them to facilitate the update process (e.g., backpropagation of error). Updates to the connection weights may, for example, be reflective of the magnitude of error propagated backward after a forward pass has been completed. In this way, for example, the machine learning model342may be trained to generate results (e.g., modified processing requests, predicted actions as part of a reinforcement learning model, etc.) with better recall and/or precision. In some embodiments, the machine learning model342may include an artificial neural network. In some embodiments, machine learning model342may include an input layer and one or more hidden layers. Each neural unit of the machine learning model may be connected with one or more other neural units of the machine learning model342. Such connections can be enforcing or inhibitory in their effect on the activation state of connected neural units. Each individual neural unit may have a summation function which combines the values of all of its inputs together. Each connection (or the neural unit itself) may have a threshold function that a signal must surpass before it propagates to other neural units. 
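To make the feedback loop concrete, the following sketch trains a tiny feed-forward network with backpropagation on a toy problem. It is not the machine learning model342itself; the architecture, learning rate, and task are illustrative assumptions, but the forward pass, the backward propagation of error, and the reconciling weight updates mirror the mechanism described above.

```python
import numpy as np

# Tiny feed-forward network (one hidden layer) trained with backpropagation.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # toy inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # reference feedback

W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: each unit sums its weighted inputs and applies an activation.
    hidden = sigmoid(X @ W1 + b1)
    pred = sigmoid(hidden @ W2 + b2)
    # Backward pass: propagate the prediction error to every connection weight.
    d_out = (pred - y) * pred * (1 - pred)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 1.0 * hidden.T @ d_out; b2 -= 1.0 * d_out.sum(axis=0, keepdims=True)
    W1 -= 1.0 * X.T @ d_hid;      b1 -= 1.0 * d_hid.sum(axis=0, keepdims=True)

print(np.round(pred, 2))  # predictions move toward [0, 1, 1, 0] as the error is reconciled
```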
The machine learning model342may be self-learning and/or trained, rather than explicitly programmed, and may perform significantly better in certain areas of problem solving, as compared to computer programs that do not use machine learning. During training, an output layer of the machine learning model342may correspond to a classification, and an input known to correspond to that classification may be input into an input layer of the machine learning model during training. During testing, an input without a known classification may be input into the input layer, and a determined classification may be output. For example, the classification may be an indication of whether an action is predicted to be completed by a corresponding deadline or not. The machine learning model342trained by the machine learning subsystem114may include one or more embedding layers at which information or data (e.g., any data or information discussed above in connection withFIGS.1-3) is converted into one or more vector representations. The one or more vector representations of the message may be pooled at one or more subsequent layers to convert the one or more vector representations into a single vector representation. The machine learning model342may be structured as a factorization machine model. The machine learning model342may be a non-linear model and/or supervised learning model that can perform classification and/or regression. For example, the machine learning model342may be a general-purpose supervised learning algorithm that the system uses for both classification and regression tasks. Alternatively, the machine learning model342may include a Bayesian model configured to perform variational inference, for example, to predict whether an action will be completed by the deadline, and/or a communication protocol to use for sending a message (e.g., a reminder message). FIG.4shows an example flowchart of the actions involved in parallelizing processing requests. For example, process400may represent the actions taken by one or more devices shown inFIGS.1-3and described above. At405, processing system102(e.g., using one or more components in system100(FIG.1) and/or computer system500via network interface540(FIG.5)) receives processing requests. The processing requests may comprise transaction data between the requesting system104and a customer (e.g., credit card transactions, bank transactions, etc.). The processing requests may be scheduled to be sent in a first batch for batch processing validation at a second computing system (e.g. the validation system106). For example, the processing requests may include credit card transactions between a retailer and customers. The credit card transactions may be sent by a computing system of the retailer to the processing system102. At410, processing system102(e.g., using one or more components in system100(FIG.1) and/or computing system500via one or more processors510a-510nand system memory520(FIG.5)) determines an error in the received processing requests. For example, the error may be an incorrect payment amount, an incorrect account number, or a variety of other errors. The error may be detected using a machine learning model. By detecting errors, the processing system102may be able to increase the efficiency of the transaction settlement process because the errors may be corrected before validation as described in more detail below. 
At415, processing system102(e.g., using one or more components in system100(FIG.1) and/or computing system500via one or more processors510a-510n, I/O interface550, and/or system memory520(FIG.5)) generates a modified processing request. The modified processing request may be generated in response to determining the error in the first processing request. The modified processing request may be generated via a machine learning model trained to generate modified processing requests that correct errors in processing requests. For example, a machine learning model may be used to generate output to correct a transaction amount field that was determined to be erroneous at410. By generating corrections for the errors, the processing system102may enable the validation system106to validate the transactions in one validation check without additional back and forth between the validation system106and the requesting system104. At420, processing system102(e.g., using one or more components in system100(FIG.1) and/or computing system500via one or more processors510a-510n(FIG.5)) replaces a processing request with the modified processing request generated at415. For example, a processing request with an incorrect account type may be replaced with a processing request that has an account type that is predicted to be correct. At425, processing system102(e.g., using one or more components in system100(FIG.1) and/or computing system500(FIG.5)) transmits the processing requests for batch processing validation. The transmitted processing requests may include the modified processing request. For example, the processing system102may send a batch of processing requests (e.g., comprising each processing request received in the last 24 hours) to the validation system106. One or more of the processing requests may contain data generated (e.g., by a machine learning model) to correct an error in a corresponding processing request. At430, processing system102(e.g., using one or more components in system100(FIG.1) and/or computing system500via the network interface540(FIG.5)) transmits the modified processing request for acceptance by the requesting system104while waiting to receive validation results from the validation system106. For example, the modified processing request generated at415may be sent to the requesting system104. The requesting system104may have additional data (e.g., time of sale, credit card number, name of customer, etc.) that can be used to determine whether the modification made to the processing request is correct. At435, processing system102(e.g., using one or more components in system100(FIG.1) and/or computing system500(FIG.5)) may receive batch processing validation results from the validation system106. The validation results may indicate that the processing requests transmitted, by the processing system102and to the validation system106, do not contain any errors. For example, the validation results may indicate that account type in the modified processing request is correct. At440, processing system102(e.g., using one or more components in system100(FIG.1) and/or computing system500(FIG.5)) receives, from the requesting system104, an indication that the modified processing request is correct. Alternatively, the requesting system104may indicate that the modified processing request is incorrect. The requesting system104may send an additional modified processing request that corrects the error in the original processing request. 
At445, processing system102(e.g., using one or more components in system100(FIG.1) and/or computing system500(FIG.5)) generates authentication results for the processing requests. The authentication results may indicate that the processing requests may be stored in a database. The processing system102may perform the processing requests, for example, in response to receiving indications from the validation system106and the requesting system104that the processing requests (e.g., including any modified processing requests) are correct. It is contemplated that the actions or descriptions ofFIG.4may be used with any other embodiment of this disclosure. In addition, the actions and descriptions described in relation toFIG.4may be done in alternative orders or in parallel to further the purposes of this disclosure. For example, each of these actions may be performed in any order, in parallel, or simultaneously to reduce lag or increase the speed of the system or method. Furthermore, it should be noted that any of the devices or equipment discussed in relation toFIGS.1-3could be used to perform one or more of the actions inFIG.4. FIG.5is a diagram that illustrates an exemplary computing system500in accordance with embodiments of the present technique. Various portions of systems and methods described herein may include or be executed on one or more computer systems similar to computing system500. Further, processes and modules described herein may be executed by one or more processing systems similar to that of computing system500. Computing system500may include one or more processors (e.g., processors510a-510n) coupled to system memory520, an I/O device interface530, and a network interface540via an I/O interface550. A processor may include a single processor or a plurality of processors (e.g., distributed processors). A processor may be any suitable processor capable of executing or otherwise performing instructions. A processor may include a central processing unit (CPU) that carries out program instructions to perform the arithmetical, logical, and I/O operations of computing system500. A processor may execute code (e.g., processor firmware, a protocol stack, a database management system, an operating system, or a combination thereof) that creates an execution environment for program instructions. A processor may include a programmable processor. A processor may include general or special purpose microprocessors. A processor may receive instructions and data from a memory (e.g., system memory520). Computing system500may be a uni-processor system including one processor (e.g., processor510a), or a multi-processor system including any number of suitable processors (e.g.,510a-510n). Multiple processors may be employed to provide for parallel or sequential execution of one or more portions of the techniques described herein. Processes, such as logic flows, described herein may be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating corresponding output. Processes described herein may be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). Computing system500may include a plurality of computing devices (e.g., distributed computer systems) to implement various processing functions. 
I/O device interface530may provide an interface for connection of one or more I/O devices560to computer system500. I/O devices may include devices that receive input (e.g., from a user) or output information (e.g., to a user). I/O devices560may include, for example, graphical user interfaces presented on displays (e.g., a cathode ray tube (CRT) or liquid crystal display (LCD) monitor), pointing devices (e.g., a computer mouse or trackball), keyboards, keypads, touchpads, scanning devices, voice recognition devices, gesture recognition devices, printers, audio speakers, microphones, cameras, or the like. I/O devices560may be connected to computer system500through a wired or wireless connection. I/O devices560may be connected to computer system500from a remote location. I/O devices560located on a remote computer system, for example, may be connected to computer system500via a network and network interface540. Network interface540may include a network adapter that provides for connection of computer system500to a network. Network interface540may facilitate data exchange between computer system500and other devices connected to the network. Network interface540may support wired or wireless communication. The network may include an electronic communication network, such as the Internet, a local area network (LAN), a wide area network (WAN), a cellular communications network, or the like. System memory520may be configured to store program instructions570or data580. Program instructions570may be executable by a processor (e.g., one or more of processors510a-510n) to implement one or more embodiments of the present techniques. Instructions570may include modules of computer program instructions for implementing one or more techniques described herein with regard to various processing modules. Program instructions may include a computer program (which in certain forms is known as a program, software, software application, script, or code). A computer program may be written in a programming language, including compiled or interpreted languages, or declarative or procedural languages. A computer program may include a unit suitable for use in a computing environment, including as a stand-alone program, a module, a component, or a subroutine. A computer program may or may not correspond to a file in a file system. A program may be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). A computer program may be deployed to be executed on one or more computer processors located locally at one site or distributed across multiple remote sites and interconnected by a communication network. System memory520may include a tangible program carrier having program instructions stored thereon. A tangible program carrier may include a non-transitory computer readable storage medium. A non-transitory computer readable storage medium may include a machine-readable storage device, a machine-readable storage substrate, a memory device, or any combination thereof. Non-transitory computer readable storage medium may include non-volatile memory (e.g., flash memory, ROM, PROM, EPROM, EEPROM memory), volatile memory (e.g., RAM, static random access memory (SRAM), synchronous dynamic RAM (SDRAM)), bulk storage memory (e.g., CD-ROM and/or DVD-ROM, hard-drives), or the like. 
System memory520may include a non-transitory computer readable storage medium that may have program instructions stored thereon that are executable by a computer processor (e.g., one or more of processors510a-510n) to cause the subject matter and the functional operations described herein. A memory (e.g., system memory520) may include a single memory device and/or a plurality of memory devices (e.g., distributed memory devices). I/O interface550may be configured to coordinate I/O traffic between processors510a-510n, system memory520, network interface540, I/O devices560, and/or other peripheral devices. I/O interface550may perform protocol, timing, or other data transformations to convert data signals from one component (e.g., system memory520) into a format suitable for use by another component (e.g., processors510a-510n). I/O interface550may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the USB standard. Embodiments of the techniques described herein may be implemented using a single instance of computer system500or multiple computer systems500configured to host different portions or instances of embodiments. Multiple computer systems500may provide for parallel or sequential processing/execution of one or more portions of the techniques described herein. Those skilled in the art will appreciate that computer system500is merely illustrative and is not intended to limit the scope of the techniques described herein. Computer system500may include any combination of devices or software that may perform or otherwise provide for the performance of the techniques described herein. For example, computer system500may include or be a combination of a cloud-computing system, a data center, a server rack, a server, a virtual server, a desktop computer, a laptop computer, a tablet computer, a server device, a client device, a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a vehicle-mounted computer, or a Global Positioning System (GPS), or the like. Computer system500may also be connected to other devices that are not illustrated and/or may operate as a stand-alone system. In addition, the functionality provided by the illustrated components may in some embodiments be combined in fewer components or distributed in additional components. Similarly, in some embodiments, the functionality of some of the illustrated components may not be provided or other additional functionality may be available. Those skilled in the art will also appreciate that while various items are illustrated as being stored in memory or on storage while being used, these items or portions of them may be transferred between memory and other storage devices for purposes of memory management and data integrity. In some embodiments, some or all of the software components may execute in memory on another device and communicate with the illustrated computer system via inter-computer communication. Some or all of the system components or data structures may also be stored (e.g., as instructions or structured data) on a computer-accessible medium or a portable article to be read by an appropriate drive, various examples of which are described above. 
In some embodiments, instructions stored on a computer-accessible medium separate from computer system500may be transmitted to computer system500via transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network or a wireless link. Various embodiments may further include receiving, sending, or storing instructions or data implemented in accordance with the foregoing description upon a computer-accessible medium. Accordingly, the present disclosure may be practiced with other computer system configurations. In block diagrams, illustrated components are depicted as discrete functional blocks, but embodiments are not limited to systems in which the functionality described herein is organized as illustrated. The functionality provided by each of the components may be provided by software or hardware modules that are differently organized than is presently depicted, for example such software or hardware may be intermingled, conjoined, replicated, broken up, distributed (e.g., within a data center or geographically), or otherwise differently organized. The functionality described herein may be provided by one or more processors of one or more computers executing code stored on a tangible, non-transitory, machine-readable medium. In some cases, third-party content delivery networks may host some or all of the information conveyed over networks, in which case, to the extent information (e.g., content) is said to be supplied or otherwise provided, the information may be provided by sending instructions to retrieve that information from a content delivery network. Due to costs constraints, some features disclosed herein may not be presently claimed and may be claimed in later filings, such as continuation applications or by amending the present claims. Similarly, due to space constraints, neither the Abstract nor the Summary section of the present document should be taken as containing a comprehensive listing of all such disclosures or all aspects of such disclosures. It should be understood that the description and the drawings are not intended to limit the disclosure to the particular form disclosed, but to the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the appended claims. Further modifications and alternative embodiments of various aspects of the disclosure will be apparent to those skilled in the art in view of this description. Accordingly, this description and the drawings are to be construed as illustrative only and are for the purpose of teaching those skilled in the art the general manner of carrying out the disclosure. It is to be understood that the forms of the disclosure shown and described herein are to be taken as examples of embodiments. Elements and materials may be substituted for those illustrated and described herein, parts and processes may be reversed or omitted, and certain features of the disclosure may be utilized independently, all as would be apparent to one skilled in the art after having the benefit of this description of the disclosure. Changes may be made in the elements described herein without departing from the spirit and scope of the disclosure as described in the following claims. Headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description. 
As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). The words “include”, “including”, and “includes” and the like mean including, but not limited to. As used throughout this application, the singular forms “a,” “an,” and “the” include plural referents unless the content explicitly indicates otherwise. Thus, for example, reference to “an element” or “a element” includes a combination of two or more elements, notwithstanding use of other terms and phrases for one or more elements, such as “one or more.” The term “or” is, unless indicated otherwise, non-exclusive, i.e., encompassing both “and” and “or.” Terms describing conditional relationships, e.g., “in response to X, Y,” “upon X, Y,” “if X, Y,” “when X, Y,” and the like, encompass causal relationships in which the antecedent is a necessary causal condition, the antecedent is a sufficient causal condition, or the antecedent is a contributory causal condition of the consequent, e.g., “state X occurs upon condition Y obtaining” is generic to “X occurs solely upon Y” and “X occurs upon Y and Z.” Such conditional relationships are not limited to consequences that instantly follow the antecedent obtaining, as some consequences may be delayed, and in conditional statements, antecedents are connected to their consequents, e.g., the antecedent is relevant to the likelihood of the consequent occurring. Statements in which a plurality of attributes or functions are mapped to a plurality of objects (e.g., one or more processors performing actions A, B, C, and D) encompasses both all such attributes or functions being mapped to all such objects and subsets of the attributes or functions being mapped to subsets of the attributes or functions (e.g., both all processors each performing actions A-D, and a case in which processor 1 performs action A, processor 2 performs action B and part of action C, and processor 3 performs part of action C and action D), unless otherwise indicated. Further, unless otherwise indicated, statements that one value or action is “based on” another condition or value encompass both instances in which the condition or value is the sole factor and instances in which the condition or value is one factor among a plurality of factors. The term “each” is not limited to “each and every” unless indicated otherwise. Unless specifically stated otherwise, as apparent from the discussion, it is appreciated that throughout this specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining” or the like refer to actions or processes of a specific apparatus, such as a special purpose computer or a similar special purpose electronic processing/computing device. The above-described embodiments of the present disclosure are presented for purposes of illustration and not of limitation, and the present disclosure is limited only by the claims which follow. Furthermore, it should be noted that the features and limitations described in any one embodiment may be applied to any other embodiment herein, and flowcharts or examples relating to one embodiment may be combined with any other embodiment in a suitable manner, done in different orders, or done in parallel. In addition, the systems and methods described herein may be performed in real time. 
It should also be noted that the systems and/or methods described above may be applied to, or used in accordance with, other systems and/or methods. The present techniques will be better understood with reference to the following enumerated embodiments:1. A method comprising: receiving, from a first computing system, a plurality of processing requests; determining, via a first model, an error in a first processing request of the plurality of processing requests; in response to determining the error in the first processing request, generating a modified processing request using a second model; replacing the first processing request in the plurality of processing requests with the modified processing request; transmitting the plurality of processing requests to the second computing system for batch processing validation; while waiting to receive a first validation result of the plurality of processing requests from the second computing system, transmitting the modified processing request to the first computing system for acceptance; receiving, from the second computing system, the first validation result; receiving, from the first computing system, a second approval, wherein the second approval indicates that the modified processing request is accepted; and in response to receiving the second approval, authenticating the plurality of processing requests.2. The method of any of the preceding embodiments, wherein determining an error comprises: generating, via the first model, a probability indicating whether the first processing request comprises an error; and based on determining that the probability satisfies a threshold, determining an error exists in the first processing request.3. The method of any of the preceding embodiments, further comprising: receiving error data indicating that a second plurality of processing requests did not pass the verification test; based on inputting the error data into a machine learning model, generating a second modified processing request; and sending the second modified processing request to the first computing system and the second computing system.4. The method of any of the preceding embodiments, wherein generating modified processing requests comprises: determining a rule associated with the error; and generating, based on the rule, a modified processing request.5. The method of any of the preceding embodiments, wherein determining an error in the first processing request comprises: inputting the plurality of processing requests into an anomaly detection model; and in response to inputting the plurality of processing requests into the anomaly detection model, generating, via the anomaly detection model, output indicating that a volume of processing requests received from the first computing system satisfies a threshold.6. The method of any of the preceding embodiments, wherein the sending the modified processing request to the first computing system comprises sending an alert indicating that the volume of processing requests satisfies the threshold.7. The method of any of the preceding embodiments, wherein determining one or more errors in the plurality of processing requests comprises: inputting the plurality of processing requests into an anomaly detection model; and in response to inputting the plurality of processing requests into the anomaly detection model, generating, via the anomaly detection model, output indicating that an account indicated by the processing requests does not match other data of the processing requests.8. 
The method of any of the preceding embodiments, wherein generating modified processing requests comprises: determining a historical processing request generated during a month of a previous year; and replacing data of a first field in the processing requests with data of a second field in the historical processing request.9. The method of any of the preceding embodiments, further comprising: determining that the second computing system does not receive processing transactions during a time period, wherein the modified processing request is transmitted to the first computing system before the time period is over, and wherein the plurality of processing requests are transmitted to the second computing system after the time period is over.10. The method of any of the preceding embodiments, further comprising: determining, based on information received from the first computing system and after sending the plurality of processing requests to the second computing system, that the modified processing request is incorrect; generating, based on the information received from the first computing system, additional modified processing requests; and sending the additional modified processing requests to the second computing system.11. A tangible, non-transitory, machine-readable medium storing instructions that, when executed by a data processing apparatus, cause the data processing apparatus to perform operations comprising those of any of embodiments 1-10.12. A system comprising: one or more processors; and memory storing instructions that, when executed by the processors, cause the processors to effectuate operations comprising those of any of embodiments 1-10.13. A system comprising means for performing any of embodiments 1-10.
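A minimal sketch of the error-screening flow recited in embodiments 1, 2, and 4 is shown below, assuming hypothetical model callables, a hypothetical request structure, and an illustrative threshold value that are not part of the embodiments themselves.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ProcessingRequest:
    request_id: str
    payload: dict

# `first_model` scores the probability that a request contains an error;
# `second_model` produces a corrected (modified) request. Both are assumed
# to be supplied elsewhere, e.g. as trained machine learning models.
def screen_requests(requests: List[ProcessingRequest],
                    first_model: Callable[[ProcessingRequest], float],
                    second_model: Callable[[ProcessingRequest], ProcessingRequest],
                    threshold: float = 0.8) -> List[ProcessingRequest]:
    """Replace any request whose error probability satisfies the threshold
    with a modified request, as in embodiments 1, 2, and 4."""
    screened: List[ProcessingRequest] = []
    for request in requests:
        if first_model(request) >= threshold:        # error detected
            screened.append(second_model(request))   # substitute the modified request
        else:
            screened.append(request)
    return screened
```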
11860724
In the figures, like reference numerals refer to the same figure elements. DETAILED DESCRIPTION The following description is presented to enable any person skilled in the art to make and use the invention, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims. Overview The Internet is the delivery medium for a variety of applications running on physical and virtual devices. Such applications have brought with them an increasing demand for bandwidth. As a result, equipment vendors race to build switches capable of performing various functions. However, the resultant complexity of the switch can increase the difficulty of detecting an error in the switch. Furthermore, a network may include a number of such complex switches. In addition, the network may include different types of switches. Each type of switch may have different capabilities and functionalities. As a result, if an event (e.g., an anomaly or an error) occurs, an administrator may need to troubleshoot each switch in the network individually. One common element among different types of switches in the network can be the log files maintained by the switches. The log file in each switch can record the events associated with the switch. Such events can include the configuration changes, operations, and applications of the switch. However, using the log file to monitor the switches can be challenging because the size of a log file can be large. For example, the size of a log file in a switch can grow to a substantial size (e.g., in the order of gigabytes) in a week. In addition, for multiple switches in the network, the combined size of the log files can scale up to petabytes of data. As a result, debugging issues from log files can be inefficient and time-consuming. One embodiment of the present invention provides an event analysis system in a respective switch in a network. During operation, the system can determine an event description associated with the switch from an event log of the switch. The event description can correspond to an entry in a table in a switch configuration database of the switch. A respective database in the switch can be a relational database. The system can then obtain an event log segment, which is a portion of the event log, comprising the event description based on a range of entries. Subsequently, the system can apply a pattern recognition technique on the event log segment based on the entry in the switch configuration database to determine one or more patterns corresponding to an event associated with the event description. The switch can then apply a machine learning technique using the one or more patterns to determine a recovery action for mitigating the event. In a variation on this embodiment, the system can maintain a number of recovery actions in an action database in the switch. A respective recovery action can correspond to one or more patterns. In a further variation, the system can apply the machine learning technique to the action database using the one or more patterns to determine the recovery action from the action database. 
In a variation on this embodiment, the machine learning technique is a pre-trained model loaded on the switch. In a variation on this embodiment, the system can evaluate the recovery action in a shadow database and determine whether the recovery action generates a conflict in the switch configuration database based on the evaluation. The shadow database can be a copy of the switch configuration database and not used to determine the configuration of the switch. In a variation on this embodiment, the system can determine whether the event is a critical issue and execute the recovery action on the switch if the event is a non-critical issue. In a further variation, the system can obtain a confirmation from a user prior to executing the recovery action if the event is a critical issue. In a variation on this embodiment, the system can determine a set of feature groups of the switch and a trigger indicating the event based on monitoring of the set of feature groups. A respective feature group can include one or more related features of the switch. The trigger can correspond to a feature group associated with the event. In a further variation, the system can match a set of patterns defined for the feature group with event descriptions in the event log segment using the pattern recognition technique. In a variation on this embodiment, the range of entries can include a first range of entries prior to a log entry and a second range of entries subsequent to the log entry. The log entry can include the event description in the event log. The embodiments described herein solve the problem of event detection and recovery in a switch in a network by (i) efficiently identifying an event (e.g., an anomaly) by determining a pattern in a relevant log segment, and (ii) using an artificial intelligence (AI) model to determine a recovery action corresponding to the event. In this way, a respective switch in a network may automatically detect an event and a corresponding recovery action. If the event is non-critical, the switch can then execute the recovery action, thereby facilitating self-healing in the network. With existing technologies, a respective switch can be equipped with one or more monitoring agents. Each monitoring agent can be a hardware module or a software application that is capable of monitoring a specific feature of the switch. For example, a monitoring agent can monitor (e.g., configuration, operations, and stability) of the routing operations, and another monitoring agent can monitor the minimum spanning tree protocol (MSTP). A respective monitoring agent may identify an event associated with the corresponding feature of the switch. However, since the switch can have a large number of features and be deployed in a variety of scenarios, events reported by the monitoring agents of the switch can be diverse and large in number. Resolving a network event (e.g., an issue arising from an anomalous event) may include identifying the event and quickly resolving the issue that caused the event to reduce the impact. An administrator manually identifying the resources and actions to resolve an event reported by the monitoring agents can be error-prone and time-consuming. To further streamline the process, the administrator may utilize the event logs of the switch. The administrator may manually inspect the entries in the event log files using keyword search and rule matching to identify events. 
However, with a large-scale deployment (e.g., in a large and distributed network), identifying an event with manual inspection may be infeasible. Furthermore, even with efficient identification of an event, resolving the issue associated with the event can still be challenging. As a result, an administrator may not be able to proactively prepare for an anomalous event and determine corresponding recovery actions. To solve this problem, a respective switch in a network can be equipped with an event analysis system that can efficiently identify issues associated with an event and perform a corrective recovery action. The recovery action can include one or more operations or configuration changes. The system can combine the analysis of an event log file, pattern recognition in the entries (or logs) of the event log file, and the notifications from monitoring agents to facilitate self-healing for a corresponding switch and its associated links. The respective instances of the system on the switches of the network may synchronize among each other and operate in conjunction with each other to facilitate a self-healing network. The switch can maintain a switch database for storing configuration, status, and statistics associated with the switch in different tables/columns of the database. To efficiently determine an event, the system may maintain an additional column in each configuration table of the database. This additional column can be referred to as a feature column. For each class or category of configuration (e.g., routing, high availability, etc.), the system can include one or more keywords in the feature column. The keywords may correspond to the events reported in the event log (i.e., the keywords can appear in the event log entries or logs). During operation, monitoring agents on a respective switch in a network can monitor the configuration and status of the switch. Monitoring all features of a switch can be inefficient. Therefore, instead of monitoring all features of the switch, the system can group the features. For example, routing-related features and operations, such as Link Layer Discovery Protocol (LLDP), Spanning Tree Protocol (STP), virtual local area network (VLAN), and Address Resolution Protocol (ARP), can be in a routing feature group. Upon detecting an event for a feature group, a monitoring agent can generate a trigger for the system. In some embodiments, the monitoring agent can change the status of the feature column associated with the feature group. The status change of the feature group can operate as the trigger for the feature group. The status of the feature column can include a keyword that may match an event entry in the event log file. The system can then identify the event entry in the event log file (e.g., based on the keyword or a timestamp of the event). The system can then determine a surrounding window of the event entry for the event log file. The system can determine the surrounding window based on a range of entries (e.g., a pre-defined range) indicated by an upper threshold and a lower threshold. The range can be defined for each feature or a group of features. The upper threshold can be a temporal range prior to a timestamp associated with the entry, and the lower threshold can be a temporal range subsequent to the timestamp. Furthermore, the upper threshold can indicate a number of entries before the event entry, and the lower threshold can indicate a number of entries after the event entry. 
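As an illustration of the surrounding-window computation described above, the following Python sketch applies count-based and time-based upper and lower thresholds to a list of log entries. The entry representation and the default threshold values are illustrative assumptions rather than requirements of the embodiments.

```python
from datetime import datetime, timedelta
from typing import List, Tuple

LogEntry = Tuple[datetime, str]   # (timestamp, raw log line), simplified

def segment_by_count(entries: List[LogEntry], event_index: int,
                     upper: int = 2, lower: int = 1) -> List[LogEntry]:
    """Upper/lower thresholds expressed as a number of entries around the event."""
    start = max(0, event_index - upper)
    end = min(len(entries), event_index + lower + 1)
    return entries[start:end]

def segment_by_time(entries: List[LogEntry], event_index: int,
                    upper: timedelta = timedelta(seconds=10),
                    lower: timedelta = timedelta(seconds=5)) -> List[LogEntry]:
    """Upper/lower thresholds expressed as a temporal range around the event."""
    event_time = entries[event_index][0]
    return [e for e in entries
            if event_time - upper <= e[0] <= event_time + lower]
```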
The system can then obtain an event log segment (e.g., a portion of the event log file) corresponding to the surrounding window, which can correspond to the upper and lower thresholds, from the event log file. Upon obtaining the event log segment, the system can then perform pattern recognition on the event log segment. The system can maintain a set of patterns, which can be defined for the feature group. In some embodiments, the system can use Rosie Pattern Language (RPL) based on the set of patterns. The set of patterns can be the pre-defined network-related patterns or templates available in RPL. In addition, the system can also maintain definitions for extended patterns specific to the software and hardware of a respective switch. Upon identifying one or more patterns in the event log segment, the system can determine a self-recovery action based on the identified patterns. The system may maintain a mapping between a respective of the set of patterns and a corresponding recovery action. The switch can be equipped with an action database that can store the mapping. The action database can be a relational database. The switch database and the action database can be the same type of database (e.g., run on a same database management system (DBMS)) but each database can have a unique schema. These databases can also be different types of databases. Since a single pattern may have multiple recovery actions, the system can use an AI model (e.g., a machine-learning-based model) to match the pattern with a recovery action. To ensure a light-weight execution on a switch, the AI model can be a pre-trained AI model. The AI model can obtain the determined patterns as inputs and determine the recovery action. If the event is a non-critical event, the system can execute the recovery action to mitigate the event. In some embodiments, the system can validate the recovery action on a shadow database, which is distinct from the switch database and the action database, prior to the execution. The shadow database can be a copy of the switch database. By incorporating (or applying) the recovery action into the shadow database, the system can determine whether the recovery action causes a conflict with the configuration of any other element of the switch. If the validation is successful, the system can execute the recovery action. The system can also facilitate a client-server model where the client can be executed on a respective switch of the network, and the server can reside on a network manager or a cloud platform. The AI model can be trained by the server at the network manager based on entries in the event logs of the switches in the network. Upon training, the AI model can be loaded on the switches of the network. If a new event occurs, the AI model deployed in the switches may not recognize the corresponding pattern. Consequently, the system on the switch can send the event and patterns to the server as a new data point. The server can retrain the AI model using the new data points. Periodically, the sever can send the updated AI model to the switches. Using the updated AI model on the switches, the system can recognize the most recent set of patterns. If the event is a critical event, the system can send the event and the recovery action to the network manager. The server instance of the system can present the event and the recovery action to an administrator. The administrator may determine whether the recovery action is an appropriate solution for the event. 
If so, the administrator may approve the recovery action. Upon receiving the approval, the system can execute the recovery action on the switch. It should be noted that the system can validate the recovery action on the shadow database for a critical event as well. If the validation for a recovery action (e.g., for both critical and non-critical events) is not successful, the system can send the recovery action to the network manager. In this disclosure, the term “switch” is used in a generic sense, and it can refer to any standalone or fabric switch operating in any network layer. “Switch” should not be interpreted as limiting embodiments of the present invention to layer-2 networks. Any device that can forward traffic to an external device or another switch can be referred to as a “switch.” Any physical or virtual device (e.g., a virtual machine/switch operating on a computing device) that can forward traffic to an end device can be referred to as a “switch.” Examples of a “switch” include, but are not limited to, a layer-2 switch, a layer-3 router, a routing switch, a component of a Gen-Z network, or a fabric switch comprising a plurality of similar or heterogeneous smaller physical and/or virtual switches. The term “packet” refers to a group of bits that can be transported together across a network. “Packet” should not be interpreted as limiting embodiments of the present invention to layer-3 networks. “Packet” can be replaced by other terminologies referring to a group of bits, such as “message,” “frame,” “cell,” “datagram,” or “transaction.” Network Architecture FIG.1illustrates an exemplary event analysis system that facilitates a self-healing network, in accordance with an embodiment of the present application. As illustrated inFIG.1, a network100includes switches101,102,103,104, and105. In some embodiments, network100is a Gen-Z network, and a respective switch of network100, such as switch102, is a Gen-Z component. A Gen-Z network can be a memory-semantic fabric that can be used to communicate to the devices in a computing environment. By unifying the communication paths and simplifying software through simple memory-semantics, Gen-Z components can facilitate high-performance solutions for complex systems. Under such a scenario, communication among the switches in network100is based on memory-semantic fabric. In some further embodiments, network100is an Ethernet and/or IP network, and a respective switch of network100, such as switch102, is an Ethernet switch and/or IP router. Under such a scenario, communication among the switches in network100is based on Ethernet and/or IP. With existing technologies, a respective switch can be equipped with one or more monitoring agents, each of which can monitor an individual feature or a feature group of the switch. For example, a monitoring agent140that can monitor a feature group of switch102. Monitoring agent140may identify an event150associated with the corresponding feature of switch102. However, since switch102can have a large number of features and be deployed in a variety of scenarios in network100(e.g., as an aggregate, edge, or core switch), events reported by monitoring agents of switch102can be diverse and large in number. Resolving an event in network100may include identifying the issue causing the event and quickly resolving the issue to reduce the impact. An administrator manually identifying the resources and actions to resolve an event reported by monitoring agent140can be error-prone and time-consuming. 
To further streamline the process, the administrator may utilize the event logs or entry in event log file120of switch102. The administrator may manually inspect the entries in event log file120using keyword search and rule matching to identify the issue associated with event150. However, with a large-scale deployment in network100, identifying the issue with manual inspection may be infeasible. Furthermore, even with efficient identification of an issue, resolving the issue can still be challenging. As a result, an administrator may not be able to proactively prepare for an event and determine corresponding recovery actions for the issue causing event150. To solve this problem, a respective switch in network100can be equipped with an event analysis system110that can efficiently identify one or more issues causing event150and perform a corrective recovery action. In this disclosure, system110can refer to an instance of system110that operates on switch102. System110can combine the analysis of event log file120, pattern recognition in the entries of event log file120, and the notifications from monitoring agents, such as monitoring agent140, to facilitate self-healing for switch102and associated links in network100. The instances of system110on the switches of network100may operate in conjunction with each other to facilitate self-healing in network100. Switch102can maintain a switch database126for storing configuration, status, and statistics associated with switch102in different tables/columns of database126. To efficiently determine an event, system110may maintain an additional feature column in each configuration table. A configuration table can store configuration information of switch102. For each feature group, the feature column can include one or more keywords, which may correspond to the events reported in event log file120. During operation, upon detecting event150for the corresponding feature group, monitoring agent140can generate a trigger142for system110. Monitoring agent140can change the status of the feature column associated with the feature group. The status change of the feature group can operate as trigger142for the feature group. An event engine112of system110can manage trigger142. An alert module122can determine trigger142based on database126. A heartbeat module124, which can periodically monitor event log file120, can identify an event entry corresponding to trigger142and determine an event log segment from event log file120based on the event entry. Upon obtaining the event log segment, pattern engine114of system110can perform pattern recognition on the event log segment based on the keywords of the feature column entry and determine a pattern associated with event150. System110may maintain a mapping between a respective of a set of patterns associated with the typical network issues/events and a corresponding recovery action. Switch102can be equipped with an action database128that can store the mapping. Since a single pattern may have multiple recovery actions, an action engine116of system110can use an AI model to match the determined pattern with a recovery action144in the mapping in action database128. Recovery action144can include one or more operations or configuration changes. If event150is a non-critical event (e.g., related to a non-critical issue), a recovery engine118of system110can execute recovery action144to mitigate the impact of event150. 
To ensure that recovery action144does not cause a conflict with the rest of the switch configuration, recovery engine118can validate recovery action144on a shadow database130prior to the execution. In some embodiments, system110can also facilitate a client-server model. The client instance of system110can be executed on a respective switch of network100, and the server instance can run on a network manager160. Network manager160can be located in network100or can be deployed in the cloud (e.g., accessible via the Internet), and facilitate network configurations and management for a respective switch in network100. If event150is a critical event (e.g., related to a critical issue), recovery engine118can send the event and recovery action144to network manager160. The server instance of system110can present the event and recovery action144to an administrator. Then the administrator may determine whether recovery action144is an appropriate solution for event150. The AI model can be trained by the server instance of system110at network manager160based on entries in the event logs of the switches in network100. Upon training, the AI model can be loaded on the switches of network100, such as switch102. If event150is a new event (e.g., an event that the AI model has not been trained with), the AI model deployed in switch102may not recognize the corresponding pattern. Consequently, the instance of system110on switch102can send information associated with event150to the server instance of system110as a new data point. The server instance of system110can retrain the AI model using the new data points. Periodically, the server instance of system110can send the updated AI model to the switches of network100. As a result, the AI models of the client instances of system110on the switches can recognize the most recent set of patterns. FIG.2Aillustrates exemplary feature groups and corresponding pattern definitions for an event analysis system, in accordance with an embodiment of the present application. A respective switch in network100, such as switch102, can support a number of features202. For example, switch102can support routing-related features and operations, such as LLDP, STP (and any variants thereof), VLAN, and ARP. Switch102can also support a proprietary protocol for supporting specialized operations. Features202can also include one or more policies and access control list (ACL) that may provide security and permission for a respective user. Monitoring each feature of switch102can be challenging. Therefore, system110can group relevant features into a feature group and monitor that feature group. As a result, features202of switch102can be grouped into a set of feature groups204. The routing related features and operations in features202can be grouped into a routing feature group212. A feature group can include both standard and proprietary features of a switch. For example, a high availability feature group214can include a standard feature, such as synchronization, and a proprietary feature. Policies and ACLs defined for switch102can be grouped into a security feature group216. It should be noted that switch102may include more (or less) features and feature groups shown inFIG.2A. To perform pattern recognition on an event log segment, pattern engine114can maintain a set of pattern definitions206, which can include one or more pattern definitions for each of feature groups204(denoted with dotted and dashed lines). A pattern definition can represent one or more events that can occur at switch102. 
For example, a link failure or a port flap event can correspond to a pattern definition indicating packet drops. In this way, a pattern definition can indicate what event may have occurred at switch102. Pattern definitions206can be pre-defined by an administrator or a pattern recognition technique used by pattern engine114. If pattern engine114uses RPL, pattern definitions206can be the pre-defined network-related patterns available in RPL. In addition, the system can also maintain definitions for extended patterns specific to the software and hardware of a respective switch, such as the proprietary feature in features202. In some embodiments, pattern engine114may learn “normal” patterns when switch102performs without an anomalous event. These patterns can generate a baseline pattern definition in pattern definitions206(e.g., in addition to the RPL patterns). Monitoring agent140can periodically provide log information to pattern engine114. By performing pattern recognition in the log information, pattern engine114can determine a baseline pattern for switch102. This baseline pattern can allow pattern engine114to determine an anomalous pattern in an event log entry. Patten engine114can also use Splunk pattern matching technique using regular expressions. If features202of switch102are defined using regular expressions, pattern engine114can utilize Splunk to match a feature with a corresponding log entry. FIG.2Billustrates an exemplary anomaly detection process of an event analysis system, in accordance with an embodiment of the present application. Switch102can maintain switch database126for storing configuration, status, and statistics associated with switch102in different tables, such as table240. Database126can be a relational database running on a DBMS. Table240can include a number of columns242that may store the configuration associated with a feature group250. To efficiently determine an event, system110may maintain an additional column244in table240. This additional column244can be referred to as a feature column. For feature group250, system110can include a keyword232in feature column242. Keyword232may correspond to the events reported in event log file120(i.e., keyword232can appear in the event log entries). Event log file120can include a number of entries221,222,223,224,225, and226. A respective entry in event log file120can include one or more of: a timestamp (e.g., date and time), a type of action (e.g., switch102's action or a neighbor's action), a relevant application (e.g., a protocol daemon), event information (e.g., an identifier and an indicator whether the entry is an informational or error entry), and an event description. For example, upon receiving a Network Time Protocol (NTP) packet, event log file120can create the following entry222: 2019-08-19T05:45:65.747267+00:00 switch ntpd[45584]: Event|1102|LOG_INFO|AMM|1/5|NTP server 1 package received Monitoring agent140can be monitoring the features in feature group250in switch102. Upon detecting event150for feature group250, monitoring agent140can generate trigger142for system110. Monitoring agent140can change the status of the feature column associated with feature group250. The status change of feature group250can operate as trigger142for feature group250. Since the status of feature column244can include keyword232that may match an event entry in event log file120, system110can then identify event entry224(denoted with dashed lines) in event log file120. 
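The entry format shown above (a timestamp, the reporting daemon, and a pipe-delimited event record) lends itself to a simple regular-expression parse. The sketch below is only an approximation of that layout; the field names and the exact pattern are assumptions for illustration, and real deployments may use other formats.

```python
import re
from typing import Dict, Optional

# Rough pattern for entries of the form:
#   <timestamp> switch <daemon>[<pid>]: Event|<id>|<severity>|<module>|<slot>|<description>
ENTRY_RE = re.compile(
    r"^(?P<ts>\S+)\s+switch\s+(?P<proc>[\w.-]+)\[(?P<pid>\d+)\]:\s+"
    r"Event\|(?P<event_id>\d+)\|(?P<severity>\w+)\|(?P<module>[\w-]+)\|"
    r"(?P<slot>[^|]*)\|(?P<description>.*)$"
)

def parse_entry(line: str) -> Optional[Dict[str, str]]:
    """Return the entry fields as a dictionary, or None if the line does not match."""
    m = ENTRY_RE.match(line)
    return m.groupdict() if m else None

sample = ("2019-08-19T05:45:65.747267+00:00 switch ntpd[45584]: "
          "Event|1102|LOG_INFO|AMM|1/5|NTP server 1 package received")
parsed = parse_entry(sample)
assert parsed is not None and parsed["event_id"] == "1102"
```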
System110may identify entry224based on keyword232and a timestamp of event150. For example, if switch102has received a large number of NTP packets within a short period, the NTP daemon of switch102can have a high processor utilization. Flooding switch102with request traffic may overwhelm switch102and can cause blocking of essential traffic. Such an anomalous security pattern can be indicative of an NTP amplification attack, which can result in a corresponding entry in event log file120. Entries223,224, and225can then respectively be: 2019-08-19T05:46:02.788056+00:00 switch system[45584]: Event|760|LOG_ERR|AMM|-|High CPU utilization by daemon 12mac-mgrd 2019-08-19T05:46:32.984564+00:00 switch policyd[26099]: Event|6901|LOG_INFO|AMM|-|Action triggered by monitoring agent 2019-08-19T05:46:37.986937+00:00 switch policyd[26099]: Event|5507|LOG_INFO|AMM|-|Configuration change system110 System110can then determine a surrounding window of event entry224in event log file120. System110can determine the surrounding window based on a range of entries (e.g., a pre-defined range) indicated by an upper threshold and a lower threshold. The range can be specific to feature group250. The upper threshold can be a temporal range prior to a timestamp associated with entry224, and the lower threshold can be a temporal range subsequent to the timestamp. Furthermore, the upper threshold can indicate a number of entries before entry224, and the lower threshold can indicate a number of entries after entry224. For example, the upper and lower thresholds for feature group250can be 2 and 1, respectively. Alternatively, the upper and lower thresholds for feature group250can be 10 seconds and 5 seconds, respectively. System110can then obtain an event log segment220(e.g., a portion of event log file120) corresponding to the surrounding window from event log file120. Upon obtaining event log segment220, pattern engine114can perform pattern recognition on event log segment220. System110can maintain a set of pattern definitions208, which can be defined for feature group250and can be included in pattern definitions208. In some embodiments, pattern engine114can use a pattern recognition technique, such as RPL and Splunk, based on pattern definitions208. Pattern definitions208can be the pre-defined network-related patterns available in the pattern recognition technique. Pattern definitions208can also be the definitions for extended patterns specific to the software and hardware of switch102. Upon identifying a pattern234in event log segment220, pattern engine114can determine a self-recovery action based on pattern234. FIG.3illustrates an exemplary self-healing process facilitated by an event analysis system, in accordance with an embodiment of the present application. System110may maintain a mapping between a respective of the set of patterns and a corresponding recovery action. Switch102can be equipped with action database128that can store the mapping in table260, which can include a pattern column262and a corresponding recovery action column264. Since pattern234may have multiple recovery actions, action engine116can apply an AI model310(e.g., a machine-learning-based model) on table260. AI model310may match pattern234with recovery action144. To ensure a light-weight execution on switch102, AI model310can be a pre-trained AI model. AI model310can pattern234as an input and determine recovery action144(e.g., pattern234and action144correspond to input and output layers, respectively, of AI model310). 
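The mapping just described, in which a recognized pattern is associated with one or more candidate recovery actions and the AI model selects among them, can be pictured with the small sketch below. The table contents and the scoring callable are illustrative stand-ins for action database128and AI model310, not their actual contents.

```python
from typing import Callable, Dict, List

# Illustrative stand-in for the pattern -> recovery action mapping.
ACTION_TABLE: Dict[str, List[str]] = {
    "ntp_amplification": ["block NTP server", "rate-limit NTP traffic"],
    "port_flap":         ["restart port", "disable port"],
}

def select_recovery_action(pattern: str,
                           score: Callable[[str, str], float]) -> str:
    """Return the candidate action with the highest model score for the pattern."""
    candidates = ACTION_TABLE.get(pattern)
    if not candidates:
        raise KeyError(f"no recovery action mapped to pattern {pattern!r}")
    return max(candidates, key=lambda action: score(pattern, action))

# Toy scorer standing in for a pre-trained model.
print(select_recovery_action("ntp_amplification",
                             lambda p, a: 1.0 if "block" in a else 0.5))
```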
When recovery action144is determined for pattern234, recovery engine118can determine whether event150is a critical event by determining whether event150is related to a critical issue. For example, an issue associated with NTP can be a non-critical issue, while an issue associated with STP can be a critical issue. If event150is a non-critical event, recovery engine118can execute recovery action144to mitigate event150. For example, if pattern234corresponds to an NTP amplification attack, recovery action144can indicate that switch102should block the server sending the NTP packets. Upon execution of recovery action144at switch102, the corresponding entry in event log file120can be: 2019-08-19T05:46:46.984564+00:00 switch system110[26099]: Event|6901|LOG_INFO|AMM|-|block NTP server In some embodiments, recovery engine118can validate recovery action144on shadow database130prior to the execution of recovery action144. Shadow database130can be a shadow copy of switch database126. A shadow copy can be a copy that is not directly used to configure or operate switch102. By incorporating (or applying) recovery action144into shadow database130, recovery engine118can determine whether recovery action144causes a conflict with the configuration of any other element of switch102. If the validation is successful, recovery engine118can execute recovery action144. It should be noted that recovery engine118can validate recovery action144on shadow database130even if event150is a critical event. If the validation for recovery action144, regardless of whether event150is a critical or non-critical event, is not successful, recovery engine118can seek further verification of recovery action144(e.g., from a network manager160). System110can also facilitate a client-server model where the instance of system110on switch102can be the client instance, and a server instance of system110can operate on network manager160. If event150is a critical event or recovery action144may cause a conflict on switch102, recovery engine118can send event150and recovery action144to network manager160. The server instance of system110can present event150and recovery action144to an administrator. The administrator may determine whether recovery action144is an appropriate solution for event150. If so, the administrator may approve recovery action144. Upon receiving the approval, recovery engine118can execute recovery action144on switch102and commit the corresponding configuration changes to switch database126. Operations FIG.4Apresents a flowchart illustrating the process of an event analysis system extracting an event log segment for analysis, in accordance with an embodiment of the present application. During operation, the system can determine an event trigger for an event (operation402) and identify an entry corresponding to the event in the event log file (operation404). The system can then determine an event log segment based on the upper and lower thresholds associated with the entry (operation406) and extract the event log segment from the event log file (operation408). FIG.4Bpresents a flowchart illustrating the self-healing process of an event analysis system, in accordance with an embodiment of the present application. During operation, the system can obtain a keyword from the feature column of the switch database (operation452). The system can then apply a pattern recognition technique on the event log segment based on the obtained keyword and one or more pattern definitions (operation454). 
Subsequently, the system determines a pattern based on the pattern recognition technique (operation456). The system can then apply an AI model, based on the determined pattern, to the recovery actions available in the action database (operation458). Subsequently, the system can determine a recovery action based on the AI model (operation460). FIG.5presents a flowchart illustrating the process of an event analysis system validating a recovery action for the self-healing process, in accordance with an embodiment of the present application. During operation, the system can apply the recovery action on the shadow database of the switch (operation502) and check for a conflict with the existing configuration (operation504). In this way, the system can validate before committing the recovery action to the switch. The switch can then determine whether a conflict is detected (operation506). If a conflict is detected, the system can notify the server instance of the system on the network manager regarding the conflict (operation512). On the other hand, if a conflict is not detected, the system can check whether the event is a critical event (operation508). If the event is a critical event, the system can send the recovery action to the server instance of the system on the network manager for further processing (operation514). If the event is not a critical event, the system can apply the recovery operation on the switch, thereby committing the corresponding changes to the switch database (operation510). Exemplary Switch System FIG.6illustrates an exemplary switch equipped with an event analysis system, in accordance with an embodiment of the present application. In this example, a switch600includes a number of communication ports602, a packet processor610, an analysis logic block630, and a storage device650. Switch600can also include switch hardware (e.g., processing hardware of switch600, such as its ASIC chips), which includes information based on which switch600processes packets (e.g., determines output ports for packets). Packet processor610extracts and processes header information from the received frames. Packet processor610can identify a switch identifier (e.g., a MAC address and/or an IP address) associated with the switch in the header of a packet. Communication ports602can include inter-switch communication channels for communication with other switches and/or user devices. The communication channels can be implemented via a regular communication port and based on any open or proprietary format. Communication ports602can include one or more Ethernet ports capable of receiving frames encapsulated in an Ethernet header. Communication ports602can also include one or more IP ports capable of receiving IP packets. An IP port is capable of receiving an IP packet and can be configured with an IP address. Packet processor610can process Ethernet frames and/or IP packets. Switch600can maintain a switch database652, an action database654, and a shadow database656(e.g., in storage device650). These databases can be relational databases, each of which can have a unique schema, and may run on one or more DBMS instances. Analysis logic block630can include one or more of: a pattern logic block632, and an action logic block634, a recovery logic block636, and a validation logic block636. During operation, analysis logic block630can receive a trigger indicating an event associated with switch600based on a status change in a feature column in switch database652. 
Analysis logic block630can determine an entry indicating the event in the event log file of switch600and obtain an event log segment based on the entry. Pattern logic block632can then perform pattern recognition on the event log segment to determine one or more patterns corresponding to the event. Subsequently, action logic block634can use an AI model to determine a recovery action from a set of recovery actions maintained in action database654. The AI model can use the one or more patterns and the corresponding feature column to determine the recovery action. Validation logic block368can validate the recovery action on shadow database656, which can be a shadow copy of switch database654. Upon successful validation, if the event is a non-critical event, recovery logic block636can apply the recovery action on switch600, thereby committing the corresponding configuration changes to switch database652. On the other hand, if the validation is unsuccessful or the event is a critical event, recovery logic block636can send information indicating the recovery action to a network manager. The data structures and code described in this detailed description are typically stored on a computer-readable storage medium, which may be any device or medium that can store code and/or data for use by a computer system. The computer-readable storage medium includes, but is not limited to, volatile memory, non-volatile memory, magnetic and optical storage devices such as disks, magnetic tape, CDs (compact discs), DVDs (digital versatile discs or digital video discs), or other media capable of storing computer-readable media now known or later developed. The methods and processes described in the detailed description section can be embodied as code and/or data, which can be stored in a computer-readable storage medium as described above. When a computer system reads and executes the code and/or data stored on the computer-readable storage medium, the computer system performs the methods and processes embodied as data structures and code and stored within the computer-readable storage medium. The methods and processes described herein can be executed by and/or included in hardware modules or apparatus. These modules or apparatus may include, but are not limited to, an application-specific integrated circuit (ASIC) chip, a field-programmable gate array (FPGA), a dedicated or shared processor that executes a particular software module or a piece of code at a particular time, and/or other programmable-logic devices now known or later developed. When the hardware modules or apparatus are activated, they perform the methods and processes included within them. The foregoing descriptions of embodiments of the present invention have been presented only for purposes of illustration and description. They are not intended to be exhaustive or to limit this disclosure. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. The scope of the present invention is defined by the appended claims.
11860725
DETAILED DESCRIPTION Overview Aspects of the present invention pertain to a failure recommendation system that predicts recommendations to rectify a failure in the execution of a CLI script. A command line interface processes commands to a CLI-based application through lines of text that are embedded in a CLI script. The CLI has a specific syntax for each command which consists of a command, zero or more subcommands, and zero or more parameters with and without parameter values. The failure recommendation system uses machine learning models to predict one or more recommendations to remedy a failed command. A recommendation includes an example command that is likely to succeed at performing the intended operation. Machine learning provides a mechanism to learn failure-success patterns from previous failed attempts in order to more accurately predict the example command likely to succeed a failed command. A conditional probability model is built on the assumption that if a command fails, the next successfully-executed command remedied the failure provided that a set of conditions is satisfied. The conditional probability model learns the probability of a successfully-executed command based on failure-success pairs obtained from historical usage data, such as the CLI telemetry data. A command may fail for a variety of reasons, including wrong parameter combinations, errors in the syntax of the command, errors in the parameter names or parameter values, wrong assumptions about the state of a resource, and so forth. In order to predict a meaningful recommendation, the failure recommendation system has to account for the various causes of a failed command. For this reason, the conditional probability model considers the failure type of a failure in order to more accurately predict an example command. The failure type is predicted using a classifier, such as a random forest classifier, based on the most frequently-occurring types of failures found in the CLI telemetry data. In addition, the failure recommendation system uses a parameter value type classifier to predict a data type of parameter value for those commands requiring a parameter. A recommendation is more useful if actual values are used for the parameter values rather than placeholders. Parameter values are learned from publicly-accessible data and not from the telemetry data of the CLI. Knowledge of the data type of a parameter value improves the search for the correct parameter value when a parameter has multiple types of parameter values. Attention now turns to a further discussion of the system, devices, components, and methods utilized in inferring failure recommendations. System FIG.1illustrates an exemplary system100in which various aspects of the invention may be practiced. The system100includes a user device102coupled to a cloud service104through a network. The user device102hosts a command line interface108coupled to a CLI-based application110of the cloud service104. The CLI-based application110receives commands, such as a CLI script112, initiated from the user device102. In one aspect, the CLI-based application110may be a cloud management and deployment application. The CLI108may be a shell program that is executed through a web browser or rich client application106. The CLI108enables a user (e.g., developer, customer) of the user device102to access resources on the CLI-based application110through CLI commands.
In one aspect, the CLI commands are entered into a command prompt or input field of the CLI108and transformed into Representational State Transfer (REST) Application Programming Interfaces (API)s. The REST APIs are service endpoints that support a set of HTTP operations or methods to create, retrieve, update, delete or access resources on the cloud service. A user of the user device102may use a CLI script112to request an operation to be performed by the CLI-based application110. A CLI script112is an ordered sequence of one or more commands CLI commands can vary in complexity depending on their usage and the parameters required to execute the CLI commands Some CLI commands may require one or more input parameters which may be derived from the output of previously-executed commands. The CLI script112includes the sequence of commands needed to perform an operation in a specific order with the correct number of parameters and parameter values. An exemplary CLI is the Azure® command line interface for the Microsoft® Azure® cloud computing service. This cloud computing service provides various services, such as software-as-a-service (Saas), platform-as-a-service (PaaS), and infrastructure-as-a-service (IaaS) to build, test, deploy, and manage services and applications in addition to providing different programming tools. It should be noted that the techniques described herein are not limited to this particular CLI or to a particular configuration of a CLI. At times a command may fail for various reasons. A command may utilize the wrong parameter combinations, there may be errors in the parameter names or parameter values, or there may be wrong assumptions made about the state of a resource. An error message114is generated by the CLI-based application110and passed onto the failure recovery system116which generates recommendations to assist the user. For example, as shown inFIG.1, a user may initiate a CLI script containing the text string “az storage create”124which is rejected by the CLI-based application110as not being recognized. The failure recovery system116receives the error message114from the CLI-based application110and generates a response118to the user which includes an error message120and a recommendation122to assist the user in remedying the failure. The failure recovery system116utilizes several machine learning models to predict a recommendation to assist the user in correcting their failure. In one aspect, the failure recovery system116uses a conditional probability model126constructed as a lookup table, a failure type classifier128, and a parameter value data type classifier130. The conditional probability model126is built on the assumption that if a command fails, the CLI script would contain a sequence of commands to successfully remedy the failure. An analysis of these failures provides insight into the patterns used to remedy a failure. A statistical analysis of these patterns significantly improves the probability that a recommended command is more likely to correct a failure. A conditional probability model exploits this property by determining a probability of a successful command following an immediately preceding failed command. The failure type classifier128is used to determine the type of the failure and the parameter value type classifier130is used to determine the data type of a parameter value when a parameter is used with a particular command. FIG.2illustrates an exemplary configuration of the components and process used to train the conditional probability model200. 
A data collection component204generates a training dataset205from CLI telemetry data202. The CLI telemetry data202is obtained from monitoring the production usage of the CLI-based application. In a typical month, CLI users execute millions of commands to create, delete, and manage resources. Some of these commands run successfully while others result in failure. The CLI telemetry data202is a repository of such commands along with the list of parameters used with each command, the success status of the command execution, a short description of a failure, an error message, and the exception type when there is a failure. The CLI telemetry data202does not include parameter values due to privacy concerns. The model generation component206uses the training dataset205to train a conditional probability model207. A conditional probability model207statistically represents the relationship between the input data and the output data by modeling the conditional probability distribution of the outputs given the inputs. The conditional probability model assumes that each state is dependent on a previous state. The dependency is given by a conditional probability P(xt | xt−1, . . . , xt−n), where xt is the state of the model at time t and n is the order of the conditional probability model. In a first-order model, a state is dependent only on the immediately preceding state. The transition probabilities are generated from the unique set of failure-success pairs detected in the training data. A transition probability may be computed as P(xt | xt−1, . . . , xt−n) = Nt/Ntotal, where n is the order of the model, Nt is the number of times xt comes after xt−1, and Ntotal is the total number of successful commands that come after xt−1. The conditional probability model is used to generate a lookup table208. The lookup table is accessed by CLI version, failure type, and failed command. Each entry in the table represents the features associated with a unique failed/success command pair. An entry includes the CLI version210, a failure type212, a failed command214, the corresponding successful command216, the parameter sets218required by the command, and the frequency of the failed/success command pair220. In table208, there are three entries for the failure type, missed required subcommand, which are organized by increasing frequency. Each entry contains the failed command, vm, and the associated successful command (e.g., vm create, vm delete, vm show) and parameter sets that include parameter values (e.g., {-N parmv1, parmv2, -RG parmv3 . . . parmv6}, {-GG parmv7, -J parmv8 . . . parmv11}). FIG.3represents an exemplary configuration of the components used in training the failure type classifier300. A data extraction component304extracts error messages306from the CLI telemetry data302from which failure types are generated to classify the various types of failures that frequently occur. The error messages306include the failed command and the reason for the failure. A pre-processing component308processes the error messages306to convert the text in the error messages310into lower-case characters and to remove all special characters and common stop words. The resulting words are then lemmatized using the WordNet lemmatizer. Lemmatization refers to the removal of inflectional endings and returning the base form of the word, its lemma. The processed error messages310are then used to train a bag-of-words (BOW) model312to generate embeddings314.
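The text cleaning and bag-of-words steps described above can be approximated with common Python tooling. The snippet below is a simplified sketch under assumed inputs; the sample messages, the stop-word list, and the cleaning rules are illustrative and are not the CLI telemetry data.

```python
import re
from sklearn.feature_extraction.text import CountVectorizer

STOP_WORDS = {"the", "a", "an", "is", "to", "of", "for"}   # small illustrative list

def preprocess(message: str) -> str:
    """Lower-case, strip special characters, and drop common stop words.
    (A WordNet lemmatizer would additionally reduce each word to its lemma.)"""
    text = re.sub(r"[^a-z0-9\s]", " ", message.lower())
    return " ".join(w for w in text.split() if w not in STOP_WORDS)

raw_messages = [
    "'create' is misspelled or not recognized by the system.",
    "unable to parse command input!",
]
cleaned = [preprocess(m) for m in raw_messages]

# Bag-of-words encoding: each message becomes a vector of word counts.
vectorizer = CountVectorizer()
embeddings = vectorizer.fit_transform(cleaned)
print(embeddings.shape)   # (number of messages, vocabulary size)
```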
The BOW model312is an encoder that learns to produce a numerical representation of the words of an error message by representing their frequency of occurrence in the training dataset without regard to the semantic relationship between the words. The classifier training component316receives an embedding314and an associated label (e.g., failure type)318and trains a classifier320to associate a failure type when given a particular error message. Exemplary failure types include UnknownSubcommand, UnableToParseCommandInput, MissingRequiredParameters, etc. For example, an UnknownSubcommand failure type includes situations when an unknown subcommand is used with a command (e.g., az storage create). The UnableToParseCommandInput failure type pertains to the situation where the CLI is unable to parse the command input (e.g., az-s). In the case of az storage create, the subcommand create is not part of the az storage group, and in the case of az-s, there is no command. In one aspect, the failure type classifier320is a random forest classifier. However, it should be understood that the disclosure is not limited to this particular classifier and that other types of classifiers may be used as well, such as logistic regression or neural network-based classifiers. A random forest classifier M consists of a fixed number of decision trees, T, that vote to predict a classification on unseen data. Each decision tree consists of a root node, multiple internal nodes referred to as split nodes, and multiple leaf nodes. Each root and split node of each tree performs a binary test on the input training data or feature vector and, based on the result, directs the data to the left or right child node. The leaf nodes store a probability distribution. Each node in a decision tree i provides a probability pi(y|x) for each y∈L, which is obtained during training of the random forest, where y is a label out of the available labels L, and x represents a feature vector of n features. The label is the data type. The final classification is obtained from a vote of all the trees, T, and the resulting label is assigned according to the following equation: M(x) = argmax_{y∈L} (1/T) Σ_{i=1..T} pi(y|x). This method of combining trees is an ensemble method. The individual decision trees are weak learners and the ensemble produces a strong learner. Decision trees can suffer from over-fitting, which leads to poor generalization and a higher error rate. An ensemble of decision trees, such as a random forest, improves generalization. A more detailed description is provided below. FIG.4represents an exemplary configuration of components and process400used to train a parameter value type classifier. In order to generate parameter values for each of the command/parameter pairs, parameter values are obtained from usage examples from publicly-accessible sources. The usage examples may come from publicly-accessible source code repositories, such as GitHub, from online documentation, and from websites containing command usage examples, such as Stackoverflow.com and other knowledge market websites402. A usage example contains a command, a set of parameters, and parameter values for each of the parameters in a parameter set. A web crawler404is used to obtain publicly-accessible usage examples of the CLI commands, which include a command, its parameters, and parameter values. The examples are then encoded into an embedding412using a bag-of-words model410.
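Combining a bag-of-words encoder with a random forest yields a compact training pipeline of the kind described above for the failure type classifier; the parameter value type classifier is trained analogously. The scikit-learn sketch below uses a few invented error messages and labels purely for illustration and is not the actual training corpus.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set: pre-processed error messages and their
# failure-type labels (real training would use the telemetry corpus).
messages = [
    "storage create is not in the az storage command group",
    "unable to parse command input",
    "the following arguments are required resource group",
    "vm is not a valid value for subcommand",
]
labels = [
    "UnknownSubcommand",
    "UnableToParseCommandInput",
    "MissingRequiredParameters",
    "UnknownSubcommand",
]

# Bag-of-words encoder feeding a random forest, mirroring the pipeline above.
clf = make_pipeline(CountVectorizer(),
                    RandomForestClassifier(n_estimators=100, random_state=0))
clf.fit(messages, labels)
print(clf.predict(["unable to parse the command input az -s"]))
```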
The embeddings412and an associated label416are used by the classifier training component414to train the parameter value type classifier418. For some commands, there may be multiple parameter values for a parameter and for other commands, there may not be any parameter values for a parameter. For those commands where there are multiple values for a parameter, the parameter value type classifier418is used to determine the data type of the parameter value associated with a paired command/parameter in order to select the correct parameter value from the multiple values. For those commands, where there are no known parameter values, the parameter value type classifier418generates a parameter value consistent with the predicted parameter value data type. There may be multiple parameter values for a particular parameter data type. In order to identify the most appropriate parameter value, a data type format is used. The data type format is the format of the text string corresponding to the most appropriate parameter value. For example, for the data type, IP Address, the lookup table may contain the parameter values “MyIPAddress”, “$ip”, “0.0.0.0”. The data type format for an IP Address data type may indicate four integer values separated by periods or eight integer values separated by a colon. In this case, the parameter value “0.0.0.0” is selected. By way of another example, a date/time format may be one of “mm/dd/yy”, “2020/mm”, or “yy/dd/mm” which is used to find the date of a parameter value having a date data type. The data type format for a particular parameter value data type is stored in the data type format database422. The data type format is pre-configured and may be set in advance to a particular format to achieve an intended objective or may be derived by the parameter value analyzer420. The data type format is represented by a regular expression that can specify a single value, a range of values, or a particular character string. The parameter value analyzer420generates the data type format based on the frequency that a data format is found in the web examples 408. For example, for a parameter data type that is an integer, the value ‘0’ is found in 90% of the web examples and the value ‘1’ is found in 10% of the web examples. The parameter value analyzer402may generate a regular expression that indicates a single value of ‘1’ as the integer parameter value for the example. The parameter value analyzer402may also generate a regular expression that indicates a range of values. A range of values, such as (−1, 0, 1) may also be used to select a parameter value where the range of values is derived from the frequency of usage in the web examples. FIG.5represents an exemplary configuration of the components and process used to predict the failure recovery recommendations. The failure recovery system500receives an error message502from a failed execution of a CLI script. The error message502is processed by the pre-processing component504, as noted above, and the processed text is input into the bag-of-words model506to generate embeddings508for the error message502. The embeddings of the error message508are input into the failure type classifier522to predict a failure type. The recommendation module532searches the conditional probability lookup table528using the failed command526, the CLI version525, and the failure type524to obtain the top recommendations including the top three most frequent parameters sets for a successful command. 
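The lookup table search performed by the recommendation module can be sketched as follows. The table layout, key fields, and sample entries are assumptions chosen to mirror the description above (CLI version, failure type, and failed command as the key; success command, parameter set, and frequency as the entry), not the actual table format used by the system.

```python
# Minimal sketch of the conditional probability lookup table search.
from typing import Dict, List, Tuple

# key: (cli_version, failure_type, failed_command)
# value: list of (success_command, parameter_set, frequency) entries
LookupTable = Dict[Tuple[str, str, str], List[Tuple[str, List[str], int]]]

lookup_table: LookupTable = {
    ("2.15.0", "MissingRequiredSubcommand", "vm"): [
        ("vm create", ["-N", "-RG"], 120),
        ("vm show",   ["-GG", "-J"], 45),
        ("vm delete", ["-N"],        30),
    ],
}

def top_recommendations(cli_version: str, failure_type: str,
                        failed_command: str, n: int = 3):
    """Return the n most frequent success command / parameter set pairs."""
    entries = lookup_table.get((cli_version, failure_type, failed_command), [])
    return sorted(entries, key=lambda entry: entry[2], reverse=True)[:n]

# Hypothetical usage:
# top_recommendations("2.15.0", "MissingRequiredSubcommand", "vm")
```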
In the case where a parameter value is required for a parameter used in a recommendation, the recommendation module532uses the parameter value type classifier510given a success command and parameter name534to obtain a parameter value data type512, and a corresponding data format516from the data type format database514to select an appropriate parameter value to complete a recommendation. Methods Attention now turns to a description of the various exemplary methods that utilize the system and devices disclosed herein. Operations for the aspects may be further described with reference to various exemplary methods. It may be appreciated that the representative methods do not necessarily have to be executed in the order presented, or in any particular order, unless otherwise indicated. Moreover, various activities described with respect to the methods can be executed in serial or parallel fashion, or any combination of serial and parallel operations. In one or more aspects, the method illustrates operations for the systems and devices disclosed herein. Attention now turns toFIG.6which shows an exemplary method600for generating failure recovery recommendations. Initially, the failure recovery system trains the tools used to facilitate the failure recovery recommendations, such as the machine learning models to predict a failure type, a parameter value type, the conditional probability lookup table, and the data format database (block602). Once the tools are generated, the failure recovery system is deployed to generate failure recovery recommendations for a command line interface-based application (block604). Turning toFIG.7, there is shown an exemplary method700for training the conditional probability model. The conditional probability model obtains CLI telemetry data for each user session (block702). The CLI telemetry data contains raw data representing the execution of CLI commands to the CLI-based application. For each user session (block704), the commands are converted into scripts and sorted by time. The scripts include a set of consecutive commands executed within 2 seconds to 15 minutes of each other and by the same user (block706). Scripts that have consecutive commands issued less than 2 seconds apart are eliminated since they are most likely to be automated scripts. The scripts are filtered to remove help calls and login/logout commands from the sequence of commands in the script. The scripts from each user session are mined for consecutive sequences of failed commands and immediately succeeding successfully-executed commands (block708). These commands form a failure-success pair and include parameters, if any, without parameter values (block710). The failure-success pairs from each user session are then analyzed to compute the count of the number of unique users that executed a unique failure-success pair (block712). Failure-success pairs having a frequency of less than a pre-defined threshold are eliminated. The failure-success pairs are then used to generate the conditional probability model and to format the conditional probabilities and related data into a look-up table format (block714). FIG.8illustrates an exemplary process800for training a random forest classifier. This process is used to train the parameter value type classifier and the failure type classifier. Turning toFIG.8, the training dataset for a classifier is obtained to include both positive and negative samples. 
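Before turning to the classifier training details of FIG.8, the session mining of FIG.7 described above can be illustrated with a small sketch. The data shape (a per-session, time-sorted list of (command, succeeded) tuples) and the function names are assumptions; the sketch only shows the detection of failure-success pairs and the first-order transition probability computation.

```python
# Illustrative sketch of mining failure-success pairs from a user session and
# estimating first-order transition probabilities P(success_cmd | failed_cmd).
from collections import Counter

def mine_failure_success_pairs(session_commands):
    """session_commands: list of (command, succeeded) tuples sorted by time."""
    pairs = []
    for (cmd, ok), (next_cmd, next_ok) in zip(session_commands, session_commands[1:]):
        if not ok and next_ok:                 # failed command immediately
            pairs.append((cmd, next_cmd))      # followed by a successful command
    return pairs

def transition_probabilities(pairs):
    """For each pair, frequency of the pair divided by the total number of
    success commands observed after that failed command."""
    pair_counts = Counter(pairs)
    failed_totals = Counter(failed for failed, _ in pairs)
    return {pair: count / failed_totals[pair[0]]
            for pair, count in pair_counts.items()}
```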
For the failure type classifier, the error message containing a description of the failure and an associated label identifying the failure type are used as the training dataset of positive samples for the failure type classifier. Successful completion messages and a corresponding label are used as the training dataset for the negative samples for the failure type classifier. For the parameter value type classifier, the training dataset consists of features from the web examples that produced the parameter value, such as command name, parameter name, the module name of the source, the command description and the parameter description and the corresponding label (Collectively, block802). Initially, the number of trees for each random forest is pre-configured to a particular number (block804). The process starts by selecting a decision tree from the random forest (block806). A random set of test parameters are then generated for use by the binary tests performed at the root node (block808). The binary test is of the form: α>f(x; θ)>β, such that f(x; θ) is a function applied to a feature vector x with parameters θ, and with the output of the function compared to threshold values α and β. If the result of f(x; θ) is in the range between α and β then the result of the binary test is true. Otherwise, the result of the binary test is false. The result of the binary test performed at a split node determines which child node a feature vector is passed to. (Collectively, block810). The random set of test parameters generated comprise a set of random values for the function parameter θ and the threshold values α and β. The function parameters of θ of each split node are optimized over a subset θ of all possible parameters. Then, every combination of a test parameter is applied to each feature vector. For each combination, the information gain is calculated. The combination of parameters that maximizes the information is selected and stored at the current node for further use. (Collectively, block812). Next, it is determined whether the value for the maximized information gain is less than a threshold (block814). If the value for the information gain is less than the threshold (block814—yes), then this indicates that further expansion of the tree does not provide significant benefit and the current depth of the tree is determined. If this is greater than a predefined maximum value, then the current node is set as the leaf node (block816) and the process waits for all branches to complete recursion (block818). If the value for the maximized information gain is greater than or equal to the threshold (block814-no), and the tree depth is less than the maximum value, then the current node is set as a split node (block820). As the current node is a split node, it has child nodes, and the process then moves to training these child nodes. Each child node is trained using a subset of the feature vectors at the current node. The subset of feature vectors sent to a child node is determined using the parameters that maximize the information gain. These parameters are used in the binary test, and the binary test performed on all feature vectors at the current node (block822). The feature vectors that pass the binary test form a first subset sent to a first child node, and the feature vectors that fail the binary test form a second subset sent to a second child node. 
For each of the child nodes, the process shown in blocks808to822is recursively executed for the subset of feature vectors directed to the respective child node (block824). In other words, for each child node, new test parameters are generated, applied to the respective subset of feature vectors, parameters maximizing the information gain selected, and the type of node is determined. If it is a leaf node, then the current branch of recursion ceases. If it is a split node, binary tests are performed (block822) to determine further subsets of feature vectors and another branch of recursion starts. Therefore, this process recursively moves through the tree, training each node until leaf nodes are reached at each branch. As leaf nodes are reached, the process waits until the nodes in all branches have been trained (block818). Once all the nodes in the tree have been trained to determine the parameters for the binary test maximizing the information gain at each split node, and leaf nodes have been selected to terminate each branch, the probability distribution can be determined for all the leaf nodes of the tree (block826). This is achieved by counting the class labels of the feature vectors that reach each of the leaf nodes (block828). All the features from the feature vectors end up at a leaf node of the tree. Once the probability distribution has been determined for the leaf nodes of the tree, then if more trees are present (block830—yes), the process repeats. If all the trees in the forest have been trained (block830—no), then the training process is complete (block832). Hence, the training process generates multiple decision trees trained using the training dataset. Each tree comprises multiple split nodes storing optimized test parameters and leaf nodes storing associated probability distributions. FIG.9is an exemplary process for predicting a recommendation for a failure900. A user of the CLI attempts to execute a CLI command which is rejected as an error by the CLI-based application (block902). The failure recovery system receives the error message which contains the failed command and a description of the failure (block904). The error message is then pre-processed by the pre-processing component and the filtered error message is input into the bag-of-words model to generate a corresponding embedding (block906). The embedding is then input into the failure type classifier which identifies an associated failure type (block908). The recommendation module uses the CLI version, the failed command and the failure type to search the lookup table for one or more recommendations. In one aspect, the process for selecting an example command from the lookup table distinguishes between a parameter-related failure and a non-parameter-related failure. For example, a parameter-related failure would include the MissingRequiredParameters failure type. A non-parameter-related failure type would include the UnknownSubcommand failure type. (Collectively, block910) In the case of a parameter-related failure type, the recommendation module obtains a select number of recommendations from the top success commands associated with the CLI version, failed command and failure type entries in the lookup table. In the case of a non-parameter-related failure type, a select number of recommendations would be formatted from each success command matching the CLI version, failed command and failure type entries. A recommendation would include the command and a set of parameters, if any. (Collectively, block910). 
If a recommendation includes a command with a parameter requiring a parameter value, the recommendation module obtains a parameter value data type from the parameter value type classifier given the success command and parameter name. If multiple parameter values are available, then the recommendation module obtains a data format for the data type of the parameter value. The recommendation module then selects a parameter value from the parameter set of the selected success command having the data type of the parameter value type and data format. (Collectively, block912). The recommendation is formatted as a message that includes the error message and the example command in the correct syntax with any subcommands, parameters, and parameter values. The recommendation is then returned to the user (block914). Attention now turns to the use of the random forest classifiers in inferring the type of the parameter value and the failure type. Turning toFIG.10, there is shown an exemplary method1000for predicting the data type of a parameter value and the failure type. In the case of the failure type, the feature vector includes the embeddings generated from an error message. In the case of a data type for a parameter value, the feature vector includes the embeddings generated from the command name and parameter name of a selected success command. (Collectively, block1002). The feature vector is applied to each tree in the random forest for classification. A trained decision tree from the random forest is selected (block1004) and is tested against the trained and optimized parameters in each binary test in each node (block1006). Based on the result of the test, the feature vector is passed to the appropriate child node (block1008). The process is repeated until the feature vector reaches a leaf node (block1010). Once the feature vector reaches a leaf node, the probability distribution associated with this leaf node is stored for this feature vector (block1012). If there are more decision trees in the random forest (block1010—yes), a new decision tree is selected (block1004). The feature vector is pushed through the tree (block1006) and the probability distribution stored (block1008). This is repeated until there are no more decision trees in the random forest (block1010—no). Once the feature vector has been applied to each tree in the random forest (block1010—no), the probability distributions that have been stored are aggregated (block1012) to form the overall probability distribution for each class (block1014). The overall probability distribution for each class is then output (block1016). In the case of the parameter value data type, there may be multiple classes where each class represents a particular data type. In one aspect, there may be two classes where one class represents a string and a second class represents a non-string. In the case of a failure type, there are multiple classes, such as UnknownSubcommand, UnableToParseCommandInput, MissingRequiredParameters, etc. Each class is associated with a particular probability indicating the likelihood that the input features represent the class. Exemplary Operating Environment Attention now turns to a discussion of an exemplary operating environment.FIG.11illustrates an exemplary operating environment1100used to generate failure recommendations. The operating environment1100may be configured as a cloud service having multiple computing devices or configured as a single computing device. 
The computing devices1102are coupled to a network1104to other computing devices. However, it should be noted that the aspects disclosed herein is not constrained to any particular configuration of devices and that other configurations are possible. A computing device1102may be any type of electronic device, such as, without limitation, a mobile device, a personal digital assistant, a mobile computing device, a smart phone, a cellular telephone, a handheld computer, a server, a server array or server farm, a web server, a network server, a blade server, an Internet server, a work station, a mini-computer, a mainframe computer, a supercomputer, a network appliance, a web appliance, an Internet-of-Things (IOT) device, a distributed computing system, multiprocessor systems, or combination thereof. The operating environment1100may be configured in a network environment, a distributed environment, a multi-processor environment, or a stand-alone computing device having access to remote or local storage devices. A computing device1102may include one or more processors1106, one or more communication interfaces1108, one or more storage devices1110, one or more input/output devices1114and one or more memory devices1112. A processor1106may be any commercially available or customized processor and may include dual microprocessors and multi-processor architectures. A communication interface1108facilitates wired or wireless communications between the computing devices and other devices. A storage device1110may be computer-readable medium that does not contain propagating signals, such as modulated data signals transmitted through a carrier wave. Examples of a storage device1110may include without limitation RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, all of which do not contain propagating signals, such as modulated data signals transmitted through a carrier wave. There may be multiple storage devices in a computing device. The input/output devices1114may include a keyboard, mouse, pen, voice input device, touch input device, display, speakers, printers, etc., and any combination thereof. A memory device1112may be any non-transitory computer-readable storage media that may store executable procedures, applications, and data. The computer-readable storage media does not pertain to propagated signals, such as modulated data signals transmitted through a carrier wave. It may be any type of non-transitory memory device (e.g., random access memory, read-only memory, etc.), magnetic storage, volatile storage, non-volatile storage, optical storage, DVD, CD, floppy disk drive, etc. that does not pertain to propagated signals, such as modulated data signals transmitted through a carrier wave. A memory device1112may also include one or more external storage devices or remotely located storage devices that do not pertain to propagated signals, such as modulated data signals transmitted through a carrier wave. Memory devices1112may include an operating system1116, CLI telemetry data1118, a conditional probability lookup table1120, a data extraction component1122, a parameter value type classifier1124, a parameter value analyzer1126, a web crawler1128, a pre-processing component1130, a bag of words model1132, a classifier training component1134, a failure type classifier1136, a data type format database1138, a CLI-based application1140and other applications and data1142. 
Network1104may be configured as an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan network (MAN), the Internet, a portions of the Public Switched Telephone Network (PSTN), plain old telephone service (POTS) network, a wireless network, a WiFi® network, or any other type of network or combination of networks. A network1104may employ a variety of wired and/or wireless communication protocols and/or technologies. Various generations of different communication protocols and/or technologies that may be employed by a network may include, without limitation, Global System for Mobile Communication (GSM), General Packet Radio Services (GPRS), Enhanced Data GSM Environment (EDGE), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access 2000, (CDMA-2000), High Speed Downlink Packet Access (HSDPA), Long Term Evolution (LTE), Universal Mobile Telecommunications System (UMTS), Evolution-Data Optimized (Ev-DO), Worldwide Interoperability for Microwave Access (WiMax), Time Division Multiple Access (TDMA), Orthogonal Frequency Division Multiplexing (OFDM), Ultra Wide Band (UWB), Wireless Application Protocol (WAP), User Datagram Protocol (UDP), Transmission Control Protocol/Internet Protocol (TCP/IP), any portion of the Open Systems Interconnection (OSI) model protocols, Session Initiated Protocol/Real-Time Transport Protocol (SIP/RTP), Short Message Service (SMS), Multimedia Messaging Service (MMS), or any other communication protocols and/or technologies. Conclusion A system is disclosed comprising: one or more processors coupled to a memory; and a program stored in the memory and configured to be executed by the one or more processors, the program including instructions that: detect a plurality of failure-success pairs of command line interface (CLI) commands from historical usage data, a failure-success pair including a successfully-executed CLI command immediately following a failed CLI command; compute a probability for a failure-success pair, the probability representing a likelihood that the successfully-executed CLI command remediates the failed command of the failure-success pair; and select a successfully-executed CLI command, of a failure-success pair, to remediate a failed CLI command based on an associated probability. In one aspect, the program includes further instructions that: associate a failure type to each of a plurality of failed CLI commands; and select the successfully-executed CLI command to remediate the failed CLI command based on the failure type of the failed CLI command matching the failure type of the successfully-executed CLI command. In one aspect, the program includes further instructions that: train a failure type classifier to associate a failure type to an error message, the error message including the failed command. In some aspects, the probability associated with the failure-success pair is based on a frequency of the successfully-executed CLI command immediately following the failed CLI command. In some aspects, a successfully-executed CLI command includes a parameter having a parameter value; and the program includes further instructions that: select a parameter value for the parameter based on a data type of the parameter value. 
In one or more aspects, the data type of the parameter value is identified from a parameter value type classifier, the parameter value type classifier trained to identify a data type of a parameter value from publicly-accessible examples. In an aspect, the program includes further instructions that: select the parameter from a parameter set associated with the successfully-executed command based on a highest frequency associated with the selected parameter. A method is disclosed comprising: obtaining a plurality of failure-success pairs, a failure-success pair including a failed CLI command and a successfully-executed CLI command, wherein the successfully-executed CLI command immediately followed a failed CLI command in historical usage data of the CLI; receiving an error message from a failed CLI script, the failed CLI script including a first failed CLI command; selecting a successfully-executed CLI command to remediate the first failed CLI command based on the first failed CLI command matching the failed CLI command of the selected successfully-executed CLI command; and forming a recommendation to remediate the first failed CLI command, the recommendation including the selected successfully-executed CLI command. In one or more aspects, the method further comprises: associating a failure type with each failed CLI command of the plurality of failure-success pairs; and selecting the selected successfully-executed CLI command based on a failure type of the first failed CLI command matching a failure type of the selected successfully-executed CLI command. In some aspects, the method further comprises training a failure type classifier to associate a failure type given an error message having a failed CLI command. In aspect, the failure type classifier is a random forest classifier. In an aspect, the method further comprises: ranking failure-success pairs associated with a common failure type by increasing frequency. The method further comprises: associating a set of parameters with the successfully-executed command of a failure-success pair, the set of parameters derived from historical usage data. In aspects, the method further comprises: determining a data type of a parameter value for a select parameter of the set of parameters; and using the data type of the parameter value of the select parameter to obtain a selected parameter value. In one or more aspects, the method further comprises: training a classifier to identify the data type of the parameter value from command-parameters pairs of the historical usage data. A device is disclosed comprising: at least one processor coupled to a memory. The at least one processor is configured to: detect a failure of a command of a command line interface to execute successfully; determine a failure type of the failed command; generate a recommendation to correct the failed command from a conditional probability model given the failed command and the failure type, the recommendation including a success command to remedy the failed command, the conditional probability model predicting the success command based on a probability of the success command immediately following the failed command; and output the recommendation to a user device having initiated the failed command. In some aspects, the at least one processor is further configured to: receive an error message indicating the failed command; and utilize a failure type classifier to determine the failure type of the failed command. 
In aspects, the at least one processor is further configured to: generate an embedding of the error message of the failed command using a bag-of-words model, the embedding used by the failure type classifier to predict the failure type. In one or more aspects, the failure type classifier is a random forest classifier. In an aspect, the success command includes a parameter having a parameter value and the at least one processor is further configured to: obtain a data type of the parameter value from a parameter value type classifier; and select the parameter value from a plurality of parameter values of the success command based on the data type of the parameter value. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number. DETAILED DESCRIPTION OF THE EMBODIMENTS OF THE DISCLOSURE 1. Overview An aspect of the present disclosure is directed to recommending remediation actions. In one embodiment, a (recommendation) system constructs a knowledge graph based on problem descriptors and remediation actions contained in multiple incident reports previously received from a performance management (PM) system. Each problem descriptor and remediation action in an incident report are represented as corresponding start node and end node in the knowledge graph, with a set of qualifier entities in the incident report represented as causal links between the start node and the end node. Upon receiving a first incident report related to a first incident identified by the PM system, the system extracts a first problem descriptor and a first set of qualifier entities. The system traverses the knowledge graph starting from a start node corresponding to the first problem descriptor using the first set of qualifier entities to determine end nodes representing a first set of remediation actions. The system provides the first set of remediation actions as recommendations for resolving the incident. According to another aspect of the present disclosure, the system maintains a respective confidence score associated with each path from the first problem descriptor to each of the first set of remediation actions, wherein the confidence score for a path represents a likelihood of resolution of the first problem descriptor by the corresponding remediation action. The system identifies rankings for the first set of remediation actions based on the associated confidence scores and also provides the identified rankings along with the first set of remediation actions. According to one more aspect of the present disclosure, the system also extracts a second problem descriptor with a second weight along with the first problem descriptor with a first weight. The system traverses the knowledge graph to determine a second set of remediation actions and associated confidence scores for the second problem descriptor. The system then identifies rankings for both the first set of remediation actions and the second set of remediation actions together based on associated confidence scores weighted by the respective first weight and second weight. According to yet another aspect of the present disclosure, the system classifies the first incident as being one of a short head incident and a long tail incident. If the first incident is classified as a short head incident, the system provides the first set of remediation actions as recommendations. If the first incident is classified as a long tail incident, the system performs a web search to determine a third set of remediation actions and then provides the third set of remediation actions as recommendations for resolving the first incident. 
According to an aspect of the present disclosure, the system performs the classifying by generating a machine learning (ML) model correlating a set of problem types contained in the multiple incident reports (received from the PM system) to a number of occurrences of each problem type in the knowledge graph, and then predicting using the ML model, whether the first incident is one of the short head incident and the long tail incident based on a first problem type determined for the first incident. According to another aspect of the present disclosure, the system combines the first set of remediation actions and the third set of remediation actions to generate a final set of remediation actions. The system provides the final set of remediation actions as recommendations for resolving the first incident. According to one more aspect of the present disclosure, the first problem descriptor is one of a root cause of the first incident and a symptom caused by the first incident. The first set of qualifier entities includes one or more of a performance metric associated with the first incident, a component of an application where the first incident occurred, a sub-component of the application where the first incident occurred, a location of a server hosting the component, and a problem type determined for the first incident. The first set of qualifier entities also includes the symptom when the problem descriptor is the root cause, and the root cause when the problem descriptor is the symptom. Several aspects of the present disclosure are described below with reference to examples for illustration. However, one skilled in the relevant art will recognize that the disclosure can be practiced without one or more of the specific details or with other methods, components, materials and so forth. In other instances, well-known structures, materials, or operations are not shown in detail to avoid obscuring the features of the disclosure. Furthermore, the features/aspects described can be practiced in various combinations, though only some of the combinations are described herein for conciseness. 2. Example Environment FIG.1is a block diagram illustrating an example environment in which several aspects of the present disclosure can be implemented. The block diagram is shown containing end-user systems110-1through110-Z (Z representing any natural number), Internet120, computing infrastructure130and model evaluator150. Computing infrastructure130in turn is shown containing intranet140, nodes160-1through160-X (X representing any natural number), performance management (PM) system170and ITSM (IT Service Management) tool180. The end-user systems and nodes are collectively referred to by110and160respectively. Merely for illustration, only representative number/type of systems are shown inFIG.1. Many environments often contain many more systems, both in number and type, depending on the purpose for which the environment is designed. Each block ofFIG.1is described below in further detail. Computing infrastructure130is a collection of nodes (160) that may include processing nodes, connectivity infrastructure, data storages, administration systems, etc., which are engineered to together host software applications. Computing infrastructure130may be a cloud infrastructure (such as Amazon Web Services (AWS) available from Amazon.com, Inc., Google Cloud Platform (GCP) available from Google LLC, etc.) 
that provides a virtual computing infrastructure for various customers, with the scale of such computing infrastructure being specified often on demand. Alternatively, computing infrastructure130may correspond to an enterprise system (or a part thereof) on the premises of the customers (and accordingly referred to as “On-prem” infrastructure). Computing infrastructure130may also be a “hybrid” infrastructure containing some nodes of a cloud infrastructure and other nodes of an on-prem enterprise system. All the nodes (160) of computing infrastructure130, PM system170and ITSM tool180are connected via intranet140. Internet120extends the connectivity of these (and other systems of the computing infrastructure) with external systems such as end-user systems110. Each of intranet140and Internet120may be implemented using protocols such as Transmission Control Protocol (TCP) and/or Internet Protocol (IP), well known in the relevant arts. In general, in TCP/IP environments, a TCP/IP packet is used as a basic unit of transport, with the source address being set to the TCP/IP address assigned to the source system from which the packet originates and the destination address set to the TCP/IP address of the target system to which the packet is to be eventually delivered. An IP packet is said to be directed to a target system when the destination IP address of the packet is set to the IP address of the target system, such that the packet is eventually delivered to the target system by Internet120and intranet140. When the packet contains content such as port numbers, which specifies a target application, the packet may be said to be directed to such application as well. Each of end-user systems110represents a system such as a personal computer, workstation, mobile device, computing tablet etc., used by users to generate (user) requests directed to software applications executing in computing infrastructure130. A user request refers to a specific technical request (for example, Universal Resource Locator (URL) call) sent to a server system from an external system (here, end-user system) over Internet120, typically in response to a user interaction at end-user systems110. The user requests may be generated by users using appropriate user interfaces (e.g., web pages provided by an application executing in a node, a native user interface provided by a portion of an application downloaded from a node, etc.). In general, an end-user system requests a software application for performing desired tasks and receives the corresponding responses (e.g., web pages) containing the results of performance of the requested tasks. The web pages/responses may then be presented to a user by a client application such as the browser. Each user request is sent in the form of an IP packet directed to the desired system or software application, with the IP packet including data identifying the desired tasks in the payload portion. Some of nodes160may be implemented as corresponding data stores. Each data store represents a non-volatile (persistent) storage facilitating storage and retrieval of data by software applications executing in the other systems/nodes of computing infrastructure130. Each data store may be implemented as a corresponding database server using relational database technologies and accordingly provide storage and retrieval of data using structured queries such as SQL (Structured Query Language). 
Alternatively, each data store may be implemented as a corresponding file server providing storage and retrieval of data in the form of files organized as one or more directories, as is well known in the relevant arts. Some of the nodes160may be implemented as corresponding server systems. Each server system represents a server, such as a web/application server, constituted of appropriate hardware executing software applications capable of performing tasks requested by end-user systems110. A server system receives a user request from an end-user system and performs the tasks requested in the user request. A server system may use data stored internally (for example, in a non-volatile storage/hard disk within the server system), external data (e.g., maintained in a data store) and/or data received from external sources (e.g., received from a user) in performing the requested tasks. The server system then sends the result of performance of the tasks to the requesting end-user system (one of110) as a corresponding response to the user request. The results may be accompanied by specific user interfaces (e.g., web pages) for displaying the results to a requesting user. In one embodiment, software applications containing one or more components are deployed in nodes160of computing infrastructure130. Examples of such software include, but are not limited to, data processing (e.g., batch processing, stream processing, extract-transform-load (ETL)) applications, Internet of things (IoT) services, mobile applications, and web applications. Computing infrastructure130along with the software applications deployed there is viewed as a computing environment (135). It may be appreciated that each of nodes160has a fixed number of resources such as memory (RAM), CPU (central processing unit) cycles, persistent storage, etc. that can be allocated to (and accordingly used by) software applications (or components thereof) executing in the node. Other resources that may also be provided associated with the computing infrastructure (but not specific to a node) include public IP (Internet Protocol) addresses, etc. In addition to such infrastructure resources, application resources such as database connections, application threads, etc. may also be allocated to (and accordingly used by) the software applications (or components thereof). Accordingly, it may be desirable to monitor and manage the resources consumed by computing environment135C. PM system170aids in the management of the performance of computing environment135C, in terms of managing the various resources noted above. Broadly, PM system170is designed to process time series of values of various data types characterizing the operation of nodes160while processing user requests. The data types can span a variety of data, for example, performance metrics (such as CPU utilization, memory used, storage used, etc.), logs, traces, topology, etc. Based on processing of such values of potentially multiple data types, PM system170predicts expected values of performance metrics of interest at future time instances. PM system170also identifies potential issues (shortage of resources, etc.) in computing environment135based on such predicted expected values and/or actual values received from nodes160and triggers corresponding alerts for the identified issues. In the instant description, the term “incident” refers to such an identified potential issue that is triggered as an alert by PM system170. 
In one embodiment, PM system170uses ML (machine learning) based or DL (deep learning) based approaches for co-relating the performance metrics (with time instances or user requests received from end user system110) and predicting the issues/violations for the performance metrics. Examples of machine learning (ML) approaches are KNN (K Nearest Neighbor), Decision Tree, etc., while deep learning approaches are Multilayer Perceptron (MLP), Convolutional Neural Networks (CNN), Long short-term memory networks (LSTM), etc. Such PM systems that employ AI (artificial intelligence) techniques such as ML/DL for predicting the outputs are also referred to as AIOps (AI for IT operations) systems. ITSM tool180facilitates IT managers such as administrators, SREs, etc. to provide end-to-end delivery of IT services (such as software applications) to customers. To facilitate such delivery, ITSM tool180receives the alerts/incidents triggered by PM system170and raises corresponding tickets/incident reports for the attention of the IT managers. ITSM tool180also maintains the raised incident reports in a non-volatile storage such as a data store (e.g., one of nodes160). Examples of ITSM tool180are ServiceNow software available from ServiceNow, Inc., Helix ITSM (previously Remedy ITSM) software available from BMC Software, Inc, etc. It should be noted that at the time when the incident reports are raised by ITSM tool180, the incident reports contain details related to the incident such as the symptom caused by the incident, a performance metric associated with the incident, a component/sub-component of an application where the incident occurred, etc. An administrator/SRE may thereafter manually add (using end user systems110to send requests to ITSM tool180) additional details related to the incident such as the root cause of the incident, problem type of incident, etc. based on further investigation. After manually determining and performing any remediation actions to resolve the incident, the administrator/SRE may also add the details of the remediation actions to the incident report. In one embodiment, the incident reports/tickets in ITSM tool180are associated with different levels (such as level 0 or L0, level 1 or L1 and level 2 or L2) indicating the difficulty and/or importance of the incident. For L0 incident reports, an administrator/SRE typically manually performs one or more searches (using keywords obtained from the new incident report) on the previously raised and resolved incident reports and determines any remediation actions based on the results of the searches. However, for L1 and L2 incident reports, a sequence of actions may need to be performed to diagnose/resolve the incident completely and typically requires the involvement of one or more domain experts. It may be appreciated that when the number of incident reports increases (more than 10,000), it may not be feasible to determine the remediation actions based on manual searches (even for L0 tickets). In addition, the involvement of domain knowledge experts may cause delays in the resolving of the L1/L2 tickets. Recommendation system150, provided according to several aspects of the present disclosure, recommends remediation actions for incidents identified by PM systems (170) deployed in a computing environment (135C). 
Though shown external to computing infrastructure130, in alternative embodiments, recommendation system150may be implemented internal to computing infrastructure130, for example, on one of nodes160or as a system connected to intranet140. The manner in which recommendation system150recommends remediation actions is described below with examples. 3. Recommending Remediation Actions FIG.2is a flow chart illustrating the manner in which remediation actions are recommended for incidents identified by performance management systems (e.g., PM system170) according to several aspects of the present disclosure. The flowchart is described with respect to the systems ofFIG.1, in particular recommendation system150, merely for illustration. However, many of the features can be implemented in other environments also without departing from the scope and spirit of several aspects of the present invention, as will be apparent to one skilled in the relevant arts by reading the disclosure provided herein. In addition, some of the steps may be performed in a different sequence than that depicted below, as suited to the specific environment, as will be apparent to one skilled in the relevant arts. Many of such implementations are contemplated to be covered by several aspects of the present invention. The flow chart begins in step201, in which control immediately passes to step210. In step210, recommendation system150receives incident reports related to incidents identified by a PM system (such as170). The incident reports may be received from ITSM tool180. The incident reports may be raised in ITSM tool180in response to receiving the incidents identified by PM system170. Each incident report contains a corresponding problem descriptor for that incident, a remediation action performed for resolving that incident (added by an administrator/SRE), and a set of qualifier entities associated with the incident. In the following disclosure, the term “qualifier entity” refers to concrete things and/or experiences qualifying the incident present in the incident report. A qualifier entity captures the information that replies to the questions of what, when, where, etc. as related to the incident. For example, when did the incident occur (date/time), what is the effect of the incident (symptom), where did the incident occur (component, sub-component, location), etc. In step220, recommendation system150constructs based on the incident reports, a knowledge graph that co-relates problem descriptors (contained in the incident reports) with remediation actions (contained in the incident reports). In one embodiment, each problem descriptor is represented as a corresponding start node and each remediation action is represented as a corresponding end node in the knowledge graph. The set of qualifier entities in each incident report is represented as causal links between the start node and the end node corresponding to the problem descriptor and remediation action contained in the incident report. In step240, recommendation system150receives an incident report related to an incident identified by the PM system (170). The incident report may be received from ITSM tool180and may be raised by ITSM tool180in response to receiving the incident identified PM system170. However, the received incident report does not contain a remediation action. In step250, recommendation system150extracts from the incident report, a problem descriptor and qualifier entities. 
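A minimal sketch of the knowledge graph construction of step220 is given below. The report field names, the chaining of the qualifier entities into a single path from problem descriptor to remediation action, and the use of the networkx library are assumptions made for illustration; the disclosure does not prescribe a particular graph library or schema. Edge weights simply count how often the connected entities co-occur in the historical incident reports.

```python
# Illustrative sketch of building a knowledge graph from historical incident
# reports, with problem descriptors as start nodes, remediation actions as end
# nodes, and qualifier entities as intermediate (causal link) nodes.
import networkx as nx

def build_knowledge_graph(incident_reports):
    """incident_reports: iterable of dicts with hypothetical keys
    'symptom', 'component', 'sub_component', 'problem_type', 'remediation'."""
    graph = nx.DiGraph()
    for report in incident_reports:
        # Chain: symptom -> component -> sub-component -> problem type -> remediation
        path = [
            ("symptom", report["symptom"]),
            ("component", report["component"]),
            ("sub_component", report["sub_component"]),
            ("problem_type", report["problem_type"]),
            ("remediation", report["remediation"]),
        ]
        for u_node, v_node in zip(path, path[1:]):
            if graph.has_edge(u_node, v_node):
                graph[u_node][v_node]["weight"] += 1   # count co-occurrences
            else:
                graph.add_edge(u_node, v_node, weight=1)
    return graph
```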
The extraction may be performed in a known way, for example, based on a pattern search within the text of the incident report. In step270, recommendation system150traverses the knowledge graph using the extracted problem descriptor and qualifier entities to determine remediation actions. In the embodiment noted above, the traversal is performed by starting from a start node corresponding to the extracted problem descriptor and then using the extracted qualifier entities to determine end nodes representing remediation actions. In step280, recommendation system150provides the determined remediation actions as recommendations for resolving the incident. The recommendations may be displayed to an administrator/SRE on a display unit (not shown) associated with any of end user systems110. Alternatively, the recommendation may be sent, for example, as an email, to the administrator/SRE. Control passes to step299, where the flowchart ends. Thus, recommendation system150recommends remediation actions for incidents identified by a performance management system (170). It may be appreciated that providing such recommendations relieves the administrator/SRE of the burden of performing manual searches for L0 tickets. In addition, by capturing the domain knowledge expertise using a knowledge graph and using the knowledge graph to determine recommendation assists administrator/SREs to resolve L1/L2 tickets without any delays. According to an aspect, recommendation system150also identifies rankings for the determined remediation actions based on the confidence scores associated with the remediation actions in the knowledge graph. Recommendation system150then provides the identified rankings along with the determined remediation actions to the administrator/SRE. The manner in which recommendation system150provides several aspects of the present disclosure according to the steps ofFIG.2is illustrated below with examples. 4. Illustrative Example FIGS.3,4A-4C,5, and6A-6Cillustrate the manner in which recommendation system150recommends remediation actions for incidents identified by a performance management system (170) in one embodiment. Each of the Figures is described in detail below. FIG.3is a block diagram illustrating an implementation of a recommendation system (150) in one embodiment. The block diagram is shown containing entity extractor310, classification engine320, orchestrator330, knowledge graph module350, web search module360and blender/re-ranker380. Each of the blocks is described in detail below. Entity extractor310extracts the necessary or important information from the incident reports, in particular, from the descriptions/texts contained in the incident reports. In the following disclosure, the term “entity” refers to any relevant/important information extracted from an incident report. Example of such entities are problem descriptor, remediation action, qualifier entities, etc. Entity extractor310receives (via path148) historical incident reports that have been previously generated by ITSM tool180at prior time instances. Each historical incident report includes the details of the incident along with work logs/notes or resolution steps (remediation actions) performed to resolve the incident. Each historical incident report may also include the feedback (relevant/irrelevant) provided by IT managers for the remediation actions recommended for previous incidents. 
In addition, historical incident reports may also contain topology information showing how the services (software applications or components thereof) are interconnected as well as how the services are deployed in the nodes of computing environment135. For each received historical incident report, entity extractor310extracts the problem descriptor of the incident and a resolution action performed for resolving the incident. In one embodiment, the problem descriptor is a root cause of the incident, which may be provided either by the monitoring tool (PM system170) or by the administrators/SREs post investigation. The problem descriptor needs to be extracted from the historical incident report. Remediation actions (resolution steps) refer to the concrete steps taken by the administrators or SREs to resolve the incident. In addition to the problem descriptor and remediation action, entity extractor310also extracts one or more qualifier entities from each historical incident report. Entity extractor310extracts qualifier entities such as what caused the incident, where the incident was caused, what type of an incident it is etc. In one embodiment, the following qualifier entities are extracted: Symptom—Indicates the effect of the incident and is typically the basis for identifying the incident. The symptom can be extracted using natural language processing (NLP) from the description text in the incident report. For example, an extracted symptom may be “Memory utilization of db service was high for instance db01”. Performance metric—Indicates the specific metric (such as CPU utilization, memory used, storage used, etc.) that caused the incident. Component—Indicates the component (of a software application) where the incident occurred, for example, which software application, which service (db, web, app etc.), etc. Sub-component—Indicates the sub-component (of a software application) where the incident occurred, for example, which service (db, web, app etc.), which software module, etc. Location—Indicates the instance/server-name/geo where the incident occurred. The location information can also be extracted from the description text or else may be present in one of the mandatory fields to be entered by the administrators/SREs in ITSM tool180. Problem type—Indicates the broad type or category of the incident such as database, middleware, frontend, backend, etc. In an alternative embodiment, the symptom noted above may be determined to be the problem descriptor and accordingly the root cause may be identified as a qualifier entity and included in the set of qualifier entities. The description is continued assuming that the symptom is the problem descriptor and the set of qualifier entities includes only the component, sub-component and problem tuple for illustration. Aspects of the present disclosure may be provided with other problem descriptors and/or other sets of qualifier entities as will be apparent to one skilled in the relevant arts by reading the disclosure herein. The manner in which entity extractor310extracts the various details of the historical incident reports is described below with examples. 5. Historical Incident Reports FIG.4Adepicts sample incidents identified by a performance management system (170) in one embodiment. Table400depicts sample performance metrics monitored by PM system170. Columns401to405specify the details of the performance metric (including the component and sub-component). 
Column406specifies the actual value of the performance metric captured at the nodes in the computing environment, while columns407and408specify the upper and lower limits for the performance metric. Each of rows411-413thus specifies the details of a corresponding incident of the performance metric causing a violation (higher than the upper limit or lower than the lower limit). PM system170reports these incidents (411-413) to ITSM tool180, which in turn raises incident reports. FIG.4Bdepicts sample (historical) incident reports raised by an ITSM tool (180) in one embodiment. Each of data portions420,430and440represents an incident report raised by ITSM tool180in response to receiving incidents identified by PM system170. For example, data portions420and430may be raised in response to receiving the incidents in rows412and413respectively. It may be observed that the incident reports include not only the information obtained from the incident (such as the component, sub-component, performance metric, etc.), but also additional information (such as name of the database service, problem type/category, etc.) provided by administrators/SREs. It should be noted that the incident reports in data portions420,430and440also include the remediation actions performed to resolve the corresponding incident, and accordingly represent historical incident reports that may be used by recommendation system150to construct a knowledge graph as described in detail below. Referring again toFIG.3, entity extractor310receives the historical incident reports ofFIG.4Band extracts the corresponding problem descriptor, remediation action and set of qualifier entities from each historical incident report. FIG.4Cillustrates the manner in which entities are extracted from incident reports in one embodiment. In particular, table450depicts the entities extracted from the incident report of data portion440ofFIG.4B. The entity name column indicates a pre-defined label used for each entity, while the value column indicates the value/text extracted from the incident report for the corresponding entity name. Thus, table450depicts the time, root cause, symptom and resolution (remediation action) extracted from the incident report of data portion440. It may be appreciated that such extraction may be performed using natural language processing (NLP) techniques well known in the relevant arts. In one embodiment, a conditional random field (CRF) model/Bi-directional LSTM with CNN (CNN-Bi-LSTM) is trained on the historical incident reports to understand and extract the main entities from the unstructured text data. During inferencing without the above-mentioned labels, the model can highlight the main key components of the entities, viz. root cause, symptom, and resolution. The parameters of the model are a sequence of texts and their respective BIO-encoded labels. Such a technique may require manual ground truth labeling of incident reports using the BIO encoding standards as will be apparent to one skilled in the relevant arts. Data portion460depicts the incident report of data portion440after performance of BIO encoding. Thus, recommendation system150(in particular, entity extractor310) extracts the desired entities from the historical incident reports (ofFIG.4B). Recommendation system150then constructs a knowledge graph based on the extracted entities, as described below with examples. 6. 
6. Knowledge Graph Referring again toFIG.3, knowledge graph module350operates a knowledge hub that contains the causes and effects (problem descriptors), the topology (component, sub-component, etc.) and the resolution steps (remediation actions) of all the historical incident reports. Knowledge graph module350receives (from entity extractor310) the entities extracted from historical incident reports and constructs a knowledge graph based on the received entities such as problem descriptor, remediation action, qualifier entities, etc. In one embodiment, each problem descriptor (such as symptom) and remediation action in an incident report are represented as corresponding start node and end node in the knowledge graph, with a set of qualifier entities (such as component, sub-component and problem type) in the incident report represented as causal links between the start node and the end node in the knowledge graph. The manner in which a knowledge graph may be constructed is described below with examples. FIG.5depicts portions of a knowledge graph (500) constructed based on historical incident reports in one embodiment. In particular, nodes511and512are start nodes representing problem descriptors and indicate the symptom text, while nodes551,552and553are end nodes representing remediation actions (again shown as text). Nodes521and522represent the component qualifier entity and indicate the name of the component, nodes531-534represent the sub-component qualifier entity and indicate the name of the sub-component and nodes541-544represent the problem type qualifier entity and indicate the problem type in text form. Each start node is shown connected to each end node via one or more nodes representing the set of qualifier entities extracted from a historical incident report. For example, start node511is shown connected to end node551via the nodes521,531and541representing the qualifier entities component, sub-component and problem type respectively in a corresponding historical incident report. It is important to note that the remediation action not only depends on the symptom but also on the root cause of the problem, the component/sub-component etc. where the problem surfaced and the specific problem type/category or sub-category. In other words, the nodes representing the qualifier entities capture the various causal links (where, when, which, etc.) between a start node (symptom) and an end node (remediation action). As such, all of these entities are required to be extracted from each incident report to uniquely identify the incident and suggest a remediation action. Each edge in knowledge graph500indicates that the entities represented by the two nodes connected by the edge have occurred/been present in at least one of the historical incident reports. For example, the edge between node511and521indicates that there is at least one historical incident report containing both the problem descriptor/symptom “db service no response” and the component “db”. It may be appreciated that the same edge may occur multiple times in the historical incident reports. In one embodiment, an edge weight is associated with each edge in knowledge graph500indicating the number of occurrences of the entities represented by the two nodes connected by the edge in the historical incident reports. For illustration, only the edge weights (10, 3, 20, etc.)
for the edges between nodes541-544(representing problem types) and end nodes551-553(representing remediation actions), indicating the number of occurrences of the corresponding problem type—remediation action pairs in the historical incident reports, are shown in knowledge graph500. However, similar edge weights may be maintained for the other edges as well, as will be apparent to one skilled in the relevant arts by reading the disclosure herein. According to an aspect, the edge weights maintained as part of knowledge graph500are the basis for determining a respective confidence score associated with each path from a problem descriptor to a corresponding remediation action. The confidence score for a path represents a likelihood of resolution of the problem descriptor by the corresponding remediation action. In one embodiment, instead of having text-based nodes, word embeddings are used in order to handle synonyms, semantic similarities, etc. Different embedding techniques can be used such as FastText, BERT etc. as well as sentence embedding techniques such as InferSent and USE. Also, the knowledge graph is designed in a way so as to be effective across multiple customers/tenants using computing environment135. Accordingly, the symptoms, root cause, etc. may be stored in a canonical format so that differences in language etc. do not affect the searchability in knowledge graph500. Thus, recommendation system150constructs a knowledge graph (500) based on the entities extracted from historical incident reports. The manner in which recommendation system150processes a new incident report sought to be resolved (that is, does not include a remediation action) is described below with examples. 7. Processing an Incident Report Referring again toFIG.3, entity extractor310receives (via path148) the incident report (hereinafter the target incident report) sought to be resolved from ITSM tool180and extracts the entities from the target incident report.FIG.6Adepicts the manner in which an incident report sought to be resolved is processed in one embodiment. In particular, data portion610represents the target incident report raised by ITSM tool180in response to receiving a corresponding incident identified by PM system170. Table620depicts the various entities extracted by entity extractor310from data portion610using NLP techniques. It may be observed that table620does not contain any remediation action/resolution steps. Entity extractor310forwards the extracted entities to orchestrator330. Orchestrator330acts as a relay engine to the system, conveying various information to the different modules to arrive at the remediation actions for the target incident report and then provide them to a user (such as administrator/SRE). For example, orchestrator330coordinates with classification engine320, knowledge graph module350, web search module360as well as the blender/re-ranker380to generate the final recommendation (of remediation actions) for the SREs or end users. Orchestrator330accordingly receives the target incident report (610) and the corresponding extracted entities (620) from entity extractor310and then forwards the details to classification engine320to determine a classification of the received (target) incident. Such classification facilitates orchestrator330in determining the most appropriate remediation actions for the target incident. Classification engine320is implemented to classify a received (target) incident into one or more classes.
In one embodiment, classification engine320classifies the target incident report as either a short head incident or a long tail incident.FIG.6Billustrates the short head/long tail classification in one embodiment. The graph is shown with the problem types along the X-axis and the number of occurrences of the problem types in the knowledge graph (500) along the Y-axis. It may be observed that a few problem types occur very frequently (left side of the dotted line) while a large number of problem types occur much less frequently (right side of the dotted line). Thus, the left side of the dotted line may be viewed as a short head, while the right side forms a long tail. In one embodiment, a pre-defined categorization technique (e.g., based on frequency of occurrence of the problem type in the target incident) is used to classify the target incident into a short head incident (e.g., high frequency of occurrence) or a long tail incident (e.g., low frequency of occurrence). According to an aspect, classification engine320classifies the incident using an ML model that correlates problem types contained in the historical incident reports (FIG.4B) with the number of occurrences of each problem type in knowledge graph500. The ML model is trained with historical incident reports. Algorithms such as k-NNs, SVMs, and deep neural networks may be used for classification. To handle the class imbalance problem, classification engine320can be implemented to use upsampling/downsampling or Learning to Rank techniques well known in the relevant arts. It may be noted that the ML model is specifically designed as a solution for the most frequently occurring problem types, as there are many training samples and the ML model can achieve high accuracy. It may also be appreciated that during the initial phase of operation of recommendation system150, L0 tickets/incident reports are likely to be classified as short head incidents, while L1/L2 incident reports are likely to be classified as long tail incidents. However, after continued operation during which knowledge graph500has been updated with a substantial number of historical incident reports, even L1/L2 incident reports are likely to be classified as short head incidents and accordingly recommendation system150facilitates the handling of such L1/L2 incident reports by administrators/SREs without requiring any additional domain knowledge expertise. Upon receiving the details from orchestrator330, classification engine320predicts, using the ML model, whether the target incident is a short head incident or a long tail incident based on a problem type extracted from the target incident report. Classification engine320then forwards the predicted classification to orchestrator330. Orchestrator330receives the classification of the target incident and performs a knowledge graph traversal if the target incident is classified as a short head incident and a web search if the target incident is classified as a long tail incident. The knowledge graph traversal and web search are performed to determine the most appropriate remediation actions as described below with examples. 8. Determining Remedial Actions and Confidence Scores For short head incidents, orchestrator330first sends a query to knowledge graph module350, the query containing the target incident report (610) and the extracted entities (620).
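Before turning to the traversal details below, the following is a purely illustrative sketch of the short head/long tail classification just described. It stands in for the trained ML model by simply checking whether the incident's problem type falls among the most frequently occurring problem types; the threshold fraction and all names are assumptions made for this sketch.

```python
from collections import Counter
from typing import List

def classify_incident(problem_type: str,
                      historical_problem_types: List[str],
                      short_head_fraction: float = 0.2) -> str:
    """Return 'short head' if the problem type is among the most frequent
    problem types seen historically, otherwise 'long tail'."""
    counts = Counter(historical_problem_types)
    ranked = [ptype for ptype, _ in counts.most_common()]
    cutoff = max(1, int(len(ranked) * short_head_fraction))   # size of the "head"
    return "short head" if problem_type in ranked[:cutoff] else "long tail"

history = (["lock wait"] * 20 + ["high memory"] * 15 +
           ["cert expiry"] + ["disk full"] + ["dns timeout"])
print(classify_incident("lock wait", history))    # short head
print(classify_incident("cert expiry", history))  # long tail
```

In practice a trained classifier (k-NN, SVM, deep neural network) would replace the simple frequency cutoff shown here.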
Knowledge graph module350, in response to the query, performs a traversal of knowledge graph500, comparing the nodes of the graph to the various extracted entities, to find a path that is closest to the extracted entities (620). Specifically, the knowledge graph traversal starts from a start node representing the problem descriptor (symptom) that is closest to the problem descriptor (symptom) extracted from the target incident report. In one embodiment, a distance similarity between the extracted problem descriptor (symptom) and all the start nodes (symptoms) in knowledge graph500is calculated and then the best one among them is selected using the following formula: y=argmin(d1, d2, . . . , dn), where d1, d2, . . . , dn are the distances to the respective start nodes. The remediation actions associated with the closest problem descriptor (symptom) are included in the set of remediation actions recommended for resolving the target incident. For illustration, it is assumed that node512has the shortest distance to the extracted problem descriptor/symptom (“Responsive times greater than expected”). To identify the remediation actions, knowledge graph500is traversed starting from the matched start node (here,512) and following the nodes corresponding to the qualifier entities (e.g., component, sub-component, instance, etc.) extracted from the target incident report until end nodes (assumed to be552and553) are reached. The remediation actions corresponding to the end nodes are identified as the set of remediation actions to be recommended. It may be noted that a remediation action (corresponding to an end node such as552) is included in the recommendations only when the start node512(symptom) matches the symptom identified in the target incident report and also the other qualifier entities such as component (“db”), sub-component (“oracle”) and problem type (“lock wait”) match with corresponding qualifier entities in the target incident report. According to an aspect, knowledge graph module350also determines a confidence score for each of the identified set of remediation actions based on the corresponding edge weights (number of occurrences) maintained in knowledge graph500. For example, the confidence score may be determined as a percentage of the total number of occurrences of the remediation actions. Thus, for node552, the confidence score may be determined to be 4/(4+12)=4/16=0.25, while for node553it may be determined to be 12/(4+12)=12/16=0.75. It may be appreciated that a higher confidence score indicates that the corresponding remediation action was successfully used a greater number of times to resolve the problem descriptor, and accordingly the likelihood of resolution of the target incident report by the corresponding remediation action is also high. According to an aspect, entity extractor310extracts multiple problem descriptors from the target incident report. For example, from the description text of data portion610, entity extractor310may extract the problem descriptors “Responsive times greater than expected” (hereinafter PD1) or “Responsive times not acceptable” (hereinafter PD2) using NLP. Such multiple extraction may be needed to take into consideration the lack of precision commonly associated with NLP. Entity extractor310also determines a match weight associated with each of the problem descriptors. A match weight indicates the level of confidence in the extraction of the problem descriptor from a description text contained in the target incident report and may be determined using NLP techniques well known in the arts.
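The following is a minimal sketch of the traversal and confidence-score computation described above, using a toy in-memory graph and a crude token-overlap distance as a stand-in for the embedding-based similarity mentioned earlier. The node texts, edge weights, qualifier tuples and function names are illustrative assumptions, not the contents of knowledge graph500.

```python
from typing import Dict, Tuple

def distance(a: str, b: str) -> float:
    """Token-overlap distance: 0.0 for identical texts, 1.0 for no overlap."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return 1.0 - len(ta & tb) / max(1, len(ta | tb))

# Toy graph: symptom -> (component, sub-component, problem type) -> {remediation action: edge weight}
graph: Dict[str, Dict[Tuple[str, str, str], Dict[str, int]]] = {
    "Responsive times greater than expected": {
        ("db", "oracle", "lock wait"): {"kill blocking session": 4, "rebuild index": 12},
    },
    "db service no response": {
        ("db", "oracle", "listener down"): {"restart db listener": 10},
    },
}

def recommend(symptom: str, qualifiers: Tuple[str, str, str],
              match_weight: float = 1.0) -> Dict[str, float]:
    # y = argmin(d1, d2, ..., dn): pick the start node closest to the extracted symptom
    start = min(graph, key=lambda s: distance(s, symptom))
    actions = graph[start].get(qualifiers, {})
    total = sum(actions.values()) or 1
    # Confidence: share of historical occurrences, optionally weighted by the NLP match weight
    return {action: match_weight * weight / total for action, weight in actions.items()}

print(recommend("Responsive times greater than expected", ("db", "oracle", "lock wait")))
# {'kill blocking session': 0.25, 'rebuild index': 0.75}
```

With a match weight of 0.6, the same call would yield 0.15 and 0.45, mirroring the weighted scores discussed next.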
The description is continued assuming that PD1 and PD2 have the match weights of 0.6 and 0.3 for illustration. Knowledge graph module350accordingly performs the knowledge graph traversal noted above starting from each of the start nodes closest to each of the extracted problem descriptors. For example, for PD1, start node512is identified as the closest start node and a first set of remediation actions represented by end nodes552and553is identified. For PD2, start node511is identified as the closest start node and a second set of remediation actions represented by end nodes551and552is identified. Knowledge graph module350then determines the confidence score for each remediation action in the first and second sets by weighting the edge-weight based confidence scores (noted above) by the respective match weights. For example, for node553, the confidence score may be determined as 0.75 (based on edge weights)*0.6 (match weight)=0.45. Knowledge graph module350then provides the identified (first and second) sets of remediation actions along with the determined confidence scores to orchestrator330as a response to the query. For long tail incidents, orchestrator330sends the details of the target incident to web search module360, which generates and provides to orchestrator330new sets of remediation actions using web search techniques. Web search module360may perform one or more web searches via Internet120using the entities extracted from the target incident report, identify one or more web search results as new remediation actions and determine a confidence score associated with each remediation action based on the closeness of match (e.g., number of entities) in the web search result. Web search module360then provides the determined new set of remediation actions to orchestrator330as the results of the web search. It should be noted that the classification into a short head incident or a long tail incident provides an initial guidance to orchestrator330on whether to perform a knowledge graph traversal (for short head incidents) or a web search (for long tail incidents). However, orchestrator330may perform knowledge graph traversals for long tail incidents (for example, when the web search provides insufficient or low confidence score results) as well as web searches for short head incidents (for example, when the knowledge graph traversal provides insufficient or low confidence score results). After determining the remediation actions from either one or both of knowledge graph module350and web search module360, orchestrator330may forward the results (sets of remediation actions) to blender/re-ranker380for ranking of the results. The manner in which the remediation actions are ranked and thereafter provided to end users is described below with examples. 9. Ranking and Providing Remediation Actions Blender/Re-ranker380receives the remediation actions from the different modules and then re-ranks them based on the confidence scores and prior (user) feedback. Blender/re-ranker380may employ various ranking techniques such as RankBoost, RankSVM, LambdaRank, etc. using the NDCG loss function. In one embodiment, blender/re-ranker380receives (via path112) feedback on the recommendations/remediation actions previously provided to end users such as administrators/SREs.
The feedback may be in the form of up-votes and down-votes for each remediation action—an up-vote indicating that the remediation action resolved the incident and a down-vote indicating that the remediation action did not resolve or only partially resolved the incident. Blender/re-ranker380may also send to knowledge graph module350the feedback received from the end users, to enable knowledge graph module350to update (for example, change the edge weights of) the knowledge graph (500). FIG.6Cillustrates the manner in which remediation actions are recommended for an incident report in one embodiment. In particular,FIG.6Cillustrates the recommendation for the (target) incident report shown inFIG.6A. Table630is the set of remediation actions (rows641-642) determined by knowledge graph module350based on traversal of knowledge graph500, while table650is the new set of remediation actions (rows661-662) determined by web search module360using web search techniques. It may be noted that each of the remediation actions in rows641-642and661-662is shown associated with a corresponding confidence score and a corresponding ranking (based on the confidence scores). Table670is the combined/final set of remediation actions (rows681-684) determined by blender/re-ranker380based on the remediation actions of tables630and650and end user feedback on the previous recommendations. It may be observed from table670that the confidence score of the remediation action in row662/683has been modified from “0.30” to “0.40” based on the up-votes/feedback received from the end users. Also, the confidence score of the remediation action in row641/681has been modified from “0.82” to “0.78” based on the down-votes/feedback received from the end users. The final ranking in table670is performed based on the modified confidence scores. Orchestrator330receives the final set of remediation actions and corresponding ranking from blender/re-ranker380and provides the final set of remediation actions to end users such as administrators/SREs (using one of end user systems110). In the above example, the remediation actions of table670may be provided to end users as the recommendation for resolving the target incident. The end user may accordingly perform the recommended remediation actions and correspondingly fix/resolve the target incident. Thus, recommendation system150provides a set of remediation actions along with confidence scores to remediate an incident identified/generated by a PM/AIOps system. A knowledge graph based on historical remediation actions and feedback from end users (such as site reliability engineers (SREs)) is constructed. Upon receiving an incident report related to an incident, recommendation system150classifies the received incident into either a short head incident or a long tail incident. For a short head incident, recommendation system150determines remediation actions based on a traversal of the knowledge graph. For a long tail incident, recommendation system150generates new remediation actions using web search techniques. Recommendation system150then blends/combines the various remediation actions and re-ranks them to generate a final list of remediation actions along with confidence scores. The final list is then recommended to the end users, thereby enabling them to perform the appropriate remediation actions for fixing the incidents. It may be appreciated that the aspects of the present disclosure recommend remediation actions for incidents identified by PM/AIOps systems.
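As a rough sketch of the blending and re-ranking step described above (not the actual blender/re-ranker380), the following merges the two candidate sets, nudges each confidence score by the net user votes, and sorts the result. The action texts, vote step and scores are hypothetical, chosen only so that the adjustments mirror the score changes discussed above (0.30 to 0.40 and 0.82 to 0.78).

```python
from typing import Dict, List, Tuple

def blend_and_rerank(kg_actions: Dict[str, float],
                     web_actions: Dict[str, float],
                     feedback: Dict[str, int],
                     vote_step: float = 0.02) -> List[Tuple[str, float]]:
    """kg_actions/web_actions map remediation action -> confidence score;
    feedback maps remediation action -> net votes (up-votes minus down-votes)."""
    merged = {**web_actions, **kg_actions}          # prefer the knowledge-graph score on overlap
    adjusted = {
        action: min(1.0, max(0.0, score + vote_step * feedback.get(action, 0)))
        for action, score in merged.items()
    }
    return sorted(adjusted.items(), key=lambda item: item[1], reverse=True)

kg_actions = {"rebuild index": 0.82, "kill blocking session": 0.25}
web_actions = {"increase lock wait timeout": 0.30, "patch oracle client": 0.15}
feedback = {"increase lock wait timeout": +5, "rebuild index": -2}

for action, score in blend_and_rerank(kg_actions, web_actions, feedback):
    print(f"{score:.2f}  {action}")
# 0.78  rebuild index
# 0.40  increase lock wait timeout
# 0.25  kill blocking session
# 0.15  patch oracle client
```

A learning-to-rank model (RankBoost, RankSVM, LambdaRank) could replace the simple additive vote adjustment used in this sketch.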
An IT manager (such as an SRE) does not need to debug the problem, analyze metrics, logs, etc., and come up with a resolution by himself/herself, which may take a long time. Instead, recommendation system150can automatically understand the nature of the problem and suggest a course of action which will remediate/resolve the problem. This will reduce the countless man hours wasted in debugging/triaging repetitive alerts and is of immense business value in AIOps. It should be further appreciated that the features described above can be implemented in various embodiments as a desired combination of one or more of hardware, software, and firmware. The description is continued with respect to an embodiment in which various features are operative when the software instructions described above are executed. 10. Digital Processing System FIG.7is a block diagram illustrating the details of digital processing system (700) in which various aspects of the present disclosure are operative by execution of appropriate executable modules. Digital processing system700may correspond to recommendation system150. Digital processing system700may contain one or more processors such as a central processing unit (CPU)710, as well as random access memory (RAM)720, secondary memory730, graphics controller760, display unit770, network interface780, and input interface790. All the components except display unit770may communicate with each other over communication path750, which may contain several buses as is well known in the relevant arts. The components ofFIG.7are described below in further detail. CPU710may execute instructions stored in RAM720to provide several features of the present disclosure. CPU710may contain multiple processing units, with each processing unit potentially being designed for a specific task. Alternatively, CPU710may contain only a single general-purpose processing unit. RAM720may receive instructions from secondary memory730using communication path750. RAM720is shown currently containing software instructions constituting shared environment725and/or other user programs726(such as other applications, DBMS, etc.). In addition to shared environment725, RAM720may contain other software programs such as device drivers, virtual machines, etc., which provide a (common) run time environment for execution of other/user programs. Graphics controller760generates display signals (e.g., in RGB format) to display unit770based on data/instructions received from CPU710. Display unit770contains a display screen to display the images defined by the display signals. Input interface790may correspond to a keyboard and a pointing device (e.g., touch-pad, mouse) and may be used to provide inputs. Network interface780provides connectivity to a network (e.g., using Internet Protocol), and may be used to communicate with other systems connected to the networks. Secondary memory730may contain hard drive735, flash memory736, and removable storage drive737. Secondary memory730may store the data (e.g., data portions ofFIGS.4A-4C,5and6A-6C) and software instructions (e.g., for implementing the steps ofFIG.2, the blocks ofFIG.3), which enable digital processing system700to provide several features in accordance with the present disclosure. The code/instructions stored in secondary memory730may either be copied to RAM720prior to execution by CPU710for higher execution speeds, or may be directly executed by CPU710.
Some or all of the data and instructions may be provided on removable storage unit740, and the data and instructions may be read and provided by removable storage drive737to CPU710. Removable storage unit740may be implemented using a medium and storage format compatible with removable storage drive737such that removable storage drive737can read the data and instructions. Thus, removable storage unit740includes a computer readable (storage) medium having stored therein computer software and/or data. However, the computer (or machine, in general) readable medium can be in other forms (e.g., non-removable, random access, etc.). In this document, the term “computer program product” is used to generally refer to removable storage unit740or hard disk installed in hard drive735. These computer program products are means for providing software to digital processing system700. CPU710may retrieve the software instructions, and execute the instructions to provide various features of the present disclosure described above. The term “storage media/medium” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical disks, magnetic disks, or solid-state drives, such as secondary memory730. Volatile media includes dynamic memory, such as RAM720. Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid-state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge. Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise communication path750. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. Reference throughout this specification to “one embodiment”, “an embodiment”, or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, appearances of the phrases “in one embodiment”, “in an embodiment” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment. Furthermore, the described features, structures, or characteristics of the disclosure may be combined in any suitable manner in one or more embodiments. In the above description, numerous specific details are provided such as examples of programming, software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, etc., to provide a thorough understanding of embodiments of the disclosure. 11. Conclusion While various embodiments of the present disclosure have been described above, it should be understood that they have been presented by way of example only, and not limitation.
Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents. It should be understood that the figures and/or screen shots illustrated in the attachments highlighting the functionality and advantages of the present disclosure are presented for example purposes only. The present disclosure is sufficiently flexible and configurable, such that it may be utilized in ways other than that shown in the accompanying figures. Further, the purpose of the following Abstract is to enable the Patent Office and the public generally, and especially the scientists, engineers and practitioners in the art who are not familiar with patent or legal terms or phraseology, to determine quickly from a cursory inspection the nature and essence of the technical disclosure of the application. The Abstract is not intended to be limiting as to the scope of the present disclosure in any way.
11860727
While the present disclosure is amenable to various modifications and alternative forms, specifics thereof have been shown by way of example in the drawings and will be described in detail. It should be understood, however, that the intention is not to limit the present disclosure to the particular embodiments described. On the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure. DETAILED DESCRIPTION It will be readily understood that the components of the present embodiments, as generally described and illustrated in the Figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the apparatus, system, method, and computer program product of the present embodiments, as presented in the Figures, is not intended to limit the scope of the embodiments, as claimed, but is merely representative of selected embodiments. In addition, it will be appreciated that, although specific embodiments have been described herein for purposes of illustration, various modifications may be made without departing from the spirit and scope of the embodiments. Reference throughout this specification to “a select embodiment,” “at least one embodiment,” “one embodiment,” “another embodiment,” “other embodiments,” or “an embodiment” and similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases “a select embodiment,” “at least one embodiment,” “in one embodiment,” “another embodiment,” “other embodiments,” or “an embodiment” in various places throughout this specification are not necessarily referring to the same embodiment. The illustrated embodiments will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The following description is intended only by way of example, and simply illustrates certain selected embodiments of devices, systems, and processes that are consistent with the embodiments as claimed herein. It is to be understood that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein is not limited to a cloud computing environment. Rather, embodiments of the present disclosure are capable of being implemented in conjunction with any other type of computing environment now known or later developed. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models. Characteristics are as follows. On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider. 
Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs). Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time. Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service. Service Models are as follows. Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations. Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls). Deployment Models are as follows. Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises. Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises. 
Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services. Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds). A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure that includes a network of interconnected nodes. Referring now toFIG.1, illustrative cloud computing environment50is depicted. As shown, cloud computing environment50includes one or more cloud computing nodes10with which local computing devices used by cloud consumers, such as, for example, personal digital assistant (PDA) or cellular telephone54A, desktop computer54B, laptop computer54C, and/or automobile computer system54N may communicate. Nodes10may communicate with one another. They may be grouped (not shown) physically or virtually, in one or more networks, such as Private, Community, Public, or Hybrid clouds as described hereinabove, or a combination thereof. This allows cloud computing environment50to offer infrastructure, platforms and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. It is understood that the types of computing devices54A-N shown inFIG.1are intended to be illustrative only and that computing nodes10and cloud computing environment50can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser). Referring now toFIG.2, a set of functional abstraction layers provided by cloud computing environment50(FIG.1) is shown. It should be understood in advance that the components, layers, and functions shown inFIG.2are intended to be illustrative only and embodiments of the disclosure are not limited thereto. As depicted, the following layers and corresponding functions are provided: Hardware and software layer60includes hardware and software components. Examples of hardware components include: mainframes61; RISC (Reduced Instruction Set Computer) architecture based servers62; servers63; blade servers64; storage devices65; and networks and networking components66. In some embodiments, software components include network application server software67and database software68. Virtualization layer70provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers71; virtual storage72; virtual networks73, including virtual private networks; virtual applications and operating systems74; and virtual clients75. In one example, management layer80may provide the functions described below. Resource provisioning81provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and Pricing82provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may include application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. 
User portal83provides access to the cloud computing environment for consumers and system administrators. Service level management84provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment85provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA. Workloads layer90provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation91; software development and lifecycle management92; virtual classroom education delivery93; data analytics processing94; transaction processing95; and computing a confidence value for time-series data96. Referring toFIG.3, a block diagram of an example data processing system, herein referred to as computer system100, is provided. System100may be embodied in a computer system/server in a single location, or in at least one embodiment, may be configured in a cloud-based system sharing computing resources. For example, and without limitation, the computer system100may be used as a cloud computing node10. Aspects of the computer system100may be embodied in a computer system/server in a single location, or in at least one embodiment, may be configured in a cloud-based system sharing computing resources as a cloud-based support system, to implement the system, tools, and processes described herein. The computer system100is operational with numerous other general purpose or special purpose computer system environments or configurations. Examples of well-known computer systems, environments, and/or configurations that may be suitable for use with the computer system100include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and file systems (e.g., distributed storage environments and distributed cloud computing environments) that include any of the above systems, devices, and their equivalents. The computer system100may be described in the general context of computer system-executable instructions, such as program modules, being executed by the computer system100. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. The computer system100may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices. As shown inFIG.3, the computer system100is shown in the form of a general-purpose computing device. The components of the computer system100may include, but are not limited to, one or more processors or processing devices104(sometimes referred to as processors and processing units), e.g., hardware processors, a system memory106(sometimes referred to as a memory device), and a communications bus102that couples various system components including the system memory106to the processing device104. 
The communications bus102represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus. The computer system100typically includes a variety of computer system readable media. Such media may be any available media that is accessible by the computer system100and it includes both volatile and non-volatile media, removable and non-removable media. In addition, the computer system100may include one or more persistent storage devices108, communications units110, input/output (I/O) units112, and displays114. The processing device104serves to execute instructions for software that may be loaded into the system memory106. The processing device104may be a number of processors, a multi-core processor, or some other type of processor, depending on the particular implementation. A number, as used herein with reference to an item, means one or more items. Further, the processing device104may be implemented using a number of heterogeneous processor systems in which a main processor is present with secondary processors on a single chip. As another illustrative example, the processing device104may be a symmetric multiprocessor system containing multiple processors of the same type. The system memory106and persistent storage108are examples of storage devices116. A storage device may be any piece of hardware that is capable of storing information, such as, for example without limitation, data, program code in functional form, and/or other suitable information either on a temporary basis and/or a permanent basis. The system memory106, in these examples, may be, for example, a random access memory or any other suitable volatile or non-volatile storage device. The system memory106can include computer system readable media in the form of volatile memory, such as random access memory (RAM) and/or cache memory. The persistent storage108may take various forms depending on the particular implementation. For example, the persistent storage108may contain one or more components or devices. For example, and without limitation, the persistent storage108can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to the communication bus102by one or more data media interfaces. The communications unit110in these examples may provide for communications with other computer systems or devices. In these examples, the communications unit110is a network interface card. The communications unit110may provide communications through the use of either or both physical and wireless communications links. The input/output unit112may allow for input and output of data with other devices that may be connected to the computer system100. 
For example, the input/output unit112may provide a connection for user input through a keyboard, a mouse, and/or some other suitable input device. Further, the input/output unit112may send output to a printer. The display114may provide a mechanism to display information to a user. Examples of the input/output units112that facilitate establishing communications between a variety of devices within the computer system100include, without limitation, network cards, modems, and input/output interface cards. In addition, the computer system100can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via a network adapter (not shown inFIG.3). It should be understood that although not shown, other hardware and/or software components could be used in conjunction with the computer system100. Examples of such components include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems. Instructions for the operating system, applications and/or programs may be located in the storage devices116, which are in communication with the processing device104through the communications bus102. In these illustrative examples, the instructions are in a functional form on the persistent storage108. These instructions may be loaded into the system memory106for execution by the processing device104. The processes of the different embodiments may be performed by the processing device104using computer implemented instructions, which may be located in a memory, such as the system memory106. These instructions are referred to as program code, computer usable program code, or computer readable program code that may be read and executed by a processor in the processing device104. The program code in the different embodiments may be embodied on different physical or tangible computer readable media, such as the system memory106or the persistent storage108. The program code118may be located in a functional form on the computer readable media120that is selectively removable and may be loaded onto or transferred to the computer system100for execution by the processing device104. The program code118and computer readable media120may form a computer program product122in these examples. In one example, the computer readable media120may be computer readable storage media124or computer readable signal media126. Computer readable storage media124may include, for example, an optical or magnetic disk that is inserted or placed into a drive or other device that is part of the persistent storage108for transfer onto a storage device, such as a hard drive, that is part of the persistent storage108. The computer readable storage media124also may take the form of a persistent storage, such as a hard drive, a thumb drive, or a flash memory, that is connected to the computer system100. In some instances, the computer readable storage media124may not be removable from the computer system100. Alternatively, the program code118may be transferred to the computer system100using the computer readable signal media126. The computer readable signal media126may be, for example, a propagated data signal containing the program code118. For example, the computer readable signal media126may be an electromagnetic signal, an optical signal, and/or any other suitable type of signal. 
These signals may be transmitted over communications links, such as wireless communications links, optical fiber cable, coaxial cable, a wire, and/or any other suitable type of communications link. In other words, the communications link and/or the connection may be physical or wireless in the illustrative examples. In some illustrative embodiments, the program code118may be downloaded over a network to the persistent storage108from another device or computer system through the computer readable signal media126for use within the computer system100. For instance, program code stored in a computer readable storage medium in a server computer system may be downloaded over a network from the server to the computer system100. The computer system providing the program code118may be a server computer, a client computer, or some other device capable of storing and transmitting the program code118. The program code118may include one or more program modules (not shown inFIG.3) that may be stored in system memory106by way of example, and not limitation, as well as an operating system, one or more application programs, other program modules, and program data. Each of the operating systems, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment. The program modules of the program code118generally carry out the functions and/or methodologies of embodiments as described herein. The different components illustrated for the computer system100are not meant to provide architectural limitations to the manner in which different embodiments may be implemented. The different illustrative embodiments may be implemented in a computer system including components in addition to or in place of those illustrated for the computer system100. The present disclosure may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. 
A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure. Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be accomplished as one step, executed concurrently, substantially concurrently, in a partially or wholly temporally overlapping manner, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. 
Many known entities, including business entities and residential entities, include systems that collect time series data from various sources such as Internet of Things (IoT) devices, smart home devices, human activity, device activity, etc. The collected data may be analyzed to facilitate energy conservation, occupancy allotment, etc. On occasion, a portion of the collected time series data may be erroneous due to various reasons such as a malfunction of a device being controlled, a respective sensing device malfunction, and issues with respect to the data collection systems, data storage systems, or data transmission systems. For example, in one embodiment, an occupancy management system assesses electrical power usage with respect to a peak load value, and, in response to an erroneous occupancy data value, to avoid peak usage charges, may erroneously initiate de-energization of predetermined devices in the associated spaces. Also, many known entities possess one or more key performance indicators (KPIs), where, as used herein, KPI refers to one or more measurable indicators associated with one or more key objectives. The KPIs facilitate achieving the key objectives through evaluations of success at meeting those key objectives. KPIs are scalable in that enterprise-wide KPIs may be used as well as lower level, suborganization-specific KPIs, e.g., sales, marketing, HR, IT support, and maintenance KPIs may be used. In some embodiments, the KPIs are explicitly identified and described in one or more documents, and in some embodiments, the KPIs become evident as a function of analysis of the collected data, where "hidden" KPIs may be "discovered," and previously stated KPIs may be verified. A system, computer program product, and method are disclosed and described herein directed toward collecting time series data from one or more sensor devices. In some embodiments, the system includes a data quality-to-KPI predictions confidence engine. The collected time series data is referred to herein as "the original data" and "the original time series data streams." Unless otherwise indicated, each stream of time series data, as discussed herein for the various embodiments, originates from a single sensing device, or from a plurality of sensors where the stream is either a combination, e.g., an aggregation of the outputs from the respective sensors, or an output from one sensor of the plurality of sensors that has been selected. Also, unless otherwise indicated, the system as described herein is configured to simultaneously analyze a plurality of data streams without limitation as to the number of data streams. Accordingly, the system as described herein is configured to analyze individual data streams. In one or more embodiments, the quality of the data embedded within the respective data streams is analyzed and a determination is made as to the respective one or more KPIs that are related to the respective data through a two-step process. Initially, the quality of the original data is evaluated as data packets are transmitted from the sensors to a data inspection module, where the data packets are inspected by a data inspection sub-module embedded within the data inspection module. In some cases, one or more data packets may include issues that identify the respective data packets as containing potentially faulty data. One such issue may be associated with the sampling frequency. 
For example, and without limitation, the data inspection sub-module checks the sampling frequency of the data sensors to determine if multiple sampling frequencies are present in the data, e.g., if there are occasional perturbations in the sampling frequency, and if there are continuous sampling frequency changes. Also, for example, and without limitation, the data inspection sub-module checks the timestamps of the data to determine if any timestamps are missing in the data, if the data is missing for a continuous extended duration, and if there are timestamps of varied formats. Moreover, for example, and without limitation, the data inspection sub-module checks for syntactic value issues to determine if supposedly numeric data includes extensive durations of data that is “not-a-number (NaN)” and improper numerical rounding and truncation. In addition, for example, and without limitation, the data inspection sub-module checks for semantic value issues to determine if any of the data includes anomalous events, and noisy data. Accordingly, the data inspection sub-module examines the data in the streams and determines if the data is within predetermined tolerances and if there are any suspected errors in the data, and the nature of the errors. In some embodiments, there are two modalities, i.e., processing of the original data the system is trying to operate on to determine data quality (as described above), and determining one or more KPI formulations that are planned to be applied on the data. As used herein, the KPI formulations include one or more KPI characteristics, where the KPI characteristics also include, without limitation, the details of the formulation, e.g., and without limitation, one or more data issues the algorithms of the formulations are directed to, the algorithms themselves, and any parameters and definitions of the respective KPIs. In some embodiments, both modalities are executed through the data inspection module, i.e., the data quality is evaluated through the data inspection sub-module and the KPI-characterized formulation evaluations are executed through a KPI characteristic determination sub-module that is operably coupled to the data inspection sub-module. In some embodiments, the KPI characteristic determination sub-module is a separate module operably coupled to the data inspection module. Accordingly, the data inspection features and the determination of the relevant KPI formulation characteristics are closely integrated. In at least some embodiments, at least a portion of such KPI formulation characteristics are typically implemented as algorithms to operate on the incoming data streams to provide a user with the necessary output data and functionality to support the respective KPIs. Also, in some embodiments, the KPI formulations are readily located within a KPI formulations sub-module embedded within the KPI characteristic determination sub-module. Therefore, as previously described, the data is initially checked to verify it is within certain tolerances, and then secondly, a determination is made if there is any association of the potentially erroneous data with one or more particular KPIs. In one or more embodiments, at least a portion of the collected original data is not associated with any KPIs, and therefore, such erroneous original data is not impactful for a given KPI. Therefore, in order to execute an initial identification of relevant issues, a simple KPI relevancy test is performed. 
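Before turning to the relevancy test, and by way of illustration only, the inspection checks described above might be sketched as the following short Python routine. The function name, the fixed expected sampling period, the gap multiplier, and the three-sigma outlier test are assumptions of this sketch rather than requirements of the embodiments described herein.

```python
import math

def inspect_stream(points, expected_period_s=60.0, tolerance=0.1):
    """Illustrative data-quality checks on a list of (timestamp, value) pairs.

    Flags (1) sampling-frequency perturbations, (2) extended gaps in the
    timestamps, (3) syntactic issues such as NaN or non-numeric values, and
    (4) simple semantic outliers relative to the stream's own statistics.
    """
    issues = []
    # (1) and (2): sampling-frequency and timestamp-gap checks
    for (t_prev, _), (t_curr, _) in zip(points, points[1:]):
        gap = (t_curr - t_prev).total_seconds()
        if abs(gap - expected_period_s) > tolerance * expected_period_s:
            kind = "extended_gap" if gap > 3 * expected_period_s else "frequency_perturbation"
            issues.append((t_curr, kind))
    # (3): syntactic value issues (NaN or non-numeric entries)
    for ts, v in points:
        if not isinstance(v, (int, float)) or math.isnan(v):
            issues.append((ts, "syntactic_nan"))
    # (4): semantic value issues (simple z-score outlier test on clean values)
    clean = [v for _, v in points if isinstance(v, (int, float)) and not math.isnan(v)]
    if len(clean) > 1:
        mean = sum(clean) / len(clean)
        std = (sum((v - mean) ** 2 for v in clean) / len(clean)) ** 0.5
        for ts, v in points:
            if v in clean and std > 0 and abs(v - mean) / std > 3.0:
                issues.append((ts, "semantic_outlier"))
    return issues
```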
For example, and without limitation, if one or more particular KPIs use an average-based formulation, and the potentially erroneous data in the respective data streams includes unordered timestamps, it is determined that the unordered timestamps do not create any impact on the respective one or more KPIs. Similarly, if one or more particular KPIs are median- or mode-based formulations, then the presence of outliers in the respective data streams do not create any impact on the respective KPIs. Accordingly, some erroneous data attributes may have no effect on a particular KPI, and such data is not relevant for the KPI-related analyses described further herein. In some embodiments, one further mechanism for determining KPI relevancy that may be employed is to pass at least a portion of the original time series data streams with known non-erroneous data, and, in some embodiments, suspect erroneous data, to one or more respective KPI formulations in the KPI formulations sub-module to generate numerical values therefrom, i.e., to generate original KPI test values therefrom. Specifically, data with no erroneous values may be manipulated to change at least one value to a known erroneous value, thereby generating imputed error-laden data that is also passed to the respective one or more KPIs to generate imputed KPI test values therefrom. In some embodiments, the injected errors may include, without limitation, random selection of some of the data in the original data stream and removal of such random data to determine if missing data issues are relevant, and random selection of known non-erroneous data and injecting values that are known to extend beyond established tolerances to determine if outlier issues are relevant. The imputed KPI test values are compared to the original KPI test values and if there is sufficient similarity therebetween, then the original data, i.e., the issues associated with the original data, are labeled as relevant to the respective KPIs. If there is insufficient similarity between the imputed KPI test values and the original KPI test values, then the original data, including the aforementioned issues, are labeled as not relevant to the respective KPIs. Accordingly, in order to determine if there are any relevant relationships between the suspected, or otherwise identified, data errors in the original data stream, data with predetermined errors embedded therein is used for determining if there is any relevant and appreciable effect of the erroneous data on the respective KPI formulations. In at least some embodiments, a KPI characteristic determination is performed. The basis for each KPI characteristic determination, sometimes referred to as KPI characterization, includes one or more KPIs, e.g., for a business, one or more business-specific KPIs, and for a private residence, one or more residential-specific KPIs. In some embodiments, the KPIs are predetermined and described, e.g., as explicit measurements of success, or lack thereof, toward attaining specific business goals. In some embodiments, the KPIs are developed as a function of collection and analysis of business data to determine otherwise unidentified measurements for attaining business goals, thereby facilitating the identification of one or more additional KPIs. Accordingly, regardless of the origins, the KPIs are available to match the associated inherent properties within the KPIs to the respective issues found in the original data, and in some instances, facilitate identification of the relevant issues. 
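As a non-limiting illustration, the KPI relevancy probing described above, i.e., injecting known errors into otherwise non-erroneous data and comparing the resulting imputed KPI test values with the original KPI test values, might be sketched as follows. The function name, the particular error injections, and the use of a simple relative difference are assumptions of this sketch; the resulting differences are what a relevancy test would compare against a similarity threshold.

```python
import random
import statistics

def kpi_impact_probe(data, kpi_formula, seed=0):
    """Illustrative probe: compute the KPI on the original data, inject
    known errors, recompute, and report how far each imputed KPI test value
    moves from the original KPI test value."""
    rng = random.Random(seed)
    original_kpi = kpi_formula(data)

    # Injected error 1: randomly remove values to probe missing-data issues.
    kept = [v for i, v in enumerate(data) if rng.random() > 0.1 or i == 0]
    missing_kpi = kpi_formula(kept)

    # Injected error 2: push one known-good value far beyond tolerance to
    # probe outlier issues.
    outliers = list(data)
    outliers[rng.randrange(len(outliers))] = max(data) * 100.0
    outlier_kpi = kpi_formula(outliers)

    def rel_diff(imputed):
        return abs(imputed - original_kpi) / (abs(original_kpi) or 1.0)

    return {"missing_data": rel_diff(missing_kpi), "outlier": rel_diff(outlier_kpi)}

# A median-based KPI barely moves when a single outlier is injected,
# while a mean-based KPI moves substantially.
readings = [10.0, 11.0, 10.5, 9.8, 10.2, 10.1, 9.9, 10.3]
print(kpi_impact_probe(readings, statistics.median))
print(kpi_impact_probe(readings, statistics.mean))
```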
In one or more embodiments, the KPI characteristic determination operations are executed in situ as the original data is transmitted into the data inspection module. Moreover, since the nature of the original data issues generated in real time is not known in advance, the data inspections and KPI characteristic determinations are to be made dynamically in real time. Therefore, the determinations of the respective KPIs with the respective characteristics that are embedded within the respective formulations of the respective KPIs are executed in conjunction with the determinations of the issues affecting the incoming original data. At least a portion of the KPI characteristic determinations include determining the nature of the respective KPIs associated with the original data. In some embodiments, a portion of the incoming original data will not be associated with any KPIs, and this data will not be operated on any further for this disclosure and any embedded issues will be either ignored and the data processed as is, or an issue notification will be transmitted to the user in one or more manners. In other embodiments, the relationships of the incoming original data and the associated KPI formulations will be further determined. In embodiments, the KPI formulations are grouped into one of two types of formulations, i.e., "observable box" and "unobservable box" formulations. The observable box KPI formulations are available for inspection, i.e., the details are observable, and the KPI characteristic determination sub-module includes an observable box sub-module. The unobservable box KPI formulations are opaque as to the operations and algorithms contained therein, for example, and without limitation, the respective unobservable box algorithms and operations may be proprietary in nature and the respective users require some level of secrecy and confidentiality for the contents thereof. The KPI characteristic determination sub-module includes an unobservable box sub-module. In some embodiments of both of the observable box and unobservable box KPI formulations, the associated algorithms examine whether the appropriate KPI characteristic formulation includes one or more analyses of the surrounding original data values with respect to the data with issues, including, without limitation, one or more of maximum value determinations, minimum value determinations, mean value determinations, median value determinations, and other statistical determinations, such as, and without limitation, standard deviation analyses. As previously discussed, if there is no relationship between the respective original data and the KPI formulation characteristics, no further action is taken on the issue-laden data per this disclosure. Accordingly, for those issues associated with original data that has a relationship with a KPI (both supplied by the user), the properties, i.e., the characteristics of the KPI formulations, whether observable box or unobservable box, are determined so that the relevant data quality issues that may adversely impact a related KPI can be properly classified and the subsequent optimizations can be performed. In one or more embodiments, a snapshot generator module receives the outputs of the data inspection module that include the erroneous data with the known embedded issues and the respective KPI formulation characteristics. 
The snapshot generator module is configured to generate snapshots of simulated data through simulation of the respective data values through one or more models that are placed into production to facilitate the simulation of the original data. In some embodiments, method-based simulations and point-based simulations are used. While either of the simulations may be used regardless of the nature of the issues in the erroneous data, including both simultaneously, in some embodiments, the selection of which of the two simulations is based on the nature of the original data quality issues, and in some embodiments, the selection may be based on predetermined instructions generated by the user. However, in general, the method-based simulations are better configured for processing missing value issues and the point-based simulations are better configured for processing outlier value issues. For example, in some embodiments, previous trial runs by a user may have indicated that missing data may be determined based on if data is missing for a continuous extended duration or syntactic value issues exist, i.e., supposedly numeric data includes extensive durations of data that is NaN or improper numerical rounding or truncation is determined. As such, the method-based simulations may provide the better analyses for the aforementioned conditions. Outlier value issues may be determined if there are semantic value issues, i.e., some of the data includes anomalous events or consistently or patterned noisy data. As such, the point-based simulations may provide the better analyses for the aforementioned conditions. Similarly, if a user determines that it may be indeterminate if the method-based or point-based simulations provide better simulations for the specified conditions, for those conditions when either may provide better simulations, and as described above, both simulation methods may be used. The snapshot generator module is configured to use method-based simulations to analyze for one or more methods of remediation, where each method of remediation may include, for example, and without limitation, algorithms for determining means, medians, etc. In addition, a method-based simulations sub-module may be used whether or not the KPI formulation characteristics are unobservable box or observable box. Each method of remediation includes generation of one or more imputed values that are included in the respective simulation snapshot as potential solutions to, or replacements for, the erroneous values if that particular method of remediation were to be used. Notably, the imputed values may, or may not, be potential replacement values. Since there are no predetermined notions as to which of the methods of remediation provides the best, or most correct, replacement values for the particular present conditions, a plurality of models are used, where each model is used to execute the respective method of remediation. In some embodiments, the method-based simulations sub-module is communicatively coupled to the KPI formulations sub-module. Also, the fraction of non-erroneous data used to compute the imputed values for the erroneous data depends on the particular remediation technique. For example, if it is determined to replace a missing value with the mean of all values, then a substantially complete set of respective data is used in the remediation module. 
Alternatively, if only the surrounding three values are used to compute the missing value, then only those surrounding values are used by the remediation module. Accordingly, method-based simulations are used to generate one or more simulation snapshots of the non-erroneous original data and imputed values for each erroneous original data value, where each simulated value indicates what the data value would look like when the particular remediation method is used, thereby generating a plurality of imputed values, each imputed value a product of a different remediation method. In at least some embodiments, data collection includes use of heuristically-based features that facilitate determinations of patterns in the data points as they are collected. As used herein, the term "data point" and the term "data element" are used interchangeably. Under some circumstances, one or more instances of the original data may appear to be incorrect due to the respective data points exceeding a threshold value that is based on a probability of what the data point value should be as a function of the established data patterns. For example, and without limitation, an apparent data excursion, i.e., a data spike upwards or downwards, may be generated through either an erroneous data packet or as a function of an accurate portrayal of what is occurring in real time. Therefore, the snapshot generator module is further configured to use point-based simulations to analyze the errors to determine if apparently erroneous data is actually erroneous data. In one or more embodiments, the data, including the known correct original data and the suspect potentially erroneous data points, is combined into various configurations to initiate determinations of probabilities of whether the potentially erroneous data values are correct or erroneous. Each potentially erroneous data value is individually inferred as either a discrete "correct" or a discrete "erroneous," and the potentially erroneous data values are then referred to as "inferred data points" to distinguish them from the known correct original data. As such, the inferred data points have an original data value as transmitted and an inferred label as either correct or erroneous. The remainder of the analysis focuses exclusively on the inferred data points. Specifically, the full range of all of the possible combinations of the inferred data points collected in the aforementioned simulation snapshot is evaluated. The generation of all possible combinations of discrete "correct" labels and discrete "erroneous" labels and subsequent aggregation of such facilitates further determinations of "best" actions, whether those actions are to correct erroneous data or to accept correct data. These operations consider the accepted inaccuracies associated with original data that may or may not be erroneous, through determining the probabilities of one or more of the suspect potentially erroneous data values being "correct" or "erroneous." For example, and without limitation, for an instance where there are three erroneous data points, 2^3, or 8, combinations are generated by the point-based simulation. In each of the combinations, it is assumed that some of the erroneous values are wrongly identified and some are correctly identified as erroneous. So, for each combination, the erroneous values are replaced with an imputed value based on a predetermined remediation method. 
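For illustration purposes only, the point-based enumeration of label combinations described above might be sketched as follows. The pruning criterion used here, keeping only the combinations whose KPI value stays closest to the KPI of the known-good data, is a stand-in assumption; the disclosure describes pruning with the determined KPI formulation characteristics, and the function and variable names are likewise assumed.

```python
from itertools import product
from statistics import mean

def point_based_snapshots(clean, suspects, impute, kpi, prune_keep=4):
    """Illustrative point-based simulation.  Each suspect data point is
    labeled either 'correct' (kept as-is) or 'erroneous' (replaced with an
    imputed value), all 2**len(suspects) label combinations are generated,
    and the combinations are pruned so only a subset survives to become
    simulation snapshots."""
    snapshots = []
    for labels in product(("correct", "erroneous"), repeat=len(suspects)):
        simulated = [v if lab == "correct" else impute(clean)
                     for v, lab in zip(suspects, labels)]
        snapshots.append((labels, clean + simulated))

    # Pruning (assumed criterion): rank combinations by how close their KPI
    # value stays to the KPI of the known-good data and keep the best few.
    baseline = kpi(clean)
    snapshots.sort(key=lambda s: abs(kpi(s[1]) - baseline))
    return snapshots[:prune_keep]

# Three suspect points give 2**3 = 8 combinations before pruning.
clean_values = [10.1, 9.9, 10.0, 10.2, 9.8]
suspect_values = [55.0, 10.3, -40.0]
kept = point_based_snapshots(clean_values, suspect_values, impute=mean, kpi=mean)
for labels, snapshot in kept:
    print(labels, round(mean(snapshot), 2))
```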
Therefore, each combination has a different set of correct and incorrect data points and requires different imputed values based on the predetermined remediation technique. The total number of possible combinations of the discrete "correct" and "erroneous" inferred data points grows exponentially with the number of inferred data points (i.e., 2^x, where x is the number of inferred data values), and generating all possible combinations and processing them can be time and resource intensive. Each combination of inferred data points is a potential simulation, and processing each of the combinations as a potential simulation merely increases the processing overhead. Therefore, the described possible combinations of inferred data points are further considered; however, the possible combinations of inferred data points are "pruned" such that only a subset of all possible combinations is further considered. Therefore, the point-based simulations sub-module is operatively coupled to a snapshot optimization sub-module. In such embodiments, snapshot optimization features are employed through using the KPI formulation characteristics determined as previously described, whether or not the KPI formulation characteristics are unobservable box or observable box. For example, and without limitation, KPI formulation characteristics for maximum, minimum, mean, and median analyses may be used to filter the simulations of the inferred data points. Therefore, the snapshot optimization sub-module is communicatively coupled to the KPI formulations sub-module. In general, only those combinations of inferred data points with the imputed values that successfully pass through the pruning process will survive to generate the respective simulations of the suspect point values through the models and generate the respective simulation snapshots with the non-erroneous original data and imputed values for the identified erroneous data, where a portion of the suspected erroneous point values may in fact not be erroneous and will not require replacement thereof. In at least some embodiments, the simulation snapshots created by the snapshot generator modules, whether method-based or point-based, are transmitted to a KPI value inference module. As discussed above, each simulation snapshot includes the non-erroneous original data and imputed values for the erroneous data. Each of the imputed values and the associated original data are presented to the respective KPI formulation to generate a predicted replacement value, i.e., an inferred snapshot value for each of the imputed values in the respective simulation snapshots. Each inferred snapshot value is at least partially based on the respective KPI formulation within the context of the non-erroneous original data on the time-series data stream. Therefore, for each simulation snapshot transmitted to the KPI value inference module, one or more predicted replacement values, i.e., inferred snapshot values, are generated. In some embodiments, the inferred snapshot values are transmitted to a confidence measures module to generate analytical scores in the form of confidence values (discussed further below) for each of the inferred snapshot values. For each respective scored inferred snapshot value for the erroneous data, a best confidence value is selected, and the respective inferred snapshot value is now elevated to the selected KPI value to replace the erroneous data, where the selected KPI value is referred to as the inferred KPI value. 
Accordingly, the inferred KPI value is that value selected from one or more predicted replacement values (i.e., the inferred snapshot values) to resolve the potentially erroneous data instances. In one or more embodiments, in addition, the confidence measures module receives the respective information to facilitate the selection of the inferred KPI value and additional information to generate an explanation of the inferred KPI value that was selected. In general, the confidence measures module compares the inferred snapshot values generated through one or more of the aforementioned simulations with the respective original erroneous data. At least one of the results of the comparison are the respective confidence values, in the format of numerical values, for each of the inferred snapshot values. The respective confidence values as applied to the respective snapshots of the data are indicative of a predicted level of confidence that the respective inferred snapshot values are correct. A relatively low confidence value indicates that either the respective inferred snapshot value, including the inferred KPI value, should not be used, or used with caution. A relatively high confidence value indicates that the respective inferred snapshot values, including the inferred KPI value, should be used. The threshold values for the associated confidence values may be established by the user, and may also be used to train one or more models, both conditions to facilitate fully automating the selection. Furthermore, the subsequent actions may be automated. For example, and without limitation, for confidence values below a predetermined threshold, the respective inferred snapshot values will not be passed for further processing within the native application utilizing the original data stream. Similarly, for confidence values above a predetermined threshold, the respective selected inferred KPI value will be passed for further processing within the native application utilizing the original data. Accordingly, the systems and methods as described herein correct the issues with the erroneous data in the original data stream automatically in a manner that prevents inadvertent actions or initiates proper actions as the conditions and correct data dictate. In addition, since it is possible that the confidence value for the inferred KPI value is not 100%, the confidence measures module includes an explanatory sub-module to provide an explanatory basis for the resolution of the one or more potentially erroneous data instances through providing the details and evidence of the selection of the particular simulated snapshot with the inferred KPI value. The explanatory sub-module provides such details including, and without limitation, the types of issues detected in the dataset, the number and nature of the simulations generated, the statistical properties of the scores obtained from various simulations, and a comparison of the scores. Accordingly, the confidence measures module generates various values for the simulated snapshots from the KPI value inference module and the information to facilitate the user understanding the properties of the distribution of the values to further provide clarity of the selection of the respective inferred KPI values. In some embodiments, the confidence measures module also includes a plurality of additional sub-modules to facilitate generating the aforementioned confidence values and the details and evidence to support such values. 
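By way of illustration only, the selection of the inferred KPI value from the scored inferred snapshot values, together with the threshold-driven automation described above, might be sketched as follows. The function name, the specific threshold values, the intermediate "pass with caution" band, and the explanation fields are assumptions made solely for this sketch and are not prescribed by this disclosure.

```python
def select_inferred_kpi(scored_snapshots, pass_threshold=0.8, reject_threshold=0.4):
    """Illustrative selection: pick the inferred snapshot value with the best
    confidence value, gate it against user-established thresholds, and attach
    explanatory details supporting the selection."""
    # scored_snapshots: list of (inferred_snapshot_value, confidence_value)
    best_value, best_confidence = max(scored_snapshots, key=lambda s: s[1])

    if best_confidence >= pass_threshold:
        action = "pass_to_native_application"
    elif best_confidence < reject_threshold:
        action = "do_not_pass"
    else:
        action = "pass_with_caution"

    explanation = {
        "candidates_considered": len(scored_snapshots),
        "confidence_values": [round(c, 3) for _, c in scored_snapshots],
        "selected_confidence": round(best_confidence, 3),
    }
    return {"inferred_kpi_value": best_value, "action": action,
            "explanation": explanation}

scored = [(10.4, 0.62), (10.1, 0.91), (9.7, 0.48)]
print(select_inferred_kpi(scored))
```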
In some of those embodiments, three confidence measures sub-modules are used, i.e., a quantity-based confidence measures sub-module, a spread-based confidence measures sub-module, and a quantity- and spread-based confidence measures sub-module. The quantity-based confidence measures sub-module is configured to take into consideration the magnitude of the values obtained from the KPI value inference module and generate the associated confidence measures information, e.g., depending on whether the magnitude of the KPI value is 50 or 1050, the confidence in the resultant KPI value may differ in light of the additional data and circumstances. The spread-based confidence measures sub-module considers the range in which the simulated values lie and generates the associated confidence measures information, i.e., instead of the absolute magnitude of the KPI values, the spread-based confidence measures use the statistical properties like mean, min, max, and standard deviation of the KPI values, and hence are substantially unaffected by the magnitude. The quantity- and spread-based confidence measures sub-module considers the magnitude as well as the range of values to generate the associated confidence measures information. Referring toFIG.4, a schematic diagram is provided illustrating a system, i.e., a time-series data replacement confidence system400configured to compute a confidence value for corrected data within time-series data. The time-series data replacement confidence system400is referred to hereon as “the system400” unless another system is expressly identified. The system400includes one or more processing devices404(only one shown) communicatively and operably coupled to one or more memory devices406(only one shown). The system400also includes a data storage system408that is communicatively coupled to the processing device404and memory device406through a communications bus402. In one or more embodiments, the communications bus402, the processing device404, the memory device406, and the data storage system408are similar to their counterparts shown inFIG.3, i.e., the communications bus102, the processing device104, the system memory106, and the persistent storage devices108, respectively. In one or more embodiments, the system400includes a process control system410that is configured to operate any process that enables operation of the system400as described herein, including, without limitation, electrical processes (e.g., energy management systems), mechanical processes (machinery management systems), electro-mechanical processes (industrial manufacturing systems) and financial processes. In some embodiments, the process control system410is an external system communicatively coupled to the system400. As shown and described herein, the processing device404, memory device406, and the data storage system408are communicatively coupled to the process control system410through, in some embodiments, the input/output unit112(shown inFIG.3). The process control system410includes one or more process devices412interfaced with the respective one or more processes, where the process devices412execute device/process control commands414generated through the interaction of the associated programming instructions through the processing device404and memory device406. 
The process control system410also includes a sensor suite416that includes the sensors used to monitor the process devices412and the respective processes, generate feedback418to the process devices412(e.g., and without limitation, “sensor operating normally” and “sensor malfunction” signals), and generate one or more original time series data streams420that include data packets, hereinafter referred to as original data422, representative of the process measurement outputs of the sensor suite416. The memory device406includes a process control algorithms and logic engine430that is configured to receive the original time series data streams420to generate the device/process control commands414. Also, in some embodiments the memory device406includes a data quality-to-KPI predictions confidence engine440. In one or more embodiments, the data quality-to-KPI predictions confidence engine440includes one or more models442embedded therein. The system400also includes one or more output devices450communicatively coupled to the communications bus402to receive an output444of the data quality-to-KPI predictions confidence engine440. Modules and sub-modules of the data quality-to-KPI predictions confidence engine440are discussed with respect toFIG.5. The data storage system408stores data quality-to-KPI predictions confidence data460that includes, without limitation, original time series data462(captured through the original time series data streams420) and confidence values and explanations464. The data storage system408also stores the business KPIs466including formulations468, properties and characteristics470(used interchangeably herein), and respective measures472, where the formulations468include the characteristics470and measures472. Referring toFIG.5A, a flowchart is provided illustrating a process500for computing a confidence value for corrected data within time-series data. Also, referring toFIG.4, at least a portion of the modules and sub-modules of the data quality-to-KPI predictions confidence engine440are also illustrated and discussed with respect toFIG.5A. In one or more embodiments, the quality of the original data504(that is substantially similar to the original data422) embedded within the respective original time series data streams420is analyzed and a determination is made as to the respective one or more KPIs that are related to the respective original data504through a two-step process. Initially, the quality of the original data504is evaluated as data packets are transmitted from the respective sensors of the sensor suite502(that is substantially similar to the sensor suite416) to a data inspection module510(resident within the data quality-to-KPI predictions confidence engine440), where the data packets are inspected by a data inspection sub-module512embedded within the data inspection module510. In at least some embodiments, as discussed further, the data inspection module510also includes integrated KPI characteristic determination features, thereby avoiding the data inspection sub-module512being redundant. In some embodiments, one or more data packets of original data504may include issues that identify the respective data packets as containing potentially faulty data. One such issue may be associated with the sampling frequency. 
For example, and without limitation, the data inspection sub-module512checks the sampling frequency of the sensor suite502to determine if multiple sampling frequencies are present in the original data504, e.g., if there are occasional perturbations in the sampling frequency, and if there are continuous sampling frequency changes. Also, for example, and without limitation, the data inspection sub-module512checks the timestamps of the original data504to determine if any timestamps are missing in the original data504, if the original data504is missing for a continuous extended duration, and if there are timestamps of varied formats. Moreover, for example, and without limitation, the data inspection sub-module512checks for syntactic value issues to determine if supposedly numeric data includes extensive durations of data504that is “not-a-number (NaN)” and improper numerical rounding and truncation. In addition, for example, and without limitation, the data inspection sub-module512checks for semantic value issues to determine if any of the original data504includes anomalous events, and noisy data. Accordingly, the data inspection sub-module512examines the original data504in the respective original time series data stream420and determines if the original data504is within predetermined tolerances and if there are any suspected errors in the original data504, and the nature of the errors. In some embodiments, there are two modalities, i.e., processing of the original data504the system400is trying to operate on to determine data quality (as described above), and determining one or more KPI formulations468that are planned to be applied on the original data504. As used herein, the KPI formulations468include one or more KPI characteristics470, where the KPI characteristics470also include, without limitation, the details of the formulation468, e.g., and without limitation, one or more data issues the algorithms of the formulations468are directed to, the algorithms themselves, and any parameters and definitions of the respective KPIs466. The KPIs466, including the formulations468, characteristics470, and measures472, are stored in the data storage system408. In some embodiments, both modalities are executed through the data inspection module510, i.e., the data quality is evaluated through the data inspection sub-module512and the KPI formulation characteristic evaluations are executed through a KPI characteristic determination sub-module514that is operably coupled to the data inspection sub-module512. In some embodiments, the KPI characteristic determination sub-module514is a separate module operably coupled to the data inspection module510. Accordingly, the data inspection features and the determination of the relevant KPI formulation characteristics470are closely integrated. In at least some embodiments, at least a portion of such KPI formulation characteristics470are typically implemented as algorithms to operate on the incoming original time series data streams420to provide a user with the necessary output data and functionality to support the respective KPIs466. Also, in some embodiments, the KPI formulations468are readily located within a KPI formulations sub-module516embedded within the KPI characteristic determination sub-module514, where such KPI formulations468may be imported from the data storage system408. 
Therefore, as previously described, the original data504is initially checked to verify it is within certain tolerances, and then secondly, a determination is made if there is any association of any potentially erroneous data with one or more particular KPIs466. In one or more embodiments, at least a portion of the collected original data504is not associated with any KPIs466, and therefore, such erroneous original data504is not impactful for a given KPI466. Therefore, in order to execute an initial identification of relevant issues, a simple KPI relevancy test is performed. For example, and without limitation, if one or more particular KPIs466use an average-based formulation, and the potentially erroneous data504in the respective original time series data streams420includes unordered timestamps, it is determined that the unordered timestamps do not create any impact on the respective one or more KPIs466. Similarly, if one or more particular KPIs466are median- or mode-based formulations468, then the presence of outliers in the respective original time series data streams420do not create any impact on the respective KPIs466. Accordingly, some erroneous data attributes may have no effect on a particular KPI466, and such data is not relevant for the KPI-related analyses described further herein. In some embodiments, one further mechanism for determining KPI relevancy that may be employed is to pass at least a portion of the original time series data streams420with known non-erroneous data, and, in some embodiments, suspect erroneous data, to one or more respective KPI formulations468in the KPI formulations sub-module516to generate numerical values therefrom, i.e., to generate original KPI test values therefrom. Specifically, data with no erroneous values may be manipulated to change at least one value to a known erroneous value, thereby generating imputed error-laden data that is also passed to the respective one or more KPI formulations468to generate imputed KPI test values therefrom. Referring toFIG.6, a textual diagram is presented illustrating an example algorithm600for identifying relevant issues. Also referring toFIGS.4and5A, the algorithm600is resident within the KPI formulations sub-module516. The algorithm600includes an issue listing operation602, where a predetermined set of potential data error issues are listed for selection within the algorithm600, and each potential data error issue includes one or more corresponding models442. A data identification operation604is executed to identify which portion of the original data504in the original time series data streams420will be analyzed for potential errors, potential data replacement, and confidence determinations of the replacement. In some embodiments, the data quality-to-KPI predictions confidence engine440is scalable to examine multiple streams of the original time series data streams420simultaneously, including, without limitation, a small fraction of the original data504and scaling up to all original data504in all original time series data streams420. The KPI formulations468as developed by the user are identified and retrieved in a KPI formulation identification and retrieval operation606, and the selected original data504to be analyzed is passed through the respective KPI formulations468in an original data-to-KPI formulation operation608. The affecting issues from the issue listing operation602are cycled through either one at a time or simultaneously in parallel through an affecting issues analysis selection algorithm610. 
In one or more embodiments, a data-to-issues sub-algorithm612is executed that includes injecting at least a portion of the original data504with imputed erroneous data through an imputed data injection operation614. In some embodiments, the injected errors may include, without limitation, random selection of some of the original data504in the original time series data stream420and removal of such random data to determine if missing data issues are relevant. In addition, the injected errors may include, without limitation, random selection of known non-erroneous original data504and injecting values that are known to extend beyond established tolerances to determine if outlier issues are relevant. The imputed data is transmitted through the KPI formulations468to determine imputed KPI test values through a KPI test values generation operation616. The imputed KPI test values from the operation616are compared to the original KPI test values from the operation608through a KPI value comparison operation618, and an issue determination operation620is executed as a function of the comparison operation618. In some embodiments, if there is sufficient similarity between the imputed KPI test values and the original KPI test values, then the original data504, i.e., the issues associated with the original data504, are labeled as relevant to the respective KPIs466through the KPI formulations468thereof. If there is insufficient similarity between the imputed KPI test values and the original KPI test values, then the original data504, including the aforementioned issues, are labeled as not relevant to the respective KPIs466through the KPI formulations468thereof. Upon executing the sub-algorithm612through exhaustion of the issues from the issue listing operation602, the sub-algorithm612is ended622and the algorithm600is ended624. Accordingly, in order to determine if there are any relevant relationships between the suspected, or otherwise identified, data errors in the original time series data stream420, data with predetermined errors embedded therein is used for determining if there is any relevant and appreciable effect of the erroneous data on the respective KPI formulations468. Referring again toFIGS.4and5A, in at least some embodiments, a KPI characteristic determination is performed. The basis for each KPI characteristic determination, sometimes referred to as KPI characterization, includes one or more KPIs466. For example, and without limitation, the basis for a business is one or more business-specific KPIs466, and for a private residence, the basis is one or more residential-specific KPIs466. In some embodiments, any entity-based KPIs that would enable operation of the time-series data replacement confidence system400disclosed herein are used. In some embodiments, the KPIs466are predetermined and described, e.g., as explicit measurements of success, or lack thereof, toward attaining specific business goals. In some embodiments, the KPIs466are developed as a function of collection and analysis of business data to determine otherwise unidentified measurements for attaining business goals, thereby facilitating the identification of one or more additional KPIs466. Accordingly, regardless of the origins, the KPIs466are available to match the associated inherent properties within the respective KPI formulations468to the respective issues found in the original data504, and in some instances, facilitate identification of the relevant issues. 
In one or more embodiments, the KPI characteristic determination operations are executed in situ as the original data504is transmitted into the data inspection module512. Moreover, since the nature of the original data issues generated in real time is not known in advance, the data inspections and KPI characteristic determinations are to be made dynamically in real time. Therefore, the determinations of the respective KPIs466with the respective characteristics470that are embedded within the respective formulations468of the respective KPIs466are executed in conjunction with the determinations of the issues affecting the incoming original data422. At least a portion of the KPI characteristic determinations include determining the nature of the respective KPIs466associated with the original data504. In some embodiments, a portion of the incoming original data504will not be associated with any KPIs466, and this data will not be operated on any further for this disclosure and any embedded issues will be either ignored and the data processed as is, or an issue notification will be transmitted to the user in one or more manners, e.g., and without limitation, through one or more of the output devices450. In other embodiments, the relationships of the incoming original data and the associated KPI formulations468will be further determined. In embodiments, the KPI formulations468are grouped into one of two types of formulations, i.e., "observable box" and "unobservable box" formulations. In some embodiments of both of the observable box and unobservable box KPI formulations, the associated algorithms examine whether the appropriate KPI characteristic formulation includes one or more analyses of the surrounding original data values with respect to the data with issues, including, without limitation, one or more of maximum value determinations, minimum value determinations, mean value determinations, median value determinations, and other statistical determinations, such as, and without limitation, mode value determinations and standard deviation analyses. In at least some embodiments, the observable box KPI formulations468are available for inspection, i.e., the details are observable, and the KPI characteristic determination sub-module514includes an observable box sub-module518. Referring toFIG.7, a textual diagram is provided illustrating an example algorithm700for observable box KPI analysis. Also referring toFIGS.4and5A, the algorithm700is resident within the observable box sub-module518. The algorithm700includes a KPI formulation presentation operation702, where the characteristics of the respective KPI formulations468are clearly articulated to the user and the system400as described herein. The algorithm also includes a parse tree operation704, where the KPI characteristics470are translated into an Abstract Syntax Tree (AST) to generate the KPI characteristics470as an AST representation of the source code in the respective programming language such that when the details of the KPI466are available the various code blocks may be parsed and understood as nodes in the AST. As shown inFIG.7, the algorithm700includes a first sub-algorithm, i.e., a function analysis operation706that is configured to determine if a particular node in the AST is a function, e.g., and without limitation, a mathematical operation as discussed further. 
In the embodiment shown inFIG.7, a second sub-algorithm, i.e., a median determination operation708is executed for those KPI formulation characteristics470that define a median value determination of the original data504such that a KPI characteristic assignment operation710is executed, where in this case, the assigned KPI characteristic470is "median" for the subsequent portions of the process500. The median determination operation708is then ended712. In some embodiments, the algorithm includes one or more further portions of the first sub-algorithm for other types of KPI characteristics, e.g., and without limitation, maximum value determinations, minimum value determinations, mean value determinations, and other statistical determinations, such as, and without limitation, mode value determinations and standard deviation analyses. In the illustrated embodiment ofFIG.7, a third sub-algorithm, i.e., a mean determination operation714is executed for those KPI formulation characteristics470that define a mean value determination of the original data504such that a KPI characteristic assignment operation716is executed, where in this case, the assigned KPI characteristic470is "mean" for the subsequent portions of the process500. The mean determination operation714is then ended718. Any remaining possible KPI formulation characteristics470as described above are determined similarly. Upon completion, the function analysis operation706is ended720. Further, in one or more embodiments, as shown inFIG.7, the algorithm700includes a fourth sub-algorithm, i.e., a binary operations analysis operation722that is configured to determine if a particular node in the AST is a binary operation, e.g., and without limitation, a mathematical operation that uses two elements, or operands, to create another element. In the embodiment shown inFIG.7, a fifth sub-algorithm, i.e., a division sub-algorithm724is executed for those KPI formulation characteristics470that define a division operation of the original data504. The division operation includes a sixth sub-algorithm, i.e., an integrated summation operand and len operand, or integrated mean sub-algorithm726, where the len operand, or operation, provides the number of items being summed, such that a KPI characteristic assignment operation728is executed, where in this case, the assigned KPI characteristic470is "mean" for the subsequent portions of the process500. The integrated mean sub-algorithm726is ended730, the division sub-algorithm724is ended732, and the binary operation sub-algorithm722is ended734. An open sub-algorithm736is shown if further operations beyond the function and binary operations are required by the user. The parse tree operation704is ended738when all of the respective observable box operations associated with the respective KPIs466are identified. In at least some embodiments, the unobservable box KPI formulations468are opaque as to the operations and algorithms contained therein, for example, and without limitation, the respective unobservable box algorithms and operations may be proprietary in nature and the respective users require some level of secrecy and confidentiality for the contents thereof. In some embodiments, such unobservable box formulations may take the form of an application programming interface (API). 
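Before turning to the unobservable box case, and purely as a non-limiting illustration, an observable box analysis in the spirit of algorithm700might be sketched with a generic abstract syntax tree walk, here using Python's ast module. The function name, the recognized identifiers, and the mapping of a sum/len division to a "mean" characteristic are assumptions of this sketch.

```python
import ast

def classify_kpi_formulation(source):
    """Illustrative observable box analysis: parse the KPI formulation's
    source into an abstract syntax tree and walk its nodes to assign a
    characteristic such as 'median' or 'mean'."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Function-style formulations, e.g. median(x) or mean(x).
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id == "median":
                return "median"
            if node.func.id == "mean":
                return "mean"
        # Binary-operation formulations, e.g. sum(x) / len(x) implies a mean.
        if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Div):
            left, right = node.left, node.right
            if (isinstance(left, ast.Call) and isinstance(left.func, ast.Name)
                    and left.func.id == "sum"
                    and isinstance(right, ast.Call)
                    and isinstance(right.func, ast.Name)
                    and right.func.id == "len"):
                return "mean"
    return "unknown"

print(classify_kpi_formulation("median(readings)"))               # -> median
print(classify_kpi_formulation("sum(readings) / len(readings)"))  # -> mean
```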
Therefore, one mechanism for determining the KPI formulation characteristics470within the unobservable box KPI formulations468includes repeated sampling of the original data504to test the original data504through simulations of the formulations. Accordingly, the KPI characteristic determination sub-module514includes a unobservable box sub-module520. Referring toFIG.8, a textual diagram is provided illustrating an example algorithm800for unobservable box KPI analysis. Also referring toFIGS.4and5A, the algorithm800is resident within the unobservable box sub-module520. In at least some embodiments, the algorithm800includes a data subset generation operation802, where the original data504is divided into K subsets of data, each subset having M data points therein, where M is a predetermined constant. For example, and without limitation, a string of 100 data points may be divided into five subsets of 20 points each. Generation of such subsets facilitates determinations if a particular error is recurring, or is a single instance of an error, i.e., a one-off error. The algorithm800also includes a KPI formulation characteristics listing operation804that is configured to identify all of the potential KPI formulation characteristics470that may be used within the unobservable box computations. As previously described herein, such KPI formulation characteristics470include, without limitation, one or more of mean value determinations (“mean”), median value determinations (“median”), mode value determinations (“mode”), maximum value determinations (“max”), minimum value determinations (“min”), and other statistical determinations, such as, and without limitation, standard deviation analyses. Each of these KPI formulation characteristics470will be examined through one or more unobservable box model-based simulations to identify potential issues of erroneous data, where the unobservable box model-based simulations are not directly related to the simulation modeling discussed further herein with respect to snapshot generation. In one or more embodiments, an original KPI valuation operation806is executed, where each data element of each data subset is processed through using the respective unobservable box model, where such model is not yet determined. As used herein, the term “data point” and the term “data element” are used interchangeably. Therefore, in an embodiment of 100 data points, or data elements of original data504, there will be 100 respective KPI values, i.e., 20 KPI values for each of 5 subsets of original data504. As such, the 100 process data elements are processed through the unobservable box formulations, whatever they may be, to generate100original KPI values through the actual unobservable box formulations. Also, in some embodiments, a correlations operation808is executed that includes a simulation/correlation sub-algorithm810. Specifically, in one or more embodiments, a simulated KPI valuation operation812is executed, where each data element of the original data504is analyzed through using a respective model of each KPI formulation characteristic470identified in the KPI formulation characteristics listing operation804. An original KPI value-to-simulated KPI value correlation operation814is executed where each value of the original KPI values is compared to each respective simulated KPI value generated through each model of the KPI formulation characteristics470identified from the KPI formulation characteristics listing operation804. 
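As a non-limiting illustration, the unobservable box characterization of algorithm800, dividing the original data into K subsets of M points, evaluating the opaque formulation and several candidate formulations on each subset, and correlating the results, might be sketched as follows. The candidate set, the Pearson-style correlation score, and the final selection of the most strongly correlated candidate (described further below) are assumptions made solely for this sketch.

```python
import random
from statistics import mean, median, pstdev

def characterize_unobservable_kpi(data, unobservable_kpi, k=5, m=20):
    """Illustrative unobservable box analysis: split the original data into
    K subsets of M points, evaluate the opaque KPI formulation on each
    subset, evaluate candidate formulations on the same subsets, and pick
    the candidate whose values correlate most strongly with the opaque
    KPI's values."""
    subsets = [data[i * m:(i + 1) * m] for i in range(k)]
    actual = [unobservable_kpi(s) for s in subsets]

    def correlation(xs, ys):
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        denom = (pstdev(xs) * pstdev(ys) * len(xs)) or 1.0
        return cov / denom

    candidates = {"mean": mean, "median": median, "max": max, "min": min}
    scores = {name: correlation([f(s) for s in subsets], actual)
              for name, f in candidates.items()}
    return max(scores, key=scores.get), scores

# The opaque KPI here happens to be a trimmed mean, which should correlate
# most strongly with the "mean" candidate.
rng = random.Random(1)
stream = [10.0 + rng.gauss(0.0, 1.0) for _ in range(100)]
opaque = lambda s: mean(sorted(s)[1:-1])
best, scores = characterize_unobservable_kpi(stream, opaque)
print(best)
```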
As such, for the embodiment with 100 data elements, there will be 100 correlations for each KPI formulation characteristics470identified from the KPI formulation characteristics listing operation804. In some embodiments, a statistical evaluation of each set of correlated data elements is executed to determine a strength of the correlation, e.g., and without limitation, weak correlation and strong correlation, where definitions of each correlation may be established by the user. A strong correlation indicates the simulated KPI formulations are following the actual unobservable box KPI formulations468. A weak correlation indicates the simulated KPI formulations are not aligned with the actual unobservable box KPI formulations468. Once the processing through the correlations is completed, the simulation/correlation sub-algorithm810is ended816. The algorithm800for unobservable box KPI analysis includes a KPI formulation characteristic selection operation818where the most highly correlated characteristic is selected. Once the unobservable box KPI formulations are determined, the algorithm800ends820. The output522of the data inspection module510includes the analysis of the original data504to determine if there are any data errors therein, and the affected KPI formulation characteristics470, if any. If there are no errors, the respective data is no longer processed through the process500, where the operations within the KPI characteristic determination sub-module514are not invoked, and there is not output522. If there is a data error in the original data504, the output522is transmitted to a determination operation524that determines524if the data issues are relevant for the identified KPI based on the analyses of the KPI characteristic determination sub-module514. As previously discussed, if there is no relationship between the respective original data504and the KPI formulation characteristics470, a “No” determination is generated and no further action is taken on the issue-laden data per this disclosure. The user, if so desired, may elect to take other action on the data errors. For a “Yes” determination, i.e., for those data error issues associated with the original data504that have a relationship with a KPI (both supplied by the user), through the respective properties, an output526of the determination operation524is transmitted for further processing, where the output526is substantially similar to the output522. Accordingly, when the characteristics470of the KPI formulations468for erroneous data, whether observable box or unobservable box, are determined to adversely impact a related KPI, the error is further analyzed such that it can be properly classified and the subsequent optimizations can be performed. Referring toFIG.5B, a continuation of the flowchart shown inFIG.5A, is provided further illustrating the process500for computing a confidence value for corrected data within time-series data. Also referring toFIG.4, in one or more embodiments, the process500further includes transmitting the output526to a snapshot generator module530. The snapshot generator module530receives the output526of the data inspection module510that includes the erroneous data with the known embedded issues and identification of the respective KPI formulation characteristics470. 
The snapshot generator module530is configured to generate snapshots of simulated data through simulation of the respective data values through one or more models that are placed into production to facilitate the simulation of the original data504. Referring toFIG.9, a schematic diagram is provided illustrating a portion of a process900for snapshot simulation using a snapshot generator module904that is substantially similar to the snapshot generator module530. Also referring toFIG.5B, the original data902that is substantially similar to the original data504that is transmitted through output526to the snapshot generator module904is further evaluated. The original data902(with the erroneous data issues embedded therein) is processed through a plurality of models532(that are substantially similar to the models442shown inFIG.4) to generate a plurality of simulation snapshots906that include simulated data as discussed further herein. The simulated data snapshots906are subsequently used for KPI inference908and confidence measurement910that are shown inFIG.9for context only. Referring again toFIGS.4and5B, in some embodiments, method-based simulations and point-based simulations are used. While either of the simulations may be used regardless of the nature of the issues in the erroneous data, including both simultaneously, in some embodiments, the selection of which of the two simulations is based on the nature of the original data quality issues, and in some embodiments, the selection may be based on predetermined instructions generated by the user. However, in general, the method-based simulations are better configured for processing missing value issues and the point-based simulations are better configured for processing outlier value issues. For example, in some embodiments, previous trial runs by a user may have indicated that missing data may be determined based on if data is missing for a continuous extended duration or syntactic value issues exist, i.e., supposedly numeric data includes extensive durations of data that is NaN or improper numerical rounding or truncation is determined. As such, the method-based simulations may provide the better analyses for the aforementioned conditions. Outlier value issues may be determined if there are semantic value issues, i.e., some of the data includes anomalous events or consistently or patterned noisy data. As such, the point-based simulations may provide the better analyses for the aforementioned conditions. Similarly, if a user determines that it may be indeterminate if the method-based or point-based simulations provide better simulations for the specified conditions, for those conditions when either may provide better simulations, and as described above, both simulation methods may be used. In one or more embodiments, the snapshot generator module530is configured to use method-based simulations to analyze for one or more methods of remediation, where each method of remediation may include, for example, and without limitation, algorithms for determining means, medians, etc. at least partially as a function of the respective KPI466that is affected by the data errors. However, the methods of remediation are not necessarily limited to the KPI formulation characteristics470. The snapshot generator module530includes a method-based simulations sub-module534that may be used whether or not the KPI formulation characteristics470are unobservable box or observable box. 
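A plausible reading of the selection logic above, expressed as a short sketch, follows; the issue labels and the both-simulations fallback are illustrative assumptions rather than terms fixed by this disclosure.

    def choose_simulation_types(issue):
        """Illustrative routing of data quality issues to simulation styles.

        Method-based simulations suit missing-value and syntactic issues (e.g., long
        runs of NaN, rounding or truncation problems); point-based simulations suit
        outlier and semantic issues (anomalous events, patterned noise). When the
        better choice is indeterminate, both may be used."""
        method_based_issues = {"missing_values", "nan_run", "rounding", "truncation"}
        point_based_issues = {"outliers", "anomalous_events", "patterned_noise"}

        if issue in method_based_issues:
            return ["method-based"]
        if issue in point_based_issues:
            return ["point-based"]
        return ["method-based", "point-based"]  # indeterminate: run both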
Referring toFIG.10, a schematic diagram is presented illustrating a process1000for generating method-based simulations. Also referring toFIGS.4and5B, the method-based simulations are generated through method-based simulations sub-module534. A portion of the output526of the data inspection module510that includes the erroneous data with the embedded issues and identification of the respective KPI formulation characteristics470is shown as a snippet1002with ten instances of non-erroneous data1004and three instances of erroneous data1006. The data snippet1002is transmitted to a plurality of methods of remediation1010, including methods of remediation M1, M2, M3, and M4, where each method of remediation M1-M4 is associated with a different respective model532, and the number four is non-limiting. Each method of remediation M1-M4 includes generation of one or more imputed values that are included in the respective simulation snapshot as potential solutions to, or replacements for, the erroneous values if that particular method of remediation were to be used. Since there are no predetermined notions as to which of the methods of remediation M1-M4 provides the best, or most correct, potential replacement values for the particular present erroneous data1006, the plurality of models532are used, where each model532is used to execute the respective method of remediation M1-M4. In some embodiments, the method-based simulations sub-module534is communicatively coupled to the KPI formulations sub-module516for ready access to the KPI formulations468resident therein. In at least some embodiments, a plurality of simulated data snapshots1020are generated. For example, in the illustrated embodiment, the method of remediation M1 employs the respective model532to calculate imputed values1024in a simulated data snapshot1022. In some embodiments, the fraction of non-erroneous data1004used to compute the imputed values1024for the erroneous data1006depends on the particular remediation technique associated with the respective method of remediation M1. For example, if it is determined to replace a missing value with the mean of all values, then a substantially complete set of respective non-erroneous data1004is used in the respective method of remediation M1. Alternatively, if only the surrounding three values of non-erroneous data1004are used to compute the missing value, i.e., erroneous data1006, then only those surrounding values of non-erroneous data1004are used by the respective method of remediation M1. Similarly, simulated data snapshots1032,1042, and1052are generated through respective methods of remediation M2-M4, including respective imputed values1034,1044, and1054. Since the methods of remediation M1-M4 are different, it is expected that the respective imputed values1024,1034,1044, and1054are different as well. Referring toFIGS.4and5B, the simulated data snapshots1022,1032,1042, and1052are shown as output536from the method-based simulations sub-module534, where the output536is transmitted to a data simulations snapshots storage module538that, in some embodiments, resides within the data storage system408. In at least one embodiment, such as the exemplary embodiment, the three instances of erroneous data1006are substantially identical. In at least one embodiment, each of the instances of erroneous data1006is different. 
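The following sketch illustrates the idea behindFIG.10: one simulated data snapshot per method of remediation, each replacing the erroneous values with imputed values drawn from the non-erroneous data. The particular methods shown (overall mean, neighboring values, median) are examples only and are not asserted to be the methods M1-M4 of the illustrated embodiment.

    import statistics

    def method_based_snapshots(data, error_indices):
        """Illustrative sketch: build one simulated data snapshot per method of
        remediation, each replacing the erroneous values with imputed values
        computed from the non-erroneous data."""
        good = [v for i, v in enumerate(data) if i not in error_indices]

        def impute_overall_mean(i):
            return statistics.mean(good)            # uses substantially all non-erroneous data

        def impute_neighbors(i, k=3):
            window = [data[j] for j in range(max(0, i - k), min(len(data), i + k + 1))
                      if j not in error_indices]
            return statistics.mean(window) if window else statistics.mean(good)

        def impute_median(i):
            return statistics.median(good)

        snapshots = {}
        for name, method in {"overall_mean": impute_overall_mean,
                             "neighbors": impute_neighbors,
                             "median": impute_median}.items():
            snapshot = list(data)
            for i in error_indices:
                snapshot[i] = method(i)             # imputed value for this erroneous point
            snapshots[name] = snapshot
        return snapshots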
Therefore, since multiple models532and methods of remediation M1-M4 are used for all of the erroneous data1006, generating multiple respective imputed values1024,1034,1044, and1054for each different error is facilitated. Accordingly, method-based simulations in the form of methods of remediation M1-M4 are used to generate one or more simulation snapshots1022,1032,1042, and1052of the non-erroneous original data1004and imputed values1024,1034,1044, and1054for each erroneous original data value1006, where each of the imputed values1024,1034,1044, and1054indicates what the data value would look like when the particular method of remediation M1-M4 is used, and each of the imputed values1024,1034,1044, and1054is a product of a different method of remediation M1-M4. In at least some embodiments, collection of the original time-series data streams420through the sensor suite416includes use of heuristically-based features that facilitate determinations of patterns in the data elements as they are collected. Under some circumstances, one or more instances of the original data422may appear to be incorrect due to the respective data elements exceeding a threshold value that is based on a probability of what the data element value should be as a function of the established data patterns. For example, and without limitation, an apparent data excursion, i.e., a data spike upwards or downwards, may be generated either through an erroneous data packet or as a function of an accurate portrayal of what is occurring in real time. Therefore, the snapshot generator module530is further configured to use point-based simulations to analyze the errors to determine if apparently erroneous data is actually erroneous data, i.e., the snapshot generator module530includes a point-based simulations sub-module540. Referring toFIG.11, a schematic diagram is provided illustrating a process1100for point-based simulations. Also referring toFIGS.4and5B, the point-based simulations are generated through the point-based simulations sub-module540. A portion of the output526of the data inspection module510that includes the erroneous data with the embedded issues and identification of the respective KPI formulation characteristics470is shown as a data snippet1102with ten instances of non-erroneous data points1104and three instances of suspect potentially erroneous data points1106. The three instances of suspect potentially erroneous data points1106are individually referred to as1106A,1106B, and1106C, and collectively as1106. In one or more embodiments, the data snippet1102, including the known correct original data, i.e., non-erroneous data points1104and the suspect potentially erroneous data points1106, is combined into a matrix1110of configurations to initiate determinations of probabilities of whether the values of the suspect potentially erroneous data points1106are correct or erroneous. As shown, the matrix1110is based on the three suspect potentially erroneous data points1106, i.e., with 2^3, or eight possible combinations of the three suspect potentially erroneous data points1106. The matrix1110is configured with three columns1112,1114, and1116, i.e., one for each of the suspect potentially erroneous data points1106A,1106B, and1106C, respectively. The resultant eight rows, individually referred to as D1 through D8 and collectively as1120, include the available combinations of the three suspect potentially erroneous data points1106. 
Each of the three suspect potentially erroneous data points1106is individually inferred as either a discrete “correct” or a discrete “erroneous,” and the potentially erroneous data values are then referred to as “inferred data points” to distinguish them from the known correct original data, i.e., non-erroneous data points1104. As shown inFIG.11, the inferred erroneous data points are collectively referred to as1130. Those inferred erroneous data points1130associated with suspect potentially erroneous data point1106A are individually shown and referred to as1122,1132,1162, and1182in the column1112. Also, those inferred erroneous data points1130associated with suspect potentially erroneous data point1106B are individually shown and referred to as1124,1144,1164, and1174in the column1114. Further, those inferred erroneous data points1130associated with suspect potentially erroneous data point1106C are individually shown and referred to as1126,1146,1176, and1186in the column1116. In a similar manner, as shown inFIG.11, the inferred correct data points are collectively referred to as1140. those inferred correct data points1140associated with suspect potentially erroneous data point1106A are individually shown and referred to as1142,1152,1172, and1192in column1112. Also, those inferred correct data points1140associated with suspect potentially erroneous data point1106B are individually shown and referred to as1134,1154,1184, and1194in column1114. Further, those inferred correct data points1140associated with suspect potentially erroneous data point1106C are individually shown and referred to as1136,1146,1166, and1196in column1116. A simulation snapshot542of the matrix1120is executed. Therefore, the first row D1 represents all three suspect potentially erroneous data points1106as inferred erroneous data points1130. Similarly, the eighth row D8 represents all three suspect potentially erroneous data points1106as inferred correct data points1140. The second, third, and fourth rows D2, D3, and D4, respectively, represent only one of the three suspect potentially erroneous data points1106as inferred erroneous data points1130and two of the three suspect potentially erroneous data points1106as inferred correct data points1140. The fifth, sixth, and seventh rows D5, D6, and D7, respectively, represent two of the three suspect potentially erroneous data points1106as inferred erroneous data points1130and only one of the three suspect potentially erroneous data points1106as inferred correct data points1140. As such, the inferred erroneous data points1130and inferred correct data points1140have an original data value as transmitted and a discrete inferred label as either correct or erroneous. The remainder of the analysis focuses exclusively on the inferred erroneous data points1130and inferred correct data points1140. Specifically, the full range of all of the possible combinations of the inferred data points1130and1140as shown as D1 through D8 are collected in the aforementioned simulation snapshot542for further evaluation. The generation of all possible combinations of discrete “correct” labels, i.e., the inferred correct data points1140and discrete “erroneous” labels, i.e., the inferred erroneous data points1130and subsequent aggregation of such facilitates further determinations of “best” actions, and whether those actions are to correct erroneous data or to accept correct data. 
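The enumeration of label combinations behind the matrix1110can be sketched as follows; the point identifiers are taken fromFIG.11, and the use of itertools.product is simply one convenient way to generate the 2^n rows.

    from itertools import product

    def label_combinations(suspect_points):
        """Illustrative sketch of matrix 1110: every combination of discrete
        'erroneous' / 'correct' labels for the suspect data points (2**n rows)."""
        labels = ("erroneous", "correct")
        rows = []
        for combo in product(labels, repeat=len(suspect_points)):
            rows.append(dict(zip(suspect_points, combo)))
        return rows

    # For three suspect points (1106A, 1106B, 1106C) this yields eight rows,
    # from all-erroneous (row D1) to all-correct (row D8).
    rows = label_combinations(["1106A", "1106B", "1106C"])
    assert len(rows) == 8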
These operations consider the accepted inaccuracies associated with original data in the data snippet1102that may or may not be erroneous, through determining the probabilities of one or more of the suspect potentially erroneous data values being “correct” or “erroneous.” In each of the combinations D1 through D8, it is assumed that some of the suspect potentially erroneous values1106are wrongly identified as erroneous and some are correctly identified as erroneous. So, for each combination D1 through D8, the erroneous values are replaced with an imputed value based on a predetermined remediation method, similar to, but not limited to, those discussed with respect toFIG.10. Therefore, each combination D1 through D8 has a different set of correct and incorrect data points and requires different imputed values based on the predetermined remediation technique. As described above, the point-based simulations are better configured for processing outlier value issues and outlier issues will be used to further describe the exemplary embodiment inFIG.11. As described above, patterns may be discerned in the original data504including the data snippet1102and a probability of what the respective data element values should be as a function of the established data patterns. The discrete “erroneous” inferred data points1130therefore have a probability of misclassification as erroneous with a percent assurity assigned thereto. The probabilities of each of the three suspect potentially erroneous values1106are used to determine if the values1106are erroneous or not. As the eight varying combinations D1 through D8 are evaluated, the probability of each of D1 through D8 being true is determined and those rows D1 through D8 that have the highest probability of being true are submitted for further analysis. The total probability of D1 through D8 is 100%. For example, and without limitation, given the heuristic analysis of each point1122,1124, and1126in D1, and the associated summed probabilities thereof, it may be determined that all three points in D1 being erroneous has a relatively low probability as does row D8 (all three values being correct). These two rows D1 and D8 will receive no further consideration. Notably, for those embodiments where the row D8, with no erroneous values, has the highest probability of being correct, no further analysis need be executed and the values1106are not corrected through the downstream operations as further described. Accordingly, the combinations of values that have the higher probabilities of being true are processed further. In general, the total number of each possible combination of the discrete “correct” and “erroneous” inferred data points1130and1140grows exponentially with the number of inferred data points (i.e., 2^x, where x is the number of inferred data values), and generating all possible combinations and processing them can be time and resource intensive. Each combination of inferred data points is a potential simulation, and processing each of the combinations as a potential simulation merely increases the processing overhead. Therefore, the described possible combinations D1 through D8 of inferred data points1130and1140are further considered; however, the possible combinations of inferred data points1130and1140are “pruned” such that only a subset of all possible combinations are further considered. As described above, the initial pruning occurs as low-probability combinations of potentially erroneous values are eliminated from further processing. 
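One plausible sketch of the pruning step follows. The disclosure does not fix how the per-point probabilities are combined into a per-row probability; the sketch assumes the points are independent (so the row probabilities sum to 100%, as stated above) and retains only an arbitrary fraction of the most probable rows.

    def prune_combinations(rows, p_erroneous):
        """Illustrative pruning of the 2**n label combinations.

        rows        -- list of dicts mapping point id -> 'erroneous' / 'correct'
        p_erroneous -- dict mapping point id -> probability the point is erroneous

        Assuming the per-point probabilities are independent, the probability of a
        row being true is the product over its points; low-probability rows are
        dropped before any simulation is generated for them."""
        def row_probability(row):
            p = 1.0
            for point, label in row.items():
                p *= p_erroneous[point] if label == "erroneous" else 1.0 - p_erroneous[point]
            return p

        scored = sorted(rows, key=row_probability, reverse=True)
        keep = max(1, len(scored) // 4)      # illustrative cut: keep the most probable quarter
        return scored[:keep]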
The point-based simulations sub-module540is operatively coupled to a snapshot optimization sub-module544. In such embodiments, snapshot optimization features are employed using the KPI formulation characteristics470determined as previously described, whether or not the KPI formulation characteristics470are unobservable box or observable box. For example, and without limitation, KPI formulation characteristics470for maximum, minimum, mean, and median analyses may be used to filter the simulations of the inferred data points1130and1140. Therefore, the snapshot optimization sub-module544is communicatively coupled to the KPI formulations sub-module516. In general, only those combinations of inferred data points that successfully pass through the pruning process will survive to generate the respective simulations of the suspect point values through the models and generate the respective simulation snapshots with the non-erroneous original data and imputed values for the identified erroneous data, where a portion of the suspected erroneous point values may in fact not be erroneous and will not require replacement thereof. Referring toFIG.12, a textual diagram is provided illustrating an example algorithm1200for a snapshot optimizer that is configured for execution within the snapshot optimization sub-module544(as shown inFIG.5B). Referring toFIGS.4,5A,5B, and11, the algorithm1200includes an operation to determine1202the KPI formulation characteristic470as previously determined by the KPI characteristic determination sub-module514and as described with respect toFIGS.6-8. The data represented in the exemplary embodiment as the matrix1120, that is, the data embedded in the remaining rows that have not been eliminated due to low probabilities as described above, is further analyzed to produce the pruning effect as described herein through a data presentation operation1204. As described above, the exemplary embodiment includes analyzing outliers. In one or more embodiments, a first sub-algorithm, i.e., a “maximum” sub-algorithm1206is considered for execution. In the event that the previously determined KPI formulation characteristic470is a maximum characteristic, then a modified data operation1208is executed through one or more of models532. The modified data operation1208includes determining if the suspect potentially erroneous data1106are outliers within rising peaks of the data snippet1102of the original data504. If the data snippet1102does not exhibit a rising trend, thereby eliminating any chance of a rising peak, the algorithm1200proceeds to the next set of operations. If the data snippet1102does exhibit a rising trend, the affected outliers are replaced with values that provide a smoothing effect on the rising trend per modified data operation1208with the previously described probability values providing some level of assurance that the suspected erroneous data was in fact erroneous. These data points are selected for simulation through one or more models532. Once the data replacement identifications, or “fixes,” are executed, the maximum sub-algorithm1206is ended1210. A second sub-algorithm, i.e., a “minimum” sub-algorithm1212is considered for execution. In the event that the previously determined KPI formulation characteristic470is a minimum characteristic, then a modified data operation1214is executed through one or more of models532. 
The modified data operation1214includes determining if the suspect potentially erroneous data1106are outliers within falling peaks of the data snippet1102of the original data504. If the data snippet1102does not exhibit a falling trend, thereby eliminating any chance of a falling peak, the algorithm1200proceeds to the next set of operations. If the data snippet1102does exhibit a falling trend, the affected outliers are replaced with values that provide a smoothing effect on the falling trend per modified data operation1214with the previously described probability values providing some level of assurance that the suspected erroneous data was in fact erroneous. These data points are selected for simulation through one or more models532. Once the data repairs, or “fixes,” are executed, the minimum sub-algorithm1212is ended1216. A third sub-algorithm, i.e., a “mean” sub-algorithm1218is considered for execution. In the event that the previously determined KPI formulation characteristic470is a mean characteristic, then a modified data operation1220is executed through one or more models532. The modified data operation1220includes determining if the suspect potentially erroneous data1106are outliers through considering all the issues, i.e., all of the affected suspect potentially erroneous data1106and the respective probabilities discussed above, and grouping them into one or more clusters of potentially erroneous data values based on the proximity of the related respective values to each other. In some embodiments, there may be multiple clusters of the potentially erroneous data values, indicative of the mean characteristic used as the basis for the clustering. A cluster consideration operation1222is executed where a collection of representative points, e.g., and without limitation, a mean value from each cluster, are considered as representative points for simulation. Once the data selections for simulation are executed through one or more models532, the mean sub-algorithm1218is ended1224. A fourth sub-algorithm, i.e., a “median” sub-algorithm1226is considered for execution. In the event that the previously determined KPI formulation characteristic470is a median characteristic, then a modified data operation1228is executed through one or more of the models532. The modified data operation1228includes determining if the suspect potentially erroneous data1106are outliers through considering all the issues, i.e., all of the affected suspect potentially erroneous data1106and the respective probabilities discussed above. If the suspect potentially erroneous data1106are in fact outliers, and since median-based KPIs are not affected by the value perturbations, no further action is taken on the data, and the median sub-algorithm1226ends1230. In some embodiments, the sub-algorithms1206,1212,1218, and1226may be executed simultaneously in parallel. The output of the snapshot optimization sub-module544, shown as optimized simulated data snapshot546, is transmitted to the data simulations snapshots storage module538that, in some embodiments, resides within the data storage system408. Accordingly, a plurality of simulation snapshots536and546are generated for further processing, where the simulation snapshots536and546are generated in a manner to significantly reduce the otherwise large number of imputed values. 
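A hedged sketch of the dispatch performed by the algorithm1200is shown below. The trend tests and the clustering rule are simplified placeholders for the operations described with respect toFIG.12, and the characteristic names are assumed to be plain strings.

    import statistics

    def optimize_snapshots(kpi_characteristic, data, suspect_indices):
        """Illustrative sketch of algorithm 1200: decide which suspect points are
        worth simulating, given the previously determined KPI formulation
        characteristic. The trend tests and clustering are simplified placeholders."""
        if not suspect_indices or kpi_characteristic == "median":
            return []                      # median KPIs are insensitive to these perturbations

        if kpi_characteristic == "max":
            rising = data[-1] > data[0]    # crude stand-in for a rising-trend test
            return list(suspect_indices) if rising else []

        if kpi_characteristic == "min":
            falling = data[-1] < data[0]   # crude stand-in for a falling-trend test
            return list(suspect_indices) if falling else []

        if kpi_characteristic == "mean":
            # Group suspect points into clusters of nearby values and keep one
            # representative point per cluster for simulation.
            spread = statistics.pstdev(data) or 1.0
            clusters, current = [], [suspect_indices[0]]
            for i in suspect_indices[1:]:
                if abs(data[i] - data[current[-1]]) < spread:
                    current.append(i)
                else:
                    clusters.append(current)
                    current = [i]
            clusters.append(current)
            return [c[len(c) // 2] for c in clusters]

        return list(suspect_indices)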
Continuing to refer toFIGS.4,5B,10, and11, in at least some embodiments, the simulation snapshots536and546created by the snapshot generator modules, whether method-based or point-based, are transmitted to a KPI value inference module550. As discussed above, each simulation snapshot of the simulation snapshots536and546includes the non-erroneous original data (e.g.,1004and1104) and imputed values for the established erroneous data (e.g.,1006and1106). Each of the imputed values and the associated original data are presented to the respective KPI formulation468to generate a predicted replacement value, i.e., an inferred snapshot value for each of the imputed values in the respective simulation snapshots536and546. As such, the original data504is also transmitted to the KPI value inference module550. Referring toFIG.13, a graphical diagram is presented illustrating at least a portion of a KPI value inference process1300. Also referring toFIGS.4and5B, the inferred snapshot values for the simulation snapshots536and546are based on the respective KPI formulation468and in the context of the non-erroneous original data on the time-series data stream. Therefore, for each simulation snapshot536and546transmitted to the KPI value inference module550, a predicted replacement value, i.e., an inferred snapshot value is generated.FIG.13shows an ordinate (Y-axis)1302and an abscissa (X-axis)1304. The Y-axis1302is shown extending from 41.8 to 42.6, where the values are unitless. The X-axis1304is shown valueless and unitless. The nature of the values is not important; however, the process1300shows a portion of the values determined as a function of the KPI formulation characteristics470that are presented with the simulation snapshots536and546. The original KPI value1306, i.e., the value generated by processing the suspect erroneous data through the respective KPI formulations468, is presented as a reference, where the respective value is 42.177. The Simulated KPI Max snapshot1308presents an inferred snapshot value of 42.548, the Simulated KPI Mean snapshot1310presents an inferred snapshot value of 42.091, and the Simulated KPI Min snapshot1312presents an inferred snapshot value of 41.805. These inferred snapshot values will be used in the discussions of the subsequent portions of the process500. Referring toFIG.5C, a continuation of the flowchart shown inFIGS.5A and5Bis provided illustrating the process500for computing a confidence value for corrected data within time-series data. Also referring toFIG.5B, the outputs of the KPI value inference module550, including inferred point-based snapshot values552, inferred method-based snapshot values554, and the original data504, are transmitted to a confidence measures module570that is communicatively coupled to the KPI value inference module550. In general, for each respective inferred snapshot value for the erroneous data that is generated from the simulation snapshots in the KPI value inference module550, within the confidence measures module570, the inferred snapshot values are individually scored. The respective scoring includes generating scored inferred point-based snapshot values562, i.e., inferred point-based snapshot values552with respective confidence values. In addition, the respective scoring generates scored inferred method-based snapshot values564, i.e., inferred method-based snapshot values554with respective confidence values. The generation of the confidence values is discussed further below. 
The best analytical score is selected, and the respective inferred snapshot value is now elevated to the selected KPI value to replace the erroneous data, where the selected KPI value is referred to as the inferred KPI value566. Accordingly, the inferred KPI value566is that value selected from one or more predicted replacement values (i.e., the scored inferred snapshot values562and564) to resolve the potentially erroneous data instances. In some embodiments, the confidence measures module570includes a plurality of additional sub-modules to facilitate generating the confidence values and the details and evidence to support such values. In some of those embodiments, three confidence measures sub-modules are used, i.e., a quantity-based confidence measures sub-module572, a spread-based confidence measures sub-module574, and a quantity- and spread-based confidence measures sub-module576. The quantity-based confidence measures sub-module572is configured to take into consideration the magnitude of the values obtained from the KPI value inference module550and generate the associated confidence measures information, including the respective confidence scores. For example, and without limitation, whether the magnitude of the KPI value is 50 or 1050, the confidence in the resultant KPI value may differ in light of the additional data and circumstances. The spread-based confidence measures sub-module574considers the range in which the simulated values lie and generate the associated confidence measures information, including the respective confidence scores. Instead of the absolute magnitude of the KPI values, the spread-based confidence measures use the statistical properties like mean, min, max, and standard deviation of the KPI values, and hence are substantially unaffected by the magnitude. The quantity- and spread-based confidence measures sub-module576considers the magnitude as well as the range of values to generate the associated confidence measures information, including the respective confidence scores. In some embodiments, all three of the sub-modules572,574, and576are used in parallel and the results of each are considered and valued for selection. In some embodiments, only one or two of the sub-modules572,574, and576are selected based on the nature of the incoming inferred KPI value566and other data568(discussed further below). Referring toFIG.14, a graphical/textual diagram is provided illustrating generation1400of the numerical confidence measures. Also referring toFIGS.5B and5C, the confidence values of the inferred point-based snapshot values552and the inferred method-based snapshot values554are generated. A linear graphical representation1410is presented with the four values shown inFIG.13. Specifically, other data568(shown inFIG.5C) such as, and without limitation, the Simulated KPI Min snapshot value1412with the inferred snapshot value of 41.805, the Simulated KPI Mean snapshot value1414with the inferred snapshot value of 42.091, the original KPI value1416of 42.117, and the simulated KPI Max snapshot value1418with the inferred snapshot value of 42.548 are shown. Also presented inFIG.14is a first set of confidence measure valuation algorithms, i.e., max deviations confidence measure algorithms1430. The confidence measure 1A algorithm determines the relationship between the maximum variance of the inferred snapshot values1412,1414, and1418as a function of the original KPI value1416. 
The confidence measure 1B algorithm determines the relationship between the maximum variance of the inferred snapshot values1412,1414, and1418as a function of the Simulated KPI Mean snapshot value1414. Further presented inFIG.14is a second set of confidence measure valuation algorithms, i.e., mean deviations confidence measure algorithms1440. The confidence measure 2A algorithm determines the relationship of the variance between the original KPI value1416and the Simulated KPI Mean snapshot value1414as a function of the original KPI value1416. The confidence measure 2B algorithm determines the relationship of the variance between the original KPI value1416and the Simulated KPI Mean snapshot value1414as a function of the Simulated KPI Mean snapshot value1414. Moreover,FIG.14presents a spread-based measures algorithm1450, i.e., a confidence measure 3 algorithm that evaluates the deviation1452between the original KPI value1416and the Simulated KPI Mean snapshot value1414as a function of the spread1454between the Simulated KPI Max value1418and the Simulated KPI Min value1412. The max deviations confidence measure algorithms1430for the confidence measures 1A and 1B and mean deviations confidence measure algorithms1440for the confidence measures 2A and 2B are resident within the quantity-based confidence measures sub-module572and quantity- and spread-based confidence measures sub-module576. Similarly, the confidence measure 3 algorithm of the spread-based measures algorithm1450is resident within the spread-based confidence measures sub-module574and the quantity- and spread-based confidence measures sub-module576. Also, referring toFIG.15. a graphical diagram, i.e., a column chart1500is provided illustrating confidence measures with values calculated from the algorithms and values provided inFIG.14, with a comparison therebetween. The column chart1500includes an ordinate (Y-Axis1502) representative of the value of the calculated confidence values extending between 0% and 100%. The column chart1500also includes an abscissa (X-Axis)1504that identifies the confidence measures 1A, 1B, 2A, 2B, and 3. The confidence values of the confidence measures 2A and 2B provide the highest values of 99.8. Therefore, the Simulated KPI Mean snapshot value1414provides the best confidence value for the erroneous data. In at least some embodiments, the Simulated KPI Mean snapshot value1414is the inferred KPI value566for the present example. In general, the confidence measures module570compares the inferred snapshot values552and554generated through one or more of the aforementioned simulations with the respective original erroneous data. At least one of the results of the comparison is a confidence value, in the format of a numerical value, for each of the inferred snapshot values552and554as applied to the respective snapshot of the data indicative of a level of confidence that the inferred snapshot values552and554are suitable replacements for the erroneous data. A relatively low confidence value indicates that either the respective inferred snapshot values552and554, including the resultant inferred KPI value566, should not be used, or used with caution. A relatively high confidence value indicates that the respective inferred snapshot values552and554, including the resultant inferred KPI value566, should be used. The threshold values for the associated confidence values may be established by the user, and may also be used to train one or more models, both conditions to facilitate fully automating the selection. 
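One possible reading of the confidence measures ofFIG.14, expressed in code, is shown below. The exact normalization (each measure written as one minus a relative deviation) is an assumption; with the values fromFIGS.13and14, the mean-deviation measures 2A and 2B come out highest, which is consistent with the comparison inFIG.15.

    def confidence_measures(original, simulated):
        """Illustrative reading of FIG. 14. 'simulated' maps snapshot names to
        inferred snapshot values; each measure is expressed here as one minus a
        relative deviation, which is an assumed normalization."""
        mean_v = simulated["mean"]
        spread = simulated["max"] - simulated["min"]
        max_dev = max(abs(v - original) for v in simulated.values())

        return {
            # Max-deviation measures (1A, 1B)
            "1A": 1.0 - max_dev / original,
            "1B": 1.0 - max(abs(v - mean_v) for v in simulated.values()) / mean_v,
            # Mean-deviation measures (2A, 2B)
            "2A": 1.0 - abs(original - mean_v) / original,
            "2B": 1.0 - abs(original - mean_v) / mean_v,
            # Spread-based measure (3)
            "3": 1.0 - abs(original - mean_v) / spread,
        }

    # Using the values shown in FIGS. 13 and 14:
    scores = confidence_measures(42.117, {"min": 41.805, "mean": 42.091, "max": 42.548})
    # The mean-deviation measures (2A, 2B) come out highest, consistent with FIG. 15.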
Furthermore, the subsequent actions may be automated. For example, and without limitation, for confidence values below a predetermined threshold, the inferred KPI value566will not be passed for further processing within the native application, e.g., the process control system410, utilizing the original data stream420. Similarly, for confidence values above a predetermined threshold, the inferred KPI value566will be passed for further processing within the native application utilizing the original data422. Accordingly, the systems and methods as described herein correct the issues with the erroneous data in the original data stream420automatically in a manner that prevents inadvertent actions or initiates proper actions as the conditions and correct data dictate. Referring again toFIG.5C, the confidence measures module570includes an explanatory sub-module578configured to receive confidence-based data580from the confidence measures sub-modules572,574, and576. The confidence-based data580includes, without limitation, the inferred KPI value566and its associated confidence value, the respective information associated with the selection of the inferred KPI value566and additional information to generate an explanation of the inferred KPI value566that was selected, including the other data568that includes, without limitation, all of the inferred snapshot values552and554, including the respective confidence values. In addition, since it is possible that the confidence value for the predicted, i.e., inferred KPI value566is not 100%, the explanatory sub-module578provides an explanatory basis for the resolution of the one or more potentially erroneous data instances through providing the details and evidence of the selection of the particular scored inferred snapshot values562and564as the inferred KPI value566. The explanatory sub-module578provides such details including, and without limitation, the types of issues detected in the dataset, the number and nature of the simulations generated, the statistical properties of the scores obtained from various simulations, and a comparison of the scores. Accordingly, the confidence measures module570generates various confidence measurements for the scored inferred snapshot values562and564and the information to facilitate the user understanding the properties of the distribution of the scored inferred snapshot values562and564to further provide clarity of the selection of the respective inferred KPI values566, thereby generating a confidence score and explanation582as an output of the process500. Also, referring toFIG.16, a textual diagram is provided illustrating confidence measures explanations1600. The data provided in the confidence measures explanations1600is substantially self-explanatory. The system, computer program product, and method as disclosed herein facilitate overcoming the disadvantages and limitations of inadvertently processing erroneous time series data, and potentially encountering inadvertent consequences therefrom. For example, as the respective data is generated, and for a given business KPI, the automated system and method described herein decides whether a data quality issue is impactful for the respective business KPI. 
In addition, the system and method described herein identifies the related business KPI's properties (or characteristics) so that relevant data issues can be identified, and optimizations can be performed whether the exact KPI formulations are explicitly visible or not, i.e., regardless of whether the formulations are observable box or unobservable box in nature. Moreover, the system and method described herein resolves the identified data issues by selecting a scored prediction of a replacement value for the erroneous data. Furthermore, the system and method described herein optimize selection of the possible replacement values to efficiently use system resources. Moreover, the scored predictions are accompanied with quantified confidence values with an explanation of the confidence values with respect to the confidence measures inferred and the reasons for the values. Accordingly, as described herein, data quality issues are filtered based on the analysis of given KPI and data is modified to mitigate the quality issues considering various scenarios to compute its impact on the measurement of given KPI and additionally measure the confidence of the predicted replacement values. In addition, the features of the system, computer program product, and method as disclosed herein may be extended beyond implementation in exclusively business-based embodiments. Non-business implementations are also envisioned to overcome similar disadvantages and limitations of inadvertently processing erroneous time series data, and potentially encountering inadvertent consequences therefrom. Specifically, any computer-implemented process that relies on time-series data to execute the respective functions properly may be improved through implementation of the features in this disclosure. For example, and without limitation, any use of time-series data collected from IoT devices, including residential and vehicular users, will avoid inadvertent and unnecessary automated actions through replacement of missing data values with the highest confidence. Specifically, for residential users, erroneous data falsely indicating low voltage from the respective electric utility may be prevented from inadvertently and unnecessarily activating low-voltage protective circuitry that would otherwise disrupt satisfactory electric power delivery to the respective residence. In such an implementation, one respective KPI may be to maintain continuity of electric power to the residential user. Also, specifically, for vehicular users, erroneous data falsely indicating excessive propulsion mechanism temperatures may be prevented from inadvertently and unnecessarily activating automated emergency engine shutdowns. In such an implementation, one respective KPI may be to maintain continuity of propulsion to the vehicular user. Therefore, the embodiments disclosed herein provide an improvement to computer technology by providing a mechanism for efficiently, effectively, and automatically identifying issues associated with erroneous time series data, determining whether a data quality issue is impactful for a given business KPI through identifying the business KPI's characteristics so that relevant data issues can be identified, and optimizations can be performed whether the exact KPI characteristics are openly defined or not, i.e., whether the KPI formulations are observable box or unobservable box in nature, and resolving the identified data issues while presenting a confidence analysis of the examined potential resolutions. 
The descriptions of the various embodiments of the present disclosure have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
DETAILED DESCRIPTION The techniques described herein enable prediction of database failure through early detection of failure signals, and in turn, allow actions to minimize service interruption and potentially reverse the declining performance of an overloaded database. With reference toFIG.1, an example embodiment of a high-level client-server-based network architecture100is shown. A networked system102provides server-side functionality via a network110(e.g., the Internet or wide area network (WAN)) to one or more user devices105A-N. In some implementations, a user interacts with the networked system102using the user device105A-N and the user device may execute a web client (e.g., a browser), a client application, or a programmatic client. The user device105A-N may be a computing device that includes a display and communication capabilities that provide access to the networked system102via the network110. Although only three user devices105A,105B and105N are illustrated inFIG.1, the network architecture100can accommodate communication with many user devices. The user device105A-N can be, for example, a smart phone, a laptop, desktop computer, general purpose computer, tablet, a remote device, work station, Internet appliance, hand-held device, wireless device, portable device, wearable computer, smart TV, game console, set-top box, network Personal Computer (PC), mini-computer, and so forth. The user device105A-N communicates with the network110via a wired or wireless connection. For example, one or more portions of the network104comprises an ad hoc network, an intranet, an extranet, a Virtual Private Network (VPN), a Local Area Network (LAN), a wireless LAN (WLAN), a WAN, a wireless WAN (WWAN), a Metropolitan Area Network (MAN), a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a cellular telephone network, a wireless network, a Wireless Fidelity (WI-FI®) network, a Worldwide Interoperability for Microwave Access (WiMax) network, another type of network, or any suitable combination thereof. In some example embodiments, the user device105A-N includes one or more of the applications (also referred to as “apps”) such as, but not limited to, web browsers, book reader apps (operable to read e-books), media apps (operable to present various media forms including audio and video), fitness apps, biometric monitoring apps, messaging apps, electronic mail (email) apps, and e-commerce site apps. In some implementations, a user application may include various components operable to present information to the user and communicate with networked system102. In some example embodiments, if the e-commerce site application is included in the user device105A-N, then this application is configured to locally provide the user interface and at least some of the functionalities with the application configured to communicate with the networked system102, on an as needed basis, for data or processing capabilities not locally available (e.g., access to a database of items available for sale, to authenticate a user, to verify a method of payment). Conversely, if the e-commerce site application is not included in the user device105A-N, the user device105A-N can use its web browser to access the e-commerce site (or a variant thereof) hosted on the networked system102. Each of the user devices105A-N can utilize a web client106to access the various systems of the networked system102via the web interface supported by a web server115. 
Similarly, the user device can utilize a client application107and programmatic client108to access various services and functions provided by the networked system102via a programmatic interface provided by an Application Program Interface (API) server113. The programmatic client can, for example, be a seller application (e.g., the Turbo Lister application developed by EBAY® Inc., of San Jose, Calif.) to enable sellers to author and manage listings on the networked system102in an off-line manner, and to perform batch-mode communications between the programmatic client108and the networked system. The API server113and the web server115are coupled to, and provide programmatic and web interfaces respectively to, a plurality of application servers120A-N. The application servers120A-N can host one or more publication system121. The application servers120A-N are, in turn, shown to be networked with a connection server125executing a connection service that manages connections to a plurality of databases140. The connection server125can comprise a connection pool130having a number of database connections, including open connections and connections in use. Different network components, third party applications, client applications, and/or publication systems executing on the application servers120A-N may transmit database connection requests to the connection server125. If connections are available in the connection pool130, the connection server125serves the open connections to the requesting applications, which may use the connections to retrieve query data from databases140A-N, managed by the connection server125. The connection server125includes a connection adjuster135that can comprise a number of engines, each of which can be embodied as hardware, software, firmware, or any combination thereof. In an example embodiment, the databases140A-N are storage devices that store information to be posted (e.g., publications or listings) to the publication system on the application server120A-N. The databases140A-N also store digital good information in accordance with some example embodiments. Additionally, it is noted that one or more of the user devices105A-N can be a third party server executing a third party application, which third party application may have programmatic access to the networked system102via a programmatic interface provided by the API server113. For example, the third party application, utilizing information retrieved from the networked system102, supports one or more features or functions on a website hosted by the third party. The publication system executing on the application server120A-N provides a number of publication functions and services to the users that access the networked system102. While the publication system121may form part of the networked system102, it will be appreciated that, in alternative example embodiments, the publication system may form part of a service that is separate and distinct from the networked system102. Further, in some example embodiments, the components and logical functionality of the connection pool130and connection adjuster135may be implemented in a distributed service operating on a plurality of machines, or alternatively, may be integrated into existing servers, such as application servers120A-N. 
Further, while the client-server-based network architecture100shown inFIG.1employs a client-server architecture, the present inventive subject matter is, of course, not limited to such an architecture, and can equally well find application in a distributed, or peer-to-peer, architecture system, for example. The various systems of the application server120A-N (e.g., the publication system) can also be implemented as standalone software programs, which do not necessarily have networking capabilities. FIG.2illustrates a block diagram showing components provided within the connection adjuster135, according to some example embodiments. The connection adjuster135can be stored on non-transitory memory of a hosting system (e.g., connection server125), or may be hosted on dedicated or shared server machines that are communicatively coupled to enable communications between server machines to operate the functionality of the connection server as a distributed service. As such a distributed service, the connection server125can operate with broad resource visibility and control to manage connections across an entire data center or multiple data centers. The components themselves are communicatively coupled (e.g., via appropriate interfaces) to each other and to various data sources, so as to allow information to be passed between the applications or so as to allow the applications to share and access common data. FIG.2illustrates components of the connection adjuster135, according to some example embodiments. As illustrated, the connection adjuster135may comprise a connection engine205, a quality-of-service (QOS) module group260and a response module group270. The connection engine205receives incoming database connection requests from applications120A-N (e.g., publication system121or client application107) and adds or terminates open database connections in the connection pool130(FIG.1) based on demand or instructions from the other modules. In at least one example embodiment, the connection engine205receives inputs from the QOS module group260and the response module group270, modifies the number of open database connections in the connection pool based on the received inputs from the groups. Furthermore, the connection engine205modifies connections and traffic flow among the multiple databases140A-N based upon detection of impending database failure or instability, as will be described in greater detail below. A metrics engine207records metrics describing connections to the databases140A-N and stores the recorded metrics as connection metrics data. The connection metrics data includes wait time data, connection use time data, and request frequency data, according to some example embodiments. The wait time data describes how long past database connection requests had to wait before receiving a database connection (e.g., how long before being serviced by the connection engine). The connection use time data describes how long past database connections were open or used by an application. The request frequency data describes the rate at which incoming database connection requests are received from the applications. The QOS module group260is responsible for monitoring database connection requests, analyzing connection pool metrics, and generating instructions for the pool connection engine205to open, close, or throttle the amount of newly created connections. As illustrated, the QOS module group260comprises a wait time engine210, a load level engine220, and a throttle engine230. 
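As a rough, illustrative sketch of how the QOS module group described in detail below might act on the recorded connection metrics, the following Python fragment shows a wait-time check feeding a throttled connection-creation decision; the class name, parameter names, and threshold values are assumptions, not the patent's API.

    class QosModuleGroup:
        """Illustrative sketch (names and thresholds assumed): decide how many new
        database connections to open for a database, based on recorded wait times,
        with the throttle step limiting how many are opened at once."""

        def __init__(self, wait_time_threshold_ms=1000, throttle_step=5):
            self.wait_time_threshold_ms = wait_time_threshold_ms
            self.throttle_step = throttle_step

        def connections_to_open(self, recent_wait_times_ms):
            # Wait time engine: count recent requests that waited too long.
            violations = sum(1 for w in recent_wait_times_ms
                             if w > self.wait_time_threshold_ms)
            if violations == 0:
                return 0
            # Throttle engine: open new connections gradually, in steps.
            return min(violations, self.throttle_step)

    # Example: three recent requests exceeded the wait time threshold, so up to
    # three new connections are requested (capped by the throttle step).
    group = QosModuleGroup()
    print(group.connections_to_open([120, 1500, 90, 2400, 1100]))  # -> 3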
Still referring toFIG.2, each of the modules is discussed in further detail below, but is explained here briefly in a high-level manner. The wait time engine210determines, for each of the databases140A-N (FIG.1), whether the time that past database connection requests waited before being serviced surpasses a wait time threshold. If the wait time threshold is exceeded, the wait time engine210instructs the connection engine205to create new database connections in the connection pool130. The load level engine220implements an equilibrium equation that, for each of the databases140A-N (FIG.1), uses database connection traffic density to calculate a load level. If the load level limit is exceeded, the load level engine220instructs the connection engine205to create new database connections to the respective database in the connection pool130. The throttle engine230works as a type of safeguard against too many new connections being created at a time. For example, according to one example embodiment, the throttle engine analyzes, for each of the databases140A-N (FIG.1), the number of new connections being ordered open by the wait time engine210and load level engine220and throttles the number of connections being opened in steps, thereby alleviating an overloaded database instance. The response module group270is responsible for correcting inefficient predictions and/or handling of new database connections for connection pool management as ordered by modules of the QOS module group260. The response module group270is further responsible for analyzing metrics data to determine, for each database, if the database is becoming unstable or going to fail, which would lead to service interruptions and significant QOS violations. As illustrated, the response module group270comprises a magnitude violation engine240, a frequency violation engine250, and a database down detection engine255. Generally, applications requesting database connections may subscribe to different quality-of-service or service-level agreements (SLAs), whereby developers/administrators of the applications may pay increasingly more money for increasingly higher performance database access. In some implementations, the quality of service may be arranged in tiers, for example, bronze tier database access, silver tier database access, and gold tier database access, where silver tier access applications are assured better database performance than bronze tier access applications, and gold tier access applications are assured better database performance than applications subscribed to the silver and bronze tiers. Although three tiers are discussed as an example here, it is appreciated that any number of tiers and granularity can be implemented in a similar manner. The magnitude violation engine240is configured to determine whether QOS levels for applications are being violated and by how much (e.g., by what magnitude). If QOS violations of significant magnitude are found in the metrics data, the magnitude violation engine240instructs the wait time engine210, load level engine220, and throttle engine230to take corrective actions (e.g., by lowering the wait time threshold, lowering the load level, and by increasing the amount of throttling, respectively). The frequency violation engine250is configured to determine whether QOS levels for applications are frequently being violated. For example, while a single large magnitude violation may not have occurred, a number of small violations may have occurred over a small period of time. 
Repetitive violations, regardless of their size, can signify that the connection creation/termination instructions ordered by the QOS modules are inefficient or problematic. Responsive to determining that too many QOS violations are occurring in too small an amount of time, the frequency violation engine250may instruct the QOS module group260to take corrective actions (e.g., by lowering the wait time threshold, lowering the load level, and increasing the amount of throttling). The database down detection engine255is configured to determine, for each database140A-N, if metrics data is showing early signs that a particular database is becoming unstable or leading to partial or full failure. For example, increased response times during a certain quantity of intervals over a time period is a condition found to be consistent with a database that is becoming unstable and is going to fail. If the database down detection engine255determines that one of the databases is going to fail, it can instruct the connection engine to implement adjustments to maintain optimal system performance, for example by redirecting requests to an alternate database or shedding connections, as will be described in greater detail with reference toFIGS.3-7. Hereinafter, a more detailed discussion of the operation of the systems and components described above is provided with reference to flow diagrams. As illustrated inFIGS.3,5,6, and7, aspects of routines300,500,600and700provide database health detection, mitigating actions to adjust connections that may be triggered, and reloading of a database that had been previously unavailable. It should be understood that the operations of the routines and methods disclosed herein are not necessarily presented in any particular order and that performance of some or all of the operations in an alternative order(s) is possible and is contemplated. The operations have been presented in the demonstrated order for ease of description and illustration. Operations may be added, omitted, and/or performed simultaneously, without departing from the scope of the appended claims. It also should be understood that the illustrated routines can end at any time and need not be performed in their entireties. Some or all operations of the routines, and/or substantially equivalent operations, can be performed by execution of computer-readable instructions included on computer-storage media, as defined below. The term “computer-readable instructions,” and variants thereof, as used in the description and claims, is used expansively herein to include routines, applications, application modules, program modules, programs, components, data structures, algorithms, and the like. Computer-readable instructions can be implemented on various system configurations, including single-processor or multiprocessor systems, minicomputers, mainframe computers, personal computers, hand-held computing devices, microprocessor-based or programmable consumer electronics, combinations thereof, and the like. Thus, it should be appreciated that the logical operations described herein are implemented (1) as a sequence of computer implemented acts or program modules running on a computing system and/or (2) as interconnected machine logic circuits or circuit modules within the computing system. The implementation is a matter of choice dependent on the performance and other requirements of the computing system. 
Accordingly, the logical operations described herein are referred to variously as states, operations, structural devices, acts, or modules. These operations, structural devices, acts, and modules may be implemented in software, in firmware, in special purpose digital logic, or any combination thereof. For example, the operations of the routines300,500,600and700are described herein as being implemented, at least in part, by system components, which can comprise an application, component and/or a circuit. In some configurations, the system components include a dynamically linked library (DLL), a statically linked library, functionality produced by an application programming interface (API), a compiled program, an interpreted program, a script or any other executable set of instructions. Data, such as the connection pool metrics data and other data described herein, can be stored in a data structure in one or more memory components. Data can be retrieved from the data structure by addressing links or references to the data structure. Although the following description refers to the elements ofFIGS.1,2and4, it can be appreciated that the operations of the routines300,500,600, and700may be also implemented in many other ways. For example, the routines300,500,600, and700may be implemented, at least in part, by a processor of another remote computer or a local circuit. In addition, one or more of the operations of the routines300,500,600, and700may alternatively or additionally be implemented, at least in part, by a chipset working alone or in conjunction with other software modules. Any service, circuit or application suitable for providing the techniques disclosed herein can be used in operations described herein. FIG.3is a flow diagram of a routine300to detect early signs of impending database failure or instability and to implement related operations such as connection adjustments and database reloading to optimize traffic handling in a networked data system, according to some example embodiments. The routine300begins at operation310, where the connection adjuster135receives database connection requests from applications. At320, the database down detection engine255accesses wait time data from the connection pool metrics data. The database down detection engine255analyzes the metrics data to determine, for each database, if a wait time limit is being repeatedly exceeded with a degree of consistency over a series of time intervals. A change in query response times alone does not necessarily suggest an unreliable database condition, as there is a wide variance in response times from a database that is operating normally. For example, a temporary spike in traffic ingress might result in excessive wait times momentarily, but a normally operating database can return to operating within normal wait times. Thus, according to exemplary techniques, wait times are not merely analyzed at specific moments, but are analyzed over a series of intervals in order to identify conditions consistent with a database that is not healthy. For example, at operation330, the database down detection engine255determines, based on the metrics data, if a wait time of at least one connection event to the database exceeded a wait time limit during the latest, most recent time interval. A suitable amount of time for the time interval can be, for example, one second; however, the interval could be a longer or shorter time duration. 
Intervals analyzed at operation330are identified as having one of two conditions, for purposes of this discussion referred to as being either “clean” or “dirty”. A dirty interval includes at least one connection wait time event that exceeded the wait limit, whereas a “clean” interval includes zero connection wait time events that exceeded the wait limit. A wait time that exceeds the wait limit is a wait time violation event, indicating that the wait time exceeds an acceptable normal range for quality of service. In practice, the range of normal wait times varies widely depending on the particular database, equipment, and operating environment. By way of example, however, for some database implementations a suitable wait limit may be 1000 ms. At operation340, upon each new time interval, the intervals are assessed over a time evaluation window. The time evaluation window is defined by a fixed number of time intervals in a series, including the latest interval and previously occurring intervals spanning the duration of the time evaluation window. More particularly, the dirty intervals during the time window are counted to yield a quantity. Operation350determines if the quantity of dirty intervals during the current time window exceeds a count threshold. If the quantity does not exceed the count threshold, the database is determined to be healthy, and database connections continue as normal. If the quantity exceeds the count threshold, the database is considered to be unhealthy, as the wait time conditions over the time window reflect early signs that the database is becoming unstable or will fail. As a result of the determination that the database is unhealthy, the connection engine205treats the database as unavailable for handling incoming data requests. The designation of a database as unhealthy at operation350is a predictive flag that the database, while still operational, is in a stage of partial failure that may worsen if current conditions continue. This predictive information enables the implementation of preemptive corrective adjustments in an effort to avoid a full failure of the database while minimizing service disruptions of the overall data system. Such preemptive corrective adjustments are also referred to herein as mitigating actions. At operation360, the determination that the database is unhealthy (operation350) triggers a mitigating action implemented by the connection engine205(FIG.2). A mitigating action, sometimes referred to as a failover action, can be any of a variety of connection adjustment actions expected to yield one or more overall system performance improvements in view of problems with the database, such as to alleviate load on the failing database, cause requests to connect to an alternate database, or preserve persistent connections for high priority data requests under partial failure conditions where the database can still operate to service some connections. Exemplary operations360A and360B are described below as processes500and600with reference toFIGS.5and6. Referring still toFIG.3, operation370monitors the health of a database that has been flagged as unhealthy. Among other tests, this can include sending dummy connection requests to the database to see if a connection is available, and, if so, whether the wait time is within normal ranges. The database down detection engine255(FIG.2) periodically checks one or more health parameters of the database140A-N (FIG.1). Operation380determines if the database has recovered sufficiently to handle normal traffic. 
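By way of illustration only, the interval evaluation of operations330-350can be sketched in a few lines of code. The following is a minimal sketch and not the implementation of the database down detection engine255; the class name IntervalHealthChecker, the method names, and the default parameter values are assumptions chosen to mirror the wait limit, window size, and count threshold discussed herein:

    from collections import deque

    class IntervalHealthChecker:
        """Minimal sketch of the dirty-interval evaluation (illustrative names and values)."""

        def __init__(self, wait_limit_ms=1000.0, window_size=30, count_threshold=20):
            self.wait_limit_ms = wait_limit_ms            # wait limit, e.g. 1000 ms
            self.count_threshold = count_threshold        # dirty intervals tolerated per window
            self.intervals = deque(maxlen=window_size)    # True = dirty, False = clean
            self.current_interval_dirty = False

        def record_wait_time(self, wait_ms):
            """Record one connection wait event observed during the current interval."""
            if wait_ms > self.wait_limit_ms:
                self.current_interval_dirty = True        # one violation makes the interval dirty

        def close_interval(self):
            """Close the current interval (e.g. once per second) and evaluate the window.

            Returns True while the database is considered healthy and False once the
            quantity of dirty intervals in the window exceeds the count threshold.
            """
            self.intervals.append(self.current_interval_dirty)
            self.current_interval_dirty = False
            dirty_count = sum(self.intervals)
            return dirty_count <= self.count_threshold

With a thirty-interval window and a count threshold of twenty, as in the example ofFIG.4discussed below, twenty-one dirty intervals within the window would cause the sketch to report the database as unhealthy.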
If the database has not recovered, operation370resumes monitoring the health of the database. If the database has recovered, at operation390the database is reloaded. Various techniques are possible for reloading the database at operation390, and one example is described below with reference toFIG.7. Turning toFIG.4, an exemplary series400of time intervals401A-FF covering a time evaluation window is illustrated. Each of the time intervals401A-FF has a time duration t, for example, one second. The time evaluation window comprises a number W of intervals, so that the time evaluation window covers a time span W(t). Where W is thirty intervals and each interval t=1 second, as in the example shown, the time window covers thirty seconds. As illustrated, interval401A shown at the top ofFIG.4is the latest and most recent, and previously occurring intervals401B-401FF are sequentially older. The time window W(t) is shown as spanning thirty intervals, from the most recent interval401A to the thirtieth interval401DD. Time intervals401EE and401FF at the bottom of the series are more than thirty seconds old, and are thus outside of the time window. Upon each new time interval, a new interval takes the place of401A and the stack moves down. FIG.4further indicates, next to each of the time intervals401A-FF, the number of connection events that exceeded the wait limit during the interval, and whether the interval is “clean” or “dirty” as a result. For example, time interval401A had 1 connection wait time event that exceeded the wait limit and is labeled “dirty”, time interval401B had zero connection wait time events that exceeded the wait limit and is labeled “clean”, time interval401C had 90 connection wait time events that exceeded the wait limit and is labeled “dirty”, and so on. In the illustrated example, the time window includes a quantity of twenty-one dirty time intervals:401A,401C,401D,401G,401H,401I,401K-Q,401S-U,401W,401X,401Z,401AA, and401DD. Where the count limit is set as twenty, the series400of intervals would result in a determination that the database is unhealthy, as the intervals401A-DD covering the time window include twenty-one intervals in which the wait limit was exceeded at least once. In some exemplary embodiments, within a given time interval, the volume of connection wait time events that exceeded the wait limit is not considered, as the interval is counted as dirty so long as at least one such event occurred. Embodiments are possible wherein the volume of wait time violations is considered, in order to identify temporary surges in traffic and to assess wait time violation patterns to determine whether the violations are likely temporary, due to surge queues, or are being caused by a failing database. Values for the time interval duration, time evaluation window duration, count limit, and wait limit are described herein as examples; it should be noted that different databases exhibit different normal behavior characteristics depending on the type of database, equipment, resources and operating environment, and thus there can be a wide variance in normal wait times and normal database performance behaviors depending on such factors. Thus, in order to identify metrics patterns that predict failing performance, the wait time limit, interval duration, and time window duration will vary and in practice must be set to suit the particular situation. In an embodiment, these parameters can be set and adjusted as needed by a system administrator. 
Turning toFIG.5, exemplary operation360A to trigger a mitigating action is illustrated as process500. Following a determination that the database is unhealthy at operation350, process500executes operation510whereby the connection adjuster135notifies the requesting applications and application servers that the unhealthy database is not available. At operation520, new resource requests are directed to at least one different database providing an alternate resource to fulfill the requests. In an embodiment, the connection engine205updates the connection pool130accordingly, for example to open new connections to the different database as new requests are generated and to close some or all open connections to the unhealthy database. InFIG.6, another exemplary operation360B to trigger a mitigating action involving tiered shedding of connections is illustrated as process600. By shedding connections in tiers, an overloaded database may have an opportunity to recover while remaining operational, continuing to provide service to priority connections while reversing the declining performance that caused it to be designated as unhealthy at operation350(FIG.3). In the process600, each database connection request is assigned one of a plurality of ranks. The plurality of ranks define priority levels for maintaining the database connection. The assignment of ranks to requests may be a function of the application server; for example, the ranks may be consistent with the bronze, silver, and gold tier database access described above in connection withFIG.1. Of course, it should be understood that any number of ranks may be defined, so long as the ranks define a hierarchy of priority for database access. The ranks include at least a lowest rank and a next-higher rank (e.g., a rank above the lowest rank). In the three-tier bronze-silver-gold system, for example, the lowest rank may be bronze, and the next-higher rank may be silver. The highest rank, for example, gold, represents a highest priority group of connections for which a connection must be provided and maintained above all other tiers. Following a determination that the database is unhealthy at operation350, operation610designates a degree of unhealthiness for the database. At operations620and640, the designated degree of unhealthiness is measured against shed thresholds. As will be explained below, exceeding the shed thresholds results in fast shedding of connections from the database in tiered groups associated with the priority ranks. Different tests may be applied at operation610to designate the degree of unhealthiness of the database. One technique can be based at least in part upon the quantity of dirty intervals within the current time evaluation window. An alternate technique for determining a degree of unhealthiness at operation610may be based upon volumes of wait time violations within the dirty intervals, or more particularly the number of individual dirty intervals that have a volume of wait time violations in excess of a volume limit, such as fifty, one hundred, and so on. Such a volume limit can be set as appropriate for the environment. The degree of unhealthiness may be a value on a numerical scale, such as the number of dirty intervals, or the number of dirty intervals with volumes over a volume threshold. Shed thresholds are provided to trigger shedding of tiers of connections according to their ranks. As shown inFIG.6at operation620, the degree of unhealthiness is compared against a lowest shed threshold. 
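The tiered comparison and shedding of process600can be sketched as follows. This is a minimal sketch rather than the implementation of the connection server125or connection engine205; the function name shed_connections_by_rank, the terminate callback, and the example threshold values are assumptions introduced only for illustration:

    def shed_connections_by_rank(degree_of_unhealthiness, shed_thresholds,
                                 connections_by_rank, terminate):
        """Terminate connections in tiers, lowest-priority rank first (illustrative sketch).

        shed_thresholds maps a rank to the degree of unhealthiness above which that
        rank is shed, e.g. {"bronze": 24, "silver": 26}; the highest rank (e.g. gold)
        has no entry and is therefore always preserved. connections_by_rank maps each
        rank to its currently open connections, and terminate closes one connection.
        """
        shed_ranks = []
        # Evaluate ranks from the lowest shed threshold upward, mirroring operations 620 and 640.
        for rank, threshold in sorted(shed_thresholds.items(), key=lambda item: item[1]):
            if degree_of_unhealthiness > threshold:
                for connection in connections_by_rank.get(rank, []):
                    terminate(connection)
                shed_ranks.append(rank)
        return shed_ranks

Using the illustrative values discussed below, a degree of unhealthiness of twenty-five would exceed only an assumed bronze threshold of twenty-four, shedding the bronze connections, while a degree of twenty-seven would also exceed an assumed silver threshold of twenty-six, shedding the silver connections as well; gold connections would be preserved in either case.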
If the degree of unhealthiness is greater than the lowest shed threshold, operation630is invoked to terminate connections with the lowest rank. The shed threshold may be set as a value on the unit scale used for the degree of unhealthiness. For example, in a case where the degree of unhealthiness is based on the quantity of dirty intervals within the time evaluation window, the degree may be designated as twenty-five when the quantity is twenty-five. If the lowest shed threshold is twenty-four, operation620determines that the degree exceeds the lowest shed threshold, thereby triggering operation630. As shown inFIG.6at operation630, the connection server125terminates the connections with the lowest rank from the database, for example the connections assigned a bronze rank. If the degree does not exceed the lowest shed threshold at operation620, the process goes to operation370of routine300(FIG.3). In the case where the lowest shed threshold is exceeded at operation620, operation640is also invoked. Operation640determines if the degree of unhealthiness exceeds a next-higher threshold. For example, in a case wherein the degree of unhealthiness is based on the quantity of dirty intervals in the current time evaluation window, the degree may be designated as twenty-seven when the quantity equals twenty-seven. If the next-higher shed threshold is twenty-six, operation640determines that the degree exceeds the next-higher shed threshold, thereby invoking operation650. At operation650, the connection server125terminates the connections at the database having the next-higher rank, such as the connections assigned the silver rank. If operation640determines the degree does not exceed the next-higher shed threshold, the process goes to operation370of routine300(FIG.3). FIG.7shows an example of operation390(FIG.3) as implementing a process700to reload the database after it has recovered and is ready for normal operation in the system. The process700enables a graceful reentry into service by staggering waiting connection requests seeking to connect with the database. As mentioned above, operation380has determined that the database meets performance parameters and is ready to be reintroduced to the system. In some high traffic systems, a sudden onrush of all connection requests to a cold database can overload certain operations and create a new failure. To avoid such a problem, at operation710the connection engine205(FIG.2) allows a portion of waiting connection requests to connect immediately to the database. By initially allowing only some of the requests, the database has an opportunity to ramp up, reestablish its internal systems, such as cache operations, and achieve stable operation without being overloaded. A remainder of the connections are caused to wait for a delay period at operation720, allowing sufficient time for the database to warm up while processing the initial portion of requests. After the delay period has ended, operation730allows the remainder of the requests to connect to the database. The portion of requests allowed to initially connect can be a predetermined percentage of requests that will suitably reestablish functional operation of the database components while presenting low risk of overloading the cold systems. 
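A minimal sketch of the staggered reload of process700is given below. It is illustrative only and is not the implementation of the connection engine205; the function name reload_database, the connect callback, and the default fraction and delay values are assumptions consistent with the illustrative figures discussed next:

    import random
    import time

    def reload_database(waiting_requests, connect, initial_fraction=0.15, delay_seconds=10):
        """Stagger reentry of waiting requests to a recovered database (illustrative sketch)."""
        pending = list(waiting_requests)
        random.shuffle(pending)  # requests may instead be selected by priority rank or other criteria
        cutoff = max(1, int(len(pending) * initial_fraction)) if pending else 0
        initial_portion, remainder = pending[:cutoff], pending[cutoff:]

        for request in initial_portion:    # operation 710: a portion connects immediately
            connect(request)
        if remainder:
            time.sleep(delay_seconds)      # operation 720: the remainder waits for the delay period
            for request in remainder:      # operation 730: the remainder connects after the delay
                connect(request)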
Although a suitable amount of such initial traffic will vary depending on various factors specific to the operating environment, a suitable portion of requests may be 15% of the total requests allowed to initially connect at operation710, while 85% of the total requests are caused to wait during the delay period at operation720. In some embodiments, the requests can be selected randomly, or in other implementations some other criteria may be utilized to select the requests, such as their priority rank. A suitable delay period may also vary depending on the specific operating environment, but in one example a suitable delay period is about ten seconds to ready the database to handle the full onrush of traffic. FIG.8is a block diagram illustrating components of a machine800, according to some example embodiments, able to read instructions from a machine-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically,FIG.8shows a diagrammatic representation of the machine800in the example form of a computer system, within which instructions816(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine800to perform any one or more of the methodologies discussed herein can be executed. For example, the instructions816can cause the machine800to execute the flow diagrams ofFIGS.3and5-7. Additionally, or alternatively, the instructions816can implement the pool connection engine205, pool metrics engine207, wait time engine210, load level engine220, throttle engine230, magnitude violation engine240, frequency violation engine250, and so forth. The instructions816transform the general, non-programmed machine into a particular machine programmed to carry out the described and illustrated functions in the manner described. In alternative example embodiments, the machine800operates as a standalone device or can be coupled (e.g., networked) to other machines. In a networked deployment, the machine800may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine800can comprise, but not be limited to, a server computer, a client computer, a PC, a tablet computer, a laptop computer, a netbook, a set-top box (STB), a PDA, an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions816, sequentially or otherwise, that specify actions to be taken by the machine800. Further, while only a single machine800is illustrated, the term “machine” shall also be taken to include a collection of machines800that individually or jointly execute the instructions816to perform any one or more of the methodologies discussed herein. The machine800can include processors810, memory/storage830, and input/output (I/O) components850, which can be configured to communicate with each other such as via a bus802. 
In an example embodiment, the processors810(e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) can include, for example, processor812and processor814that may execute instructions816. The term “processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that can execute instructions contemporaneously. AlthoughFIG.8shows multiple processors810, the machine800may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof. The memory/storage830can include a memory832, such as a main memory or other memory storage, and a storage unit836, both accessible to the processors810such as via the bus802. The storage unit836and memory832store the instructions816embodying any one or more of the methodologies or functions described herein. The instructions816can also reside, completely or partially, within the memory832, within the storage unit836, within at least one of the processors810(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine800. Accordingly, the memory832, the storage unit836, and the memory of the processors810are examples of machine-readable media. As used herein, the term “machine-readable medium” means a device able to store instructions and data temporarily or permanently and may include, but is not limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, optical media, magnetic media, cache memory, other types of storage (e.g., Electrically Erasable Programmable Read-Only Memory (EEPROM)), or any suitable combination thereof. The term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions816. The term “machine-readable medium” shall also be taken to include any medium, or combination of multiple media, that is capable of storing instructions (e.g., instructions816) for execution by a machine (e.g., machine800), such that the instructions, when executed by one or more processors of the machine800(e.g., processors810), cause the machine800to perform any one or more of the methodologies described herein. Accordingly, a “machine-readable medium” refers to a single storage apparatus or device, as well as “cloud-based” storage systems or storage networks that include multiple storage apparatus or devices. For the purposes of the claims, the phrases “machine-readable medium,” “computer storage medium,” “computer-readable storage medium,” and variations thereof, do not include waves or signals per se. The I/O components850can include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components850that are included in a particular machine will depend on the type of machine. 
For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components850can include many other components that are not shown inFIG.8. The I/O components850are grouped according to functionality merely for simplifying the following discussion, and the grouping is in no way limiting. In various example embodiments, the I/O components850can include output components852and input components854. The output components852can include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The input components854can include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instruments), tactile input components (e.g., a physical button, a touch screen that provides location and force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. In further example embodiments, the I/O components850can include biometric components856, motion components858, environmental components860, or position components862among a wide array of other components. For example, the biometric components856can include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram based identification), and the like. The motion components858can include acceleration sensor components (e.g., an accelerometer), gravitation sensor components, rotation sensor components (e.g., a gyroscope), and so forth. The environmental components860can include, for example, illumination sensor components (e.g., a photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., a barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensor components (e.g., machine olfaction detection sensors, gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components862can include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication can be implemented using a wide variety of technologies. 
The I/O components850may include communication components864operable to couple the machine800to a network880or devices870via a coupling882and a coupling872, respectively. For example, the communication components864include a network interface component or other suitable device to interface with the network880. In further examples, communication components864include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, BLUETOOTH® components (e.g., BLUETOOTH® Low Energy), WI-FI® components, and other communication components to provide communication via other modalities. The devices870may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a Universal Serial Bus (USB)). Moreover, the communication components864can detect identifiers or include components operable to detect identifiers. For example, the communication components864can include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as a Universal Product Code (UPC) bar code, multi-dimensional bar codes such as a Quick Response (QR) code, Aztec Code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, Uniform Commercial Code Reduced Space Symbology (UCC RSS)-2D bar codes, and other optical codes), acoustic detection components (e.g., microphones to identify tagged audio signals), or any suitable combination thereof. In addition, a variety of information can be derived via the communication components864, such as location via Internet Protocol (IP) geo-location, location via WI-FI® signal triangulation, location via detecting a BLUETOOTH® or NFC beacon signal that may indicate a particular location, and so forth. In various example embodiments, one or more portions of the network880can be an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, the Internet, a portion of the Internet, a portion of the PSTN, a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a WI-FI® network, another type of network, or a combination of two or more such networks. For example, the network880or a portion of the network880may include a wireless or cellular network, and the coupling882may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or other type of cellular or wireless coupling. In this example, the coupling882can implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard setting organizations, other long range protocols, or other data transfer technology. 
The instructions816can be transmitted or received over the network880using a transmission medium via a network interface device (e.g., a network interface component included in the communication components864) and utilizing any one of a number of well-known transfer protocols (e.g., Hypertext Transfer Protocol (HTTP)). Similarly, the instructions816can be transmitted or received using a transmission medium via the coupling872(e.g., a peer-to-peer coupling) to devices870. The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying the instructions816for execution by the machine800, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software. Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein. Although an overview of the inventive subject matter has been described with reference to specific example embodiments, various modifications and changes may be made to these example embodiments without departing from the broader scope of example embodiments of the present disclosure. Such example embodiments of the inventive subject matter may be referred to herein, individually or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single disclosure or inventive concept if more than one is, in fact, disclosed. The example embodiments illustrated herein are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed. Other example embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various example embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled. As used herein, the term “or” may be construed in either an inclusive or exclusive sense. Moreover, plural instances may be provided for resources, operations, or structures described herein as a single instance. Additionally, boundaries between various resources, operations, modules, engines, and data store are somewhat arbitrary, and particular operations are illustrated in a context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within a scope of various example embodiments of the present disclosure. In general, structures and functionality presented as separate resources in the example configurations may be implemented as a combined structure or resource. 
Similarly, structures and functionality presented as a single resource may be implemented as separate resources. These and other variations, modifications, additions, and improvements fall within a scope of example embodiments of the present disclosure as represented by the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The disclosure presented herein may be considered in view of the following examples. Example A: A computer-implemented method of processing a connection request, comprising: receiving database connection requests from a plurality of application servers and directing the database connection requests to a first database; accessing metrics data including a wait time for each database connection request describing how long the request waited until being serviced by an open connection at a first database; determining if at least one of the wait times occurring during a time interval exceeds a wait limit; counting, during a time window comprising a series of intervals, a quantity of the intervals in which the wait limit was exceeded at least once; determining that the first database is unhealthy if the quantity exceeds a predetermined count threshold during the time window; and triggering at least one mitigating action if the first database is determined to be unhealthy. Example B: The computer implemented method of Example A, wherein the at least one mitigating action includes redirecting new connection requests to a second database. Example C: The computer-implemented method of Example A, wherein the at least one mitigating action includes notifying the plurality of application servers that the first database is unavailable for connection requests. Example D: The computer-implemented method of Example A, wherein the time window is a fixed amount of intervals, the time window being refreshed at each new interval. Example E: The computer-implemented method of Example A, wherein the metrics data further includes a volume of requests received by the first database per interval, and wherein intervals during which the volume of requests exceeds a volume threshold are excluded from the quantity of the counting operation. Example F: The computer-implemented method of Example A, further comprising: periodically checking at least one performance parameter of the first database to determine if the database is healthy, and, if the database is determined to be healthy, allowing a predetermined portion of connection requests from the plurality of application servers to connect to the first database, while causing a remainder of the requests to wait a predetermined delay period before connecting those requests to the first database. Example G: The computer-implemented method of Example F, wherein the predetermined portion of connections allowed to connect is about 15%, and the predetermined delay period is about 10 seconds. Example H: The computer-implemented method of Example F, wherein the performance parameter includes the wait time for at least one dummy connection request, and wherein the first database is determined to be healthy if the wait time does not exceed the wait limit. 
Example I: The method of Example A, wherein each database connection request has one of a plurality of ranks, the plurality of ranks defining priority for maintaining the connection, the ranks at least including a lowest rank and a next-higher rank; the method further comprising: when the first database is determined to be unhealthy, designating a degree of unhealthiness based on one or more factors including by how much the quantity exceeds the predetermined count threshold during the time window; wherein the at least one mitigating action includes: terminating connections assigned the lowest rank if the degree of unhealthiness exceeds a first shed threshold corresponding to the lowest rank; and terminating connections assigned to the next-higher rank if the degree of unhealthiness exceeds a second shed threshold corresponding to the next-higher rank. Example J: A system comprising: a processor; and a memory in communication with the processor, the memory having computer-readable instructions stored thereupon that, when executed by the processor, cause the processor to: receive database connection requests from a plurality of application servers and direct the database connection requests to a first database; access metrics data including a wait time for each database connection request describing how long the request waited until being serviced by an open connection at the first database; determine if at least one of the wait times occurring during a time interval exceeds a wait limit; count, during a time window comprising a series of the time intervals, a quantity of the time intervals in which the wait limit was exceeded at least once; determine that the first database is unhealthy if the quantity exceeds a predetermined count threshold during the time window; and trigger at least one mitigating action if the first database is determined to be unhealthy. Example K: The system of Example J, wherein the at least one mitigating action includes redirecting new connection requests to a second database. Example L: The system of Example J, wherein the at least one mitigating action includes notifying the plurality of application servers that the first database is unavailable for connection requests. Example M: The system of Example J, wherein the time window is a fixed amount of intervals, the time window being refreshed at each new interval. Example N: The system of Example J, wherein the metrics data further includes a volume of requests received by the first database per interval, and wherein intervals during which the volume of requests exceeds a volume threshold are excluded from the quantity of the counting operation. Example O: The system of Example J, wherein the instructions further cause the processor to: periodically check at least one performance parameter of the first database to determine if the database is healthy, and if the database is determined to be healthy, allow a predetermined portion of connection requests from the plurality of application servers to connect to the first database, while causing a remainder of the requests to wait a predetermined delay period before connecting those requests to the first database. Example P: The system of Example O, wherein the predetermined portion of connection requests allowed to connect is about 15%, and wherein the predetermined delay is about 10 seconds. 
Example Q: The system of Example O, wherein the performance parameter includes the wait time for at least one dummy connection request, and wherein the database is determined to be healthy if the wait time does not exceed the wait limit. Example R: The system of Example O, wherein each database connection request has one of a plurality of ranks, the plurality of ranks defining priority for maintaining the connection, the ranks at least including a lowest rank and a next-higher rank, wherein the instructions further cause the processor to: when the first database is determined to be unhealthy, designate a degree of unhealthiness based on one or more factors including by how much the quantity exceeds the predetermined count threshold during the time window; wherein the at least one mitigating action includes: terminating connections assigned the lowest rank if the degree of unhealthiness exceeds a first shed threshold corresponding to the lowest rank; and terminating connections assigned to the next-higher rank if the degree of unhealthiness exceeds a second shed threshold corresponding to the next-higher rank. Example S: A system comprising: one or more processors of a machine; and a memory storing instructions that, when executed by the one or more processors, cause the machine to operate a connection service operable to: receive connection requests from a plurality of application servers, the connection requests requesting connections to a database, each connection request having one of a plurality of ranks, the plurality of ranks reflecting a range of priority of maintaining a connection, the ranks at least including a lowest rank and a next-higher rank; access metrics data relating to performance of the database; determine based on the metrics data if the database is decreasing in performance and designating a degree of unhealthiness based on a magnitude of the decrease; terminate connections assigned the lowest rank if the degree of unhealthiness exceeds a first shed threshold; and terminate connections assigned to the next-higher rank if the degree of unhealthiness exceeds a second shed threshold. Example T: The system of Example S, wherein the metrics data includes wait times describing how long requests waited until being serviced by an open connection at the database; wherein the instructions further cause the machine to: determine if at least one of the wait times occurring during a time interval exceeds a wait limit; and count, during a time window comprising a series of the time intervals, a quantity of the time intervals in which the wait limit was exceeded at least once; wherein to determine based on the metrics data if the database is decreasing in performance includes determining that the quantity exceeds a predetermined count threshold during the time window; and wherein designating a degree of unhealthiness is based at least in part on a volume of responses in which wait times exceed the wait limit during the time window. In closing, although the various embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended representations is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as example forms of implementing the claimed subject matter.
61,611
11860729
DETAILED DESCRIPTION In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding. One or more embodiments may be practiced without these specific details. Features described in one embodiment may be combined with features described in a different embodiment. In some examples, well-known structures and devices are described with reference to a block diagram form in order to avoid unnecessarily obscuring the present invention.1. GENERAL OVERVIEW2. ARCHITECTURAL OVERVIEW3. PREDICTIVE SYSTEM REMEDIATION4. ILLUSTRATIVE EXAMPLE5. MISCELLANEOUS; EXTENSIONS6. HARDWARE OVERVIEW7. COMPUTER NETWORKS AND CLOUD NETWORKS8. MICROSERVICE APPLICATIONS 1. General Overview One or more embodiments include techniques for predictive system remediation. Based on attributes associated with applications of one or more system-selected remedial actions to one or more problematic system behaviors in a system (e.g., a database system), the system determines a predicted effectiveness of one or more future applications of a remedial action to a particular problematic system behavior, as of one or more future times. The system determines that the predicted effectiveness of the one or more future applications of the remedial action is positive but does not satisfy one or more performance criteria. Responsive to determining that the predicted effectiveness is positive but does not satisfy the one or more performance criteria, the system generates a notification corresponding to the predicted effectiveness not satisfying the one or more performance criteria. The system applies the remedial action to the particular problematic system behavior, despite already determining that the predicted effectiveness of the one or more future applications of the remedial action does not satisfy the one or more performance criteria. One or more embodiments described in this Specification and/or recited in the claims may not be included in this General Overview section. 2. Architectural Overview FIG.1illustrates a system100in accordance with one or more embodiments. As illustrated inFIG.1, system100includes an administrative interface104, a self-healing system102, a data repository116, and various components thereof. In one or more embodiments, the system100may include more or fewer components than the components illustrated inFIG.1. The components illustrated inFIG.1may be local to or remote from each other. The components illustrated inFIG.1may be implemented in software and/or hardware. Each component may be distributed over multiple applications and/or machines. Multiple components may be combined into one application and/or machine. Operations described with respect to one component may instead be performed by another component. Additional embodiments and/or examples relating to computer networks are described below. In an embodiment, administrative interface104refers to hardware and/or software configured to facilitate communications between a user (e.g., a user in an administrative role and/or who performs administrative functions) and a self-healing system102. Administrative interface104renders user interface elements and receives input via user interface elements. Examples of interfaces include a graphical user interface (GUI), a command line interface (CLI), a haptic interface, and a voice command interface. 
Examples of user interface elements include checkboxes, radio buttons, dropdown lists, list boxes, buttons, toggles, text fields, date and time selectors, command lines, sliders, pages, and forms. In an embodiment, different components of administrative interface104are specified in different languages. The behavior of user interface elements is specified in a dynamic programming language, such as JavaScript. The content of user interface elements is specified in a markup language, such as hypertext markup language (HTML) or XML User Interface Language (XUL). The layout of user interface elements is specified in a style sheet language, such as Cascading Style Sheets (CSS). Alternatively, administrative interface104is specified in one or more other languages, such as Java, C, or C++. In an embodiment, self-healing system102refers to hardware and/or software configured to perform operations described herein for predictive system remediation. Examples of operations for predictive system remediation are described below. The self-healing system102is configured to ‘self-heal’ by detecting one or more problematic system behaviors and applying one or more system-selected remedial actions to the problematic system behavior(s), without requiring intervening user input to select and/or initiate the remedial action(s). The self-healing system102may be a self-healing database system that includes a database106. The self-healing system102may be configured to detect problematic system behaviors that affect the database106and apply one or more system-selected remedial actions to ‘heal’ the database. In an embodiment, a problematic system behavior may be any kind of behavior that affects access to the self-healing system102, data integrity of data stored by the self-healing system102, responsiveness of the self-healing system, and/or any other performance characteristic of the self-healing system102or combination thereof. A problematic system behavior may be associated with one or more system metrics, where different values of the metric(s) correspond to different system states, ranging from non-problematic system behaviors to problematic system behaviors. For example, the self-healing system102may track metrics corresponding to bandwidth usage, response times, processor utilization, storage availability, transaction rates, processing times, wait times, and/or any other kind of metric or combination thereof that quantifies a system behavior. A non-problematic system behavior corresponds to a system state in which the component(s) in question is/are functioning as intended. A problematic system behavior indicates that one or more components of the self-healing system102are in a degraded and/or non-functional state. For example, a problematic system behavior may indicate that one or more components of the self-healing system have failed, display symptoms of impending failure, and/or are not performing to expectations. Examples of problematic system behaviors may include, but are not limited to: bandwidth saturation; slow response time; high processor utilization; low storage space (e.g., disk space) availability; an abnormally high rate of requests and/or transactions per time unit (e.g., per second); slow processing time per transaction; abnormally long times spent in wait states (e.g., input/output wait times, processor wait times, etc.); and/or any other kind of problematic system behavior or combination thereof. 
In an embodiment, a remedial action may be any kind of action or combination thereof designed to remediate one or more problematic system behaviors. A remedial action may restart or reset a component of the self-healing system102(e.g., a database, a service, a virtual machine, an operating system, and/or any other component of the self-healing system102or combination thereof). Alternatively or additionally, a remedial action may provision additional resources (e.g., network bandwidth, processor cycles, memory, storage, and/or any other kind of resource or combination thereof) for the self-healing system102. For example, the self-healing system102may operate in a data center, a virtual machine environment, and/or any other kind of operating environment in which available resources are allocated between multiple physical and/or virtual systems. A remedial action may allocate free resources and/or reallocate resources from another system to the self-healing system102. Alternatively or additionally, a remedial action may reconfigure one or more components of the self-healing system. For example, if a network interface is saturated, a remedial action may impose a data rate limit on transactions conducted over that network interface. Many different kinds of remedial actions and/or combinations thereof may be applied to many different kinds of problematic system behaviors. In an embodiment, a remediation engine108refers to hardware and/or software configured to perform operations described herein for detecting a problematic system behavior, selecting a remedial action to apply to a problematic system behavior, and/or applying a remedial action to a problematic system behavior. The remediation engine108may be configured to monitor components of the self-healing system102(e.g., using polling, logging agents, a heartbeat system in which components periodically report their health status, and/or any other kind of monitoring or combination thereof). Based on the monitoring, the remediation engine108may detect a problematic system behavior. Responsive to detecting a problematic system behavior, the remediation engine108may select from a set of available remedial actions, which may have been designated as applicable to one or more particular problematic system behaviors. The remediation engine108may apply a system-selected remedial action to the problematic system behavior. In an embodiment, a remediation engine108is configured to predict the future effectiveness of one or more remedial actions for resolving one or more problematic system behaviors. Specifically, the remediation engine108may use information about prior applications of remedial actions to problematic system behaviors to predict future effectiveness of remedial actions. The remediation engine108may use information stored in a data repository116, described below. In an embodiment, the self-healing system102includes a machine learning engine109. Machine learning includes various techniques in the field of artificial intelligence that deal with computer-implemented, user-independent processes for solving problems that have variable inputs. The self-healing system102may be configured to use the machine learning engine109to perform one or more operations, described herein, to predict future effectiveness of one or more remedial actions. In an embodiment, the machine learning engine109trains a machine learning model110to perform one or more operations. 
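As a brief illustration of the detect-select-apply behavior of the remediation engine108described above, the following sketch is provided. It is a deliberately simplified assumption rather than the actual implementation; the metric names, thresholds, action names, and the poll_metrics and apply_action callbacks are invented for illustration only:

    import time

    # Illustrative system behavior definitions: a metric value at or beyond the
    # threshold (in the indicated direction) is treated as a problematic behavior.
    BEHAVIOR_DEFINITIONS = {
        "processor_utilization_pct": {"threshold": 95.0, "direction": "above"},
        "free_storage_gb": {"threshold": 5.0, "direction": "below"},
    }

    # Illustrative mapping from a problematic behavior to a system-selected remedial action.
    REMEDIAL_ACTIONS = {
        "processor_utilization_pct": "restart_service",
        "free_storage_gb": "provision_additional_storage",
    }

    def detect_problematic_behaviors(metrics):
        """Return the names of metrics whose current values meet a problematic definition."""
        problems = []
        for name, definition in BEHAVIOR_DEFINITIONS.items():
            value = metrics.get(name)
            if value is None:
                continue
            above = definition["direction"] == "above" and value >= definition["threshold"]
            below = definition["direction"] == "below" and value <= definition["threshold"]
            if above or below:
                problems.append(name)
        return problems

    def remediation_loop(poll_metrics, apply_action, poll_interval_seconds=60):
        """Monitor metrics, select a remedial action, and apply it without user input."""
        while True:
            metrics = poll_metrics()
            for behavior in detect_problematic_behaviors(metrics):
                apply_action(REMEDIAL_ACTIONS[behavior], behavior)
            time.sleep(poll_interval_seconds)

The training and use of the machine learning model110to predict the effectiveness of such remedial actions are described next.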
Training a machine learning model110uses training data to generate a function that, given one or more inputs to the machine learning model, computes a corresponding output. The output may correspond to a prediction based on prior machine learning. In an embodiment, the output includes a label, classification, and/or categorization assigned to the provided input(s). The machine learning model110corresponds to a learned model for performing the desired operation(s) (e.g., labeling, classifying, and/or categorizing inputs). In an embodiment, the machine learning engine109may use supervised learning, semi-supervised learning, unsupervised learning, reinforcement learning, and/or another training method or combination thereof. In supervised learning, labeled training data includes input/output pairs in which each input is labeled with a desired output (e.g., a label, classification, and/or categorization), also referred to as a supervisory signal. In semi-supervised learning, some inputs are associated with supervisory signals and other inputs are not associated with supervisory signals. In unsupervised learning, the training data does not include supervisory signals. Reinforcement learning uses a feedback system in which the machine learning engine109receives positive and/or negative reinforcement in the process of attempting to solve a particular problem (e.g., to optimize performance in a particular scenario, according to one or more predefined performance criteria). In an embodiment, the machine learning engine109initially uses supervised learning to train the machine learning model110and then uses unsupervised learning to update the machine learning model110on an ongoing basis. In an embodiment, a machine learning engine109may use many different techniques to label, classify, and/or categorize inputs. A machine learning engine109may transform inputs into feature vectors that describe one or more properties (“features”) of the inputs. The machine learning engine109may label, classify, and/or categorize the inputs based on the feature vectors. Alternatively or additionally, a machine learning engine109may use clustering (also referred to as cluster analysis) to identify commonalities in the inputs. The machine learning engine109may group (i.e., cluster) the inputs based on those commonalities. The machine learning engine109may use hierarchical clustering, k-means clustering, and/or another clustering method or combination thereof. In an embodiment, a machine learning engine109includes an artificial neural network. An artificial neural network includes multiple nodes (also referred to as artificial neurons) and edges between nodes. Edges may be associated with corresponding weights that represent the strengths of connections between nodes, which the machine learning engine109adjusts as machine learning proceeds. Alternatively or additionally, a machine learning engine109may include a support vector machine. A support vector machine represents inputs as vectors. The machine learning engine109may label, classify, and/or categorizes inputs based on the vectors. Alternatively or additionally, the machine learning engine109may use a naïve Bayes classifier to label, classify, and/or categorize inputs. Alternatively or additionally, given a particular input, a machine learning model may apply a decision tree to predict an output for the given input. 
Alternatively or additionally, a machine learning engine109may apply fuzzy logic in situations where labeling, classifying, and/or categorizing an input among a fixed set of mutually exclusive options is impossible or impractical. The aforementioned machine learning model110and techniques are discussed for exemplary purposes only and should not be construed as limiting one or more embodiments. In an embodiment, as a machine learning engine109applies different inputs to a machine learning model110, the corresponding outputs are not always accurate. As an example, the machine learning engine109may use supervised learning to train a machine learning model110. After training the machine learning model110, if a subsequent input is identical to an input that was included in labeled training data and the output is identical to the supervisory signal in the training data, then the output is certain to be accurate. If an input is different from inputs that were included in labeled training data, then the machine learning engine109may generate a corresponding output that is inaccurate or of uncertain accuracy. In addition to producing a particular output for a given input, the machine learning engine109may be configured to produce an indicator representing a confidence (or lack thereof) in the accuracy of the output. A confidence indicator may include a numeric score, a Boolean value, and/or any other kind of indicator that corresponds to a confidence (or lack thereof) in the accuracy of the output. In an embodiment, given a problematic system behavior and a candidate remedial action, a machine learning engine109may be configured to predict future effectiveness of the candidate remedial action for resolving the problematic system behavior. The machine learning engine109may be configured to detect and store patterns in system behaviors and/or prior applications of remedial actions to problematic system behaviors. The machine learning engine109may be configured to predict future system behaviors and/or effectiveness of remedial actions, based on those patterns. In an embodiment, the machine learning engine109is configured to detect and store seasonal patterns112. Seasonal patterns112are patterns of system behaviors associated with particular seasons, i.e., periods of time during which system behaviors may vary in relatively predictable ways due to seasonal factors. For example, holidays and sales events typically are associated with seasonal system behaviors. Seasonality is discussed in further detail in U.S. patent application Ser. No. 15/186,938, incorporated herein by reference in its entirety. In an embodiment, the machine learning engine109is configured to detect and store non-seasonal patterns114. Non-seasonal patterns114are patterns of system behaviors that are not associated with particular seasons. A non-seasonal pattern114may correspond to a trend in a system behavior (for example, increasing wait times) over time. Alternatively or additionally, a non-seasonal pattern114may correspond to a cyclical pattern in system behavior (for example, moving between long wait times and short wait times according to a discernable pattern) over time. Alternatively or additionally, seasonal patterns112and/or non-seasonal patterns114may reflect patterns of system behaviors when remedial actions are applied to resolve problematic system behaviors. Seasonal patterns112and/or non-seasonal patterns114may be based on information stored in a data repository116, described below.
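The confidence indicator described above might, for example, be derived from a model's class probabilities; the following minimal sketch assumes an invented probability-to-confidence mapping and an illustrative 0.8 threshold, neither of which is prescribed by any particular embodiment.

```python
# A minimal sketch of turning class probabilities into the kind of confidence
# indicator described above (a numeric score plus a Boolean flag).
# The 0.8 threshold and the example probabilities are illustrative assumptions.
from typing import Tuple

def confidence_indicator(class_probabilities: dict, threshold: float = 0.8) -> Tuple[str, float, bool]:
    """Return (predicted_label, confidence_score, is_confident)."""
    label = max(class_probabilities, key=class_probabilities.get)
    score = class_probabilities[label]
    return label, score, score >= threshold

# Example: a prediction about a remedial action resolving a problematic behavior.
print(confidence_indicator({"effective": 0.65, "ineffective": 0.35}))
# -> ('effective', 0.65, False)  # low confidence; the input differed from training data
```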
In general, in an embodiment, the machine learning engine109uses seasonal patterns112and/or non-seasonal patterns114to predict future system behaviors. In an embodiment, the machine learning engine109is configured to account for seasonal patterns112and/or non-seasonal patterns114when predicting future system behaviors and/or the future effectiveness of applying a remedial action to a problematic system behavior. Predicting future system behaviors and/or future effectiveness of applying a remedial action to a problematic system behavior is discussed in further detail below. In an embodiment, the system100includes a data repository116. A data repository116may be configured to store one or more system behavior definitions120. A system behavior definition120corresponds to a system behavior to be treated as a problematic system behavior or a non-problematic system behavior. The system behavior definition120may include data that indicates a kind of system behavior (e.g., processor utilization) and/or a particular system resource (e.g., a particular network interface). Alternatively or additionally, a system behavior definition120may include a threshold metric (e.g., a maximum processor utilization, a minimum amount of free storage space, etc.), at or beyond which the corresponding system behavior is considered problematic. In an embodiment, whether or not a particular system behavior is considered ‘problematic’ is based, at least in part, on seasonal patterns112of system behavior. For example, a system behavior (e.g., high processor usage, a high request rate, high bandwidth saturation, etc.) that may be considered problematic during a non-peak season may be considered non-problematic during a peak season (e.g., during a promotion or holiday season). One or more system behavior definitions120may include different criteria for different seasons and/or kinds of seasons. Alternatively or additionally, a data repository116may be configured to store one or more remedial action definitions122. A remedial action definition122corresponds to behavior that is selectable by the self-healing system to attempt to remediate a problematic system behavior (e.g., as defined by a system behavior definition120). A remedial action definition122may include data that indicates a kind of system behavior (e.g., high processor utilization) to which the remedial action corresponding to the remedial action definition122applies. The remedial action definition122may indicate the applicable system behavior by mapping the remedial action definition122to a corresponding system behavior definition120(e.g., by including a unique identifier of the system behavior definition120in the remedial action definition122, or by some other technique for mapping a remedial action definition122to a system behavior definition120). Alternatively or additionally, a remedial action definition122may indicate the remedial action(s) to be applied. Examples of remedial actions are described above. Alternatively or additionally, a data repository116may be configured to store a remediation history124. A remediation history124includes data corresponding to prior application(s) of one or more remedial actions to one or more problematic system behaviors. The data may include attributes associated with the application(s).
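One non-limiting way to represent the definitions and history entries described above is sketched below in Python; every class name, field name, and value (including the season-specific thresholds) is a hypothetical illustration rather than a required schema.

```python
# Hypothetical sketch of system behavior definitions, remedial action definitions,
# and a remediation history entry as they might be stored in a data repository.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, Optional

@dataclass
class SystemBehaviorDefinition:
    definition_id: str
    behavior_kind: str                        # e.g., "processor_utilization"
    resource: Optional[str] = None            # e.g., a particular network interface
    threshold: float = 0.9                    # metric at or beyond which behavior is problematic
    seasonal_thresholds: Dict[str, float] = field(default_factory=dict)

    def effective_threshold(self, season: Optional[str]) -> float:
        return self.seasonal_thresholds.get(season, self.threshold)

@dataclass
class RemedialActionDefinition:
    definition_id: str
    behavior_definition_id: str               # maps the action to a behavior definition
    action: str                                # e.g., "restart_database"

@dataclass
class RemediationRecord:                       # one prior application, with its attributes
    action: str
    applied_at: datetime
    metric_before: float
    metric_after: float
    successful: bool

cpu = SystemBehaviorDefinition("bd-1", "processor_utilization", threshold=0.85,
                               seasonal_thresholds={"holiday_peak": 0.95})
restart = RemedialActionDefinition("ra-1", "bd-1", "restart_database")
print(cpu.effective_threshold("holiday_peak"))   # 0.95 applies during a peak season
```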
For example, an attribute may indicate: a time when a remedial action was applied; a metric associated with the problematic system behavior before the remedial action was applied; a metric associated with the problematic system behavior after the remedial action was applied; a data value indicating whether the application of the remedial action was successful in remediating the problematic system behavior; and/or any other kind of information associated with one or more applications of one or more remedial actions to one or more problematic system behaviors. Alternatively or additionally, a data repository116may be configured to store a remediation configuration126. A remediation configuration126indicates one or more criteria for applying a remedial action to a problematic system behavior. A criterion for applying a remedial action to a problematic system behavior may be different from a threshold metric indicating that the corresponding system behavior is considered problematic. For example, the criterion may indicate that the remedial action should be applied only when the system behavior has been problematic for at least a certain amount of time, or by a certain amount (e.g., a certain amount or percentage above or below the threshold metric). Alternatively or additionally, a remediation configuration126may indicate an order of preference for different remedial actions, times of day when remedial actions may be applied (e.g., restarting the self-healing system102only during off-peak hours), a maximum number of times to attempt a particular remedial action, and/or any other criterion or combination thereof for applying a remedial action to a problematic system behavior. In an embodiment, a system behavior definition120, remedial action definition122, remediation configuration126, and/or other storage item or combination thereof may include one or more performance criteria for a remedial action, corresponding to whether or not the remedial action is considered successful. One or more performance criteria for a remedial action may correspond to a relative improvement for a system behavior or an expected value of that system behavior at some time in the future or within a certain time window into the future. For example, one or more performance criteria may indicate that a remedial action directed to processor utilization is ‘successful’ if the remedial action is expected to reduce processor utilization by at least a certain percentage from a current processor utilization level, optionally for a certain time window into the future. As another example, one or more performance criteria may indicate that a remedial action directed to storage space is ‘successful’ if the remedial action is expected to free up at least a certain percentage of currently utilized storage space, optionally for a certain time window into the future. 
As yet another example, one or more performance criteria may indicate that a remedial action is ‘successful’ if the system is not predicted to crash or enter a critical operational state that is user-defined (for example, based on the experience, wisdom, or risk-averseness of a system administrator) and/or associated with loads that have been elevated to levels that severely degrade performance (for example, to the point of preventing actual or virtual users of the computer system from obtaining benefits offered or even guaranteed by the computer system, or to the point of costing an owner of the computer system an amount of resources that is unacceptably large in comparison to the purpose and accepted costs of the computer system), optionally for a certain time window into the future (e.g., for the next thirty days). In this example, as long as the system is not expected to enter that critical operational state within the specified time window into the future, the remedial action is successful at delaying any real need to provide a more tailored solution to the underlying problem. Alternatively or additionally, one or more performance criteria for a remedial action may correspond to an absolute metric. For example, one or more performance criteria may indicate that a remedial action directed to processor utilization is ‘successful’ if the remedial action is expected to reduce processor utilization below a certain amount (e.g., fifty percent utilization), optionally for a certain time window into the future. As another example, one or more performance criteria may indicate that a remedial action directed to storage space is ‘successful’ if the remedial action is expected to result in at least a certain amount of storage space (e.g., one terabyte), optionally for a certain time window into the future. Many different kinds of performance criteria and/or combinations thereof may be used to define ‘success’ for a remedial action. In an embodiment, a data repository116is any type of storage unit and/or device (e.g., a file system, database, collection of tables, or any other storage mechanism) for storing data. Further, a data repository116may include multiple different storage units and/or devices. The multiple different storage units and/or devices may or may not be of the same type or located at the same physical site. Further, a data repository116may be implemented or may execute on the same computing system as one or more other components of the system100. Alternatively or additionally, a data repository116may be implemented or executed on a computing system separate from one or more other components of the system100. A data repository116may be communicatively coupled to one or more other components of the system100via a direct connection or via a network. Information describing system behavior definitions120, remedial action definitions122, a remediation history124, and/or a remediation configuration126may be implemented across any of the components within the system100. However, this information is illustrated within the data repository116for purposes of clarity and explanation. In an embodiment, one or more components of the system100are implemented on one or more digital devices. The term “digital device” generally refers to any hardware device that includes a processor. A digital device may refer to a physical device executing an application or a virtual machine.
Examples of digital devices include a computer, a tablet, a laptop, a desktop, a netbook, a server, a web server, a network policy server, a proxy server, a generic machine, a function-specific hardware device, a hardware router, a hardware switch, a hardware firewall, a hardware network address translator (NAT), a hardware load balancer, a mainframe, a television, a content receiver, a set-top box, a printer, a mobile handset, a smartphone, a personal digital assistant (“PDA”), a wireless receiver and/or transmitter, a base station, a communication management device, a router, a switch, a controller, an access point, and/or a client device. 3. Predictive System Remediation FIGS.2A-2Billustrate an example set of operations for predictive system remediation in accordance with one or more embodiments. One or more operations illustrated inFIGS.2A-2Bmay be modified, rearranged, or omitted altogether. Accordingly, the particular sequence of operations illustrated inFIGS.2A-2Bshould not be construed as limiting the scope of one or more embodiments. In an embodiment, a system (e.g., self-healing system102ofFIG.1) obtains attributes associated with applications of one or more remedial actions to the system (Operation202). As discussed above, the attributes may include any kind of information associated with prior applications of the remedial action(s), including but not limited to: a time when a remedial action was applied; a metric associated with the problematic system behavior before the remedial action was applied; a metric associated with the problematic system behavior after the remedial action was applied; a data value indicating whether the application of the remedial action was successful in remediating the problematic system behavior; and/or any other kind of information associated with one or more applications of one or more remedial actions to one or more problematic system behaviors. The attributes may include labeled training data specifically designed and loaded into the system to train a machine learning model. Alternatively or additionally, the attributes may include unlabeled data obtained during system operation. In an embodiment, the system trains a machine learning model to predict effectiveness of applying remedial actions (Operation204). As discussed above, the system may train the machine learning model using supervised learning, unsupervised learning, reinforcement learning, and/or another training method or combination thereof. The system may use labeled and/or unlabeled data (e.g., attributes associated with applications of one or more remedial actions to the system) to train the machine learning model. In an embodiment, the system receives user input to configure one or more performance criteria for a remedial action (Operation206). As discussed above, the one or more performance criteria for a remedial action, when compared with a particular instance of applying the remedial action, indicate whether or not the remedial action is considered successful. In an embodiment, the one or more performance criteria include one or more user-defined criteria that the system receives via a user interface (e.g., administrative interface104ofFIG.1). In an embodiment, the system monitors for problematic system behaviors (Operation208). The system may monitor for problematic system behaviors by obtaining data associated with current system behaviors (e.g., using one or more monitoring techniques described above).
Based on the data associated with current system behaviors, the system determines whether a problematic system behavior is detected (Operation210). In an embodiment, whether or not a particular system behavior is considered ‘problematic’ is based, at least in part, on seasonal patterns of system behavior. A system behavior (e.g., high processor usage, a high request rate, high bandwidth saturation, etc.) that may be considered problematic during a non-peak season may be considered non-problematic during a peak season (e.g., during a promotion or holiday season). The system may predict that such behaviors that arise during a peak season will subside and/or self-rectify at the end of the season and therefore do not require remedial action. Whether or not a system behavior is considered problematic may be evaluated against one or more system behavior definitions (e.g., system behavior definition120ofFIG.1). Different criteria may apply to different seasons and/or kinds of seasons. In general, in an embodiment, the system monitors for system behaviors that are problematic because they deviate sufficiently (e.g., by a defined amount or degree, such as a certain number of standard deviations) and persistently from expected system behavior. In an embodiment, responsive to determining that a problematic system behavior is detected, the system determines whether any remedial action applicable to the detected problematic system behavior is available (Operation212). Specifically, the system may determine whether any remedial action definition (e.g., remedial action definition122ofFIG.1) maps to the particular problematic system behavior and/or kind of problematic system behavior detected. If no remedial action applicable to the detected problematic system behavior is available, the system may generate a notification (Operation214) indicating that the system is unable to ‘self-heal.’ The system may log the notification and/or transmit the notification to a user (e.g., a system administrator). In an embodiment, responsive to determining that a remedial action applicable to the detected problematic system behavior is available, the system determines a predicted effectiveness of applying the remedial action to the problematic system behavior (Operation216). The system may determine a predicted effectiveness of applying the remedial action to a currently manifested instance of the problematic system behavior. Alternatively, the system may determine a predicted effectiveness of applying the remedial action to one or more future instances of the problematic system behavior. In some cases, applying a remedial action may resolve the problematic system behavior temporarily. However, the problematic system behavior may arise again, and the remedial action may be less successful in subsequent applications. For example, the remedial action may become decreasingly effective over time. In an embodiment, the system uses a machine learning model (e.g., machine learning model110ofFIG.1) to determine the predicted effectiveness of applying the remedial action to the problematic system behavior. The predicted effectiveness may include information about when the remedial action is predicted to no longer be effective (e.g., after a certain amount of time and/or a certain number of instances of the problematic system behavior). The predicted effectiveness may correspond to a metric that indicates an amount or degree to which the remedial action is successful and/or unsuccessful. 
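The "deviates sufficiently and persistently from expected system behavior" test described above might be implemented, for example, as a standard-deviation check over the most recent samples; the following sketch uses invented sample data and an illustrative two-sigma, three-sample rule, neither of which is prescribed by any particular embodiment.

```python
# Sketch of flagging a behavior as problematic only when recent samples stay
# more than a chosen number of standard deviations away from the baseline.
# The baseline data, 2-sigma bound, and 3-sample persistence rule are invented.
from statistics import mean, stdev

def is_problematic(samples, expected_history, n_sigma=2.0, min_persistent=3):
    """samples: most recent observations; expected_history: baseline observations."""
    baseline_mean = mean(expected_history)
    baseline_sd = stdev(expected_history)
    deviant = [abs(s - baseline_mean) > n_sigma * baseline_sd for s in samples]
    # Problematic only if the last `min_persistent` samples all deviate.
    return len(deviant) >= min_persistent and all(deviant[-min_persistent:])

baseline = [0.52, 0.48, 0.50, 0.55, 0.47, 0.51]            # typical utilization
print(is_problematic([0.58, 0.90, 0.92, 0.95], baseline))  # True: persistent deviation
print(is_problematic([0.58, 0.92, 0.55, 0.57], baseline))  # False: not persistent
```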
The system may determine the predicted effectiveness before applying the remedial action to a current instance of the problematic system behavior. Alternatively or additionally, the system may determine the predicted effectiveness of a remedial action preemptively, to determine whether the system has any vulnerabilities to problematic system behaviors that are not currently manifested in the system. In an embodiment, the system determines whether the predicted effectiveness of applying the remedial action satisfies one or more performance criteria (Operation218) that, when compared with a particular instance of applying the remedial action, indicate whether or not the remedial action is considered successful. If the predicted effectiveness does not satisfy the one or more performance criteria or otherwise indicates that the remedial action is predicted to be unsuccessful, the system may generate a notification (Operation220). The notification may include any kind of information associated with the system's prediction, such as: the problematic system behavior; the remedial action; the predicted effectiveness; a timeframe in which the remedial action is predicted to fail the one or more performance criteria; and/or any other kind of information or combination thereof associated with the system's prediction. The system may log the notification and/or transmit the notification to a user (e.g., a system administrator). In an embodiment, the system is configured to transmit the notification within a certain amount of time before the remedial action is predicted to fail to satisfy the one or more performance criteria (e.g., within a week, a month, or any other period of time which may be user-configured), to allow sufficient time for a user to intervene and prevent the predicted failure. The amount of time may be based, at least in part, on an expected amount of time for a user to troubleshoot and/or resolve a problematic system behavior. The amount of time may be user-configurable, for example via a user interface that includes controls for managing the system's self-healing behavior as described herein. In an embodiment, the notification includes a link (e.g., a hyperlink, application launcher, and/or another kind of link) that, when selected by a user, directs the user to a graphical user interface that includes controls for managing the system's self-healing behavior. Alternatively or additionally, the notification itself may include a graphical user interface with such controls. Some examples of user input corresponding to instructions to manage self-healing behavior are discussed below. In an embodiment, the system determines whether another remedial action is applicable to the problematic system behavior (Operation222). The system may predict the effectiveness of applying each remedial action that is applicable to the problematic system behavior. In an embodiment, even if the predicted effectiveness of applying a remedial action (or multiple remedial actions, if applicable) does not satisfy the one or more performance criteria, or the remedial action is otherwise predicted to be unsuccessful, the system nonetheless applies the remedial action to a current instance of the problematic system behavior (Operation224). As discussed above, applying the remedial action may remediate and/or improve the problematic system behavior temporarily.
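Operations 218-220 described above can be illustrated with the following non-limiting sketch, which tests a predicted outcome against a relative performance criterion and, if the criterion fails, checks whether the configured notification lead time has been reached; the criterion value, dates, and one-week lead time are invented for illustration.

```python
# Sketch of checking a relative performance criterion and the advance
# notification window described above. All numbers are illustrative.
from datetime import datetime, timedelta

def satisfies_relative_criterion(current: float, predicted: float,
                                 min_reduction_pct: float) -> bool:
    """E.g., 'successful' if utilization is predicted to drop by at least X percent."""
    return (current - predicted) / current * 100.0 >= min_reduction_pct

def should_notify(now: datetime, predicted_failure_time: datetime,
                  lead_time: timedelta = timedelta(days=7)) -> bool:
    """Notify once we are within the configured lead time of the predicted failure."""
    return now >= predicted_failure_time - lead_time

# Predicted to lower utilization only from 0.92 to 0.85 (about 8%), short of 25%:
if not satisfies_relative_criterion(0.92, 0.85, min_reduction_pct=25.0):
    print(should_notify(datetime(2020, 3, 1), datetime(2020, 3, 15)))   # False: too early
    print(should_notify(datetime(2020, 3, 10), datetime(2020, 3, 15)))  # True: within a week
```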
The system may continue applying the remedial action to instances of the problematic system behavior, performing ‘self-healing’ to the best of the system's ability, until further action is taken to address the problematic system behavior. Applying the remedial action, despite the predicted effectiveness not satisfying the one or more performance criteria, may allow the system to continue applying the remedial action during a period of time and/or for instances of the problematic system behavior for which the remedial action still satisfies the one or more performance criteria. The system may continue to apply the remedial action before reaching a point in time and/or instance of the problematic system behavior for which the remedial action's effectiveness fails to satisfy the one or more performance criteria. In an embodiment, the system applies multiple remedial actions to a problematic system behavior. The system may apply the remedial actions in a particular order. For example, the system may apply the remedial actions in order of predicted effectiveness, predicted cost to an entity that operates the system, complexity (e.g., attempting to allocate more resources before applying a software patch, or clearing a cache before allocating more resources), or any other ordering criterion or combination thereof. Alternatively or additionally, the system may apply remedial actions according to a defined order (e.g., a particular order designated by an administrator in the system configuration, via an administrative interface). In an embodiment, the system stores records of which remedial actions, when applied, are most effective, and gives preference to the most effective remedial actions in future instances of problematic behaviors. Alternatively or additionally, the system may track problematic secondary effects of remedial actions (e.g., system downtime when applying a software patch) and disfavor remedial actions with the most problematic secondary effects in future instances of problematic system behaviors. The system may prompt a user for confirmation before applying a remedial action that is known to have problematic secondary effects. In an embodiment, predicting the effectiveness of applying the remedial action, before reaching a point in time and/or instance of the problematic system behavior for which the remedial action does not satisfy the one or more performance criteria, allows a system administrator to take further action (e.g., responsive to a notification from the system) to prevent the system from reaching that point in time and/or instance of the problematic system behavior. Alternatively or additionally, continuing to apply the remedial action to the problematic system behavior may continue to alleviate the problematic system behavior to some extent, even if applying the remedial action does not satisfy the one or more performance criteria. In an embodiment, the system updates the machine learning model based on one or more applications of the remedial action(s) to the problematic system behavior (Operation226). The system may use unsupervised learning to update the machine learning model on an ongoing basis, based on problematic system behaviors that are detected during system operation and/or outcomes of remedial actions that are applied to problematic system behaviors.
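Returning to the ordering criteria described above (predicted effectiveness, predicted cost, complexity), the following sketch shows one possible way to order candidate remedial actions; the composite sort key and the example candidates are hypothetical and not a prescribed weighting.

```python
# Sketch of ordering candidate remedial actions before applying them.
# The preference rule (effectiveness first, then cost, then complexity) is an
# illustrative assumption, not a required policy.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    predicted_effectiveness: float   # 0..1, higher is better
    cost: float                      # relative cost, lower is better
    complexity: int                  # e.g., 1 = clear cache, 3 = apply patch

def ordered_candidates(candidates):
    # Prefer effective, cheap, simple actions; sort on a composite key.
    return sorted(candidates,
                  key=lambda c: (-c.predicted_effectiveness, c.cost, c.complexity))

plan = ordered_candidates([
    Candidate("apply_software_patch", 0.90, cost=5.0, complexity=3),
    Candidate("clear_cache",          0.60, cost=0.5, complexity=1),
    Candidate("allocate_resources",   0.90, cost=2.0, complexity=2),
])
print([c.name for c in plan])  # ['allocate_resources', 'apply_software_patch', 'clear_cache']
```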
Attributes associated with applications of remedial actions to problematic system behaviors may be stored as part of the system's remediation history, which the system may use to update the machine learning model. In an embodiment, updating the machine learning model on an ongoing basis improves the system's ability to predict the effectiveness of applying remedial actions to problematic system behaviors. In an embodiment, the system adjusts self-healing based on user input (Operation228). As noted above, a user may supply input in a graphical user interface reached via a link in a notification. A user may supply input to change a system behavior definition, a remedial action definition, remediation configuration, and/or any other kind of data or combination thereof that the system uses for self-healing. For example, the system may notify a user that a predicted effectiveness of applying a remedial action does not satisfy one or more performance criteria. Responsive to the notification, the user may instruct the system to refrain from applying the remedial action, apply the remedial action more frequently, apply a different remedial action (e.g., a remedial action, selected by the user to address the problematic system behavior, that is different from a remedial action selected by the system to address the problematic system behavior), adjust one or more performance criteria, adjust a threshold that defines a problematic system behavior, and/or make any other kind of change or combination thereof to the system's self-healing behavior. A user-selected remedial action may be selected from a set of remedial actions already defined for the system. Alternatively or additionally, a user may define and select a new remedial action, not previously defined for the system, for the system to apply if/when the problematic system behavior recurs. As another example, the system may associate different weights with different remedial actions, where the weights help the system select which remedial action to apply in a particular situation. A user may supply user input to increase or decrease the weight(s) for one or more particular remedial actions, such that the system uses the adjusted weight(s) if/when the problematic system behavior recurs. Alternatively or additionally, a user may supply input that informs the system of a problematic secondary effect (e.g., degrading performance of a system component and/or a component of another system) of a remedial action, so that the system takes the problematic secondary effect into account for future instances of a problematic system behavior. Alternatively or additionally, a user may store information about user-initiated remedial actions (as opposed to system-selected remedial actions) that may have affected system performance. For example, a user may input data corresponding to system maintenance, such as replacing a network cable, upgrading a processor, moving the system to a different geographical location, and/or any other kind of user-initiated remedial action or combination thereof that may increase or mitigate a problematic system behavior. Alternatively or additionally, a user may supply input indicating that one or more problematic system behaviors are resolved. The resolution may be of a kind that the system has not yet detected and/or is not configured to detect. Alternatively or additionally, a user may instruct the system to clear a cache, history, and/or other kind of memory associated with a machine learning engine. 
For example, the user may instruct the system to clear a machine learning engine's entire memory, or the machine learning engine's memory prior to a particular date. Clearing a machine learning engine's memory prior to a particular date may improve the machine learning engine's ability to make predictions by eliminating out-of-date historical data. In general, in an embodiment, the system adjusts future predictions based on the additional information supplied by the user. Adjusting self-healing based on user input may allow the system to make best efforts to self-heal, while still allowing for users to control the system's operation and make human determinations as to how the system should respond to problematic system behaviors. 4. Illustrative Example A detailed example is described below for purposes of clarity. Components and/or operations described below should be understood as one specific example which may not be applicable to certain embodiments. Accordingly, components and/or operations described below should not be construed as limiting the scope of any of the claims. FIG.3illustrates a graph300of system performance over time. In this example, the predicted effectiveness of applying a remedial action is defined with reference to system performance (e.g., available processor cycles per unit of time, available bandwidth, and/or any other kind of system performance metric or combination thereof). Specifically, the one or more performance criteria for successfully applying the remedial action correspond to restoring the system performance to a threshold level. At the start of the graph, system performance is decreasing over time. At time T1, the system applies a remedial action, resulting in the system performance improving. After time T1, the system performance is above the threshold level but does not reach its previous high. System performance subsequently starts to decrease again, and at time T2, the system applies the remedial action again. At time T3, based on the results of the applications of the remedial action at times T1and T2, the system predicts that the effectiveness of applying the remedial action will not satisfy the one or more performance criteria (i.e., restoring system performance to at least the threshold level) after time T5. Despite the prediction, at time T4, the system applies the remedial action again. Applying the remedial action at time T4allows system performance to stay above the threshold level for an additional period of time. At time T5, system performance falls below the threshold level. At time T6, the system applies the remedial action again, but as predicted at time T3, the remedial action does not satisfy the one or more performance criteria, i.e., fails to restore system performance to at least the threshold level. In this example, no further action was taken to prevent system performance from falling below the threshold level at time T5, or to allow the application of the remedial action at time T6to satisfy the one or more performance criteria. However, a system notification generated at time T3may allow an administrator to intervene and take some further action (not shown inFIG.3) to ensure that system performance remains above the threshold level and/or that subsequent applications of remedial actions, if needed, satisfy the one or more performance criteria.
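In the spirit of the graph ofFIG.3, the following small numeric illustration shows a remedial action whose benefit diminishes with each application, so that the system can anticipate the application after which performance will no longer reach the threshold level; all numbers are invented for illustration and do not correspond to the actual values depicted inFIG.3.

```python
# Small numeric illustration: each application of the remedial action restores
# less performance than the last, so a later application fails to reach the
# threshold. The decline rate, restore amounts, and threshold are invented.
threshold = 70.0
performance = 90.0
restore_amounts = [18.0, 12.0, 7.0, 2.0]   # diminishing benefit per application
decline_per_period = 15.0

for i, restore in enumerate(restore_amounts, start=1):
    performance -= decline_per_period       # performance degrades over time
    performance += restore                  # remedial action applied (T1, T2, ...)
    meets = performance >= threshold
    print(f"application {i}: performance={performance:.1f}, meets threshold={meets}")
# application 1: performance=93.0, meets threshold=True
# application 2: performance=90.0, meets threshold=True
# application 3: performance=82.0, meets threshold=True
# application 4: performance=69.0, meets threshold=False
```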
5. Miscellaneous; Extensions Embodiments are directed to a system with one or more devices that include a hardware processor and that are configured to perform any of the operations described herein and/or recited in any of the claims below. In an embodiment, a non-transitory computer readable storage medium comprises instructions which, when executed by one or more hardware processors, cause performance of any of the operations described herein and/or recited in any of the claims. Any combination of the features and functionalities described herein may be used in accordance with one or more embodiments. In the foregoing specification, embodiments have been described with reference to numerous specific details that may vary from implementation to implementation. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The sole and exclusive indicator of the scope of the invention, and what is intended by the applicants to be the scope of the invention, is the literal and equivalent scope of the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. 6. Hardware Overview According to one embodiment, the techniques described herein are implemented by one or more special-purpose computing devices (i.e., computing devices specially configured to perform certain functionality). The special-purpose computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or network processing units (NPUs) that are persistently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, FPGAs, or NPUs with custom programming to accomplish the techniques. The special-purpose computing devices may be desktop computer systems, portable computer systems, handheld devices, networking devices or any other device that incorporates hard-wired and/or program logic to implement the techniques. For example,FIG.4is a block diagram that illustrates a computer system400upon which an embodiment of the invention may be implemented. Computer system400includes a bus402or other communication mechanism for communicating information, and a hardware processor404coupled with bus402for processing information. Hardware processor404may be, for example, a general purpose microprocessor. Computer system400also includes a main memory406, such as a random access memory (RAM) or other dynamic storage device, coupled to bus402for storing information and instructions to be executed by processor404. Main memory406also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor404. Such instructions, when stored in non-transitory storage media accessible to processor404, render computer system400into a special-purpose machine that is customized to perform the operations specified in the instructions. Computer system400further includes a read only memory (ROM)408or other static storage device coupled to bus402for storing static information and instructions for processor404.
A storage device410, such as a magnetic disk or optical disk, is provided and coupled to bus402for storing information and instructions. Computer system400may be coupled via bus402to a display412, such as a liquid crystal display (LCD), plasma display, electronic ink display, cathode ray tube (CRT) monitor, or any other kind of device for displaying information to a computer user. An input device414, including alphanumeric and other keys, may be coupled to bus402for communicating information and command selections to processor404. Alternatively or in addition, the computer system400may receive user input via a cursor control416, such as a mouse, a trackball, a trackpad, a touchscreen, or cursor direction keys for communicating direction information and command selections to processor404and for controlling cursor movement on display412. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. The display412may be configured to receive user input via one or more pressure-sensitive sensors, multi-touch sensors, and/or gesture sensors. Alternatively or in addition, the computer system400may receive user input via a microphone, video camera, and/or some other kind of user input device (not shown). Computer system400may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system400to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system400in response to processor404executing one or more sequences of one or more instructions contained in main memory406. Such instructions may be read into main memory406from another storage medium, such as storage device410. Execution of the sequences of instructions contained in main memory406causes processor404to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device410. Volatile media includes dynamic memory, such as main memory406. Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a programmable read-only memory (PROM), an erasable PROM (EPROM), a FLASH-EPROM, non-volatile random-access memory (NVRAM), any other memory chip or cartridge, content-addressable memory (CAM), and ternary content-addressable memory (TCAM). Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus402. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor404for execution. For example, the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a network, via a network interface controller (NIC), such as an Ethernet controller or Wi-Fi controller. A NIC local to computer system400can receive the data from the network and place the data on bus402. Bus402carries the data to main memory406, from which processor404retrieves and executes the instructions. The instructions received by main memory406may optionally be stored on storage device410either before or after execution by processor404. Computer system400also includes a communication interface418coupled to bus402. Communication interface418provides a two-way data communication coupling to a network link420that is connected to a local network422. For example, communication interface418may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface418may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface418sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information. Network link420typically provides data communication through one or more networks to other data devices. For example, network link420may provide a connection through local network422to a host computer424or to data equipment operated by an Internet Service Provider (ISP)426. ISP426in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet”428. Local network422and Internet428both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link420and through communication interface418, which carry the digital data to and from computer system400, are example forms of transmission media. Computer system400can send messages and receive data, including program code, through the network(s), network link420and communication interface418. In the Internet example, a server430might transmit a requested code for an application program through Internet428, ISP426, local network422and communication interface418. The received code may be executed by processor404as it is received, and/or stored in storage device410, or other non-volatile storage for later execution. 7. Computer Networks and Cloud Networks In one or more embodiments, a computer network provides connectivity among a set of nodes running software that utilizes techniques as described herein. The nodes may be local to and/or remote from each other. The nodes are connected by a set of links. Examples of links include a coaxial cable, an unshielded twisted cable, a copper cable, an optical fiber, and a virtual link. A subset of nodes implements the computer network. Examples of such nodes include a switch, a router, a firewall, and a network address translator (NAT). Another subset of nodes uses the computer network. 
Such nodes (also referred to as “hosts”) may execute a client process and/or a server process. A client process makes a request for a computing service (such as, execution of a particular application, and/or storage of a particular amount of data). A server process responds by executing the requested service and/or returning corresponding data. A computer network may be a physical network, including physical nodes connected by physical links. A physical node is any digital device. A physical node may be a function-specific hardware device, such as a hardware switch, a hardware router, a hardware firewall, and a hardware NAT. Additionally or alternatively, a physical node may be any physical resource that provides compute power to perform a task, such as one that is configured to execute various virtual machines and/or applications performing respective functions. A physical link is a physical medium connecting two or more physical nodes. Examples of links include a coaxial cable, an unshielded twisted cable, a copper cable, and an optical fiber. A computer network may be an overlay network. An overlay network is a logical network implemented on top of another network (such as, a physical network). Each node in an overlay network corresponds to a respective node in the underlying network. Hence, each node in an overlay network is associated with both an overlay address (to address the overlay node) and an underlay address (to address the underlay node that implements the overlay node). An overlay node may be a digital device and/or a software process (such as, a virtual machine, an application instance, or a thread). A link that connects overlay nodes is implemented as a tunnel through the underlying network. The overlay nodes at either end of the tunnel treat the underlying multi-hop path between them as a single logical link. Tunneling is performed through encapsulation and decapsulation. In an embodiment, a client may be local to and/or remote from a computer network. The client may access the computer network over other computer networks, such as a private network or the Internet. The client may communicate requests to the computer network using a communications protocol, such as Hypertext Transfer Protocol (HTTP). The requests are communicated through an interface, such as a client interface (such as a web browser), a program interface, or an application programming interface (API). In an embodiment, a computer network provides connectivity between clients and network resources. Network resources include hardware and/or software configured to execute server processes. Examples of network resources include a processor, a data storage, a virtual machine, a container, and/or a software application. Network resources are shared amongst multiple clients. Clients request computing services from a computer network independently of each other. Network resources are dynamically assigned to the requests and/or clients on an on-demand basis. Network resources assigned to each request and/or client may be scaled up or down based on, for example, (a) the computing services requested by a particular client, (b) the aggregated computing services requested by a particular tenant, and/or (c) the aggregated computing services requested of the computer network. Such a computer network may be referred to as a “cloud network.” In an embodiment, a service provider provides a cloud network to one or more end users.
Various service models may be implemented by the cloud network, including but not limited to Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS). In SaaS, a service provider provides end users the capability to use the service provider's applications, which are executing on the network resources. In PaaS, the service provider provides end users the capability to deploy custom applications onto the network resources. The custom applications may be created using programming languages, libraries, services, and tools supported by the service provider. In IaaS, the service provider provides end users the capability to provision processing, storage, networks, and other fundamental computing resources provided by the network resources. Any applications, including an operating system, may be deployed on the network resources. In an embodiment, various deployment models may be implemented by a computer network, including but not limited to a private cloud, a public cloud, and a hybrid cloud. In a private cloud, network resources are provisioned for exclusive use by a particular group of one or more entities (the term “entity” as used herein refers to a corporation, organization, person, or other entity). The network resources may be local to and/or remote from the premises of the particular group of entities. In a public cloud, cloud resources are provisioned for multiple entities that are independent from each other (also referred to as “tenants” or “customers”). The computer network and the network resources thereof are accessed by clients corresponding to different tenants. Such a computer network may be referred to as a “multi-tenant computer network.” Several tenants may use a same particular network resource at different times and/or at the same time. The network resources may be local to and/or remote from the premises of the tenants. In a hybrid cloud, a computer network comprises a private cloud and a public cloud. An interface between the private cloud and the public cloud allows for data and application portability. Data stored at the private cloud and data stored at the public cloud may be exchanged through the interface. Applications implemented at the private cloud and applications implemented at the public cloud may have dependencies on each other. A call from an application at the private cloud to an application at the public cloud (and vice versa) may be executed through the interface. In an embodiment, tenants of a multi-tenant computer network are independent of each other. For example, one tenant (through operation, tenant-specific practices, employees, and/or identification to the external world) may be separate from another tenant. Different tenants may demand different network requirements for the computer network. Examples of network requirements include processing speed, amount of data storage, security requirements, performance requirements, throughput requirements, latency requirements, resiliency requirements, Quality of Service (QoS) requirements, tenant isolation, and/or consistency. The same computer network may need to implement different network requirements demanded by different tenants. In one or more embodiments, in a multi-tenant computer network, tenant isolation is implemented to ensure that the applications and/or data of different tenants are not shared with each other. Various tenant isolation approaches may be used. In an embodiment, each tenant is associated with a tenant ID. 
Each network resource of the multi-tenant computer network is tagged with a tenant ID. A tenant is permitted access to a particular network resource only if the tenant and the particular network resources are associated with a same tenant ID. In an embodiment, each tenant is associated with a tenant ID. Each application, implemented by the computer network, is tagged with a tenant ID. Additionally or alternatively, each data structure and/or dataset, stored by the computer network, is tagged with a tenant ID. A tenant is permitted access to a particular application, data structure, and/or dataset only if the tenant and the particular application, data structure, and/or dataset are associated with a same tenant ID. As an example, each database implemented by a multi-tenant computer network may be tagged with a tenant ID. Only a tenant associated with the corresponding tenant ID may access data of a particular database. As another example, each entry in a database implemented by a multi-tenant computer network may be tagged with a tenant ID. Only a tenant associated with the corresponding tenant ID may access data of a particular entry. However, the database may be shared by multiple tenants. In an embodiment, a subscription list indicates which tenants have authorization to access which applications. For each application, a list of tenant IDs of tenants authorized to access the application is stored. A tenant is permitted access to a particular application only if the tenant ID of the tenant is included in the subscription list corresponding to the particular application. In an embodiment, network resources (such as digital devices, virtual machines, application instances, and threads) corresponding to different tenants are isolated to tenant-specific overlay networks maintained by the multi-tenant computer network. As an example, packets from any source device in a tenant overlay network may only be transmitted to other devices within the same tenant overlay network. Encapsulation tunnels are used to prohibit any transmissions from a source device on a tenant overlay network to devices in other tenant overlay networks. Specifically, the packets, received from the source device, are encapsulated within an outer packet. The outer packet is transmitted from a first encapsulation tunnel endpoint (in communication with the source device in the tenant overlay network) to a second encapsulation tunnel endpoint (in communication with the destination device in the tenant overlay network). The second encapsulation tunnel endpoint decapsulates the outer packet to obtain the original packet transmitted by the source device. The original packet is transmitted from the second encapsulation tunnel endpoint to the destination device in the same particular overlay network. 8. Microservice Applications According to one or more embodiments, the techniques described herein are implemented in a microservice architecture. A microservice in this context refers to software logic designed to be independently deployable, having endpoints that may be logically coupled to other microservices to build a variety of applications. Applications built using microservices are distinct from monolithic applications, which are designed as a single fixed unit and generally comprise a single logical executable. With microservice applications, different microservices are independently deployable as separate executables. 
Microservices may communicate using HyperText Transfer Protocol (HTTP) messages and/or according to other communication protocols via API endpoints. Microservices may be managed and updated separately, written in different languages, and be executed independently from other microservices. Microservices provide flexibility in managing and building applications. Different applications may be built by connecting different sets of microservices without changing the source code of the microservices. Thus, the microservices act as logical building blocks that may be arranged in a variety of ways to build different applications. Microservices may provide monitoring services that notify a microservices manager (such as If-This-Then-That (IFTTT), Zapier, or Oracle Self-Service Automation (OSSA)) when trigger events from a set of trigger events exposed to the microservices manager occur. Microservices exposed for an application may alternatively or additionally provide action services that perform an action in the application (controllable and configurable via the microservices manager by passing in values, connecting the actions to other triggers and/or data passed along from other actions in the microservices manager) based on data received from the microservices manager. The microservice triggers and/or actions may be chained together to form recipes of actions that occur in optionally different applications that are otherwise unaware of or have no control or dependency on each other. These managed applications may be authenticated or plugged in to the microservices manager, for example, with user-supplied application credentials to the manager, without requiring reauthentication each time the managed application is used alone or in combination with other applications. In one or more embodiments, microservices may be connected via a GUI. For example, microservices may be displayed as logical blocks within a window, frame, or other element of a GUI. A user may drag and drop microservices into an area of the GUI used to build an application. The user may connect the output of one microservice into the input of another microservice using directed arrows or any other GUI element. The application builder may run verification tests to confirm that the output and inputs are compatible (e.g., by checking the datatypes, size restrictions, etc.). Triggers The techniques described above may be encapsulated into a microservice, according to one or more embodiments. In other words, a microservice may trigger a notification (into the microservices manager for optional use by other plugged in applications, herein referred to as the “target” microservice) based on the above techniques and/or may be represented as a GUI block and connected to one or more other microservices. The trigger condition may include absolute or relative thresholds for values, and/or absolute or relative thresholds for the amount or duration of data to analyze, such that the trigger to the microservices manager occurs whenever a plugged-in microservice application detects that a threshold is crossed. For example, a user may request a trigger into the microservices manager when the microservice application detects a value has crossed a triggering threshold. In one embodiment, the trigger, when satisfied, might output data for consumption by the target microservice.
In another embodiment, the trigger, when satisfied, outputs a binary value indicating the trigger has been satisfied, or outputs the name of the field or other context information for which the trigger condition was satisfied. Additionally or alternatively, the target microservice may be connected to one or more other microservices such that an alert is input to the other microservices. Other microservices may perform responsive actions based on the above techniques, including, but not limited to, deploying additional resources, adjusting system configurations, and/or generating GUIs. Actions In one or more embodiments, a plugged-in microservice application may expose actions to the microservices manager. The exposed actions may receive, as input, data or an identification of a data object or location of data, that causes data to be moved into a data cloud. In one or more embodiments, the exposed actions may receive, as input, a request to increase or decrease existing alert thresholds. The input might identify existing in-application alert thresholds and whether to increase or decrease, or delete the threshold. Additionally or alternatively, the input might request the microservice application to create new in-application alert thresholds. The in-application alerts may trigger alerts to the user while logged into the application, or may trigger alerts to the user using default or user-selected alert mechanisms available within the microservice application itself, rather than through other applications plugged into the microservices manager. In one or more embodiments, the microservice application may generate and provide an output based on input that identifies, locates, or provides historical data, and defines the extent or scope of the requested output. The action, when triggered, causes the microservice application to provide, store, or display the output, for example, as a data model or as aggregate data that describes a data model. In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The sole and exclusive indicator of the scope of the invention, and what is intended by the applicants to be the scope of the invention, is the literal and equivalent scope of the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction.
11860730
DETAILED DESCRIPTION Example methods and systems are contemplated herein. Any example embodiment or feature described herein is not necessarily to be construed as preferred or advantageous over other embodiments or features. Further, the example embodiments described herein are not meant to be limiting. It will be readily understood that certain aspects of the disclosed systems and methods can be arranged and combined in a wide variety of different configurations, all of which are contemplated herein. In addition, the particular arrangements shown in the figures should not be viewed as limiting. It should be understood that other embodiments might include more or less of each element shown in a given figure. Additionally, some of the illustrated elements may be combined or omitted. Yet further, an example embodiment may include elements that are not illustrated in the figures. Throughout this disclosure, SPI interactions are described. In describing these interactions, the terms “controller” and “subcontroller” are used. In general, the term “controller” is used herein when describing SPI as a stand-in for the term of art of a “master.” Likewise, the term “subcontroller” is generally used herein when describing SPI as a stand-in for the term of art of a “slave.” As such, when describing SPI interactions herein, the terms “controller” and “subcontroller” are meant to include the attributes of such SPI terms of art described above. It is understood, however, that in some embodiments (e.g., when context dictates otherwise, etc.) these terms may encompass additional or alternative meanings to the corresponding SPI terms of art. Lidar devices as described herein can include one or more light emitters and one or more detectors used for detecting light that is emitted by the one or more light emitters and reflected by one or more objects in an environment surrounding the lidar device. As an example, the surrounding environment could include an interior or exterior environment, such as an inside of a building or an outside of a building. Additionally or alternatively, the surrounding environment could include an interior of a vehicle. Still further, the surrounding environment could include a vicinity around and/or on a roadway. Examples of objects in the surrounding environment include, but are not limited to, other vehicles, traffic signs, pedestrians, bicyclists, roadway surfaces, buildings, terrain, etc. Additionally, the one or more light emitters could emit light into a local environment of the lidar system itself. For example, light emitted from the one or more light emitters could interact with a housing of the lidar system and/or surfaces or structures coupled to the lidar system. In some cases, the lidar system could be mounted to a vehicle, in which case the one or more light emitters could be configured to emit light that interacts with objects within a vicinity of the vehicle. Further, the light emitters could include optical fiber amplifiers, laser diodes, light-emitting diodes (LEDs), among other possibilities. A lidar device can determine distances to environmental features while scanning through a scene (e.g., the portion of the surrounding environment observable from the perspective of the lidar device) to collect data that can be assembled into a “point cloud” indicative of reflective surfaces in the surrounding environment. 
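The distance determination that underlies such a point cloud, elaborated in the following paragraph, is a time-of-flight calculation: the range to a reflective surface is half the round-trip delay multiplied by the speed of light. A minimal illustrative sketch follows (the function names are hypothetical and not taken from this description):

import math

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def range_from_time_of_flight(delay_s: float) -> float:
    """Distance to the reflecting surface, assuming the pulse travels out and back."""
    return SPEED_OF_LIGHT_M_PER_S * delay_s / 2.0

def point_from_return(azimuth_rad: float, elevation_rad: float, delay_s: float):
    """Convert one (direction, delay) measurement into an (x, y, z) point-cloud point."""
    r = range_from_time_of_flight(delay_s)
    return (r * math.cos(elevation_rad) * math.cos(azimuth_rad),
            r * math.cos(elevation_rad) * math.sin(azimuth_rad),
            r * math.sin(elevation_rad))

# Example: a return pulse detected 200 ns after emission is roughly 30 m away.
print(range_from_time_of_flight(200e-9))   # ~29.98 m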
Individual points in the point cloud can be determined, for example, by emitting a laser pulse and detecting a returning pulse, if any, reflected from an object in the surrounding environment, and then determining a distance to the object according to a time delay between the emission of the pulse and the reception of the reflected pulse. As a result, for example, a three-dimensional map of points indicative of locations of reflective features in the surrounding environment can be generated. In some lidar devices, a series of laser pulses may be emitted (e.g., simultaneously, sequentially, etc.) by one or more light emitters (e.g., lasers, etc.) and a corresponding series of return pulses may be detected by one or more light detectors (e.g., photodiodes, etc.). In some embodiments, the light emitters used to emit one or more pulses in a lidar device may include one or more lasers (e.g., laser diodes, etc.). Such lasers may receive power from one or more capacitors that selectively store and discharge energy. Further, which lasers fire during a given firing cycle of the lidar device and what power is used to fire those lasers may be important (e.g., based on eye-safety requirements for laser-emission power, etc.). In order to ensure the proper firing sequence and associated firing power is used for the light emitters, a lidar device may include a laser driver. The laser driver may receive instructions from one or more central computing devices (e.g., a central computer used to control an entire autonomous vehicle or semi-autonomous vehicle, and/or a computing device used to control the entire lidar device, etc.). Based on the received instructions, the laser driver may set emission characteristics for the associated light emitters. For example, the laser driver may control: the charging time for capacitors associated with the light emitters, the charging current for capacitors associated with the light emitters, the charging voltage for capacitors associated with the light emitters, selective discharging of the capacitors through one or more light emitters (e.g., to cause one or more lasers within the lidar device to fire during a given time interval as opposed to discharging the capacitors to ground to prevent a given laser from firing, etc.), etc. Because it may be important that specific light emitters fire at specific powers, it may be desired that the laser driver receives proper instructions (e.g., uncorrupted instructions, etc.) from one or more central computing devices. In order to ensure the instructions are correctly received (e.g., the received instructions are not jumbled due to a bit-flip or a transposition, etc.), embodiments described herein provide a communication protocol between a serial peripheral interface (SPI) subcontroller and a SPI controller. The communication protocol described herein may provide that, during a write transaction (e.g., when the SPI controller is issuing a write command to the SPI subcontroller, etc.), the SPI subcontroller only writes payload data when the error-checking process confirms that the data is correct (e.g., correct to within a threshold probability, etc.). Likewise, the communication protocol described herein may provide that, during a write transaction, the SPI controller can use one or more communication channels to receive information from the SPI subcontroller that also indicates whether the instructions were correctly received (e.g., the SPI controller can receive an ACK or a NACK from the SPI subcontroller, etc.). 
Yet further, the communication protocol described herein may provide that a specific communication is sent from the SPI controller to the SPI subcontroller for a read transaction (e.g., when the SPI controller is issuing a read command to the SPI subcontroller, etc.) that prevents the SPI subcontroller (e.g., to within a threshold probability, etc.) from misinterpreting the communication as a write transaction (e.g., based on data corruption, etc.). Each of the capabilities of example embodiments described above may prevent (e.g., in the case of a lidar device that uses SPI communications between one or more central computers and the laser driver, etc.) an SPI subcontroller (e.g., laser driver, etc.) from executing unintended commands (e.g., causing one or more light emitters to fire when they should not or causing one or more light emitters to fire at an incorrect power, etc.). Further, each of the capabilities of example embodiments described above may allow an SPI controller (e.g., the central computer of a lidar device, etc.) to verify whether the SPI subcontroller received uncorrupted instructions (e.g., the central computer of the lidar device can ensure the laser driver did not cause any light emitters to fire improperly, etc.). One example embodiment includes an SPI controller (e.g., central computer of a lidar device, etc.) connected to an SPI subcontroller (e.g., laser driver, etc.) through four communication channels (e.g., a four-wire interface, etc.). The communication channels may include a Slave Select channel (SSN), a clock channel (SCLK), a Master In Slave Out channel (MISO), and a Master Out Slave In (MOSI) channel. The communication packets transmitted by the SPI controller to the SPI subcontroller (e.g., over the MOSI channel, etc.) according to the communication protocol may be 4 bytes (i.e., 32 bits) in size (e.g., with each byte beginning with the most-significant bit, etc.). Each communication packet may include: a read/write bit (e.g., with a “0” indicating read and a “1” indicating write, etc.), a 7-bit address segment, a 2-byte payload data segment, and a 1-byte forward error-checking code segment. Each bit in the 4-byte packet may be transmitted along a clock-edge of the SCLK (e.g., a rising clock-edge or a falling clock-edge signal, etc.). While receiving the payload data segment, the SPI subcontroller (e.g., an integrated circuit, such as a processor, of the SPI subcontroller, etc.) may calculate a reverse error-checking code. The forward error-checking code and/or the reverse error-checking code may include a cyclic redundancy check (e.g., a 9-bit polynomial, CRC-8, such as the 1-Wire 0x31 polynomial, etc.). Such cyclic redundancy checks may be computed using hardware logic, for example. In the case of a write transaction (e.g., when the read/write bit indicates a write transaction is to occur, etc.), the SPI subcontroller may compare the computed reverse error-checking code to the received forward error-checking code. If the two error-checking codes match, the SPI subcontroller (e.g., an integrated circuit of the SPI subcontroller, etc.) may proceed to write the payload data to the address (e.g., resulting in the emission of light from one or more light emitters in the case of a laser driver of a lidar device, etc.). If the two error-checking codes do not match, however, the SPI subcontroller (e.g., an integrated circuit of the SPI subcontroller, etc.) may refrain from writing the payload data to the address. 
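By way of a non-limiting illustration of the packet layout and error check just described, the following minimal sketch (in Python; the helper names, exact bit ordering, and CRC coverage are assumptions made for illustration rather than a specification from this description) computes a CRC-8 with the 0x31 generator polynomial (x^8 + x^5 + x^4 + 1) over the address and payload segments and shows how a subcontroller might validate a write packet before committing the payload:

# Minimal sketch (assumed layout): byte 0 = read/write bit ("1" = write) in the MSB
# followed by a 7-bit address; bytes 1-2 = 16-bit payload; byte 3 = forward CRC.

def crc8_0x31(data: bytes, crc: int = 0x00) -> int:
    """CRC-8 using the 0x31 generator polynomial, most-significant bit first."""
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x31) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def crc_over_segments(packet_head: bytes) -> int:
    """Error-checking code over the address and payload segments (R/W bit excluded)."""
    return crc8_0x31(bytes([packet_head[0] & 0x7F]) + packet_head[1:3])

def build_write_packet(address: int, payload: int) -> bytes:
    """Controller side: assemble a 32-bit write packet (read/write bit = 1)."""
    head = bytes([0x80 | (address & 0x7F), (payload >> 8) & 0xFF, payload & 0xFF])
    return head + bytes([crc_over_segments(head)])

def subcontroller_handle_write(packet: bytes, registers: dict) -> bool:
    """Subcontroller side: write the payload only if the reverse CRC matches."""
    head, forward_crc = packet[:3], packet[3]
    if crc_over_segments(head) != forward_crc:
        return False                                   # refrain from writing (NACK case)
    registers[head[0] & 0x7F] = (head[1] << 8) | head[2]
    return True                                        # payload committed (ACK case)

# A clean packet is written; a packet with a flipped payload bit is rejected.
regs = {}
pkt = build_write_packet(address=0x12, payload=0xBEEF)
assert subcontroller_handle_write(pkt, regs) and regs[0x12] == 0xBEEF
corrupted = pkt[:1] + bytes([pkt[1] ^ 0x01]) + pkt[2:]  # simulate corruption in transit
assert not subcontroller_handle_write(corrupted, regs)

In this sketch, the boolean result returned by the subcontroller-side routine would correspond to the ACK or NACK indication that the SPI controller may receive over the MISO channel, as described above.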
Regardless of whether the two error-checking codes match, the SPI controller may receive the computed reverse error-checking code from the SPI subcontroller (e.g., over the MISO channel, etc.). In this way, the SPI controller may determine whether a proper write transaction was executed (e.g., by comparing the transmitted forward error-checking code to the received reverse error-checking code, etc.). If a write transaction failed (e.g., because the two error-checking codes do not match, etc.), the SPI controller can take remedial action. For example, the SPI controller may retransmit the previous packet or may alert an upstream computing device (e.g., a central computer of an autonomous vehicle to which the lidar device is attached, etc.) that a write did not occur (e.g., in the case of a lidar device, that the light emitters did not emit light during one or more time segments, etc.). In a read transaction (e.g., when the read/write bit indicates a read transaction is to occur, etc.), like in a write transaction, the SPI subcontroller may calculate a reverse error-checking code. Likewise, regardless if the two error-checking codes match, the SPI controller may receive the computed reverse error-checking code from the SPI subcontroller (e.g., also over the MISO channel, etc.). However, unlike in a write transaction and because a read transaction may be less likely to be seriously detrimental if performed unintendedly (e.g., because a read transaction may not inherently cause one or more light emitters to fire in a lidar device, etc.), the SPI subcontroller may provide data read from the address to the SPI controller over the MISO channel even if the computed reverse error-checking code does not match the forward error-checking code. In fact, in some embodiments, the SPI subcontroller may not even compare the computed reverse error-checking code to the forward error-checking code, at all. Further, as a result of this, the SPI subcontroller may begin providing the data read from the address to the SPI controller as soon as the portion of the communication packet corresponding to the address is received. In this way, the read transaction can be expedited (e.g., because the SPI subcontroller does not need to wait for a reverse error-checking code to be computed/compared to a received forward error-checking code, etc.). Further, because in a read transaction there is no payload data to be transmitted in the packet communicated from the SPI controller to the SPI subcontroller, the 2-bytes of the packet dedicated to payload data may be superfluous. Similarly, in embodiments where the SPI subcontroller does not perform a comparison of the forward error-checking code to the calculated reverse error-checking code for read transactions, a provision of the forward error-checking code by the SPI controller may also be superfluous. As such, to further enhance robustness against improper read/write transactions, the SPI controller (e.g., an integrated circuit of the SPI controller, etc.) may assign specified values to the payload data segment and to the forward error-checking code segment that are unlikely to result in the SPI subcontroller performing a write transaction, even if the read/write bit were somehow flipped. 
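By way of a further non-limiting illustration, the sketch below (reusing the hypothetical crc_over_segments and subcontroller_handle_write helpers from the previous sketch; the sentinel choice is an assumption made for illustration) shows one way a controller might choose such values, deliberately selecting a forward error-checking code that cannot validate the packet as a write:

def build_read_packet(address: int) -> bytes:
    """Controller side: read packet (read/write bit = 0) with a poisoned forward CRC."""
    head = bytes([address & 0x7F, 0x00, 0x00])         # dummy payload data segment
    write_crc = crc_over_segments(head)                # CRC a write at this address would need
    poison_crc = (write_crc + 1) & 0xFF                # any value other than write_crc works
    return head + bytes([poison_crc])

# Even if the read/write bit flips in transit, the subcontroller's reverse CRC
# cannot match the poisoned forward CRC, so no unintended write is performed.
regs = {}
rd = build_read_packet(address=0x12)
flipped = bytes([rd[0] | 0x80]) + rd[1:]               # simulate a read-to-write bit-flip
assert not subcontroller_handle_write(flipped, regs)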
For example, the SPI controller may assign a value to the forward error-checking code that intentionally does not correspond to the CRC-8 calculated for the given address segment along with the payload data segment when the read/write bit indicates that a write transaction is to occur (e.g., as a result of an unintended bit-flip, etc.). In this way, even if a read/write bit-flip occurred, the calculated reverse error-checking code would not match the forward error-checking code provided, and the SPI subcontroller would refrain from performing an unintended write transaction. While the embodiments described above include the SPI subcontroller providing the read data from the address regardless of whether the forward error-checking code and the reverse error-checking code match, other embodiments are also possible and are contemplated herein. For example, in some embodiments (e.g., embodiments where a read transaction may also cause potentially harmful results, etc.), the SPI subcontroller may wait until a reverse error-checking code is calculated and a comparison to a received forward error-checking code is performed to proceed with a read routine. In such embodiments, if the two error-checking codes do not match, the SPI subcontroller may refrain from providing data read from the address to the SPI controller (e.g., over the MISO channel, etc.). While example embodiments are described in terms of lidar devices and laser drivers/controllers, it is understood that the techniques described herein are broadly applicable to a multitude of SPIs. Any SPI may employ the error-checking techniques described herein to allow for a bidirectional communication between controller and subcontroller that alerts both controller and subcontroller when one or more communication errors has occurred. Further, the number of connections (e.g., communication channels, etc.) between controller and subcontroller and the specific error-checking codes (e.g., cyclic redundancy checks, etc.) used are provided solely as examples. It is understood that other numbers of SPI connections and/or other types of error-checking codes could additionally or alternatively be employed. Such additional or alternative embodiments are contemplated herein. The following description and accompanying drawings will elucidate features of various example embodiments. The embodiments provided are by way of example, and are not intended to be limiting. As such, the dimensions of the drawings are not necessarily to scale. Example systems within the scope of the present disclosure will now be described in greater detail. An example system may be implemented in or may take the form of an automobile. Additionally, an example system may also be implemented in or take the form of various vehicles, such as cars, trucks, motorcycles, buses, airplanes, helicopters, drones, lawn mowers, earth movers, boats, submarines, all-terrain vehicles, snowmobiles, aircraft, recreational vehicles, amusement park vehicles, farm equipment or vehicles, construction equipment or vehicles, warehouse equipment or vehicles, factory equipment or vehicles, trams, golf carts, trains, trolleys, sidewalk delivery vehicles, robot devices, etc. Other vehicles are possible as well. Further, in some embodiments, example systems might not include a vehicle. Referring now to the figures,FIG.1is a functional block diagram illustrating example vehicle100, which may be configured to operate fully or partially in an autonomous mode. 
More specifically, vehicle100may operate in an autonomous mode without human interaction through receiving control instructions from a computing system. As part of operating in the autonomous mode, vehicle100may use sensors to detect and possibly identify objects of the surrounding environment to enable safe navigation. Additionally, example vehicle100may operate in a partially autonomous (i.e., semi-autonomous) mode in which some functions of the vehicle100are controlled by a human driver of the vehicle100and some functions of the vehicle100are controlled by the computing system. For example, vehicle100may also include subsystems that enable the driver to control operations of vehicle100such as steering, acceleration, and braking, while the computing system performs assistive functions such as lane-departure warnings/lane-keeping assist or adaptive cruise control based on other objects (e.g., vehicles, etc.) in the surrounding environment. As described herein, in a partially autonomous driving mode, even though the vehicle assists with one or more driving operations (e.g., steering, braking and/or accelerating to perform lane centering, adaptive cruise control, advanced driver assistance systems (ADAS), emergency braking, etc.), the human driver is expected to be situationally aware of the vehicle's surroundings and supervise the assisted driving operations. Here, even though the vehicle may perform all driving tasks in certain situations, the human driver is expected to be responsible for taking control as needed. Although, for brevity and conciseness, various systems and methods are described below in conjunction with autonomous vehicles, these or similar systems and methods can be used in various driver assistance systems that do not rise to the level of fully autonomous driving systems (i.e. partially autonomous driving systems). In the United States, the Society of Automotive Engineers (SAE) have defined different levels of automated driving operations to indicate how much, or how little, a vehicle controls the driving, although different organizations, in the United States or in other countries, may categorize the levels differently. More specifically, the disclosed systems and methods can be used in SAE Level2driver assistance systems that implement steering, braking, acceleration, lane centering, adaptive cruise control, etc., as well as other driver support. The disclosed systems and methods can be used in SAE Level3driving assistance systems capable of autonomous driving under limited (e.g., highway, etc.) conditions. Likewise, the disclosed systems and methods can be used in vehicles that use SAE Level4self-driving systems that operate autonomously under most regular driving situations and require only occasional attention of the human operator. In all such systems, accurate lane estimation can be performed automatically without a driver input or control (e.g., while the vehicle is in motion, etc.) and result in improved reliability of vehicle positioning and navigation and the overall safety of autonomous, semi-autonomous, and other driver assistance systems. As previously noted, in addition to the way in which SAE categorizes levels of automated driving operations, other organizations, in the United States or in other countries, may categorize levels of automated driving operations differently. Without limitation, the disclosed systems and methods herein can be used in driving assistance systems defined by these other organizations' levels of automated driving operations. 
As shown inFIG.1, vehicle100may include various subsystems, such as propulsion system102, sensor system104, control system106, one or more peripherals108, power supply110, computer system112(which could also be referred to as a computing system) with data storage114, and user interface116. In other examples, vehicle100may include more or fewer subsystems, which can each include multiple elements. The subsystems and components of vehicle100may be interconnected in various ways. In addition, functions of vehicle100described herein can be divided into additional functional or physical components, or combined into fewer functional or physical components within embodiments. For instance, the control system106and the computer system112may be combined into a single system that operates the vehicle100in accordance with various operations. Propulsion system102may include one or more components operable to provide powered motion for vehicle100and can include an engine/motor118, an energy source119, a transmission120, and wheels/tires121, among other possible components. For example, engine/motor118may be configured to convert energy source119into mechanical energy and can correspond to one or a combination of an internal combustion engine, an electric motor, steam engine, or Stirling engine, among other possible options. For instance, in some embodiments, propulsion system102may include multiple types of engines and/or motors, such as a gasoline engine and an electric motor. Energy source119represents a source of energy that may, in full or in part, power one or more systems of vehicle100(e.g., engine/motor118, etc.). For instance, energy source119can correspond to gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and/or other sources of electrical power. In some embodiments, energy source119may include a combination of fuel tanks, batteries, capacitors, and/or flywheels. Transmission120may transmit mechanical power from engine/motor118to wheels/tires121and/or other possible systems of vehicle100. As such, transmission120may include a gearbox, a clutch, a differential, and a drive shaft, among other possible components. A drive shaft may include axles that connect to one or more wheels/tires121. Wheels/tires121of vehicle100may have various configurations within example embodiments. For instance, vehicle100may exist in a unicycle, bicycle/motorcycle, tricycle, or car/truck four-wheel format, among other possible configurations. As such, wheels/tires121may connect to vehicle100in various ways and can exist in different materials, such as metal and rubber. Sensor system104can include various types of sensors, such as Global Positioning System (GPS)122, inertial measurement unit (IMU)124, radar126, laser rangefinder/lidar128, camera130, steering sensor123, and throttle/brake sensor125, among other possible sensors. In some embodiments, sensor system104may also include sensors configured to monitor internal systems of the vehicle100(e.g.,02monitor, fuel gauge, engine oil temperature, brake wear, etc.). GPS122may include a transceiver operable to provide information regarding the position of vehicle100with respect to the Earth. IMU124may have a configuration that uses one or more accelerometers and/or gyroscopes and may sense position and orientation changes of vehicle100based on inertial acceleration. For example, IMU124may detect a pitch and yaw of the vehicle100while vehicle100is stationary or in motion. 
Radar126may represent one or more systems configured to use radio signals to sense objects, including the speed and heading of the objects, within the surrounding environment of vehicle100. As such, radar126may include antennas configured to transmit and receive radio signals. In some embodiments, radar126may correspond to a mountable radar system configured to obtain measurements of the surrounding environment of vehicle100. Laser rangefinder/lidar128may include one or more laser sources, a laser scanner, and one or more detectors, among other system components, and may operate in a coherent mode (e.g., using heterodyne detection, etc.) or in an incoherent detection mode (i.e., time-of-flight mode). In some embodiments, the one or more detectors of the laser rangefinder/lidar128may include one or more photodetectors, which may be especially sensitive detectors (e.g., avalanche photodiodes, etc.). In some examples, such photodetectors may be capable of detecting single photons (e.g., single-photon avalanche diodes (SPADs), etc.). Further, such photodetectors can be arranged (e.g., through an electrical connection in series, etc.) into an array (e.g., as in a silicon photomultiplier (SiPM), etc.). In some examples, the one or more photodetectors are Geiger-mode operated devices and the lidar includes subcomponents designed for such Geiger-mode operation. Camera130may include one or more devices (e.g., still camera, video camera, a thermal imaging camera, a stereo camera, a night vision camera, etc.) configured to capture images of the surrounding environment of vehicle100. Steering sensor123may sense a steering angle of vehicle100, which may involve measuring an angle of the steering wheel or measuring an electrical signal representative of the angle of the steering wheel. In some embodiments, steering sensor123may measure an angle of the wheels of the vehicle100, such as detecting an angle of the wheels with respect to a forward axis of the vehicle100. Steering sensor123may also be configured to measure a combination (or a subset) of the angle of the steering wheel, electrical signal representing the angle of the steering wheel, and the angle of the wheels of vehicle100. Throttle/brake sensor125may detect the position of either the throttle position or brake position of vehicle100. For instance, throttle/brake sensor125may measure the angle of both the gas pedal (throttle) and brake pedal or may measure an electrical signal that could represent, for instance, an angle of a gas pedal (throttle) and/or an angle of a brake pedal. Throttle/brake sensor125may also measure an angle of a throttle body of vehicle100, which may include part of the physical mechanism that provides modulation of energy source119to engine/motor118(e.g., a butterfly valve, a carburetor, etc.). Additionally, throttle/brake sensor125may measure a pressure of one or more brake pads on a rotor of vehicle100or a combination (or a subset) of the angle of the gas pedal (throttle) and brake pedal, electrical signal representing the angle of the gas pedal (throttle) and brake pedal, the angle of the throttle body, and the pressure that at least one brake pad is applying to a rotor of vehicle100. In other embodiments, throttle/brake sensor125may be configured to measure a pressure applied to a pedal of the vehicle, such as a throttle or brake pedal. 
Control system106may include components configured to assist in navigating vehicle100, such as steering unit132, throttle134, brake unit136, sensor fusion algorithm138, computer vision system140, navigation/pathing system142, and obstacle avoidance system144. More specifically, steering unit132may be operable to adjust the heading of vehicle100, and throttle134may control the operating speed of engine/motor118to control the acceleration of vehicle100. Brake unit136may decelerate vehicle100, which may involve using friction to decelerate wheels/tires121. In some embodiments, brake unit136may convert kinetic energy of wheels/tires121to electric current for subsequent use by a system or systems of vehicle100. Sensor fusion algorithm138may include a Kalman filter, Bayesian network, or other algorithms that can process data from sensor system104. In some embodiments, sensor fusion algorithm138may provide assessments based on incoming sensor data, such as evaluations of individual objects and/or features, evaluations of a particular situation, and/or evaluations of potential impacts within a given situation. Computer vision system140may include hardware and software (e.g., a general purpose processor, an application-specific integrated circuit (ASIC), a volatile memory, a non-volatile memory, one or more machine-learned models, etc.) operable to process and analyze images in an effort to determine objects that are in motion (e.g., other vehicles, pedestrians, bicyclists, animals, etc.) and objects that are not in motion (e.g., traffic lights, roadway boundaries, speedbumps, potholes, etc.). As such, computer vision system140may use object recognition, Structure From Motion (SFM), video tracking, and other algorithms used in computer vision, for instance, to recognize objects, map an environment, track objects, estimate the speed of objects, etc. Navigation/pathing system142may determine a driving path for vehicle100, which may involve dynamically adjusting navigation during operation. As such, navigation/pathing system142may use data from sensor fusion algorithm138, GPS122, and maps, among other sources to navigate vehicle100. Obstacle avoidance system144may evaluate potential obstacles based on sensor data and cause systems of vehicle100to avoid or otherwise negotiate the potential obstacles. As shown inFIG.1, vehicle100may also include peripherals108, such as wireless communication system146, touchscreen148, microphone150, and/or speaker152. Peripherals108may provide controls or other elements for a user to interact with user interface116. For example, touchscreen148may provide information to users of vehicle100. User interface116may also accept input from the user via touchscreen148. Peripherals108may also enable vehicle100to communicate with devices, such as other vehicle devices. Wireless communication system146may wirelessly communicate with one or more devices directly or via a communication network. For example, wireless communication system146could use 3G cellular communication, such as code-division multiple access (CDMA), evolution-data optimized (EVDO), global system for mobile communications (GSM)/general packet radio service (GPRS), or cellular communication, such as 4G worldwide interoperability for microwave access (WiMAX) or long-term evolution (LTE), or 5G. Alternatively, wireless communication system146may communicate with a wireless local area network (WLAN) using WIFI® or other possible connections. 
Wireless communication system146may also communicate directly with a device using an infrared link, Bluetooth, or ZigBee, for example. Other wireless protocols, such as various vehicular communication systems, are possible within the context of the disclosure. For example, wireless communication system146may include one or more dedicated short-range communications (DSRC) devices that could include public and/or private data communications between vehicles and/or roadside stations. Vehicle100may include power supply110for powering components. Power supply110may include a rechargeable lithium-ion or lead-acid battery in some embodiments. For instance, power supply110may include one or more batteries configured to provide electrical power. Vehicle100may also use other types of power supplies. In an example embodiment, power supply110and energy source119may be integrated into a single energy source. Vehicle100may also include computer system112to perform operations, such as operations described therein. As such, computer system112may include at least one processor113(which could include at least one microprocessor) operable to execute instructions115stored in a non-transitory, computer-readable medium, such as data storage114. In some embodiments, computer system112may represent a plurality of computing devices that may serve to control individual components or subsystems of vehicle100in a distributed fashion. In some embodiments, data storage114may contain instructions115(e.g., program logic, etc.) executable by processor113to execute various functions of vehicle100, including those described above in connection withFIG.1. Data storage114may contain additional instructions as well, including instructions to transmit data to, receive data from, interact with, and/or control one or more of propulsion system102, sensor system104, control system106, and peripherals108. In addition to instructions115, data storage114may store data such as roadway maps, path information, among other information. Such information may be used by vehicle100and computer system112during the operation of vehicle100in the autonomous, semi-autonomous, and/or manual modes. Vehicle100may include user interface116for providing information to or receiving input from a user of vehicle100. User interface116may control or enable control of content and/or the layout of interactive images that could be displayed on touchscreen148. Further, user interface116could include one or more input/output devices within the set of peripherals108, such as wireless communication system146, touchscreen148, microphone150, and speaker152. Computer system112may control the function of vehicle100based on inputs received from various subsystems (e.g., propulsion system102, sensor system104, control system106, etc.), as well as from user interface116. For example, computer system112may utilize input from sensor system104in order to estimate the output produced by propulsion system102and control system106. Depending upon the embodiment, computer system112could be operable to monitor many aspects of vehicle100and its subsystems. In some embodiments, computer system112may disable some or all functions of the vehicle100based on signals received from sensor system104. The components of vehicle100could be configured to work in an interconnected fashion with other components within or outside their respective systems. 
For instance, in an example embodiment, camera130could capture a plurality of images that could represent information about a state of a surrounding environment of vehicle100operating in an autonomous or semi-autonomous mode. The state of the surrounding environment could include parameters of the road on which the vehicle is operating. For example, computer vision system140may be able to recognize the slope (grade) or other features based on the plurality of images of a roadway. Additionally, the combination of GPS122and the features recognized by computer vision system140may be used with map data stored in data storage114to determine specific road parameters. Further, radar126and/or laser rangefinder/lidar128, and/or some other environmental mapping, ranging, and/or positioning sensor system may also provide information about the surroundings of the vehicle. In other words, a combination of various sensors (which could be termed input-indication and output-indication sensors) and computer system112could interact to provide an indication of an input provided to control a vehicle or an indication of the surroundings of a vehicle. In some embodiments, computer system112may make a determination about various objects based on data that is provided by systems other than the radio system. For example, vehicle100may have lasers or other optical sensors configured to sense objects in a field of view of the vehicle. Computer system112may use the outputs from the various sensors to determine information about objects in a field of view of the vehicle, and may determine distance and direction information to the various objects. Computer system112may also determine whether objects are desirable or undesirable based on the outputs from the various sensors. AlthoughFIG.1shows various components of vehicle100(i.e., wireless communication system146, computer system112, data storage114, and user interface116) as being integrated into the vehicle100, one or more of these components could be mounted or associated separately from vehicle100. For example, data storage114could, in part or in full, exist separate from vehicle100. Thus, vehicle100could be provided in the form of device elements that may be located separately or together. The device elements that make up vehicle100could be communicatively coupled together in a wired and/or wireless fashion. FIGS.2A-2Eshows an example vehicle200(e.g., a fully autonomous vehicle or semi-autonomous vehicle, etc.) that can include some or all of the functions described in connection with vehicle100in reference toFIG.1. Although vehicle200is illustrated inFIGS.2A-2Eas a van with side view mirrors216for illustrative purposes, the present disclosure is not so limited. For instance, the vehicle200can represent a truck, a car, a semi-trailer truck, a motorcycle, a golf cart, an off-road vehicle, a farm vehicle, or any other vehicle that is described elsewhere herein (e.g., buses, boats, airplanes, helicopters, drones, lawn mowers, earth movers, submarines, all-terrain vehicles, snowmobiles, aircraft, recreational vehicles, amusement park vehicles, farm equipment, construction equipment or vehicles, warehouse equipment or vehicles, factory equipment or vehicles, trams, trains, trolleys, sidewalk delivery vehicles, and robot devices, etc.). The example vehicle200may include one or more sensor systems202,204,206,208,210,212,214, and218. In some embodiments, sensor systems202,204,206,208,210,212,214, and/or218could represent one or more optical systems (e.g. 
cameras, etc.), one or more lidars, one or more radars, one or more range finders, one or more inertial sensors, one or more humidity sensors, one or more acoustic sensors (e.g., microphones, sonar devices, etc.), or one or more other sensors configured to sense information about an environment surrounding the vehicle200. In other words, any sensor system now known or later created could be coupled to the vehicle200and/or could be utilized in conjunction with various operations of the vehicle200. As an example, a lidar system could be utilized in self-driving or other types of navigation, planning, perception, and/or mapping operations of the vehicle200. In addition, sensor systems202,204,206,208,210,212,214, and/or218could represent a combination of sensors described herein (e.g., one or more lidars and radars; one or more lidars and cameras; one or more cameras and radars; one or more lidars, cameras, and radars; etc.). Note that the number, location, and type of sensor systems (e.g.,202,204, etc.) depicted inFIGS.2A-Eare intended as a non-limiting example of the location, number, and type of such sensor systems of an autonomous or semi-autonomous vehicle. Alternative numbers, locations, types, and configurations of such sensors are possible (e.g., to comport with vehicle size, shape, aerodynamics, fuel economy, aesthetics, or other conditions, to reduce cost, to adapt to specialized environmental or application circumstances, etc.). For example, the sensor systems (e.g.,202,204, etc.) could be disposed in various other locations on the vehicle (e.g., at location216, etc.) and could have fields of view that correspond to internal and/or surrounding environments of the vehicle200. The sensor system202may be mounted atop the vehicle200and may include one or more sensors configured to detect information about an environment surrounding the vehicle200, and output indications of the information. For example, sensor system202can include any combination of cameras, radars, lidars, range finders, inertial sensors, humidity sensors, and acoustic sensors (e.g., microphones, sonar devices, etc.). The sensor system202can include one or more movable mounts that could be operable to adjust the orientation of one or more sensors in the sensor system202. In one embodiment, the movable mount could include a rotating platform that could scan sensors so as to obtain information from each direction around the vehicle200. In another embodiment, the movable mount of the sensor system202could be movable in a scanning fashion within a particular range of angles and/or azimuths and/or elevations. The sensor system202could be mounted atop the roof of a car, although other mounting locations are possible. Additionally, the sensors of sensor system202could be distributed in different locations and need not be collocated in a single location. Furthermore, each sensor of sensor system202can be configured to be moved or scanned independently of other sensors of sensor system202. Additionally or alternatively, multiple sensors may be mounted at one or more of the sensor locations202,204,206,208,210,212,214, and/or218. For example, there may be two lidar devices mounted at a sensor location and/or there may be one lidar device and one radar mounted at a sensor location. The one or more sensor systems202,204,206,208,210,212,214, and/or218could include one or more lidar sensors. 
For example, the lidar sensors could include a plurality of light-emitter devices arranged over a range of angles with respect to a given plane (e.g., the x-y plane, etc.). For example, one or more of the sensor systems202,204,206,208,210,212,214, and/or218may be configured to rotate or pivot about an axis (e.g., the z-axis, etc.) perpendicular to the given plane so as to illuminate an environment surrounding the vehicle200with light pulses. Based on detecting various aspects of reflected light pulses (e.g., the elapsed time of flight, polarization, intensity, etc.), information about the surrounding environment may be determined. In an example embodiment, sensor systems202,204,206,208,210,212,214, and/or218may be configured to provide respective point cloud information that may relate to physical objects within the surrounding environment of the vehicle200. While vehicle200and sensor systems202,204,206,208,210,212,214, and218are illustrated as including certain features, it will be understood that other types of sensor systems are contemplated within the scope of the present disclosure. Further, the example vehicle200can include any of the components described in connection with vehicle100ofFIG.1. In an example configuration, one or more radars can be located on vehicle200. Similar to radar126described above, the one or more radars may include antennas configured to transmit and receive radio waves (e.g., electromagnetic waves having frequencies between 30 Hz and 300 GHz, etc.). Such radio waves may be used to determine the distance to and/or velocity of one or more objects in the surrounding environment of the vehicle200. For example, one or more sensor systems202,204,206,208,210,212,214, and/or218could include one or more radars. In some examples, one or more radars can be located near the rear of the vehicle200(e.g., sensor systems208,210, etc.), to actively scan the environment near the back of the vehicle200for the presence of radio-reflective objects. Similarly, one or more radars can be located near the front of the vehicle200(e.g., sensor systems212,214, etc.) to actively scan the environment near the front of the vehicle200. A radar can be situated, for example, in a location suitable to illuminate a region including a forward-moving path of the vehicle200without occlusion by other features of the vehicle200. For example, a radar can be embedded in and/or mounted in or near the front bumper, front headlights, cowl, and/or hood, etc. Furthermore, one or more additional radars can be located to actively scan the side and/or rear of the vehicle200for the presence of radio-reflective objects, such as by including such devices in or near the rear bumper, side panels, rocker panels, and/or undercarriage, etc. The vehicle200can include one or more cameras. For example, the one or more sensor systems202,204,206,208,210,212,214, and/or218could include one or more cameras. The camera can be a photosensitive instrument, such as a still camera, a video camera, a thermal imaging camera, a stereo camera, a night vision camera, etc., that is configured to capture a plurality of images of the surrounding environment of the vehicle200. To this end, the camera can be configured to detect visible light, and can additionally or alternatively be configured to detect light from other portions of the spectrum, such as infrared or ultraviolet light. The camera can be a two-dimensional detector, and can optionally have a three-dimensional spatial range of sensitivity. 
In some embodiments, the camera can include, for example, a range detector configured to generate a two-dimensional image indicating distance from the camera to a number of points in the surrounding environment. To this end, the camera may use one or more range detecting techniques. For example, the camera can provide range information by using a structured light technique in which the vehicle200illuminates an object in the surrounding environment with a predetermined light pattern, such as a grid or checkerboard pattern and uses the camera to detect a reflection of the predetermined light pattern from environmental surroundings. Based on distortions in the reflected light pattern, the vehicle200can determine the distance to the points on the object. The predetermined light pattern may comprise infrared light, or radiation at other suitable wavelengths for such measurements. In some examples, the camera can be mounted inside a front windshield of the vehicle200. Specifically, the camera can be situated to capture images from a forward-looking view with respect to the orientation of the vehicle200. Other mounting locations and viewing angles of the camera can also be used, either inside or outside the vehicle200. Further, the camera can have associated optics operable to provide an adjustable field of view. Still further, the camera can be mounted to vehicle200with a movable mount to vary a pointing angle of the camera, such as via a pan/tilt mechanism. The vehicle200may also include one or more acoustic sensors (e.g., one or more of the sensor systems202,204,206,208,210,212,214,216,218may include one or more acoustic sensors, etc.) used to sense a surrounding environment of vehicle200. Acoustic sensors may include microphones (e.g., piezoelectric microphones, condenser microphones, ribbon microphones, microelectromechanical systems (MEMS) microphones, etc.) used to sense acoustic waves (i.e., pressure differentials) in a fluid (e.g., air, etc.) of the environment surrounding the vehicle200. Such acoustic sensors may be used to identify sounds in the surrounding environment (e.g., sirens, human speech, animal sounds, alarms, etc.) upon which control strategy for vehicle200may be based. For example, if the acoustic sensor detects a siren (e.g., an ambulatory siren, a fire engine siren, etc.), vehicle200may slow down and/or navigate to the edge of a roadway. Although not shown inFIGS.2A-2E, the vehicle200can include a wireless communication system (e.g., similar to the wireless communication system146ofFIG.1and/or in addition to the wireless communication system146ofFIG.1, etc.). The wireless communication system may include wireless transmitters and receivers that could be configured to communicate with devices external or internal to the vehicle200. Specifically, the wireless communication system could include transceivers configured to communicate with other vehicles and/or computing devices, for instance, in a vehicular communication system or a roadway station. Examples of such vehicular communication systems include DSRC, radio frequency identification (RFID), and other proposed communication standards directed towards intelligent transport systems. The vehicle200may include one or more other components in addition to or instead of those shown. The additional components may include electrical or mechanical functionality. A control system of the vehicle200may be configured to control the vehicle200in accordance with a control strategy from among multiple possible control strategies. 
The control system may be configured to receive information from sensors coupled to the vehicle200(on or off the vehicle200), modify the control strategy (and an associated driving behavior) based on the information, and control the vehicle200in accordance with the modified control strategy. The control system further may be configured to monitor the information received from the sensors, and continuously evaluate driving conditions; and also may be configured to modify the control strategy and driving behavior based on changes in the driving conditions. For example, a route taken by a vehicle from one destination to another may be modified based on driving conditions. Additionally or alternatively, the velocity, acceleration, turn angle, follow distance (i.e., distance to a vehicle ahead of the present vehicle), lane selection, etc. could all be modified in response to changes in the driving conditions. FIG.3is a conceptual illustration of wireless communication between various computing systems related to an autonomous or semi-autonomous vehicle, according to example embodiments. In particular, wireless communication may occur between remote computing system302and vehicle200via network304. Wireless communication may also occur between server computing system306and remote computing system302, and between server computing system306and vehicle200. Vehicle200can correspond to various types of vehicles capable of transporting passengers or objects between locations, and may take the form of any one or more of the vehicles discussed above. In some instances, vehicle200may operate in an autonomous or semi-autonomous mode that enables a control system to safely navigate vehicle200between destinations using sensor measurements. When operating in an autonomous or semi-autonomous mode, vehicle200may navigate with or without passengers. As a result, vehicle200may pick up and drop off passengers between desired destinations. Remote computing system302may represent any type of device related to remote assistance techniques, including but not limited to those described herein. Within examples, remote computing system302may represent any type of device configured to (i) receive information related to vehicle200, (ii) provide an interface through which a human operator can in turn perceive the information and input a response related to the information, and (iii) transmit the response to vehicle200or to other devices. Remote computing system302may take various forms, such as a workstation, a desktop computer, a laptop, a tablet, a mobile phone (e.g., a smart phone, etc.), and/or a server. In some examples, remote computing system302may include multiple computing devices operating together in a network configuration. Remote computing system302may include one or more subsystems and components similar or identical to the subsystems and components of vehicle200. At a minimum, remote computing system302may include a processor configured for performing various operations described herein. In some embodiments, remote computing system302may also include a user interface that includes input/output devices, such as a touchscreen and a speaker. Other examples are possible as well. Network304represents infrastructure that enables wireless communication between remote computing system302and vehicle200. Network304also enables wireless communication between server computing system306and remote computing system302, and between server computing system306and vehicle200. 
The position of remote computing system302can vary within examples. For instance, remote computing system302may have a remote position from vehicle200that has a wireless communication via network304. In another example, remote computing system302may correspond to a computing device within vehicle200that is separate from vehicle200, but with which a human operator can interact while a passenger or driver of vehicle200. In some examples, remote computing system302may be a computing device with a touchscreen operable by the passenger of vehicle200. In some embodiments, operations described herein that are performed by remote computing system302may be additionally or alternatively performed by vehicle200(i.e., by any system(s) or subsystem(s) of vehicle200). In other words, vehicle200may be configured to provide a remote assistance mechanism with which a driver or passenger of the vehicle can interact. Server computing system306may be configured to wirelessly communicate with remote computing system302and vehicle200via network304(or perhaps directly with remote computing system302and/or vehicle200). Server computing system306may represent any computing device configured to receive, store, determine, and/or send information relating to vehicle200and the remote assistance thereof. As such, server computing system306may be configured to perform any operation(s), or portions of such operation(s), that is/are described herein as performed by remote computing system302and/or vehicle200. Some embodiments of wireless communication related to remote assistance may utilize server computing system306, while others may not. Server computing system306may include one or more subsystems and components similar or identical to the subsystems and components of remote computing system302and/or vehicle200, such as a processor configured for performing various operations described herein, and a wireless communication interface for receiving information from, and providing information to, remote computing system302and vehicle200. The various systems described above may perform various operations. These operations and related features will now be described. In line with the discussion above, a computing system (e.g., remote computing system302, server computing system306, a computing system local to vehicle200, etc.) may operate to use a camera to capture images of the surrounding environment of an autonomous or semi-autonomous vehicle. In general, at least one computing system will be able to analyze the images and possibly control the autonomous or semi-autonomous vehicle. In some embodiments, to facilitate autonomous or semi-autonomous operation, a vehicle (e.g., vehicle200, etc.) may receive data representing objects in an environment surrounding the vehicle (also referred to herein as “environment data”) in a variety of ways. A sensor system on the vehicle may provide the environment data representing objects of the surrounding environment. For example, the vehicle may have various sensors, including a camera, a radar unit, a laser range finder, a microphone, a radio unit, and other sensors. Each of these sensors may communicate environment data to a processor in the vehicle about information each respective sensor receives. In one example, a camera may be configured to capture still images and/or video. In some embodiments, the vehicle may have more than one camera positioned in different orientations. 
Also, in some embodiments, the camera may be able to move to capture images and/or video in different directions. The camera may be configured to store captured images and video to a memory for later processing by a processing system of the vehicle. The captured images and/or video may be the environment data. Further, the camera may include an image sensor as described herein. In another example, a radar unit may be configured to transmit an electromagnetic signal that will be reflected by various objects near the vehicle, and then capture electromagnetic signals that reflect off the objects. The captured reflected electromagnetic signals may enable the radar system (or processing system) to make various determinations about objects that reflected the electromagnetic signal. For example, the distances to and positions of various reflecting objects may be determined. In some embodiments, the vehicle may have more than one radar in different orientations. The radar system may be configured to store captured information to a memory for later processing by a processing system of the vehicle. The information captured by the radar system may be environment data. In another example, a laser range finder may be configured to transmit an electromagnetic signal (e.g., infrared light, such as that from a gas or diode laser, or other possible light source) that will be reflected by target objects near the vehicle. The laser range finder may be able to capture the reflected electromagnetic (e.g., infrared light, etc.) signals. The captured reflected electromagnetic signals may enable the range-finding system (or processing system) to determine a range to various objects. The laser range finder may also be able to determine a velocity or speed of target objects and store it as environment data. Additionally, in an example, a microphone may be configured to capture audio of the environment surrounding the vehicle. Sounds captured by the microphone may include emergency vehicle sirens and the sounds of other vehicles. For example, the microphone may capture the sound of the siren of an ambulance, fire engine, or police vehicle. A processing system may be able to identify that the captured audio signal is indicative of an emergency vehicle. In another example, the microphone may capture the sound of an exhaust of another vehicle, such as that from a motorcycle. A processing system may be able to identify that the captured audio signal is indicative of a motorcycle. The data captured by the microphone may form a portion of the environment data. In yet another example, the radio unit may be configured to transmit an electromagnetic signal that may take the form of a Bluetooth signal, 802.11 signal, and/or other radio technology signal. The first electromagnetic radiation signal may be transmitted via one or more antennas located in a radio unit. Further, the first electromagnetic radiation signal may be transmitted with one of many different radio-signaling modes. However, in some embodiments it is desirable to transmit the first electromagnetic radiation signal with a signaling mode that requests a response from devices located near the autonomous or semi-autonomous vehicle. The processing system may be able to detect nearby devices based on the responses communicated back to the radio unit and use this communicated information as a portion of the environment data. 
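As a non-limiting illustration of how environment data from the sensors described above might be gathered into a single structure for later processing, the following Python sketch defines a hypothetical EnvironmentData container and a helper that merges successive sensor snapshots. The class, field, and function names are assumptions chosen only for illustration and do not correspond to any particular implementation described herein.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class EnvironmentData:
    """Hypothetical aggregate of per-sensor readings (names are illustrative only)."""
    camera_frames: List[bytes] = field(default_factory=list)                 # still images / video
    radar_returns: List[Tuple[float, float]] = field(default_factory=list)   # (range m, bearing rad)
    lidar_points: List[Tuple[float, float, float]] = field(default_factory=list)  # (x, y, z) in meters
    audio_clips: List[bytes] = field(default_factory=list)                   # microphone captures
    nearby_devices: List[str] = field(default_factory=list)                  # radio-unit responses

def merge_readings(dest: EnvironmentData, src: EnvironmentData) -> EnvironmentData:
    """Combine two snapshots of environment data into one (e.g., successive sensor polls)."""
    dest.camera_frames += src.camera_frames
    dest.radar_returns += src.radar_returns
    dest.lidar_points += src.lidar_points
    dest.audio_clips += src.audio_clips
    dest.nearby_devices += src.nearby_devices
    return dest

# Example: a radar return and a detected nearby device folded into one snapshot.
snapshot = merge_readings(
    EnvironmentData(radar_returns=[(42.0, 0.15)]),
    EnvironmentData(nearby_devices=["bt:aa:bb:cc:dd:ee:ff"]),
)
print(len(snapshot.radar_returns), len(snapshot.nearby_devices))  # 1 1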
In some embodiments, the processing system may be able to combine information from the various sensors in order to make further determinations of the surrounding environment of the vehicle. For example, the processing system may combine data from both radar information and a captured image to determine if another vehicle or pedestrian is in front of the autonomous or semi-autonomous vehicle. In other embodiments, other combinations of sensor data may be used by the processing system to make determinations about the surrounding environment. While operating in an autonomous mode (or semi-autonomous mode), the vehicle may control its operation with little-to-no human input. For example, a human-operator may enter an address into the vehicle and the vehicle may then be able to drive, without further input from the human (e.g., the human does not have to steer or touch the brake/gas pedals, etc.), to the specified destination. Further, while the vehicle is operating autonomously or semi-autonomously, the sensor system may be receiving environment data. The processing system of the vehicle may alter the control of the vehicle based on environment data received from the various sensors. In some examples, the vehicle may alter a velocity of the vehicle in response to environment data from the various sensors. The vehicle may change velocity in order to avoid obstacles, obey traffic laws, etc. When a processing system in the vehicle identifies objects near the vehicle, the vehicle may be able to change velocity, or alter the movement in another way. When the vehicle detects an object but is not highly confident in the detection of the object, the vehicle can request a human operator (or a more powerful computer) to perform one or more remote assistance tasks, such as (i) confirm whether the object is in fact present in the surrounding environment (e.g., if there is actually a stop sign or if there is actually no stop sign present, etc.), (ii) confirm whether the vehicle's identification of the object is correct, (iii) correct the identification if the identification was incorrect and/or (iv) provide a supplemental instruction (or modify a present instruction) for the autonomous or semi-autonomous vehicle. Remote assistance tasks may also include the human operator providing an instruction to control operation of the vehicle (e.g., instruct the vehicle to stop at a stop sign if the human operator determines that the object is a stop sign, etc.), although in some scenarios, the vehicle itself may control its own operation based on the human operator's feedback related to the identification of the object. To facilitate this, the vehicle may analyze the environment data representing objects of the surrounding environment to determine at least one object having a detection confidence below a threshold. A processor in the vehicle may be configured to detect various objects of the surrounding environment based on environment data from various sensors. For example, in one embodiment, the processor may be configured to detect objects that may be important for the vehicle to recognize. Such objects may include pedestrians, bicyclists, street signs, other vehicles, indicator signals on other vehicles, and other various objects detected in the captured environment data. The detection confidence may be indicative of a likelihood that the determined object is correctly identified in the surrounding environment, or is present in the surrounding environment. 
For example, the processor may perform object detection of objects within image data in the received environment data, and determine that at least one object has the detection confidence below the threshold based on being unable to identify the object with a detection confidence above the threshold. If a result of an object detection or object recognition of the object is inconclusive, then the detection confidence may be low or below the set threshold. The vehicle may detect objects of the surrounding environment in various ways depending on the source of the environment data. In some embodiments, the environment data may come from a camera and be image or video data. In other embodiments, the environment data may come from a lidar unit. The vehicle may analyze the captured image or video data to identify objects in the image or video data. The methods and apparatuses may be configured to monitor image and/or video data for the presence of objects of the surrounding environment. In other embodiments, the environment data may be radar, audio, or other data. The vehicle may be configured to identify objects of the surrounding environment based on the radar, audio, or other data. In some embodiments, the techniques the vehicle uses to detect objects may be based on a set of known data. For example, data related to environmental objects may be stored to a memory located in the vehicle. The vehicle may compare received data to the stored data to determine objects. In other embodiments, the vehicle may be configured to determine objects based on the context of the data. For example, street signs related to construction may generally have an orange color. Accordingly, the vehicle may be configured to detect objects that are orange, and located near the side of roadways as construction-related street signs. Additionally, when the processing system of the vehicle detects objects in the captured data, it also may calculate a confidence for each object. Further, the vehicle may also have a confidence threshold. The confidence threshold may vary depending on the type of object being detected. For example, the confidence threshold may be lower for an object that may require a quick responsive action from the vehicle, such as brake lights on another vehicle. However, in other embodiments, the confidence threshold may be the same for all detected objects. When the confidence associated with a detected object is greater than the confidence threshold, the vehicle may assume the object was correctly recognized and responsively adjust the control of the vehicle based on that assumption. When the confidence associated with a detected object is less than the confidence threshold, the actions that the vehicle takes may vary. In some embodiments, the vehicle may react as if the detected object is present despite the low confidence level. In other embodiments, the vehicle may react as if the detected object is not present. When the vehicle detects an object of the surrounding environment, it may also calculate a confidence associated with the specific detected object. The confidence may be calculated in various ways depending on the embodiment. In one example, when detecting objects of the surrounding environment, the vehicle may compare environment data to predetermined data relating to known objects. The closer the match between the environment data and the predetermined data, the higher the confidence. 
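To make the confidence-threshold logic described above concrete, the following Python sketch assigns a per-type confidence threshold, computes a simple template-match confidence, and flags objects whose confidence falls below the threshold as candidates for a remote assistance request. The threshold values, object types, and matching metric are assumptions selected purely for illustration and are not taken from any particular embodiment.

# Hypothetical per-type confidence thresholds (lower for objects that may require a quick response).
CONFIDENCE_THRESHOLDS = {
    "brake_lights": 0.50,
    "stop_sign": 0.80,
    "pedestrian": 0.75,
    "default": 0.70,
}

def match_confidence(observed: list, template: list) -> float:
    """Toy matching metric: 1.0 when the observed feature vector equals the stored template."""
    if len(observed) != len(template):
        return 0.0
    error = sum(abs(o - t) for o, t in zip(observed, template)) / len(template)
    return max(0.0, 1.0 - error)

def needs_remote_assistance(object_type: str, observed: list, template: list) -> bool:
    """Return True when the detection confidence is below the threshold for this object type."""
    threshold = CONFIDENCE_THRESHOLDS.get(object_type, CONFIDENCE_THRESHOLDS["default"])
    confidence = match_confidence(observed, template)
    return confidence < threshold

# Example: a marginal stop-sign match falls below its threshold and triggers a request.
print(needs_remote_assistance("stop_sign", [0.9, 0.4, 0.7], [1.0, 1.0, 1.0]))  # True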
In other embodiments, the vehicle may use mathematical analysis of the environment data to determine the confidence associated with the objects. In response to determining that an object has a detection confidence that is below the threshold, the vehicle may transmit, to the remote computing system, a request for remote assistance with the identification of the object. As discussed above, the remote computing system may take various forms. For example, the remote computing system may be a computing device within the vehicle that is separate from the vehicle, but with which a human operator can interact while a passenger or driver of the vehicle, such as a touchscreen interface for displaying remote assistance information. Additionally or alternatively, as another example, the remote computing system may be a remote computer terminal or other device that is located at a location that is not near the vehicle. The request for remote assistance may include the environment data that includes the object, such as image data, audio data, etc. The vehicle may transmit the environment data to the remote computing system over a network (e.g., network304, etc.), and in some embodiments, via a server (e.g., server computing system306, etc.). The human operator of the remote computing system may in turn use the environment data as a basis for responding to the request. In some embodiments, when the object is detected as having a confidence below the confidence threshold, the object may be given a preliminary identification, and the vehicle may be configured to adjust the operation of the vehicle in response to the preliminary identification. Such an adjustment of operation may take the form of stopping the vehicle, switching the vehicle to a human-controlled mode, changing a velocity of the vehicle (e.g., a speed and/or direction, etc.), among other possible adjustments. In other embodiments, even if the vehicle detects an object having a confidence that meets or exceeds the threshold, the vehicle may operate in accordance with the detected object (e.g., come to a stop if the object is identified with high confidence as a stop sign, etc.), but may be configured to request remote assistance at the same time as (or at a later time from) when the vehicle operates in accordance with the detected object. FIG.4Ais a block diagram of a system, according to example embodiments. In particular,FIG.4Ashows a system400that includes a system controller402, a lidar device410, a plurality of sensors412, and a plurality of controllable components414. System controller402includes processor(s)404, a memory406, and instructions408stored on the memory406and executable by the processor(s)404to perform functions. The processor(s)404can include one or more processors, such as one or more general-purpose microprocessors (e.g., having a single core or multiple cores, etc.) and/or one or more special purpose microprocessors. The one or more processors may include, for instance, one or more central processing units (CPUs), one or more microcontrollers, one or more graphical processing units (GPUs), one or more tensor processing units (TPUs), one or more ASICs, and/or one or more field-programmable gate arrays (FPGAs). Other types of processors, computers, or devices configured to carry out software instructions are also contemplated herein. 
The memory406may include a computer-readable medium, such as a non-transitory, computer-readable medium, which may include without limitation, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), non-volatile random-access memory (e.g., flash memory, etc.), a solid state drive (SSD), a hard disk drive (HDD), a Compact Disc (CD), a Digital Video Disk (DVD), a digital tape, read/write (R/W) CDs, R/W DVDs, etc. The lidar device410, described further below, includes a plurality of light emitters configured to emit light (e.g., in light pulses, etc.) and one or more light detectors configured to detect light (e.g., reflected portions of the light pulses, etc.). The lidar device410may generate three-dimensional (3D) point cloud data from outputs of the light detector(s), and provide the 3D point cloud data to the system controller402. The system controller402, in turn, may perform operations on the 3D point cloud data to determine the characteristics of a surrounding environment (e.g., relative positions of objects within a surrounding environment, edge detection, object detection, proximity sensing, etc.). Similarly, the system controller402may use outputs from the plurality of sensors412to determine the characteristics of the system400and/or characteristics of the surrounding environment. For example, the sensors412may include one or more of a GPS, an IMU, an image capture device (e.g., a camera, etc.), a light sensor, a heat sensor, and other sensors indicative of parameters relevant to the system400and/or the surrounding environment. The lidar device410is depicted as separate from the sensors412for purposes of example, and may be considered as part of or as the sensors412in some examples. Based on characteristics of the system400and/or the surrounding environment determined by the system controller402based on the outputs from the lidar device410and the sensors412, the system controller402may control the controllable components414to perform one or more actions. For example, the system400may correspond to a vehicle, in which case the controllable components414may include a braking system, a turning system, and/or an accelerating system of the vehicle, and the system controller402may change aspects of these controllable components based on characteristics determined from the lidar device410and/or sensors412(e.g., when the system controller402controls the vehicle in an autonomous or semi-autonomous mode, etc.). Within examples, the lidar device410and the sensors412are also controllable by the system controller402. FIG.4Bis a block diagram of a lidar device, according to an example embodiment. In particular,FIG.4Bshows a lidar device410, having a controller416configured to control a plurality of light emitters424and one or more light detector(s), e.g., a plurality of light detectors426, etc. The lidar device410further includes a firing circuit428configured to select and provide power to respective light emitters of the plurality of light emitters424and may include a selector circuit430configured to select respective light detectors of the plurality of light detectors426. The controller416includes processor(s)418, a memory420, and instructions422stored on the memory420. Similar to processor(s)404, the processor(s)418can include one or more processors, such as one or more general-purpose microprocessors and/or one or more special purpose microprocessors. 
The one or more processors may include, for instance, one or more CPUs, one or more microcontrollers, one or more GPUs, one or more TPUs, one or more ASICs, and/or one or more FPGAs. Other types of processors, computers, or devices configured to carry out software instructions are also contemplated herein. Similar to memory406, the memory420may include a computer-readable medium, such as a non-transitory, computer-readable medium, such as, but not limited to, ROM, PROM, EPROM, EEPROM, non-volatile random-access memory (e.g., flash memory, etc.), a SSD, a HDD, a CD, a DVD, a digital tape, R/W CDs, R/W DVDs, etc. The instructions422are stored on memory420and executable by the processor(s)418to perform functions related to controlling the firing circuit428and the selector circuit430, for generating 3D point cloud data, and for processing the 3D point cloud data (or perhaps facilitating processing the 3D point cloud data by another computing device, such as the system controller402). The controller416can determine 3D point cloud data by using the light emitters424to emit pulses of light. A time of emission is established for each light emitter and a relative location at the time of emission is also tracked. Aspects of a surrounding environment of the lidar device410, such as various objects, reflect the pulses of light. For example, when the lidar device410is in a surrounding environment that includes a road, such objects may include vehicles, signs, pedestrians, road surfaces, construction cones, etc. Some objects may be more reflective than others, such that an intensity of reflected light may indicate a type of object that reflects the light pulses. Further, surfaces of objects may be at different positions relative to the lidar device410, and thus take more or less time to reflect portions of light pulses back to the lidar device410. Accordingly, the controller416may track a detection time at which a reflected light pulse is detected by a light detector and a relative position of the light detector at the detection time. By measuring time differences between emission times and detection times, the controller416can determine how far the light pulses travel prior to being received, and thus a relative distance of a corresponding object. By tracking relative positions at the emission times and detection times the controller416can determine an orientation of the light pulse and reflected light pulse relative to the lidar device410, and thus a relative orientation of the object. By tracking intensities of received light pulses, the controller416can determine how reflective the object is. The 3D point cloud data determined based on this information may thus indicate relative positions of detected reflected light pulses (e.g., within a coordinate system, such as a Cartesian coordinate system, etc.) and intensities of each reflected light pulse. The firing circuit428is used for selecting light emitters for emitting light pulses. The selector circuit430similarly is used for sampling outputs from light detectors. FIG.5illustrates a system500. The system500may correspond to a portion of the lidar device410illustrated inFIG.4B. For example, the system may include a controller416(e.g., an SPI controller, etc.). Though not illustrated, the SPI controller416may include the processor418, the memory420, and/or the instructions422illustrated inFIG.4B. In addition, the system500may include a firing circuit428. 
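Returning to the time-of-flight determination described above for the lidar device410, the following Python sketch shows one way such a computation could look: the range is half the speed of light multiplied by the difference between emission time and detection time, and a point is placed along the emission direction at that range. The function names and the assumption of a single fixed emission direction per pulse are illustrative simplifications rather than a description of any particular firing or detection circuit.

import math

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def pulse_range_m(emit_time_s: float, detect_time_s: float) -> float:
    """Round-trip time of flight converted to a one-way range (meters)."""
    return SPEED_OF_LIGHT_M_PER_S * (detect_time_s - emit_time_s) / 2.0

def point_from_pulse(emit_time_s: float, detect_time_s: float,
                     azimuth_rad: float, elevation_rad: float, intensity: float):
    """Place a 3D point (x, y, z, intensity) along the pulse direction at the measured range."""
    r = pulse_range_m(emit_time_s, detect_time_s)
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z, intensity)

# Example: a reflection detected about 333.6 ns after emission corresponds to roughly a 50 m range.
print(round(pulse_range_m(0.0, 333.6e-9), 1))  # ~50.0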
Although not illustrated inFIG.5, the firing circuit428may be connected to one or more light emitters424(e.g., LEDs, laser diodes, etc.), as illustrated inFIG.4B. The firing circuit428may include an SPI subcontroller510. In some embodiments, the SPI subcontroller510may correspond to a laser controller/firing controller. For example, the SPI subcontroller510may be primarily responsible for driving laser diodes and the SPI controller416may generate a time pulse on a dedicated trigger pin that is used by the SPI subcontroller510. Additionally, like the SPI controller416, the SPI subcontroller510may include a processor, a memory, and/or instructions. The SPI controller416may also receive feedback from the SPI subcontroller510, such as feedback usable to measure pulse width, which may allow for closed-loop control of the trigger signal pulse. Further, in some embodiments, the SPI controller416may control multiple firing circuits/SPI subcontrollers. Still further, in some embodiments, multiple SPI controllers416(each potentially having one or more associated firing circuits/SPI subcontrollers) may be daisy-chained together to form a lidar device. Further, the SPI controller416and the SPI subcontroller510may communicate with one another over an SPI. As illustrated, the SPI may include a MOSI channel502, a MISO channel504, a SSNchannel506, and a SCLK channel508over which the SPI controller416and the SPI subcontroller510communicate. In other words, the SPI may be a four-wire interface. In some embodiments, the SPI controller416may transmit commands (e.g., read commands, write commands, etc.) to the SPI subcontroller510for execution by the SPI subcontroller510over the SPI. For instance, the SPI controller416may transmit a write command that, when executed by the SPI subcontroller510, causes one or more associated light emitters to fire using one or more elements (e.g., capacitors, diodes, switches, transistors, etc.) of the firing circuit428. The MOSI channel502may be a channel over which bits representing data are communicated from the SPI controller416to the SPI subcontroller510. For example, forward error-checking codes, target addresses, and/or write payload data may be communicated from the SPI controller416to the SPI subcontroller510over the MOSI channel502as a series of bits. Additionally or alternatively, indications of whether a read command or a write command is to be executed may be communicated over the MOSI channel502(e.g., as a read/write bit, etc.). Other information may be communicated from the SPI controller416to the SPI subcontroller510as well, and is contemplated herein (e.g., a status of the SPI controller416, a power on/off command, etc.). The MISO channel504may be a channel over which bits representing data are communicated from the SPI subcontroller510back to the SPI controller416. For example, reverse error-checking codes and/or read payload data may be communicated from the SPI subcontroller510to the SPI controller416over the MISO channel504. Additional or alternative data can be communicated, as well, and is contemplated herein. The SSNchannel506may be a channel by which the SPI controller416selects an SPI subcontroller510for interaction. In embodiments where the system500has only a single SPI subcontroller510(e.g., as illustrated inFIG.5, etc.), such a channel may still be used by the SPI controller416to trigger a transaction with the SPI subcontroller510. 
In some embodiments, though, there may be multiple SPI subcontrollers (e.g., associated with multiple firing circuits, etc.). The MOSI channels of each of the SPI subcontrollers may all be tied to a single MOSI channel of the SPI controller. Likewise, the MISO channels of each of the SPI subcontrollers may all be tied to a single MISO channel of the SPI controller. Further, the SCLK channels of each of the SPI subcontrollers may all be tied to a single SCLK channel of the SPI controller. However, in such embodiments, the SPI controller may include multiple SSNchannels/ports, each tied to the SSNport of a different SPI subcontroller. Further, the SPI controller may transmit a communication indication signal (e.g., a “0” or a low signal, etc.) to the SSNchannel of the SPI subcontroller with which the SPI controller wishes to interact while transmitting a non-communication indication signal (e.g., a “1” or a high signal, etc.) to the SSNchannels of the remaining SPI subcontrollers. In this way, the SPI subcontrollers may be alerted as to whether communications originating from the SPI controller (e.g., over the MOSI channel, etc.) are intended for that SPI subcontroller and/or the SPI subcontrollers may be alerted as to whether that SPI subcontroller is requested to transmit data (e.g., over the MISO channel, etc.) back to the SPI controller. The SCLK channel508may represent a clock signal transmitted from the SPI controller416to the SPI subcontroller510. The clock signal may be used to ensure that the clock of the SPI subcontroller510is synchronized with the clock of the SPI controller416. For example, the SPI subcontroller510may perform operations based on a rising edge of the clock signal or the falling edge of the clock signal from the SCLK channel508. As such, the operations performed by the SPI subcontroller510and the transmissions to/from the SPI subcontroller510may occur in a manner that is synchronized with the SPI controller416. In some embodiments, the SCLK channel508may be tied directly to an internal clock (e.g., quartz clock, etc.) of the SPI controller416. While a four-channel SPI is described above and throughout the disclosure, it is understood that other numbers of channels and communication protocols are also possible and are contemplated herein. For example, communication between the SPI controller416and the SPI subcontroller510could occur over one channel, two channels, three channels, five channels, six channels, seven channels, eight channels, nine channels, ten channels, etc. Additionally, while both the SPI controller416and the SPI subcontroller510illustrated inFIG.5have the same number of channels (i.e., four), it is understood that other embodiments are also possible and are contemplated herein. For example, in some embodiments, there may be multiple SPI subcontrollers/firing circuits corresponding to a single SPI controller (e.g., in the case of a lidar device, a central computer of the lidar device may control multiple laser drivers/firing circuits, etc.). In such embodiments, the SPI controller may have multiple SSNports (e.g., SS1, SS2, SS3, etc.), with one SSNport corresponding to one of the SPI subcontrollers. However, each of the SPI subcontrollers may only have a single SSNport. In this way, for example, the SPI controller may output a “0” to a select SSNchannel, while outputting a “1” to the other SSNchannels, in order to select one of the SPI subcontrollers to interact with. 
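A minimal sketch of the shared-bus selection just described is given below in Python, assuming a controller with one SS line per subcontroller and shared MOSI, MISO, and SCLK lines. Driving a "0" on exactly one SS line selects that subcontroller for the transaction while the others ignore the bus. The GPIO-style write_pin interface and the pin and device names are placeholder assumptions, not an API of any specific device.

from typing import Callable, Dict

def select_subcontroller(write_pin: Callable[[str, int], None],
                         ss_pins: Dict[str, str],
                         target: str) -> None:
    """Drive the target's SS line low ('0') and every other SS line high ('1').

    write_pin(pin_name, level) is a hypothetical GPIO write; ss_pins maps a
    subcontroller name (e.g., 'laser_driver_1') to its SS pin (e.g., 'SS1').
    """
    for name, pin in ss_pins.items():
        write_pin(pin, 0 if name == target else 1)

# Example with a stub that simply records the driven pin levels.
levels = {}
select_subcontroller(lambda pin, level: levels.__setitem__(pin, level),
                     {"drv1": "SS1", "drv2": "SS2", "drv3": "SS3"},
                     target="drv2")
print(levels)  # {'SS1': 1, 'SS2': 0, 'SS3': 1}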
In such an embodiment, the SPI controller would clearly have a different number of ports/channels than each of the SPI subcontrollers. For example, in an embodiment having three SPI subcontrollers, each of the SPI subcontrollers may have four ports/channels (e.g., MOSI, MISO, SSN, SCLK, etc.) whereas the SPI controller may have seven ports/channels (e.g., MOSI, MISO, SS1, SS2, SS3, SCLK, etc.). Other numbers and arrangements of ports/channels are also possible and are contemplated herein. Further, while certain pieces of data are transmitted over specific channels, as described herein, it is understood that communication of those pieces of data over additional or alternative channels is also possible. Also, the directionality of the channels illustrated inFIG.5by the arrows is solely provided as an example. It is understood that, in various embodiments, various channels may have the reverse directionality (e.g., from the SPI subcontroller510to the SPI controller416as opposed to from the SPI controller416to the SPI subcontroller510, etc.) or be bidirectional. In some embodiments, communications/transactions between the SPI controller416and the SPI subcontroller510may occur using the transmission of data packets. For example, when transmitting a write command to the SPI subcontroller510, the SPI controller416may prepare and transmit a data packet to the SPI subcontroller510(e.g., over the MOSI channel502of the SPI, etc.). An example data packet600is illustrated inFIG.6. In some embodiments, as illustrated inFIG.6, the data packet600may be 4 bytes in length. The data packet600may include a read/write bit602, an address604, payload data606, and a forward error-checking code608. Further, for data that spans multiple bits (e.g., the address604, the payload data606, the forward error-checking code608, etc.), the data may be transmitted with the most-significant bit (MSB) first. However, it is understood that least-significant bit (LSB) first formats are also possible and contemplated herein. The read/write bit602may provide an indication to the SPI subcontroller510of whether the SPI controller416is instructing the SPI subcontroller510to perform a read transaction or a write transaction. For example, if the read/write bit602of the data packet600is a “1,” a write transaction may be indicated, whereas if the read/write bit602of the data packet600is a “0,” a read transaction is indicated. The reverse is equally possible and is contemplated herein. Upon receiving a read/write bit602indicating a write transaction is to be performed, the SPI subcontroller510may begin executing a write routine (e.g., stored as instructions within a memory of the SPI subcontroller510, etc.). Likewise, upon receiving a read/write bit602indicating a read transaction is to be performed, the SPI subcontroller510may begin executing a read routine (e.g., stored as instructions within a memory of the SPI subcontroller510, etc.). The address604may be a series of bits within the data packet600that represents the target address for the transaction. The address604may correspond to a target memory address within a memory of the SPI subcontroller510(e.g., a memory that is part of the integrated circuit of the SPI subcontroller510, etc.). For example, in a write transaction, the address604may correspond to a memory address within the memory of the SPI subcontroller510to which the payload data606is to be written by the SPI subcontroller510. 
Similarly, in a read transaction, the address604may correspond to a memory address within the memory of the SPI subcontroller510from which data is to be read by the SPI subcontroller510and, ultimately, transmitted to the SPI controller416. As illustrated inFIG.6, the address604may be a 7-bit value, beginning with the MSB. In other embodiments, the address604may be greater than or less than 7 bits in length and/or begin with the LSB. Further, the address604may be combined with the read/write bit602to form the first byte of the data packet600. The payload data606may be a series of bits within the data packet600that is to be written to a target address (e.g., the address604, etc.) in a write transaction. In a read transaction, however, the payload data606in the data packet600transmitted from the SPI controller416may be superfluous and, therefore, possibly disregarded by the SPI subcontroller510. As illustrated inFIG.6, the payload data606may be 2 bytes in length, beginning with the MSB. In other embodiments, the payload data606may be greater than or less than 2 bytes in length and/or may begin with the LSB. In some embodiments (e.g., embodiments where the SPI subcontroller510is a laser driver for a lidar device, etc.), the payload data606may be used by the SPI subcontroller510to control the firing characteristics (e.g., firing time, firing sequence, firing power, charging characteristics for associated energy storage devices, etc.) of light emitters (e.g., when the payload data606is written to a memory of the SPI subcontroller510, etc.). For example, the payload data606may be used to select a subset of the light emitters for firing during a subsequent firing cycle. The forward error-checking code608may be a series of bits that is usable by the SPI subcontroller510to determine whether any transmission errors occurred (e.g., as a result of data corruption in the transmission channel, etc.) during transmission of the data packet600. For example, the forward error-checking code608may include a cyclic redundancy check based on a 9-bit polynomial (e.g., a 1-Wire 0x31 polynomial, etc.). Other cyclic redundancy checks based on other polynomials (e.g., 17-bit polynomials, 33-bit polynomials, etc.) and/or other types of error-checking codes (e.g., checksums, etc.) are also possible and are contemplated herein. In an example write transaction, upon receiving the address604and the payload data606, the SPI subcontroller510may calculate a reverse error-checking code (e.g., using the same cyclic redundancy check as was used to generate the forward error-checking code608, etc.). Then, upon receiving the forward error-checking code608, the SPI subcontroller510may compare the forward error-checking code608to the calculated reverse error-checking code. If the two codes are the same, the SPI subcontroller510may proceed to writing the payload data606to the transmitted address604. If, however, the calculated reverse error-checking code is not the same as the transmitted forward error-checking code608, the SPI subcontroller510may refrain from writing the payload data606to the transmitted address604. As illustrated, in some embodiments the forward error-checking code608may be a byte in length, beginning with the MSB. In other embodiments, the forward error-checking code608may be greater than or less than a byte in length and/or may begin with the LSB. 
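To make the packet layout and error-checking behavior described above more concrete, the following Python sketch packs a hypothetical 4-byte write packet (a read/write bit, a 7-bit address, 16 bits of payload, and an 8-bit error-checking code, most-significant bit first), computes the error-checking code as a CRC-8 over the first three bytes using the 0x31 polynomial mentioned above, and shows the receiving side recomputing the code and writing the payload only on a match. The MSB-first CRC variant, the zero initial value, and the choice to compute the code over the first three packet bytes are assumptions made for illustration; an actual device could use different conventions.

def crc8_0x31(data: bytes, init: int = 0x00) -> int:
    """Bitwise CRC-8 with polynomial x^8 + x^5 + x^4 + 1 (0x31), MSB first (illustrative variant)."""
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x31) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def pack_write_packet(address: int, payload: int) -> bytes:
    """Build [R/W=1 | 7-bit address] [payload MSB] [payload LSB] [CRC-8] as 4 bytes."""
    assert 0 <= address <= 0x7F and 0 <= payload <= 0xFFFF
    head = bytes([(1 << 7) | address, (payload >> 8) & 0xFF, payload & 0xFF])
    return head + bytes([crc8_0x31(head)])

def handle_write_packet(packet: bytes, memory: dict) -> bool:
    """Receiver side: recompute the code; write only if it matches the transmitted forward code."""
    head, forward_code = packet[:3], packet[3]
    reverse_code = crc8_0x31(head)          # the 'reverse' code echoed back to the controller
    if reverse_code != forward_code:
        return False                        # refrain from writing on a mismatch
    if head[0] & 0x80:                      # read/write bit set -> write transaction
        address = head[0] & 0x7F
        memory[address] = (head[1] << 8) | head[2]
    return True

# Example: a clean packet is written; a corrupted copy is rejected.
mem = {}
pkt = pack_write_packet(address=0x12, payload=0xBEEF)
print(handle_write_packet(pkt, mem), hex(mem.get(0x12, 0)))   # True 0xbeef
corrupted = bytes([pkt[0] ^ 0x01]) + pkt[1:]
print(handle_write_packet(corrupted, mem))                    # False

In this sketch the same function serves as the forward code generator on the transmitting side and the reverse code calculator on the receiving side, mirroring the symmetric use of the forward and reverse error-checking codes described above.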
It is understood that the data packet600illustrated inFIG.6is provided solely as an example and that other types of data packets are also possible and are contemplated herein. For example, an alternate data packet may include less than or greater than 4 bytes; the read/write bit, the address, the payload data, and/or the error-checking code could be transmitted in a different order; the relative sizes of the read/write bit, the address, the payload data, and/or the error-checking code could be different; the data packet600could be encrypted; etc. Additionally, in some embodiments, the SPI subcontroller510may transmit data to the SPI controller416(e.g., over the MISO channel504, etc.) using one or more data packets. The data packet(s) passed from the SPI subcontroller510to the SPI controller416may not be identical to the data packet600illustrated inFIG.6, though. For example, the data packet(s) passed from the SPI subcontroller510to the SPI controller416may include payload data (e.g., data read from a memory of the SPI subcontroller510, etc.) and an error-checking code (e.g., a reverse error-checking code calculated by the SPI subcontroller510, etc.), but may not include a read/write bit or a target address. Further, the length of the payload data in the data packet(s) passed from the SPI subcontroller510to the SPI controller416, the length of the error-checking code in the data packet(s) passed from the SPI subcontroller510to the SPI controller416, the overall length of the data packet(s) passed from the SPI subcontroller510to the SPI controller416, and/or the order of the data contained within the data packet(s) passed from the SPI subcontroller510to the SPI controller416may be different from the data packet600illustrated inFIG.6. FIG.7is a timing diagram illustrating a write transaction. As illustrated, the write transaction may occur over an SPI (e.g., between the SPI controller416and the SPI subcontroller510illustrated inFIG.5, etc.) utilizing a periodic clock signal on the SCLK channel508. The write transaction illustrated inFIG.7may include the communication of a data packet (e.g., the data packet600illustrated inFIG.6, etc.) from the SPI controller416to the SPI subcontroller510(e.g., over the MOSI channel502, etc.). The data packet may be communicated by the SPI controller416by modulating a signal (e.g., from a low value to a high value or from a high value to a low value, etc.) on the MOSI channel502along a clock edge of the signal of the SCLK channel508(e.g., on a falling edge of the SCLK channel508, as illustrated inFIG.7, etc.). Further, the write transaction may occur on the SPI subcontroller510by selecting the SPI subcontroller510(e.g., from among a plurality of SPI subcontrollers connected to a single SPI controller416, etc.) using a signal on the SSNchannel506. As illustrated, the data packet600may be transmitted over the MOSI channel502while the SSNchannel506has a signal value of “0.” A value of “0” may indicate that the respective SPI subcontroller510is the SPI subcontroller with which the SPI controller416intends to interact with. It is understood that in other embodiments, a signal value of “1” may instead indicate that the respective SPI subcontroller510is the SPI subcontroller with which the SPI controller416intends to interact with. As illustrated, a read/write bit602with a value of “1” may be transmitted over the MOSI channel502. This may indicate to the SPI subcontroller510that a write routine is to be performed. 
In addition, an address604may be communicated using the MOSI channel502. As illustrated, the address604may be communicated one bit at a time, with the MSB first (i.e., from ADDR[6] to ADDR[0]). The address604may correspond to a memory address in the memory of the SPI subcontroller510to which the payload data is to be written during the write routine. Further, the payload data606(i.e., the data to be written to the address604during the write transaction) may be communicated from the SPI controller416to the SPI subcontroller510over the MOSI channel502. As illustrated, the payload data606may be communicated one bit at a time, with the MSB first (i.e., from WDATA[15] to WDATA[0]). As the SPI subcontroller510is receiving the address604and the payload data606, the SPI subcontroller510(e.g., a processor of the SPI subcontroller510executing instructions stored within a memory, etc.) may begin calculating a reverse error-checking code based on the address604and the payload data606. For example, the reverse error-checking code may be calculated according to a polynomial for a cyclic redundancy check. In some embodiments, the SPI subcontroller510may only calculate the reverse error-checking code when the read/write bit602indicates that a write routine is to be performed (e.g., when the read/write bit602is a “1,” as illustrated inFIG.7, etc.). Upon calculating the reverse error-checking code, the SPI subcontroller510may communicate the calculated reverse error-checking code to the SPI controller416over the MISO channel504. As illustrated, the reverse error-checking code may be communicated one bit at a time, with the MSB first (i.e., from R_ACK[7] to R_ACK[0]). Herein, the reverse and forward error-checking codes may be referred to as “ACK” or “NACK,” as they can serve as an acknowledgment (either to the SPI controller416from the SPI subcontroller510or vice versa) of whether a data packet or a piece of a data packet was received properly (e.g., in an uncorrupted fashion, etc.). The SPI controller416may use the reverse error-checking code to determine whether a transmission error occurred (e.g., by comparing the reverse error-checking code to the forward error-checking code608, etc.). Additionally or alternatively, the SPI controller416may transmit the forward error-checking code and the reverse error-checking code to another device (e.g., a central computing device for an autonomous or semi-autonomous vehicle, a fleet server configured to monitor the status of a fleet of autonomous or semi-autonomous vehicles, etc.) to determine whether a transmission error has occurred. If the SPI controller416(or another computing device) determines that a transmission error has occurred (e.g., the forward error-checking code608and the reverse error-checking code are not the same, etc.), appropriate remedial action may be taken. Such action may include the SPI controller416retransmitting the data packet600(or a piece of the data packet600, such as the payload data606) to the SPI subcontroller510over the SPI; the SPI subcontroller510and/or the SPI controller416being repaired or replaced; a flag being set indicating that the SPI subcontroller510, the SPI controller416, an associated lidar device, an associated firing circuit, and/or associated light emitters are functioning improperly and/or require repair; a decommissioning of the SPI subcontroller510, the SPI controller416, an associated lidar device, an associated firing circuit, and/or associated light emitters; etc. 
If, however, the SPI controller416(or another computing device) determines that a transmission error has not occurred (e.g., the forward error-checking code608and the reverse error-checking code are the same, etc.), the SPI controller416and/or an associated computing device may output an indication to one or more computing devices (e.g., a central computing device associated with an autonomous or semi-autonomous vehicle, a fleet server configured to monitor the status of a fleet of autonomous or semi-autonomous vehicles, a mobile computing device, etc.) that the write transaction was performed properly (e.g., that the proper light emitters in an associated lidar device were fired with appropriate firing settings, such as timing, power, etc.). Additionally, the forward error-checking code608may be communicated from the SPI controller416to the SPI subcontroller510over the MOSI channel502. As illustrated, the forward error-checking code may be communicated one bit at a time, with the MSB first (i.e., from F_ACK[7] to F_ACK[0]). Upon receiving the forward error-checking code608, the SPI subcontroller510may compare the forward error-checking code608to a calculated reverse error-checking code (e.g., previously calculated by the SPI subcontroller510based on the received address604and the received payload data606, etc.). If the forward error-checking code608matches the reverse error-checking code, the SPI subcontroller510may proceed to writing the payload data606to the address604within the SPI subcontroller510. In embodiments where the SPI subcontroller510is a laser driver for a lidar device, writing the payload data606to the address604may result in one or more light emitters of the lidar device firing (e.g., during a subsequent firing cycle, etc.) with associated firing parameters (e.g., emission duration, emission power, charging current and/or voltage for capacitors of an associated firing circuit, etc.). If, however, the forward error-checking code608does not match the reverse error-checking code, the SPI subcontroller510may refrain from writing the payload data606to the address604. In embodiments wherein the SPI subcontroller510is a laser driver for a lidar device, this may prevent light emitters of the lidar device from firing (e.g., during a subsequent firing cycle, etc.) and/or may maintain the firing parameters used during a previous firing cycle. FIG.8is a timing diagram illustrating a read transaction. As illustrated, the read transaction may occur over an SPI (e.g., between the SPI controller416and the SPI subcontroller510illustrated inFIG.5, etc.) utilizing a periodic clock signal on the SCLK channel508. The read transaction illustrated inFIG.8may include the communication of a data packet (e.g., the data packet600illustrated inFIG.6, etc.) from the SPI controller416to the SPI subcontroller510(e.g., over the MOSI channel502, etc.). The data packet may be communicated by the SPI controller416by modulating a signal (e.g., from a low value to a high value or from a high value to a low value, etc.) on the MOSI channel502along a clock edge of the signal of the SCLK channel508(e.g., on a falling edge of the SCLK channel508, as illustrated inFIG.8, etc.). Further, the read transaction may occur on the SPI subcontroller510by selecting the SPI subcontroller510(e.g., from among a plurality of SPI subcontrollers connected to a single SPI controller416, etc.) using a signal on the SSNchannel506. 
As illustrated, the data packet600may be transmitted over the MOSI channel502while the SSNchannel506has a signal value of “0.” A value of “0” may indicate that the respective SPI subcontroller510is the SPI subcontroller with which the SPI controller416intends to interact. It is understood that in other embodiments, a signal value of “1” may instead indicate that the respective SPI subcontroller510is the SPI subcontroller with which the SPI controller416intends to interact. As illustrated, a read/write bit602with a value of “0” may be transmitted over the MOSI channel502. This may indicate to the SPI subcontroller510that a read routine is to be performed. In addition, an address604may be communicated using the MOSI channel502. As illustrated, the address604may be communicated one bit at a time, with the MSB first (i.e., from ADDR[6] to ADDR[0]). The address604may correspond to a memory address in the memory of the SPI subcontroller510from which the payload data is to be read during the read routine. Given that the transaction depicted inFIG.8is a read transaction, the data in the payload data slot communicated to SPI subcontroller510over the MOSI channel502is not being written to a memory of the SPI subcontroller510. As such, the payload data in the data packet600communicated over the MOSI channel502may be set to a default value (e.g., all 0's, all 1's, or some other value). Alternatively, in some embodiments, because the payload data segment would otherwise go unused, the SPI controller416may assign a specified value to the payload data segment to provide additional robustness against bit-flips or other errors. For example, if a bit-flip occurs in the read/write bit602(e.g., a read value is bit-flipped to a write value, etc.), the SPI subcontroller510may attempt to write the payload data606to the address604if the received forward error-checking code608matches the calculated reverse error-checking code. To prevent this unintended write from occurring, the SPI controller416may send payload data606such that, when an error-checking code is determined based on the payload data606along with a read/write bit602that indicates a write is to be performed, the error-checking code will not correspond to the transmitted forward error-checking code608. As such, if a bit-flip occurs in the read/write bit602(and, therefore, the SPI subcontroller510incorrectly interprets that a write is to be performed), when the SPI subcontroller510calculates the reverse error-checking code based on the address604and/or the payload data606, that reverse error-checking code will not match the received forward error-checking code608. Thus, the SPI subcontroller510will refrain from writing the payload data606to the address604. In this way, additional robustness against unintended write routines may be provided. As such, in embodiments where the SPI subcontroller510represents a laser driver in a lidar device, unintended firings of light emitters and firings of light emitters with unintended firing parameters may also be prevented. Upon receiving the read/write bit602indicating that a read transaction is to be performed and receiving the address604that is to be read from, the SPI subcontroller510may begin reading data from a memory address of the SPI subcontroller510that corresponds to the address604. 
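The bit-flip guard described above (choosing the otherwise-unused payload of a read packet so that a flipped read/write bit cannot produce a matching code) can be sketched as follows in Python, reusing the illustrative CRC and packet layout from the earlier sketch. Because the document leaves the exact computation of the error-checking code open, the sketch simply makes the desired property an explicit check; with a CRC computed over the header byte, a single flipped bit always changes the code, so the check would pass for any payload, but a controller could apply such a check when selecting the payload value under other code definitions. All names and conventions here are assumptions for illustration.

def crc8_0x31(data: bytes, init: int = 0x00) -> int:
    # Same illustrative CRC-8 (polynomial 0x31, MSB first) as in the earlier sketch.
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x31) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def header_byte(write: bool, address: int) -> int:
    """[R/W bit | 7-bit address], matching the illustrative packet layout."""
    return ((1 if write else 0) << 7) | (address & 0x7F)

def read_packet_guards_rw_flip(address: int, payload: int, forward_code: int) -> bool:
    """True when a read packet whose R/W bit is flipped to 'write' in transit would fail
    the receiver's code comparison, so no unintended write of the payload can occur."""
    flipped = bytes([header_byte(True, address), (payload >> 8) & 0xFF, payload & 0xFF])
    return crc8_0x31(flipped) != forward_code

# Example: forward code computed over the genuine read packet; the flipped-bit variant
# no longer matches it, so the guard holds for this payload choice.
addr, payload = 0x12, 0x0000
read_head = bytes([header_byte(False, addr), (payload >> 8) & 0xFF, payload & 0xFF])
forward = crc8_0x31(read_head)
print(read_packet_guards_rw_flip(addr, payload, forward))  # True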
In embodiments where the SPI subcontroller510is a laser driver of a lidar device, the data read from the address604may correspond to firing parameters used in a previous firing of one or more light emitters of the lidar device, for example. This data read from the address604may be provided (e.g., one bit at a time) to the SPI controller416over the MISO channel504, with the MSB first (e.g., from RDATA[15] to RDATA[0]). Further, as the SPI subcontroller510is receiving the address604and the payload data606, the SPI subcontroller510(e.g., a processor of the SPI subcontroller510executing instructions stored within a memory, etc.) may begin calculating a reverse error-checking code based on the address604and the payload data606. For example, the reverse error-checking code may be calculated according to a polynomial for a cyclic redundancy check. Upon calculating the reverse error-checking code, the SPI subcontroller510may communicate the calculated reverse error-checking code to the SPI controller416over the MISO channel504. As illustrated, the reverse error-checking code may be communicated one bit at a time, with the MSB first (i.e., from R_ACK[7] to R_ACK[0]). The SPI controller416may use the reverse error-checking code to determine whether a transmission error occurred (e.g., by comparing the reverse error-checking code to the forward error-checking code608, etc.). Additionally or alternatively, the SPI controller416may transmit the forward error-checking code and the reverse error-checking code to another device (e.g., a central computing device for an autonomous or semi-autonomous vehicle, a fleet server configured to monitor the status of a fleet of autonomous or semi-autonomous vehicles, etc.) to determine whether a transmission error has occurred. If the SPI controller416(or another computing device) determines that a transmission error has occurred (e.g., the forward error-checking code608and the reverse error-checking code are not the same, etc.), appropriate remedial action may be taken. Such action may include the SPI controller416retransmitting the data packet600(or a piece of the data packet600, such as the payload data606) to the SPI subcontroller510over the SPI; the SPI subcontroller510and/or the SPI controller416being repaired or replaced; a flag being set indicating that the SPI subcontroller510, the SPI controller416, an associated lidar device, an associated firing circuit, and/or associated light emitters are functioning improperly and/or require repair; a decommissioning of the SPI subcontroller510, the SPI controller416, an associated lidar device, an associated firing circuit, and/or associated light emitters; etc. If, however, the SPI controller416(or another computing device) determines that a transmission error has not occurred (e.g., the forward error-checking code608and the reverse error-checking code are the same), the SPI controller416and/or an associated computing device may output an indication to one or more computing devices (e.g., a central computing device associated with an autonomous or semi-autonomous vehicle, a fleet server configured to monitor the status of a fleet of autonomous or semi-autonomous vehicles, a mobile computing device, etc.) that the read transaction was performed properly. 
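On the controller side, the comparison of the echoed reverse error-checking code against the transmitted forward error-checking code, and a simple retransmit-then-flag policy, could be sketched as follows in Python. The retry limit and the particular remedial actions are assumptions for illustration; the description above lists several possible actions (retransmission, flagging, repair, decommissioning, etc.) without prescribing one.

def verify_transaction(forward_code: int, reverse_code: int) -> bool:
    """Transmission is considered error-free when the echoed reverse code equals the forward code."""
    return forward_code == reverse_code

def complete_transaction(send_packet, forward_code: int, max_retries: int = 2) -> str:
    """Perform one transaction, check the echoed code, and retransmit or flag on persistent mismatch.

    send_packet() is a placeholder that performs one SPI transaction and returns the
    reverse error-checking code echoed by the subcontroller.
    """
    for _ in range(max_retries + 1):
        reverse_code = send_packet()
        if verify_transaction(forward_code, reverse_code):
            return "ok"        # e.g., report that the transaction was performed properly
    return "flagged"           # e.g., set a flag that the link or device needs attention

# Example with a stub link that corrupts the first attempt and succeeds on the retry.
echoes = iter([0xA5, 0x3C])    # first echo is wrong, second echo matches
print(complete_transaction(lambda: next(echoes), forward_code=0x3C))  # ok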
While only a single write command is illustrated inFIG.7and a single read command is illustrated inFIG.8, it is understood that, in some embodiments, multiple commands may be executed within a single SSNwindow (i.e., within a single cycle of the signal on the SSNchannel corresponding to a given SPI subcontroller510going from low-to-high and back again or from high-to-low and back again). For example, multiple write transactions; multiple write transactions followed by a single read transaction; or multiple write transactions interspersed with multiple read transactions may occur within a single SSNwindow. Other combinations of read/write transactions are also possible and are contemplated herein. FIG.9is a flowchart diagram of a method900, according to example embodiments. The method900may be performed by a system (e.g., the system500illustrated inFIG.5). At block902, the method900may include receiving, at a Master Out Slave In (MOSI) channel of a serial peripheral interface (SPI) of a subcontroller, a write address to be written to within an integrated circuit of the subcontroller. At block904, the method900may include receiving, by the MOSI channel, payload data to be written. At block906, the method900may include receiving, by the MOSI channel, a forward error-checking code usable to identify data corruption within the write address or the payload data. The forward error-checking code may be generated by a controller (e.g., the SPI controller416of the system500illustrated inFIG.5) in order to identify errors in communications between the controller and the subcontroller. At block908, the method900may include calculating, by the integrated circuit, a reverse error-checking code based on the received write address and payload data. The reverse error-checking code may be usable to identify data corruption within the write address or the payload data. At block910, the method900may include providing, to a Master In Slave Out (MISO) channel of the SPI, the calculated reverse error-checking code. At block912, the method900may include comparing, by the integrated circuit, the forward error-checking code to the reverse error-checking code. For example, the integrated circuit may compare the forward error-checking code to the reverse error-checking code bitwise to determine whether each bit in the forward error-checking code matches each bit in the reverse error-checking code. At block914, the method900may include writing, to the write address by the integrated circuit if the forward error-checking code matches the reverse error-checking code, the payload data. The present disclosure is not to be limited in terms of the particular embodiments described in this application, which are intended as illustrations of various aspects. Many modifications and variations can be made without departing from its spirit and scope, as will be apparent to those skilled in the art. Functionally equivalent methods and apparatuses within the scope of the disclosure, in addition to those enumerated herein, will be apparent to those skilled in the art from the foregoing descriptions. Such modifications and variations are intended to fall within the scope of the appended claims. The above detailed description describes various features and functions of the disclosed systems, devices, and methods with reference to the accompanying figures. In the figures, similar symbols typically identify similar components, unless context dictates otherwise. 
The example embodiments described herein and in the figures are not meant to be limiting. Other embodiments can be utilized, and other changes can be made, without departing from the scope of the subject matter presented herein. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the figures, can be arranged, substituted, combined, separated, and designed in a wide variety of different configurations, all of which are explicitly contemplated herein. With respect to any or all of the message flow diagrams, scenarios, and flow charts in the figures and as discussed herein, each step, block, operation, and/or communication can represent a processing of information and/or a transmission of information in accordance with example embodiments. Alternative embodiments are included within the scope of these example embodiments. In these alternative embodiments, for example, operations described as steps, blocks, transmissions, communications, requests, responses, and/or messages can be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved. Further, more or fewer blocks and/or operations can be used with any of the message flow diagrams, scenarios, and flow charts discussed herein, and these message flow diagrams, scenarios, and flow charts can be combined with one another, in part or in whole. A step, block, or operation that represents a processing of information can correspond to circuitry that can be configured to perform the specific logical functions of a herein-described method or technique. Alternatively or additionally, a step or block that represents a processing of information can correspond to a module, a segment, or a portion of program code (including related data). The program code can include one or more instructions executable by a processor for implementing specific logical operations or actions in the method or technique. The program code and/or related data can be stored on any type of computer-readable medium such as a storage device including RAM, a disk drive, a solid state drive, or another storage medium. Moreover, a step, block, or operation that represents one or more information transmissions can correspond to information transmissions between software and/or hardware modules in the same physical device. However, other information transmissions can be between software modules and/or hardware modules in different physical devices. The particular arrangements shown in the figures should not be viewed as limiting. It should be understood that other embodiments can include more or less of each element shown in a given figure. Further, some of the illustrated elements can be combined or omitted. Yet further, an example embodiment can include elements that are not illustrated in the figures. While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope being indicated by the following claims.
113,890
11860731
DETAILED DESCRIPTION In some systems that include a memory device and a host device (e.g., a device that uses the memory device to store information), signals transmitted by the memory device may cause interference (e.g., radio frequency (RF) interference, electromagnetic (EM) interference, electric field interference) with signals that the memory device is meant to receive (e.g., from a host device). In some cases, the signals transmitted by the memory device may affect components that are configured to receive such signals. Thus, signal transmission by a memory device may affect signal reception at the memory device, which may impact various operations performed with the memory device. In some cases, signals received by the memory device may affect signals transmitted by the memory device as well. In accordance with examples as disclosed herein, a memory system may be configured to transmit signals in a manner to reduce interference with other signals. For example, a memory device may transmit a signal that has been modulated to have a smaller voltage swing than the signal being received by the memory device and thereby reduce the interference between the signals. The memory system may communicate, from a host device to a memory device, first signaling that is modulated using a first modulation scheme that spans a first range of voltages. In some examples, the first signaling may include or refer to data signaling, and the first signaling may include signaling over a data channel (e.g., a write signal). The first modulation scheme may include a first quantity of voltage levels (e.g., four (4) voltage levels). The system may also be configured to communicate, from the memory device to the host device, second signaling that is based at least in part on the first signaling. The second signaling may be modulated using a second modulation scheme that spans a second range of voltages that is smaller than the first range of voltages. In some examples, the second signaling may include or refer to error detection signaling, and the second signaling may include signaling over an error detection channel (e.g., an error detection signal or some other channel). The second modulation scheme may include a second quantity of voltage levels. In some examples, the second quantity of voltages may be fewer than the first quantity of voltages (e.g., two (2) voltage signals), and, in some examples, the voltage levels of the second modulation scheme may be a subset of the voltage levels of the first modulation scheme. Using the techniques described herein, interference of the second signaling on the first signaling may be reduced relative to other techniques, which may improve performance of the described memory systems. Features of the disclosure are initially described in the context of a memory system and memory die as described with reference toFIGS.1and2. Features of the disclosure are described in the context of diagrams and illustrative modulation schemes as described with reference toFIGS.3and4. These and other features of the disclosure are further illustrated by and described with reference to apparatus diagrams and flowcharts that relate to channel modulation for a memory device as described with references toFIGS.5through8. FIG.1illustrates an example of a system100that utilizes one or more memory devices in accordance with examples as disclosed herein. 
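Before turning to the system of FIG. 1, the relationship just summarized can be made concrete with a small sketch. The voltage values below are hypothetical placeholders, chosen only to show a four-level first modulation scheme and a two-level second modulation scheme whose levels are a subset of the first scheme's and therefore span a smaller range of voltages.

# Hypothetical voltage levels (volts); chosen only to illustrate the relationship.
FIRST_SCHEME = {0b00: 0.0, 0b01: 0.4, 0b10: 0.8, 0b11: 1.2}   # e.g., PAM4-style, four levels
SECOND_SCHEME = {0b0: 0.0, 0b1: 0.4}                           # two levels, subset of the first

def voltage_span(scheme: dict) -> float:
    levels = scheme.values()
    return max(levels) - min(levels)

assert set(SECOND_SCHEME.values()) <= set(FIRST_SCHEME.values())   # levels are a subset
assert voltage_span(SECOND_SCHEME) < voltage_span(FIRST_SCHEME)    # smaller range of voltages
print(voltage_span(FIRST_SCHEME), voltage_span(SECOND_SCHEME))     # 1.2 vs 0.4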
The system100may include an external memory controller105, a memory device110, and a plurality of channels115coupling the external memory controller105with the memory device110. The system100may include one or more memory devices, but for ease of description the one or more memory devices may be described as a single memory device110. The system100may include aspects of an electronic device, such as a computing device, a mobile computing device, a wireless device, a graphics processing device, a vehicle, and others. The system100may be an example of a portable electronic device. The system100may be an example of a computer, a laptop computer, a tablet computer, a smartphone, a cellular phone, a wearable device, an internet-connected device, a vehicle, or others. The memory device110may be component of the system configured to store data for one or more other components of the system100. At least portions of the system100may be examples of a host device. Such a host device may be an example of a device that uses memory to execute processes such as a computing device, a mobile computing device, a wireless device, a graphics processing device, a computer, a laptop computer, a tablet computer, a smartphone, a cellular phone, a wearable device, an internet-connected device, some other stationary or portable electronic device, a vehicle, or others. In some cases, the host device may refer to the hardware, firmware, software, or a combination thereof that implements the functions of the external memory controller105. In some cases, the external memory controller105may be referred to as a host or host device. In some cases, a memory device110may be an independent device or component that is configured to be in communication with other components of the system100and provide physical memory addresses or other space to potentially be used or referenced by the system100. In some examples, a memory device110may be configurable to work with at least one or a plurality of different types of systems100. Signaling between the components of the system100and the memory device110may be operable to support modulation schemes to modulate the signals, different pin designs for communicating the signals, distinct packaging of the system100and the memory device110, clock signaling and synchronization between the system100and the memory device110, timing conventions, or other factors. The memory device110may be configured to store data for the components of the system100. In some cases, the memory device110may act as a slave-type device to the system100(e.g., responding to and executing commands provided by the system100through the external memory controller105). Such commands may include an access command for an access operation, such as a write command for a write operation, a read command for a read operation, a refresh command for a refresh operation, or other commands. The memory device110may include two or more memory dice160(e.g., memory chips) to support a desired or specified capacity for data storage. The memory device110including two or more memory dice may be referred to as a multi-die memory or package (also referred to as multi-chip memory or package). The system100may further include a processor120, a basic input/output system (BIOS) component125, one or more peripheral components130, and an input/output (I/O) controller135. The components of system100may be in electronic communication with one another using a bus140. The processor120may be configured to control at least portions of the system100. 
The processor120may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or it may be a combination of these types of components. In such cases, the processor120may be an example of a central processing unit (CPU), a graphics processing unit (GPU), a general purpose graphics processing unit (GPGPU), or a system on a chip (SoC), among other examples. The BIOS component125may be a software component that includes a BIOS operated as firmware, which may initialize and run various hardware components of the system100. The BIOS component125may also manage data flow between the processor120and the various components of the system100, e.g., the peripheral components130, the I/O controller135, etc. The BIOS component125may include a program or software stored in read-only memory (ROM), flash memory, or any other non-volatile memory. The peripheral component(s)130may be any input device or output device, or an interface for such devices, that may be integrated into or with the system100. Examples may include disk controllers, sound controller, graphics controller, Ethernet controller, modem, universal serial bus (USB) controller, a serial or parallel port, or peripheral card slots, such as peripheral component interconnect (PCI) or specialized graphics ports. The peripheral component(s)130may be other components understood by those skilled in the art as peripherals. The I/O controller135may manage data communication between the processor120and the peripheral component(s)130, input devices145, or output devices150. The I/O controller135may manage peripherals that are not integrated into or with the system100. In some cases, the I/O controller135may represent a physical connection or port to external peripheral components. The input145may represent a device or signal external to the system100that provides information, signals, or data to the system100or its components. This may include a user interface or interface with or between other devices. In some cases, the input145may be a peripheral that interfaces with system100via one or more peripheral components130or may be managed by the I/O controller135. The output150may represent a device or signal external to the system100configured to receive an output from the system100or any of its components. Examples of the output150may include a display, audio speakers, a printing device, or another processor on printed circuit board, and so forth. In some cases, the output150may be a peripheral that interfaces with the system100via one or more peripheral components130or may be managed by the I/O controller135. The components of system100may be made up of general-purpose or special purpose circuitry designed to carry out their functions. This may include various circuit elements, for example, conductive lines, transistors, capacitors, inductors, resistors, gates, decoders, amplifiers, or other active or passive elements, configured to carry out the functions described herein. The memory device110may include a device memory controller155and one or more memory dice160. Each memory die160may include a local memory controller165(e.g., local memory controller165-a, local memory controller165-b, and/or local memory controller165-N) and a memory array170(e.g., memory array170-a, memory array170-b, and/or memory array170-N). 
A memory array170may be a collection (e.g., a grid) of memory cells, with each memory cell being configured to store at least one bit of digital data. Features of memory arrays170and/or memory cells are described in more detail with reference toFIG.2. The memory device110may be an example of a two-dimensional (2D) array of memory cells or may be an example of a three-dimensional (3D) array of memory cells. For example, a 2D memory device may include a single memory die160. A 3D memory device may include two or more memory dice160(e.g., memory die160-a, memory die160-b, and/or any quantity of memory dice160-N). In a 3D memory device, a plurality of memory dice160-N may be stacked on top of one another or next to one another. In some cases, memory dice160-N in a 3D memory device may be referred to as decks, levels, layers, or dies. A 3D memory device may include any quantity of stacked memory dice160-N (e.g., two high, three high, four high, five high, six high, seven high, eight high). This may increase the quantity of memory cells that may be positioned on a substrate as compared with a single 2D memory device, which in turn may reduce production costs or increase the performance of the memory array, or both. In some 3D memory device, different decks may share at least one common access line such that some decks may share at least one of a word line, a digit line, and/or a plate line. The device memory controller155may include circuits or components configured to control operation of the memory device110. As such, the device memory controller155may include the hardware, firmware, and software that enables the memory device110to perform commands and may be configured to receive, transmit, or execute commands, data, or control information related to the memory device110. The device memory controller155may be configured to communicate with the external memory controller105, the one or more memory dice160, or the processor120. In some cases, the memory device110may receive data and/or commands from the external memory controller105. For example, the memory device110may receive a write command indicating that the memory device110is to store certain data on behalf of a component of the system100(e.g., the processor120) or a read command indicating that the memory device110is to provide certain data stored in a memory die160to a component of the system100(e.g., the processor120). In some cases, the device memory controller155may control operation of the memory device110described herein in conjunction with the local memory controller165of the memory die160. Examples of the components included in the device memory controller155and/or the local memory controllers165may include receivers for demodulating signals received from the external memory controller105, decoders for modulating and transmitting signals to the external memory controller105, logic, decoders, amplifiers, filters, or others. The local memory controller165(e.g., local to a memory die160) may be configured to control operations of the memory die160. Also, the local memory controller165may be configured to communicate (e.g., receive and transmit data and/or commands) with the device memory controller155. The local memory controller165may support the device memory controller155to control operation of the memory device110as described herein. 
In some cases, the memory device110does not include the device memory controller155, and the local memory controller165or the external memory controller105may perform the various functions described herein. As such, the local memory controller165may be configured to communicate with the device memory controller155, with other local memory controllers165, or directly with the external memory controller105or the processor120. The external memory controller105may be configured to enable communication of information, data, and/or commands between components of the system100(e.g., the processor120) and the memory device110. The external memory controller105may act as a liaison between the components of the system100and the memory device110so that the components of the system100may not need to know the details of the memory device's operation. The components of the system100may present requests to the external memory controller105(e.g., read commands or write commands) that the external memory controller105satisfies. The external memory controller105may convert or translate communications exchanged between the components of the system100and the memory device110. In some cases, the external memory controller105may include a system clock that generates a common (source) system clock signal. In some cases, the external memory controller105may include a common data clock that generates a common (source) data clock signal. In some cases, the external memory controller105or other component of the system100, or its functions described herein, may be implemented by the processor120. For example, the external memory controller105may be hardware, firmware, or software, or some combination thereof implemented by the processor120or other component of the system100. While the external memory controller105is depicted as being external to the memory device110, in some cases, the external memory controller105, or its functions described herein, may be implemented by a memory device110. For example, the external memory controller105may be hardware, firmware, or software, or some combination thereof implemented by the device memory controller155or one or more local memory controllers165. In some cases, the external memory controller105may be distributed across the processor120and the memory device110such that portions of the external memory controller105are implemented by the processor120and other portions are implemented by a device memory controller155or a local memory controller165. Likewise, in some cases, one or more functions ascribed herein to the device memory controller155or local memory controller165may, in some cases, be performed by the external memory controller105(either separate from or as included in the processor120). The components of the system100may exchange information with the memory device110using a plurality of channels115. In some examples, the channels115may enable communications between the external memory controller105and the memory device110. Each channel115may include one or more signal paths or transmission mediums (e.g., conductors) between terminals associated with the components of system100. For example, a channel115may include a first terminal including one or more pins or pads at external memory controller105and one or more pins or pads at the memory device110. A pin may be an example of a conductive input or output point of a device of the system100, and a pin may be configured to act as part of a channel. 
In some cases, a pin or pad of a terminal may be part of to a signal path of the channel115. Additional signal paths may be coupled with a terminal of a channel for routing signals within a component of the system100. For example, the memory device110may include signal paths (e.g., signal paths internal to the memory device110or its components, such as internal to a memory die160) that route a signal from a terminal of a channel115to the various components of the memory device110(e.g., a device memory controller155, memory dice160, local memory controllers165, memory arrays170). Channels115(and associated signal paths and terminals) may be dedicated to communicating specific types of information. In some cases, a channel115may be an aggregated channel and thus may include multiple individual channels. For example, a data channel190may be x4 (e.g., including four signal paths), x8 (e.g., including eight signal paths), x16 (including sixteen signal paths), and so forth. Signals communicated over the channels may use a double data rate (DDR) timing scheme. For example, some symbols of a signal may be registered on a rising edge of a clock signal and other symbols of the signal may be registered on a falling edge of the clock signal. Signals communicated over channels may use single data rate (SDR) signaling. For example, one symbol of the signal may be registered for each clock cycle. In some cases, the channels115may include one or more command and address (CA) channels186. The CA channels186may be configured to communicate commands between the external memory controller105and the memory device110including control information associated with the commands (e.g., address information). For example, the CA channel186may include a read command with an address of the desired data. In some cases, the CA channels186may be registered on a rising clock signal edge and/or a falling clock signal edge. In some cases, a CA channel186may include any quantity of signal paths to decode address and command data (e.g., eight or nine signal paths). In some cases, the channels115may include one or more clock signal (CK) channels188. The CK channels188may be configured to communicate one or more common clock signals between the external memory controller105and the memory device110. Each clock signal may be configured to oscillate between a high state and a low state and coordinate the actions of the external memory controller105and the memory device110. In some cases, the clock signal may be a differential output (e.g., a CK_t signal and a CK_c signal) and the signal paths of the CK channels188may be configured accordingly. In some cases, the clock signal may be single ended. A CK channel188may include any quantity of signal paths. In some cases, the clock signal CK (e.g., a CK_t signal and a CK_c signal) may provide a timing reference for command and addressing operations for the memory device110, or other system-wide operations for the memory device110. The clock signal CK therefore may be variously referred to as a control clock signal CK, a command clock signal CK, or a system clock signal CK. The system clock signal CK may be generated by a system clock, which may include one or more hardware components (e.g., oscillators, crystals, logic gates, transistors). In some cases, the channels115may include one or more data (DQ) channels190. The data channels190may be configured to communicate data and/or control information between the external memory controller105and the memory device110. 
For example, the data channels190may communicate information (e.g., bi-directional) to be written to the memory device110or information read from the memory device110. The data channels190may communicate signals that may be modulated using a variety of different modulation schemes (e.g., e.g., non-return-to-zero (NRZ), pulse amplitude modulation (PAM) having some quantity of symbols or voltage levels, such as a PAM4 scheme associated with four symbols or voltage levels). In some cases, the channels115may include one or more other channels192that may be dedicated to other purposes. These other channels192may include any quantity of signal paths. In some cases, the other channels192may include one or more write clock signal (WCK) channels. While the ‘W’ in WCK may nominally stand for “write,” a write clock signal WCK (e.g., a WCK_t signal and a WCK_c signal) may provide a timing reference for access operations generally for the memory device110(e.g., a timing reference for both read and write operations). Accordingly, the write clock signal WCK may also be referred to as a data clock signal WCK. The WCK channels may be configured to communicate a common data clock signal between the external memory controller105and the memory device110. The data clock signal may be configured to coordinate an access operation (e.g., a write operation or read operation) of the external memory controller105and the memory device110. In some cases, the write clock signal may be a differential output (e.g., a WCK_t signal and a WCK_c signal) and the signal paths of the WCK channels may be configured accordingly. A WCK channel may include any quantity of signal paths. The data clock signal WCK may be generated by a data clock, which may include one or more hardware components (e.g., oscillators, crystals, logic gates, transistors, or the like). In some cases, the other channels192may include one or more error detection code (EDC) channels. The EDC channels may be configured to communicate error detection signals, such as checksums, to improve system reliability. An EDC channel may include any quantity of signal paths, and may communicate signals that are modulated using a modulation scheme (e.g., PAM having some quantity of symbols or voltage levels). The channels115may couple the external memory controller105with the memory device110using a variety of different architectures. Examples of the various architectures may include a bus, a point-to-point connection, a crossbar, a high-density interposer such as a silicon interposer, or channels formed in an organic substrate or some combination thereof. For example, in some cases, the signal paths may at least partially include a high-density interposer, such as a silicon interposer or a glass interposer. Signals communicated over the channels115may be modulated using a variety of different modulation schemes. In some cases, a binary-symbol (or binary-level) modulation scheme may be used to modulate signals communicated between the external memory controller105and the memory device110. A binary-symbol modulation scheme may be an example of a M-ary modulation scheme where M is equal to two. Each symbol of a binary-symbol modulation scheme may be configured to represent one bit of digital data (e.g., a symbol may represent a logic 1 or a logic 0). Examples of binary-symbol modulation schemes include, but are not limited to, NRZ, unipolar encoding, bipolar encoding, Manchester encoding, PAM having two symbols (e.g., PAM2), and/or others. 
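As a minimal sketch of a binary-symbol scheme such as NRZ or PAM2, the following maps each bit of a byte to one of two voltage levels, one symbol per bit, and recovers the byte with a threshold comparison. The level values and the decision threshold are illustrative assumptions, not values taken from the disclosure.

# Hypothetical two-level (binary-symbol) mapping: each symbol carries one bit.
PAM2_LEVELS = {0: 0.0, 1: 0.4}   # volts; placeholder values

def pam2_modulate(byte: int):
    """Emit eight symbols (voltages), most significant bit first."""
    return [PAM2_LEVELS[(byte >> shift) & 1] for shift in range(7, -1, -1)]

def pam2_demodulate(voltages, threshold: float = 0.2) -> int:
    """Recover the byte by comparing each symbol against a decision threshold."""
    value = 0
    for v in voltages:
        value = (value << 1) | (1 if v > threshold else 0)
    return value

assert pam2_demodulate(pam2_modulate(0x5A)) == 0x5A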
In some cases, a multi-symbol (or multi-level) modulation scheme may be used to modulate signals communicated between the external memory controller105and the memory device110. A multi-symbol modulation scheme may be an example of a M-ary modulation scheme where M is greater than or equal to three. Each symbol of a multi-symbol modulation scheme may be configured to represent more than one bit of digital data (e.g., where a symbol may represent a logic 00, a logic 01, a logic 10, or a logic 11). Examples of multi-symbol modulation schemes include, but are not limited to, PAM4, PAM8, etc., quadrature amplitude modulation (QAM), quadrature phase shift keying (QPSK), and/or others. A multi-symbol signal or a PAM4 signal may be a signal that is modulated using a modulation scheme that includes at least three levels to encode more than one bit of information. Multi-symbol modulation schemes and symbols may alternatively be referred to as non-binary, multi-bit, or higher-order modulation schemes and symbols. In some examples, signaling transmitted by the memory device110(e.g., over channels115, to a host device) may cause interference, such as radio frequency (RF) interference, electromagnetic (EM) interference, electric field interference, or others. In some examples, interference caused by transmitted signaling may be related to the level or rate of change of current associated with the transmitted signaling, related to the level or rate of change of voltage associated with the transmitted signaling, or various combinations thereof. For example, a relatively faster rate of change of current or voltage of a transmitted signal may be associated with a relatively stronger or faster change in electric or electromagnetic field associated with the transmitted signal, whereas a relatively slower rate of change of current or voltage of a transmitted signal may be associated with a relatively weaker or slower change in electric or electromagnetic field associated with the transmitted signal. The transmitted signaling may be referred to as an aggressor or aggressor signal, and interference may be based at least in part on a level of the aggressor, the rate of change of the aggressor, and other phenomena. In some systems, the interference from transmitted signaling may be incident on a signal-carrying path (e.g., a channel115, a conductive path of the memory device110or a host device, a conductive path between a host device and the memory device110) or a component that is part of a signal-carrying path for a signal that is to be received (e.g., a component of the memory device110or a host device, a component between a host device and the memory device110). For example, an electric or electromagnetic field caused by signaling transmitted by the memory device110, which may be an oscillating or otherwise changing electric or electromagnetic field, may be incident on a channel115that is associated with a signal that is to be received by the memory device110. In some examples, a channel115associated with receiving a signal (e.g., for reception the memory device110) may have a capacitive or inductive link or coupling with an aggressor, such as a capacitive or inductive link or coupling with another channel115, a component of the memory device110(e.g., a transmitter), a component of a host device, or various combinations thereof. 
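Returning to the multi-symbol schemes described at the start of this passage, the sketch below packs one byte into four PAM4 symbols, each carrying two bits, and unpacks it again. The plain binary mapping of bit pairs to levels is an assumption made for clarity; practical devices may use Gray coding or other mappings not specified here.

def pam4_encode(byte: int):
    """Split one byte into four 2-bit symbols (levels 0-3), most significant pair first."""
    return [(byte >> shift) & 0b11 for shift in (6, 4, 2, 0)]

def pam4_decode(symbols):
    """Reassemble four 2-bit symbols into a byte."""
    value = 0
    for sym in symbols:
        value = (value << 2) | (sym & 0b11)
    return value

assert pam4_decode(pam4_encode(0xB6)) == 0xB6
print(pam4_encode(0xB6))   # [2, 3, 1, 2]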
In one example, incident electric or electromagnetic field caused by transmitted signaling may change or disrupt a signal that is to be received (e.g., change or disrupt a current or voltage on the conductive path), or affect components that are configured to receive the signal, which may be referred to as “cross-talk” (e.g., AC cross-talk, capacitive cross-talk) In some examples, such cross-talk may affect (e.g., impair, prevent) the ability of the memory device110to concurrently transmit and receive signals. In accordance with examples as disclosed herein, the system100may be configured to communicate, from a host device to the memory device110(e.g., over a first channel115), first signaling that is modulated using a first modulation scheme that spans a first range of voltages. In some examples, the first signaling may include or refer to data signaling over a data channel (e.g., a write signal, which may be carried on channel that is configured for data, such as a DQ channel), and the first modulation scheme may include a first quantity of voltage levels. The system may also be configured to communicate, from the memory device110to the host device (e.g., over a second channel115), second signaling that is based at least in part on the first signaling. The second signaling may be modulated using a second modulation scheme that spans a second range of voltages that is smaller than the first range of voltages. In some examples, the second signaling may include or refer to error detection signaling over an error detection channel (e.g., error detection information, which may be carried on a channel that is configured for error detection and correction information, such as an EDC channel), or some other channel, and the second modulation scheme may include a second quantity of voltage levels. In some examples, the second quantity of voltages may be fewer than the first quantity of voltages, such as a subset of the first quantity of levels. Using the techniques described herein, interference of the second signaling on the first signaling may be reduced relative to other techniques. For example, by reducing a span of voltages for transmitted signaling, a rate of change associated with the signaling (e.g., a rate of change of voltage or current associated with the transmitted signaling) may be reduced, which may reduce electromagnetic fields, electric fields, or other forms of interference induced by the transmitted signaling. Thus, signal paths associated with received signaling may be exposed to reduced interference, which may improve a device's ability to simultaneously transmit and receive signaling. Accordingly, by reducing a span of voltages for transmitted signaling, communication between a memory device110and a host device may be improved, which may improve performance of the memory system100. FIG.2illustrates an example of a memory die200in accordance with examples as disclosed herein. The memory die200may be an example of the memory dice160described with reference toFIG.1. In some cases, the memory die200may be referred to as a memory chip, a memory device, or an electronic memory apparatus. The memory die200may include one or more memory cells205that are programmable to store different logic states. Each memory cell205may be programmable to store two or more states. For example, the memory cell205may be configured to store one bit of digital logic at a time (e.g., a logic 0 and a logic 1). 
In some cases, a single memory cell205(e.g., a multi-level memory cell) may be configured to store more than one bit of digit logic at a time (e.g., a logic 00, logic 01, logic 10, or a logic 11). A memory cell205may store a charge representative of the programmable states in a capacitor. DRAM architectures may include a capacitor that includes a dielectric material to store a charge representative of the programmable state. In other memory architectures, other storage devices and components are possible. For example, nonlinear dielectric materials may be employed. Operations such as reading and writing may be performed on memory cells205by activating or selecting access lines such as a word line210and/or a digit line215. In some cases, digit lines215may also be referred to as bit lines. References to access lines, word lines and digit lines, or their analogues, are interchangeable without loss of understanding or operation. Activating or selecting a word line210or a digit line215may include applying a voltage to the respective line. The memory die200may include the access lines (e.g., the word lines210and the digit lines215) arranged in a grid-like pattern. Memory cells205may be positioned at intersections of the word lines210and the digit lines215. By biasing a word line210and a digit line215(e.g., applying a voltage to the word line210or the digit line215), a single memory cell205may be accessed at their intersection. Accessing the memory cells205may be controlled through a row decoder220or a column decoder225. For example, a row decoder220may receive a row address from the local memory controller260and activate a word line210based on the received row address. A column decoder225may receive a column address from the local memory controller260and may activate a digit line215based on the received column address. For example, the memory die200may include multiple word lines210, labeled WL_1through WL_M, and multiple digit lines215, labeled DL_1through DL_N, where M and N depend on the size of the memory array. Thus, by activating a word line210and a digit line215, e.g., WL_1and DL_3, the memory cell205at their intersection may be accessed. The intersection of a word line210and a digit line215, in either a two-dimensional or three-dimensional configuration, may be referred to as an address of a memory cell205. The memory cell205may include a logic storage component, such as capacitor230and a switching component235. The capacitor230may be an example of a dielectric capacitor or a ferroelectric capacitor. A first node of the capacitor230may be coupled with the switching component235and a second node of the capacitor230may be coupled with a voltage source240. In some cases, the voltage source240may be the cell plate reference voltage, such as Vpl, or may be ground, such as Vss. In some cases, the voltage source240may be an example of a plate line coupled with a plate line driver. The switching component235may be an example of a transistor or any other type of switch device that selectively establishes or de-establishes electronic communication between two components. Selecting or deselecting the memory cell205may be accomplished by activating or deactivating the switching component235. The capacitor230may be in electronic communication with the digit line215using the switching component235. 
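Stepping back to the access-line addressing described earlier in this passage, selecting a memory cell at the intersection of an activated word line and an activated digit line can be pictured as indexing a two-dimensional grid by a row address and a column address. The sketch below is a deliberately simplified software analogy of the row decoder and column decoder behavior; it ignores the electrical behavior of the cell entirely, and the class name is hypothetical.

class MemoryArraySketch:
    """Toy model: cells at word line / digit line intersections each store one bit."""
    def __init__(self, rows: int, cols: int):
        self.cells = [[0] * cols for _ in range(rows)]

    def access(self, row_address: int, col_address: int) -> int:
        # Row decoder activates word line WL_row; column decoder activates digit line DL_col.
        return self.cells[row_address][col_address]

    def write(self, row_address: int, col_address: int, bit: int) -> None:
        self.cells[row_address][col_address] = bit & 1

array = MemoryArraySketch(rows=4, cols=8)
array.write(1, 3, 1)
print(array.access(1, 3))   # 1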
For example, the capacitor230may be isolated from digit line215when the switching component235is deactivated, and the capacitor230may be coupled with digit line215when the switching component235is activated. In some cases, the switching component235is a transistor and its operation may be controlled by applying a voltage to the transistor gate, where the voltage differential between the transistor gate and transistor source may be greater or less than a threshold voltage of the transistor. In some cases, the switching component235may be a p-type transistor or an n-type transistor. The word line210may be in electronic communication with the gate of the switching component235and may activate/deactivate the switching component235based on a voltage being applied to word line210. A word line210may be a conductive line in electronic communication with a memory cell205that is used to perform access operations on the memory cell205. In some architectures, the word line210may be in electronic communication with a gate of a switching component235of a memory cell205and may be configured to control the switching component235of the memory cell. In some architectures, the word line210may be in electronic communication with a node of the capacitor of the memory cell205and the memory cell205may not include a switching component. A digit line215may be a conductive line that connects the memory cell205with a sense component245. In some architectures, the memory cell205may be selectively coupled with the digit line215during portions of an access operation. For example, the word line210and the switching component235of the memory cell205may be configured to couple and/or isolate the capacitor230of the memory cell205and the digit line215. In some architectures, the memory cell205may be in electronic communication (e.g., constant) with the digit line215. The sense component245may be configured to detect a state (e.g., a charge) stored on the capacitor230of the memory cell205and determine a logic state of the memory cell205based on the stored state. The charge stored by a memory cell205may be extremely small, in some cases. As such, the sense component245may include one or more sense amplifiers to amplify the signal output by the memory cell205. The sense amplifiers may detect small changes in the charge of a digit line215during a read operation and may produce signals corresponding to a logic state 0 or a logic state 1 based on the detected charge. During a read operation, the capacitor230of memory cell205may output a signal (e.g., discharge a charge) to its corresponding digit line215. The signal may cause a voltage of the digit line215to change. The sense component245may be configured to compare the signal received from the memory cell205across the digit line215to a reference signal250(e.g., reference voltage). The sense component245may determine the stored state of the memory cell205based on the comparison. For example, in binary-signaling, if digit line215has a higher voltage than the reference signal250, the sense component245may determine that the stored state of memory cell205is a logic 1 and, if the digit line215has a lower voltage than the reference signal250, the sense component245may determine that the stored state of the memory cell205is a logic 0. The sense component245may include various transistors or amplifiers to detect and amplify a difference in the signals. The detected logic state of memory cell205may be output through column decoder225as output255. 
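For binary signaling, the sense component's decision reduces to comparing the digit line voltage against the reference signal. The sketch below captures only that comparison step; the example voltages and the reference value are placeholders, and an actual sense component amplifies an analog signal rather than comparing numbers in software.

def sense(digit_line_voltage: float, reference_voltage: float) -> int:
    """Return the detected logic state for binary signaling."""
    return 1 if digit_line_voltage > reference_voltage else 0

print(sense(0.62, 0.50))   # logic 1: digit line above the reference
print(sense(0.41, 0.50))   # logic 0: digit line below the reference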
In some cases, the sense component245may be part of another component (e.g., a column decoder225, row decoder220). In some cases, the sense component245may be in electronic communication with the row decoder220or the column decoder225. The local memory controller260may control the operation of memory cells205through the various components (e.g., row decoder220, column decoder225, and sense component245). The local memory controller260may be an example of the local memory controller165described with reference toFIG.1. In some cases, one or more of the row decoder220, column decoder225, and sense component245may be co-located with the local memory controller260. The local memory controller260may be configured to receive commands and/or data from an external memory controller105(or a device memory controller155described with reference toFIG.1), translate the commands and/or data into information that can be used by the memory die200, perform one or more operations on the memory die200, and communicate data from the memory die200to the external memory controller105(or the device memory controller155) in response to performing the one or more operations. The local memory controller260may generate row and column address signals to activate the target word line210and the target digit line215. The local memory controller260may also generate and control various voltages or currents used during the operation of the memory die200. In general, the amplitude, shape, or duration of an applied voltage or current discussed herein may be adjusted or varied and may be different for the various operations discussed in operating the memory die200. In some cases, the local memory controller260may be configured to perform a write operation (e.g., a programming operation) on one or more memory cells205of the memory die200. During a write operation, a memory cell205of the memory die200may be programmed to store a desired logic state. In some cases, a plurality of memory cells205may be programmed during a single write operation. The local memory controller260may identify a target memory cell205on which to perform the write operation. The local memory controller260may identify a target word line210and a target digit line215in electronic communication with the target memory cell205(e.g., the address of the target memory cell205). The local memory controller260may activate the target word line210and the target digit line215(e.g., applying a voltage to the word line210or digit line215), to access the target memory cell205. The local memory controller260may apply a specific signal (e.g., voltage) to the digit line215during the write operation to store a specific state (e.g., charge) in the capacitor230of the memory cell205, the specific state (e.g., charge) may be indicative of a desired logic state. In some cases, the local memory controller260may be configured to perform a read operation (e.g., a sense operation) on one or more memory cells205of the memory die200. During a read operation, the logic state stored in a memory cell205of the memory die200may be determined. In some cases, a plurality of memory cells205may be sensed during a single read operation. The local memory controller260may identify a target memory cell205on which to perform the read operation. The local memory controller260may identify a target word line210and a target digit line215in electronic communication with the target memory cell205(e.g., the address of the target memory cell205). 
The local memory controller260may activate the target word line210and the target digit line215(e.g., applying a voltage to the word line210or digit line215), to access the target memory cell205. The target memory cell205may transfer a signal to the sense component245in response to biasing the access lines. The sense component245may amplify the signal. The local memory controller260may fire the sense component245(e.g., latch the sense component) and thereby compare the signal received from the memory cell205to the reference signal250. Based on that comparison, the sense component245may determine a logic state that is stored on the memory cell205. The local memory controller260may communicate the logic state stored on the memory cell205to the external memory controller105(or the device memory controller155) as part of the read operation. In some memory architectures, accessing the memory cell205may degrade or destroy the logic state stored in a memory cell205. For example, a read operation performed in DRAM architectures may partially or completely discharge the capacitor of the target memory cell. The local memory controller260may perform a re-write operation or a refresh operation to return the memory cell to its original logic state. The local memory controller260may re-write the logic state to the target memory cell after a read operation. In some cases, the re-write operation may be considered part of the read operation. Additionally, activating a single access line, such as a word line210, may disturb the state stored in some memory cells in electronic communication with that access line. Thus, a re-write operation or refresh operation may be performed on one or more memory cells that may not have been accessed. FIG.3illustrates an example of a system300that supports channel modulation for a memory device in accordance with examples as disclosed herein. The system300may include a host device310and a memory device330, and may be configured to support signaling over channels between the host device310and the memory device330(e.g., channels115described with reference toFIG.1). The host device310may include a transmitter320and a receiver325. In some examples, the transmitter320and the receiver325may be part of a transceiver component of the host device310. Although illustrated as including a single transmitter320, in some examples, a host device310may include a transmitter320for each channel of a set of channels, for each pin of a set of pins (e.g., of a set of pins associated with a channel), or various other configurations. Likewise, although illustrated as including a single receiver325, in some examples, a host device310may include a receiver325for each channel of a set of channels, for each pin of a set of pins (e.g., of a set of pins associated with a channel), or various other configurations. In some examples, a transmitter320and a receiver325may be included in a channel-specific transceiver, such as a transceiver configured to communicate signaling over a bidirectional data channel (e.g., a DQ channel), or some other channel, having one or more transmission paths (e.g., one or more pins, one or more conductors). The memory device330may include a transmitter340and a receiver345. In some examples, the transmitter340and the receiver345may be part of a transceiver of the memory device330. 
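Before continuing with the system 300 of FIG. 3, the read-then-restore sequence described above can be sketched in software terms: reading discharges the cell, the sensed state is latched, and the controller re-writes that state so the stored value survives the read. The cell model, charge values, and threshold below are simplified assumptions for illustration only.

class CellSketch:
    """Toy DRAM-like cell: reading discharges it, so the controller restores it afterward."""
    def __init__(self, stored_bit: int):
        self.charge = 1.0 if stored_bit else 0.0

    def discharge_to_digit_line(self) -> float:
        signal, self.charge = self.charge, 0.0     # destructive read
        return signal

def read_with_restore(cell: CellSketch, reference: float = 0.5) -> int:
    signal = cell.discharge_to_digit_line()
    logic_state = 1 if signal > reference else 0   # sense and latch
    cell.charge = 1.0 if logic_state else 0.0      # re-write (restore) the sensed state
    return logic_state

cell = CellSketch(stored_bit=1)
print(read_with_restore(cell), cell.charge)   # 1 1.0 (state restored after the read)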
Although illustrated as including a single transmitter340, in some examples, a memory device330may include a transmitter340for each channel of a set of channels, for each pin of a set of pins (e.g., of a set of pins associated with a channel), or various other configurations. Likewise, although illustrated as including a single receiver345, in some examples, a memory device330may include a receiver345for each channel of a set of channels, for each pin of a set of pins (e.g., of a set of pins associated with a channel), or various other configurations. In some examples, a transmitter340and a receiver345may be included in a channel-specific transceiver, such as a transceiver configured to communicate over a bidirectional channel such as a data channel (e.g., a DQ channel) having one or more transmission paths (e.g., one or more pins, one or more conductors). In another example, the transmitter340or the receiver345may be part of a transceiver configured to communicate data over an error detection channel. The system300may be configured to communicate first signaling350and second signaling360. In some examples, the second signaling360may be based at least in part on the first signaling350(e.g., responsive to the first signaling350, determined based on a calculation or other operation using information carried in the first signaling350). For example, the first signaling350may include or refer to data signaling (e.g., write data, a write signal) over a channel that is configured for data (e.g., a DQ channel), and the second signaling360may include or refer to error detection signaling (e.g., error detection information, a checksum, an error detection signal) over a channel that is configured for error detection and correction information (e.g., an EDC channel) or some other channel. In some examples, the first signaling350and the second signaling360may be associated with a different quantity of signal paths. For example, the first signaling350may be carried on an 8-line data channel and the second signaling360may be carried on a 1-line error detection channel. The memory device330may perform various operations to support error detection (e.g., detecting errors of communication between the host device310and the memory device330, detecting errors in access operations with the memory device330). For example, when the first signaling350includes write data, the memory device330may calculate a checksum or other condensed version of the write data. A checksum of the write data, for example, may be sent to the host device310via the second signaling360. The host device310may also calculate a checksum of the write data, and may compare the calculated value with the received value to detect whether the memory device330properly received or wrote the data, or whether various error recovery operations should be performed. Although described in the context of an error detection scheme having a comparison performed by a host device310, the described techniques performed at one of the host device310or the memory device330may, in some examples, be performed at the other of the host device310or the memory device330(e.g., when the memory device330performs a comparison of a calculated checksum with a checksum received from the host device310). 
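The error detection exchange described above can be sketched end to end: the host transmits write data as the first signaling, the memory device condenses what it received into a checksum and returns it as the second signaling, and the host compares that value against its own locally computed checksum to decide whether error recovery is needed. The XOR-fold checksum and the class below are assumptions chosen for brevity; the disclosure does not prescribe a particular checksum function.

def checksum(data: bytes) -> int:
    """Illustrative 8-bit XOR checksum standing in for the error detection value."""
    value = 0
    for byte in data:
        value ^= byte
    return value

class MemoryDeviceSketch:
    def __init__(self):
        self.stored = b""

    def receive_write(self, data: bytes) -> int:
        # First signaling (e.g., data channel): write data arrives from the host.
        self.stored = data
        # Second signaling (e.g., error detection channel): return a checksum of what was received.
        return checksum(data)

host_data = b"\x12\x34\x56\x78"
memory = MemoryDeviceSketch()
returned = memory.receive_write(host_data)
if returned == checksum(host_data):
    print("write assumed good")
else:
    print("mismatch: trigger error recovery (e.g., retry the write)")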
Moreover, although described in the context of write data and responsive error detection data, the first signaling350and the second signaling360may refer to other signaling exchanged between the host device310and the memory device330, which may or may not be associated with responsive communications. For example, the memory device330may generate checksums for read data transmitted to the host device, command/address data received from the host device, or combinations thereof. In the example of system300, the first signaling350and the second signaling360may illustrate an asymmetry of signal strength, which may be related to an attenuation environment between the host device310and the memory device330. For example, the first signaling350may be relatively strong at the transmitter320and relatively weak at the receiver345(e.g., due to attenuation from the host device310to the memory device330). On the other hand, the second signaling360may be relatively strong at the transmitter340and relatively weak at the receiver325(e.g., due to attenuation from the memory device330to the host device310). Thus, from the perspective of the memory device330, the relatively strong second signaling360compared with the relatively weak first signaling350may illustrate a signal strength asymmetry, which, in some examples, may be associated with the first signaling350being susceptible to interference (e.g., from the second signaling360). Asymmetry in signal strength may cause one signal to be an aggressor signal and another signal to be a victim signal. For example, second signaling360may cause interference, such as radio frequency (RF) interference, electromagnetic (EM) interference, electric field interference, or others on the first signaling350. In some examples, interference caused by the second signaling360may be related to the level or rate of change of current associated with the transmitted signaling, related to the level or rate of change of voltage associated with the transmitted signaling, other phenomena, or various combinations thereof. For example, a relatively faster rate of change of current or voltage of the second signaling360may be associated with a relatively stronger or faster change in electric or electromagnetic field, whereas a relatively slower rate of change of current or voltage of the second signaling360may be associated with a relatively weaker or slower change in electric or electromagnetic field associated with the transmitted signal. Interference from the second signaling360may be experienced on a signal-carrying path (e.g., a channel115, a conductive path of the memory device110or a host device, a conductive path between a host device and the memory device110) associated with the first signaling350, or a component that is part of a signal-carrying path for the first signaling350(e.g., the receiver345, the transmitter320). For example, an electric or electromagnetic field caused by the second signaling360, which may be an oscillating or otherwise changing electric or electromagnetic field, may be incident on a channel configured to carry the first signaling350. In some examples, the incidence or susceptibility of such interference may be associated with a capacitive coupling or an inductive coupling, which may be related to geometry or layout of various components of the system300. 
An incident electric or electromagnetic field caused by the second signaling360may change or disrupt the first signaling350(e.g., change or disrupt a current or voltage on the conductive path), or affect components that are configured to receive the signal (e.g., the receiver345). In some examples, such cross-talk may affect (e.g., impair, prevent) the ability of the memory device330to concurrently transmit and receive signals. The system300may be configured to communicate the first signaling350using a first modulation scheme that spans a first range of voltages, and communicate the second signaling360using a second modulation scheme that spans a second range of voltages. In some cases, the second range of voltages is smaller than the first range of voltages. In some examples, by using the second modulation scheme for the second signaling360that spans a relatively smaller range of voltages, the interference of the second signaling360(e.g., on the first signaling350) may be reduced when compared to a relatively larger range of voltages. For example, interference of the second signaling360may be associated with a rate of change of voltage of the second signaling360, which may be based at least in part on a switching frequency, a modulation rate, or other time interval of the second modulation scheme for changing from one voltage level to another voltage level, and a maximum difference between a highest voltage level and a lowest voltage level of the second modulation scheme. Thus, interference from the second signaling360may be reduced by bringing a highest voltage level and a lowest voltage level closer together in voltage for the second modulation scheme (e.g., as compared with the first modulation scheme). In some examples, the first modulation scheme associated with the first signaling350may refer to a modulation according to a first quantity of levels (e.g., voltage levels). Thus, the transmitter320may include or otherwise refer to a modulator that is configured to process received information (e.g., received from another component of the host device310) and, based on the processed information, output the first signaling350at various voltages of the first quantity of levels. Accordingly, the receiver345may include or otherwise refer to a demodulator that is configured to receive the first signaling350, and decode the associated information according to the first quantity of levels of the first modulation scheme. In various examples, the quantity of levels for the first modulation scheme, or particular voltage levels thereof, may be designed (e.g., as fixed parameters or characteristics) or dynamically configured (e.g., based on communicated signaling, event-driven) at such a modulator (e.g., of the transmitter320) or demodulator (e.g., of the receiver345). In some examples, the second modulation scheme associated with the second signaling360may refer to a modulation according to a second quantity of levels (e.g., voltage levels). Thus, the transmitter340may include or otherwise refer to a modulator that is configured to process received information (e.g., received from another component of the memory device330) and output, based on the processed information, the second signaling360at various voltages of the second quantity of levels. Accordingly, the receiver325may include or otherwise refer to a demodulator that is configured to receive the second signaling360, and decode the associated information according to the second quantity of levels of the second modulation scheme. 
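The interference argument above turns on the worst-case rate of change of voltage: for a fixed transition interval, a scheme whose highest and lowest voltage levels are closer together produces a smaller maximum dV/dt. The transition time and voltage levels in the sketch below are illustrative assumptions, not parameters taken from the disclosure.

def worst_case_slew(levels, transition_time_s: float) -> float:
    """Maximum |dV/dt| if the signal swings across its full range in one transition."""
    return (max(levels) - min(levels)) / transition_time_s

t = 100e-12                                          # assumed 100 ps transition interval
first = worst_case_slew([0.0, 0.4, 0.8, 1.2], t)     # wider first modulation scheme
second = worst_case_slew([0.0, 0.4], t)              # narrower second modulation scheme
print(first, second, first / second)                 # second scheme: roughly 3x lower worst-case slew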
In various examples, the quantity of levels for the second modulation scheme, or particular voltage levels thereof, may be designed (e.g., as fixed parameters or characteristics) or dynamically configured (e.g., based on communicated signaling, event-driven) at such a modulator (e.g., of the transmitter340) or demodulator (e.g., of the receiver325). In various examples, the first quantity of levels and the second quantity of levels may be different. For example, the quantity of voltage levels of the second modulation scheme may be fewer than the quantity of voltage levels of the first modulation scheme. In some examples, the levels of the second modulation scheme may be a subset of the levels of the first modulation scheme. For example, when the first modulation scheme includes a set of voltage levels {V1, V2, V3, and V4}, the second modulation scheme may include a set of levels {V1, V2, and V3}, a set of levels {V2, V3, and V4}, a set of levels {V1, V3, and V4}, a set of voltage levels {V1and V2}, a set of voltage levels {V1and V3}, a set of voltage levels {V1and V4}, a set of voltage levels {V2and V3}, a set of voltage levels {V2and V4}, a set of voltage levels {V3and V4}, or others. In some examples, such as when the second modulation scheme includes fewer voltage levels than the first modulation scheme, the rate of information carried by the second signaling360may be lower than the rate of information carried by the first signaling350(e.g., per signal carrying path). In some examples, a voltage level of the first modulation scheme and the second modulation scheme may be shared based on a particular termination scheme of the system300(e.g., of the host device310or the memory device330). For example, V1may be common to the first modulation scheme and the second modulation scheme when V1is a termination voltage level associated with the system300, which may reduce power consumption in the system300(e.g., due to DC currents associated with the termination voltage). In some examples, the quantity of voltage levels may be associated with a particular modulation architecture, where a PAM4 modulation scheme would include 4 voltage levels, and a PAM2 modulation scheme would include 2 voltage levels, and so on. In some digital systems, it may be advantageous for the quantity of voltage levels of a modulation scheme be a power of 2 (e.g., 2 levels, 4 levels, 8 levels, 16 levels, and so on), but different quantities of voltage levels may be used to support the techniques described herein. In some examples, the first quantity of levels and the second quantity of levels may be the same, but may span different (e.g., larger or smaller) voltage ranges. For example, the first modulation scheme may include a set of voltage levels {V1, V2, V3, and V4} (e.g., in ascending order of voltage) and the second modulation scheme may include a set of levels {V5, V6, V7, and V8} (e.g., in ascending order of voltage). The span of voltage between V8and V5(e.g., V8-V5) may be smaller than the span of voltage between V4and V1(e.g., V4-V1). Thus, the second modulation scheme may be associated with a reduction in interference (e.g., in comparison with the first modulation scheme), and the system300may be designed such that the receiver325can resolve the narrower differences between voltage levels of the second signaling360. In various examples, the span of voltage levels of the first modulation scheme and the span of voltage levels of the second modulation scheme may or may not overlap. 
For example, various scenarios can be considered where the first modulation scheme includes a set of voltage levels {Va, . . . Vb} (e.g., in ascending order), which may be a set of any quantity of voltage levels, and where the second modulation scheme includes a set of voltages {Vc, . . . Vd} (e.g., in ascending order), which may be a set of any quantity of voltage levels. In some examples, for reducing interference from the second signaling360, the span (Vd-Vc) may be less than the span (Vb-Va). For examples where the spans of voltage levels are non-overlapping, both Vcand Vdmay be less than Va, or both Vcand Vdmay be greater than Vb. For examples where the spans of voltage levels are partially overlapping, Vcmay be less than Vaand Vdmay be between Vaand Vb, or Vcmay be between Vaand Vband Vdmay be greater than Vb. For examples where the spans of voltages are entirely overlapping (e.g., with respect to the smaller of the voltage ranges), both Vcand Vdmay be between Vaand Vb, or Vcmay equal Vaand Vdmay be between Vaand Vb, or Vcmay be between Vaand Vband Vdmay be equal to Vb. The selection of a modulation scheme for the first signaling350or the second signaling360, or configuration of different aspects of a modulation scheme for the first signaling350or the second signaling360(e.g., quantities of voltage levels, particular voltage levels), may be performed in the system300in various manners. For example, aspects of the first modulation scheme and the second modulation scheme may be preconfigured in a system based on fixed hardware design (e.g., special-purpose hardware), selectable parameters (e.g., as stored in non-volatile storage of the memory device330or the host device310, a mode register of the memory device or the host device310, as stored in a fuse or other one-time programmable storage of the memory device330or the host device310), or various combinations thereof. In some examples, aspects of the first modulation scheme and the second modulation scheme may be dynamically configured. For example, the host device310, the memory device330, or both may perform various operations, detections, or determinations to dynamically configure the first modulation scheme or the second modulation scheme. In some examples, one of the host device310or the memory device330may perform such operations, detections, or determinations, and signal (e.g., via signaling390) a result, an operating mode, or a configuration to the other of the host device310or the memory device330. In some examples, such operations, detections, or determinations may be performed based at least on a communication or exchange of signaling between the host device310and the memory device330. For example, the host device310and the memory device330may perform a training sequence that includes a set of write operations and a set of error detection operations, and one or more modulation schemes may be selected or configured based on a result of the training sequence. In various examples, such a training sequence can be performed in a device qualification (e.g., in a manufacturing or assembly operation of the memory device330, of the host device310, or of the system300as a whole), in an initialization sequence (e.g., upon powering the memory device330or the host device), or in an event-driven sequence (e.g., upon detecting an operating condition). 
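Because the configuration may come from fixed hardware design, programmable storage, or a training exchange, the selection can be pictured as a lookup with fallbacks. The sketch below is a hypothetical illustration only; the mode-register field, fuse value, and training helper are assumptions rather than elements defined by the disclosure.

```python
# Hypothetical sources for the second modulation scheme's configuration.
MODE_REGISTER = {"edc_levels": None}   # e.g., a writable mode-register field (None = not programmed)
FUSE_SETTING = None                    # e.g., a one-time programmable value burned at qualification
DEFAULT_LEVELS = (0.83, 1.00)          # fallback set of voltage levels for this sketch

def run_training_sequence():
    """Stand-in for a training exchange of write and error detection operations.

    A real sequence would be negotiated between the host device and the memory
    device; here it simply returns a candidate set of voltage levels.
    """
    return (0.67, 1.00)

def resolve_second_scheme_levels():
    """Resolve the voltage levels for the second modulation scheme.

    Order of precedence in this sketch: mode register, then fuse, then a
    training sequence, then a built-in default.
    """
    if MODE_REGISTER["edc_levels"] is not None:
        return MODE_REGISTER["edc_levels"]
    if FUSE_SETTING is not None:
        return FUSE_SETTING
    trained = run_training_sequence()
    return trained if trained is not None else DEFAULT_LEVELS

print(resolve_second_scheme_levels())   # -> (0.67, 1.0) in this sketch
```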
In some examples, a result of such a training sequence, or some other operation based on communicated signaling between the host device310and the memory device330, may be stored in non-volatile storage of the host device310or the memory device330, which may include storage in a one-time programmable storage element (e.g., a fuse). In some examples, the system300may also be configured to communicate third signaling370and fourth signaling380. In some examples, the fourth signaling380may be based at least in part on the third signaling370(e.g., responsive to the third signaling370, determined based on a calculation or other operation using information carried in the third signaling370). For example, the third signaling370may include or refer to data signaling (e.g., read data, a read signal) over a channel that is configured for data (e.g., a DQ channel), and the fourth signaling380may include or refer to error detection signaling (e.g., read operation error detection information, a checksum, an error detection signal) over a channel configured for error detection and correction information (e.g., an EDC channel). Although illustrated separately, the third signaling370and the first signaling350may be carried on a common channel (e.g., a bidirectional data channel), such as when the receiver345and the transmitter340are part of a transceiver of the memory device330or when the transmitter320and the receiver325are part of a transceiver of the host device310. Further, although illustrated separately, the fourth signaling380and the second signaling360may be carried on a common channel (e.g., a unidirectional error detection channel), such as when the host device310performs error detection comparisons for both read operations and write operations. The system300may be configured to communicate the third signaling370using a third modulation scheme, which may be the same as or different than the first modulation scheme (e.g., associated with the first signaling350). For example, when carried on a bidirectional data channel, the first signaling350and the third signaling370may use the same modulation scheme (e.g., a common data signaling modulation scheme), which may include the same quantity of voltage levels as the first modulation scheme. The system300may be configured to communicate the fourth signaling380using a fourth modulation scheme, which may be the same as or different than the second modulation scheme (e.g., associated with the second signaling360). In some examples, such as when carried on a unidirectional error detection channel, the second signaling360and the fourth signaling380may use different modulation schemes (e.g., different voltage levels, a different quantity of voltage levels, voltage levels spanning a different voltage range). For example, because the memory device330is transmitting both the third signaling370and the fourth signaling380, these signals may not be illustrative of signal strength asymmetry. In other words, the third signaling370and the fourth signaling380may both be relatively strong at the transmitter340and relatively weak at the receiver325. Thus, in some examples, the modulation scheme used for the third signaling370and the fourth signaling380may span a same voltage range because one is not relatively more susceptible to interference from the other, and the system300may not use a particular countermeasure for interference between the third signaling370and the fourth signaling380. 
In some examples, using a modulation scheme for the fourth signaling380that has a greater quantity of voltage levels than a modulation scheme for the second signaling360may be advantageous for supporting read operation error detection using a higher information rate on an error detection channel than write operation error detection (e.g., supporting 4-level EDC for read operations and 2-level EDC for write operations). In some examples, using the same modulation scheme for the second signaling360and the fourth signaling380may be advantageous (e.g., to reduce complexity). For example, when the second signaling360is associated with write operation error detection and the fourth signaling380is associated with read operation error detection, using the same modulation scheme for the second signaling360and the fourth signaling380may avoid complexity related to error detection asymmetry (e.g., different demodulation hardware or operations, different checksum calculations). FIG.4illustrates an example400of modulation schemes that support channel modulation for a memory device in accordance with examples as disclosed herein. The example400includes a signal plot410, which may illustrate transitions of signals over time according to a PAM4 modulation scheme using four voltage levels (e.g., having voltages of V0, V1, V2, and V3). In one example of the signal plot410, the voltage levels of the PAM4 modulation scheme may be assigned with voltages of V0=0.50 V, V1=0.67 V, V2=0.83 V, and V3=1.00 V, though other voltage levels or quantities of voltage levels may be assigned in different modulation schemes in accordance with examples as disclosed herein. The signal plot410may illustrate modulation schemes where a switching interval from one symbol to another spans a duration of t1-t0. In one example of the signal plot410, the modulation scheme may support switching signal states (e.g., switching from one symbol to another) in an interval of 100 picoseconds, which may correspond to a modulation rate of 10 GHz. When a signal is switching from one symbol of the modulation scheme to another, a signal path may support a change of voltage from one state to another, or to within some threshold amount of such a change (e.g., according to a time constant behavior of the signal path or components thereof), within a time interval or duration of t1-t0. In some examples, interference of signals illustrated by the signal plot410may be based at least in part on a rate of change associated with the respective signal (e.g., a rate of change in voltage when transitioning from one voltage level to another, a rate of change in charge or current associated with the illustrated change in voltage when transitioning from one voltage level to another). Thus, a steeper slope of a signal of the signal plot410may be illustrative of a signal associated with causing greater interference (e.g., a stronger aggressor). In an illustrative example, an interference from a signal transition that spans adjacent voltage levels of the signal plot410(e.g., a transition of signal411, from V3to V2) may be relatively low when compared to an interference from a signal transition that spans the entire voltage range of the signal plot410(e.g., a transition of signal412, from V3to V0). 
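Because interference is described as being related to the rate of change of voltage over the switching interval, a rough proxy for aggressor strength is the voltage swing of a transition divided by the symbol interval. The calculation below reuses the example voltages and the 100 picosecond interval mentioned for the signal plot, but it is only an illustrative sketch, not a model prescribed by the disclosure.

```python
# Example voltage levels and switching interval from the discussion of the signal plot.
V0, V1, V2, V3 = 0.50, 0.67, 0.83, 1.00   # volts
switching_interval = 100e-12               # seconds (100 ps, i.e. a 10 GHz modulation rate)

def slew_rate(v_start, v_end, interval=switching_interval):
    """Approximate rate of change of voltage (V/s) for a transition completed in one interval."""
    return abs(v_end - v_start) / interval

# A transition between adjacent levels (V3 -> V2) versus a full-range transition (V3 -> V0).
adjacent = slew_rate(V3, V2)
full_range = slew_rate(V3, V0)

print(f"adjacent-level transition: {adjacent:.2e} V/s")
print(f"full-range transition:     {full_range:.2e} V/s")
print(f"full-range transition is {full_range / adjacent:.1f}x steeper")
# The steeper transition is the stronger aggressor under this simple proxy,
# which motivates using a narrower voltage range for the more disruptive signaling.
```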
Although described with reference to slopes of the signal plot410(e.g., a time rate of change of signal voltage), interference of an aggressor may be related to other aspects associated with a range of voltage levels (e.g., a voltage swing) not illustrated by the signal plot410, such as switching phenomena, signal oscillation, or other sources of interference associated with a signal transitioning from one voltage level to another according to a given modulation scheme. In various examples, a system300including a host device310and a memory device330may use voltage levels of the signal plot410differently in different modulation schemes. For example, a first modulation scheme may use a set of voltage levels420(e.g., including V0, V1, V2, and V3) that span a first range of voltage levels (e.g., a span of V3-V0). In some examples, a system may use the set of voltage levels420for modulating data signals (e.g., write data, read data) that may be transmitted by a host device310and received by a memory device330, or transmitted by a memory device330and received by a host device310. In some examples, the set of voltage levels420may include a quantity of four voltage levels, and may correspond to voltage levels of a PAM4 modulation scheme. In some examples of a system300that uses the set of voltage levels420for a first modulation scheme, one or both of the sets of voltage levels430-aor430-bmay be used for another modulation scheme, which may be based on a static or dynamic configuration. As illustrated, each of the set of voltage levels430-aand the set of voltage levels430-bincludes voltage levels that span a smaller range of voltages than the set of voltage levels420. For example, the set of voltage levels430-a(e.g., including V2and V3) span a range of voltages (e.g., a span of V3-V2), and the set of voltage levels430-b(e.g., including V1and V3) span a range of voltages (e.g., a span of V3-V1). In various examples, the set of voltage levels430-aor the set of voltage levels430-bmay include a quantity of two voltage levels, and may correspond to voltage levels of a PAM2 modulation scheme. In some examples, a system300that uses the set of voltage levels420for a first modulation scheme may use either the set of voltage levels430-aor the set of voltage levels430-bfor a second modulation scheme. In some examples, such a second modulation scheme may be used for error detection signaling, such as signaling carrying a checksum or other error detection information that is based at least in part on data signaling (e.g., as carried by signaling modulated using the first modulation scheme). In some examples, the set of voltage levels430-aor the set of voltage levels430-bmay correspond to levels of error detection signaling that is transmitted by a memory device330based on write data signaling received by the memory device. In some examples, the set of voltage levels430-aor the set of voltage levels430-bmay correspond to levels of error detection signaling that is transmitted by a memory device330based on read data signaling transmitted by the memory device. In some examples, the set of voltage levels430-aor the set of voltage levels430-bmay be included in a modulation scheme used to transmit data from a memory device330to a host device310using a unidirectional channel such as a channel configured for error detection and correction information (e.g., an EDC channel). In the example400, the set of voltage levels420may share at least a voltage level V3with the set of voltage levels430-aor the set of voltage levels430-b. 
Thus, voltage level V3may illustrate a voltage level that is shared between a first modulation scheme and a second modulation scheme. In some examples, such a voltage level may be shared based on a termination voltage of the respective system300. In other words, voltage V3may correspond to a termination voltage of the respective system300, which may be associated with zero current through the termination as a result of signals at that voltage level. Such an approach of sharing voltage levels may be beneficial for supporting low power consumption designs. In other examples (e.g., a ground-terminated system), another voltage level may be common between a set of voltages of a first modulation scheme and a set of voltages of a second modulation scheme (e.g., a ground voltage, a virtual ground voltage). In some examples, an intermediate voltage (e.g., V0=0.50 V) may be used as a termination voltage in a system300, and may correspondingly be a shared voltage level of a first modulation scheme and a second modulation scheme. The selection of one of the set of voltage levels430-aor the set of voltage levels430-bmay be performed in the system300in various manners. For example, a configuration that supports the set of voltage levels430-aor a configuration that supports the set of voltage levels430-bmay be preconfigured in a system based on fixed hardware design (e.g., special-purpose hardware), selectable parameters (e.g., as stored in non-volatile storage of the memory device330or the host device310, a mode register of the memory device or the host device310, as stored in a fuse or other one-time programmable storage of the memory device330or the host device310), or various combinations thereof. A system300may be configured with a second modulation scheme that includes the set of voltage levels430-a, for example, to support relatively low interference from an aggressor or relatively low cross-talk, because an interference associated with the set of voltage levels430-a(e.g., based at least in part on a relatively smaller time-derivative of voltage associated with the span of (V3-V2)) may be relatively lower than an interference associated with the set of voltage levels430-b(e.g., based at least in part on a relatively greater time-derivative of voltage associated with the span of (V3-V1)). A system300may be configured with a second modulation scheme that includes the set of voltage levels430-b, for example, to support relatively improved decoding or resolving, because it may be easier to distinguish between symbols conveyed with voltages of V3or V1(e.g., based on a relatively wider “eye height” or resolving window associated with the span of (V3-V1)) than to distinguish between symbols conveyed with voltages of V3or V2(e.g., based on a relatively narrower “eye height” or resolving window associated with the span of (V3-V2)). In some examples, the selection of one of the set of voltage levels430-aor the set of voltage levels430-bmay be dynamically configured. For example, a host device310, a memory device330, or both may perform various operations, detections, or determinations to dynamically configure a modulation scheme to use the set of voltage levels430-aor the set of voltage levels430-b. In some examples, a selection of one of the set of voltage levels430-aor the set of voltage levels430-bmay be performed based at least on a communication or exchange of signaling between a host device310and a memory device330. 
For example, a host device310and the memory device330may perform a training sequence that includes a set of write operations and a set of error detection operations, and the set of voltage levels430-aor the set of voltage levels430-bmay be selected or configured based on a result of the training sequence. In one example, a system300may begin with the set of voltage levels430-bfor a modulation scheme (e.g., for error detection signaling), and if too much interference or cross-talk is experienced (e.g., determining errors with data channel signaling being above a threshold), the system300may instead be configured to use the set of voltage levels430-a. In another example, a system300may begin with the set of voltage levels430-afor a modulation scheme (e.g., for error detection signaling), and if too many errors associated with resolving signaling using the modulation scheme are experienced (e.g., determining errors with resolving error detection signaling being above a threshold), the system300may instead be configured to use the set of voltage levels430-b. In some examples, a system300may use the set of voltage levels420for both write data signaling and read data signaling, which may be an example of a modulation scheme or schemes that use a same set of voltage levels for both directions of signaling (e.g., on a bidirectional channel115). In various examples, a system300that uses the set of voltage levels420for both write data signaling and read data signaling may use either a same set of voltage levels (e.g., a same modulation scheme) for associated error detection signaling or a different set of voltage levels (e.g., a different modulation scheme) for associated error detection signaling. In one example, a system300may use the set of voltage levels430-afor both write data error detection signaling and read data error detection signaling (e.g., in a symmetric error detection configuration). In another example, a system300may use the set of voltage levels430-afor write data error detection signaling and the set of voltage levels420for read data error detection signaling (e.g., in an asymmetric error detection configuration). Although example400is illustrated in the context of configuring (e.g., by design or by dynamic determination) particular voltage levels for a modulation scheme (e.g., according to a set of voltage levels420and either the set of voltage levels430-aor the set of voltage levels430-b), a system300may additionally or alternatively be configured with a particular quantity of levels for a second modulation scheme (e.g., configuring some quantity of voltage levels other than four levels or two levels for a modulation scheme). Moreover, in some examples, a system300may be configured with voltage levels that are not necessarily a subset of another set of voltage levels. For example, compared with a first modulation scheme that includes the set of voltage levels420(e.g., including V0, V1, V2, and V3), a second modulation scheme may include a set of voltage levels that includes V3(e.g., a termination voltage) and another voltage, not shown, that is between V1and V2, which may be a compromise of considerations described with reference to the set of voltage levels430-aand the set of voltage levels430-b. 
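The fallback behavior described earlier in this passage, in which a system starts with one candidate set of levels and switches to the other when a particular error count crosses a threshold, can be summarized as a small decision routine. The following sketch is hypothetical; the thresholds, level values, and function name are assumptions, and the error counts would in practice come from a training sequence of write and error detection operations.

```python
# Two candidate level sets for the error detection (second) modulation scheme.
LEVELS_430A = (0.83, 1.00)   # narrower span: less interference, narrower resolving window
LEVELS_430B = (0.67, 1.00)   # wider span: easier to resolve, more interference

def select_edc_levels(data_channel_errors, edc_resolve_errors,
                      crosstalk_threshold=4, resolve_threshold=4):
    """Pick a level set for error detection signaling from training-sequence results.

    data_channel_errors: errors observed on the data channel (possible cross-talk victims).
    edc_resolve_errors:  errors observed when resolving the error detection signaling itself.
    """
    if data_channel_errors > crosstalk_threshold:
        # Too much interference attributed to the EDC signaling: prefer the narrower span.
        return LEVELS_430A
    if edc_resolve_errors > resolve_threshold:
        # EDC symbols are hard to distinguish: prefer the wider span.
        return LEVELS_430B
    return LEVELS_430B  # starting configuration assumed for this sketch

# Example: a training run that saw excessive data-channel errors falls back to the narrow set.
print(select_edc_levels(data_channel_errors=9, edc_resolve_errors=0))  # -> (0.83, 1.0)
```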
Moreover, although described in the context of two alternative sets of voltage levels for a modulation scheme (e.g., the set of voltage levels430-aand the set of voltage levels430-b, sets of voltage levels for a second modulation scheme), a configuration of voltage levels for a modulation scheme may be determined, identified, or otherwise selected from two alternatives or more than two alternatives. Indeed, the described sets of voltage levels are examples of configurations that may support the described techniques for channel modulation for a memory device. FIG.5shows a block diagram500of a memory device505that supports channel modulation for a memory device in accordance with examples as disclosed herein. The memory device505may be an example of aspects of a memory device as described with reference toFIGS.1through4. The memory device505may include a memory device receiver510, a memory device transmitter515, an error detection component520, a modulation scheme manager525, and a signaling component530. Each of these modules may communicate, directly or indirectly, with one another (e.g., via one or more buses). The memory device receiver510may receive, over a first channel, a first signal that is modulated using a first modulation scheme that includes a first quantity of voltage levels spanning a first range of voltages. In some examples, the memory device receiver510may include a demodulator configured to demodulate received signals according to one or more modulation schemes. In some cases, the first channel is a bidirectional channel. In some cases, the first channel is configured for data. In some cases, the first signal is associated with a write operation of the memory device. In some cases, the first quantity of voltage levels includes three or more voltage levels. The memory device transmitter515may transmit, over a second channel (e.g., over an error detection channel or other channel), a second signal based on receiving the first signal, the second signal modulated using a second modulation scheme that includes a second quantity of voltage levels spanning a second range of voltages smaller than the first range of voltages. In some examples, the memory device transmitter515may include a modulator configured to modulate signals for transmission according to one or more modulation schemes. In some cases, the second channel is configured for error detection and correction information. In some cases, the second quantity of voltage levels is less than the first quantity of voltage levels. In some cases, the second quantity of voltage levels includes two voltage levels. In some examples, the memory device transmitter515may transmit, over the first channel, a third signal that is modulated using the first modulation scheme. In some examples, the memory device transmitter515may transmit (e.g., over the second channel, over the error detection channel or the other channel) a fourth signal including error detection information associated with the third signal, the fourth signal modulated using a third modulation scheme that includes a third quantity of voltage levels. In some cases, the third quantity of voltage levels spans the first range of voltages. In some cases, the third quantity is different than the second quantity. In some cases, the third quantity is equal to the first quantity. The error detection component520may determine error detection information associated with the first signal, where the second signal includes the error detection information. 
In some cases, the error detection information includes a CRC-checksum based on the first signal. In some examples, the modulation scheme manager525may determine the second quantity of voltage levels based on reading a non-volatile storage component of the memory device, and transmitting the second signal may be based on determining the second quantity of voltage levels. In some examples, the modulation scheme manager525may determine one or more voltage levels of the second modulation scheme based on reading a non-volatile storage component of the memory device. In some examples, the signaling component530may communicate signaling with a host device. In some examples, the modulation scheme manager525may communicate signaling with a host device. In some examples, the modulation scheme manager525may determine the second quantity of voltage levels based on the signaling. In some examples, the modulation scheme manager525may determine one or more voltage levels of the second modulation scheme based on the signaling. In some cases, the communicating occurs in response to initializing the memory device. FIG.6shows a block diagram600of a host device605that supports channel modulation for a memory device in accordance with examples as disclosed herein. The host device605may be an example of aspects of a host device as described with reference toFIGS.1through4. The host device605may include a host device transmitter610, a host device receiver615, a signaling component620, and an error detection interpreter625. Each of these modules may communicate, directly or indirectly, with one another (e.g., via one or more buses). The host device transmitter610may transmit, to a memory device over a first channel, a first signal that is modulated using a first modulation scheme that includes a first quantity of voltage levels that span a first range of voltages. In some examples, the host device transmitter610may include a modulator configured to modulate signals for transmission according to one or more modulation schemes. In some cases, the first channel is a bidirectional channel. In some cases, the first channel is configured for data. In some cases, the first signal is associated with a write operation of the memory device. In some cases, the first quantity of voltage levels includes three or more voltage levels. The host device receiver615may receive, from the memory device over a second channel (e.g., over an error detection channel or another channel), a second signal based on the first signal, the second signal modulated using a second modulation scheme that includes a second quantity of voltage levels that span a second range of voltages smaller than the first range of voltages. In some examples, the host device receiver615may include a demodulator configured to demodulate received signals according to one or more modulation schemes. In some cases, the second quantity of voltage levels is less than the first quantity of voltage levels. In some cases, the second quantity of voltage levels includes two voltage levels. In some examples, the host device receiver615may receive, from the memory device over the first channel, a third signal that is modulated using the first modulation scheme that includes the first quantity of voltage levels. 
In some examples, the host device receiver615may receive, from the memory device (e.g., over the second channel, over the error detection channel or the other channel), a fourth signal, including error detection information associated with the third signal, using a third modulation scheme that includes a third quantity of voltage levels. In some cases, the third quantity is different than the second quantity. In some cases, the third quantity is equal to the first quantity. In some cases, the third quantity of voltage levels spans the first range of voltages. The signaling component620may communicate signaling with the memory device, where the second quantity of voltage levels is determined based on the signaling. In some examples, the signaling component620may communicate signaling with the memory device, where one or more voltage levels of the second modulation scheme is determined based on the signaling. In some cases, the communicating is based on an initialization of the memory device. In some cases, the second signal includes error detection information associated with the first signal, and the error detection interpreter625may interpret the error detection information. In some cases, the error detection information includes a CRC-checksum based on the first signal. FIG.7shows a flowchart illustrating a method or methods700that support channel modulation for a memory device in accordance with examples as disclosed herein. The operations of method700may be implemented by a memory device or its components as described herein. For example, the operations of method700may be performed by a memory device as described with reference toFIGS.1through5. In some examples, a memory device may execute a set of instructions to control the functional elements of the memory device to perform the described functions. Additionally or alternatively, a memory device may perform aspects of the described functions using circuitry or special-purpose hardware. At705, the memory device may receive, over a first channel, a first signal that is modulated using a first modulation scheme that includes a first quantity of voltage levels spanning a first range of voltages. The operations of705may be performed according to the methods described herein. In some examples, aspects of the operations of705may be performed by a memory device receiver as described with reference toFIG.5. At710, the memory device may transmit, over a second channel (e.g., over an error detection channel or another channel), a second signal based on receiving the first signal, the second signal modulated using a second modulation scheme that includes a second quantity of voltage levels spanning a second range of voltages smaller than the first range of voltages. The operations of710may be performed according to the methods described herein. In some examples, aspects of the operations of710may be performed by a memory device transmitter as described with reference toFIG.5. In some examples, an apparatus as described herein may perform a method or methods, such as the method700. 
The apparatus may include features, circuitry, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for receiving, at a memory device over a first channel, a first signal that is modulated using a first modulation scheme that includes a first quantity of voltage levels spanning a first range of voltages, and transmitting, at the memory device over a second channel (e.g., over an error detection channel or another channel), a second signal based on receiving the first signal, the second signal modulated using a second modulation scheme that includes a second quantity of voltage levels spanning a second range of voltages smaller than the first range of voltages. In some examples of the method700and the apparatus described herein, the first channel may be a bidirectional channel. In some examples of the method700and the apparatus described herein, the first channel may be configured for data. In some examples of the method700and the apparatus described herein, the second quantity of voltage levels may be less than the first quantity of voltage levels. Some examples of the method700and the apparatus described herein may include operations, features, circuitry, means, or instructions for determining error detection information associated with the first signal, where the second signal includes the error detection information. In some examples, the error detection information includes a CRC-checksum based on the first signal. In some examples of the method700and the apparatus described herein, the second channel may be configured for error detection and correction information. Some examples of the method700and the apparatus described herein may include operations, features, circuitry, means, or instructions for determining the second quantity of voltage levels based on reading a non-volatile storage component of the memory device. Some examples of the method700and the apparatus described herein may include operations, features, circuitry, means, or instructions for determining one or more voltage levels of the second modulation scheme based on reading a non-volatile storage component of the memory device. In some examples, transmitting the second signal may be based on determining the second quantity of voltage levels. Some examples of the method700and the apparatus described herein may include operations, features, circuitry, means, or instructions for communicating signaling with a host device, and determining the second quantity of voltage levels based on the signaling. Some examples of the method700and the apparatus described herein may include operations, features, circuitry, means, or instructions for communicating signaling with a host device, and determining one or more voltage levels of the second modulation scheme based on the signaling. In some examples of the method700and the apparatus described herein, the communicating occurs in response to initializing the memory device. In some examples of the method700and the apparatus described herein, the first signal may be associated with a write operation of the memory device. In some examples of the method700and the apparatus described herein, the first quantity of voltage levels includes three or more voltage levels, and the second quantity of voltage levels includes two voltage levels. 
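The recited flow for a write operation, in which the memory device receives a first signal, determines a CRC-based checksum over the received data, and transmits that checksum over a channel configured for error detection information, can be made concrete with a short sketch. The example below is an assumption-laden illustration rather than the disclosed implementation; the checksum width, the voltage values, and the helper names are not taken from the disclosure.

```python
import zlib

# Illustrative two-level scheme for the error detection channel (voltages are assumptions).
EDC_LEVELS = {0: 0.83, 1: 1.00}

def crc_checksum(write_data: bytes) -> int:
    """Error detection information based on the received (demodulated) write data."""
    return zlib.crc32(write_data) & 0xFF   # truncate to 8 bits for this sketch

def modulate_edc(checksum: int, bits: int = 8):
    """Map each checksum bit to one of the two voltage levels of the EDC scheme."""
    return [EDC_LEVELS[(checksum >> i) & 1] for i in reversed(range(bits))]

write_data = b"\x12\x34\x56\x78"           # data recovered from the first (data) signal
checksum = crc_checksum(write_data)
edc_symbols = modulate_edc(checksum)

print(f"checksum = 0x{checksum:02x}")
print("EDC symbol voltages:", edc_symbols)
# The host receives these two-level symbols, recomputes the CRC over the data it sent,
# and compares the two values to detect a write error.
```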
Some examples of the method700and the apparatus described herein may include operations, features, circuitry, means, or instructions for transmitting, at the memory device over the first channel, a third signal that may be modulated using the first modulation scheme, and transmitting, at the memory device (e.g., over the second channel, over the error detection channel or the other channel), a fourth signal including error detection information associated with the third signal, the fourth signal modulated using a third modulation scheme that includes a third quantity of voltage levels. In some examples of the method700and the apparatus described herein, the third quantity of voltage levels spans the first range of voltages. In some examples of the method700and the apparatus described herein, the third quantity may be different than the second quantity. In some examples of the method700and the apparatus described herein, the third quantity may be equal to the first quantity. FIG.8shows a flowchart illustrating a method or methods800that support channel modulation for a memory device in accordance with examples as disclosed herein. The operations of method800may be implemented by a host device or its components as described herein. For example, the operations of method800may be performed by a host device as described with reference toFIGS.1through4and6. In some examples, a host device may execute a set of instructions to control the functional elements of the host device to perform the described functions. Additionally or alternatively, a host device may perform aspects of the described functions using circuitry or special-purpose hardware. At805, the host device may transmit, to a memory device over a first channel, a first signal that is modulated using a first modulation scheme that includes a first quantity of voltage levels that span a first range of voltages. The operations of805may be performed according to the methods described herein. In some examples, aspects of the operations of805may be performed by a host device transmitter as described with reference toFIG.6. At810, the host device may receive, from the memory device over a second channel (e.g., over an error detection channel or other channel), a second signal based on the first signal, the second signal modulated using a second modulation scheme that includes a second quantity of voltage levels that span a second range of voltages smaller than the first range of voltages. The operations of810may be performed according to the methods described herein. In some examples, aspects of the operations of810may be performed by a host device receiver as described with reference toFIG.6. In some examples, an apparatus as described herein may perform a method or methods, such as the method800. The apparatus may include features, circuitry, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for transmitting, to a memory device over a first channel, a first signal that is modulated using a first modulation scheme that includes a first quantity of voltage levels that span a first range of voltages, and receiving, from the memory device over a second channel (e.g., over an error detection channel or other channel), a second signal based on the first signal, the second signal modulated using a second modulation scheme that includes a second quantity of voltage levels that span a second range of voltages smaller than the first range of voltages. 
Some examples of the method800and the apparatus described herein may include operations, features, circuitry, means, or instructions for communicating signaling with the memory device, and the second quantity of voltage levels may be determined based on the signaling. Some examples of the method800and the apparatus described herein may include operations, features, circuitry, means, or instructions for communicating signaling with the memory device, and one or more voltage levels of the second modulation scheme may be determined based on the signaling. In some examples of the method800and the apparatus described herein, the communicating may be based on an initialization of the memory device. Some examples of the method800and the apparatus described herein may include operations, features, circuitry, means, or instructions for receiving, from the memory device over the first channel, a third signal that may be modulated using the first modulation scheme that includes the first quantity of voltage levels, and receiving, from the memory device (e.g., over the second channel, over the error detection channel or the other channel), a fourth signal, including error detection information associated with the third signal, using a third modulation scheme that includes a third quantity of voltage levels. In some examples of the method800and the apparatus described herein, the third quantity of voltage levels spans the first range of voltages. In some examples of the method800and the apparatus described herein, the third quantity may be different than the second quantity. In some examples of the method800and the apparatus described herein, the third quantity may be equal to the first quantity. In some examples of the method800and the apparatus described herein, the second quantity of voltage levels may be less than the first quantity of voltage levels. In some examples of the method800and the apparatus described herein, the first quantity of voltage levels includes three or more voltage levels, and the second quantity of voltage levels includes two voltage levels. In some examples of the method800and the apparatus described herein, the first signal may be associated with a write operation of the memory device. In some examples of the method800and the apparatus described herein, the first channel may be a bidirectional channel. In some examples of the method800and the apparatus described herein, the first channel may be configured for data. In some examples of the method800and the apparatus described herein, the second signal includes error detection information associated with the first signal. In some examples, the error detection information includes a CRC-checksum based on the first signal. In some examples of the method800and the apparatus described herein, the second channel may be configured for error detection and correction information. An apparatus is described. 
The apparatus may include an array of memory cells configured to store information, a receiver coupled with the array of memory cells and configured to receive, over a first channel, a first signal that is modulated using a first modulation scheme that includes a first quantity of voltage levels that span a first range of voltages, and a transmitter coupled with the array of memory cells and configured to transmit, over a second channel, a second signal based at least in part on receiving the first signal, the second signal modulated using a second modulation scheme that includes a second quantity of voltage levels that span a second range of voltages smaller than the first range of voltages. Some examples of the apparatus may include a non-volatile storage component configured to store a modulation parameter that indicates the second quantity of voltage levels associated with the second modulation scheme. Some examples of the apparatus may include a non-volatile storage component configured to store a modulation parameter that indicates the second range of voltages associated with the second modulation scheme. Some examples of the apparatus may include a controller configured to determine error detection information associated with the first signal based at least in part on the second signal. Some examples of the apparatus may include a controller configured to determine the second quantity of voltage levels based at least in part on exchanging signaling with a host device. Some examples of the apparatus may include a controller configured to determine one or more voltage levels of the second modulation scheme based at least in part on exchanging signaling with a host device. In some examples of the apparatus, the first quantity of voltage levels may include three or more voltage levels, and the second quantity of voltage levels may include two voltage levels. In some examples of the apparatus, the first channel may be configured for data and the second channel may be configured for error detection and correction information. Another apparatus is described. The apparatus may include a transmitter of a host device configured to transmit, to a memory device over a first channel, a first signal that is modulated using a first modulation scheme that includes a first quantity of voltage levels that span a first range of voltages, and a receiver of the host device configured to receive, from the memory device over a second channel, a second signal based at least in part on the first signal, the second signal modulated using a second modulation scheme that includes a second quantity of voltage levels that span a second range of voltages smaller than the first range of voltages. Some examples of the apparatus may include a controller of the host device configured to transmit one or more signals to the memory device to indicate the second quantity of voltage levels of the second modulation scheme. Some examples of the apparatus may include a controller of the host device configured to transmit one or more signals to the memory device to indicate one or more voltage levels of the second modulation scheme. In some examples of the apparatus, the first quantity of voltage levels may include three or more voltage levels, and the second quantity of voltage levels may include two voltage levels. In some examples of the apparatus, the second signal includes error detection information associated with the first signal. 
In some examples of the apparatus, the first channel may be configured for data and the second channel may be configured for error detection and correction information. A system is described. The system may include a memory device and a host device coupled with the memory device. The system may be configured to communicate, from the host device to the memory device over a first channel, a first signal that is modulated using a first modulation scheme that includes a first quantity of voltage levels that span a first range of voltages, and communicate, from the memory device to the host device over a second channel, a second signal that is based at least in part on the first signal, the second signal modulated using a second modulation scheme that includes a second quantity of voltage levels that span a second range of voltages smaller than the first range of voltages. In some examples of the system, the first quantity of voltage levels may include three or more voltage levels, and the second quantity of voltage levels may include two voltage levels. In some examples of the system, the first channel may be configured for data and the second channel may be configured for error detection and correction information. It should be noted that the described methods include possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Further, aspects from two or more of the methods may be combined. Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Some drawings may illustrate signals as a single signal; however, it will be understood by a person of ordinary skill in the art that the signal may represent a bus of signals, where the bus may have a variety of bit widths. As used herein, the term “virtual ground” refers to a node of an electrical circuit that is held at a voltage of approximately zero volts (0V) but that is not directly coupled with ground. Accordingly, the voltage of a virtual ground may temporarily fluctuate and return to approximately 0V at steady state. A virtual ground may be implemented using various electronic circuit elements, such as a voltage divider consisting of operational amplifiers and resistors. Other implementations are also possible. “Virtual grounding” or “virtually grounded” means connected to approximately 0V. The terms “electronic communication,” “conductive contact,” “connected,” and “coupled” may refer to a relationship between components that supports the flow of signals between the components. Components are considered in electronic communication with (or in conductive contact with or connected with or coupled with) one another if there is any conductive path between the components that can, at any time, support the flow of signals between the components. At any given time, the conductive path between components that are in electronic communication with each other (or in conductive contact with or connected with or coupled with) may be an open circuit or a closed circuit based on the operation of the device that includes the connected components. 
The conductive path between connected components may be a direct conductive path between the components or the conductive path between connected components may be an indirect conductive path that may include intermediate components, such as switches, transistors, or other components. In some cases, the flow of signals between the connected components may be interrupted for a time, for example, using one or more intermediate components such as switches or transistors. The term “coupling” refers to the condition of moving from an open-circuit relationship between components in which signals are not presently capable of being communicated between the components over a conductive path to a closed-circuit relationship between components in which signals can be communicated between components over the conductive path. When a component, such as a controller, couples other components together, the component initiates a change that allows signals to flow between the other components over a conductive path that previously did not permit signals to flow. The term “isolated” refers to a relationship between components in which signals are not presently capable of flowing between the components. Components are isolated from each other if there is an open circuit between them. For example, two components separated by a switch that is positioned between the components are isolated from each other when the switch is open. When a controller isolates two components from one another, the controller effects a change that prevents signals from flowing between the components using a conductive path that previously permitted signals to flow. The term “layer” used herein refers to a stratum or sheet of a geometrical structure. Each layer may have three dimensions (e.g., height, width, and depth) and may cover at least a portion of a surface. For example, a layer may be a three-dimensional structure where two dimensions are greater than a third, e.g., a thin-film. Layers may include different elements, components, and/or materials. In some cases, one layer may be composed of two or more sublayers. In some of the appended figures, two dimensions of a three-dimensional layer are depicted for purposes of illustration. Those skilled in the art will, however, recognize that the layers are three-dimensional in nature. As used herein, the term “substantially” means that the modified characteristic (e.g., a verb or adjective modified by the term substantially) need not be absolute but is close enough to achieve the advantages of the characteristic. As used herein, the term “electrode” may refer to an electrical conductor, and in some cases, may be employed as an electrical contact to a memory cell or other component of a memory array. An electrode may include a trace, wire, conductive line, conductive layer, or other features that provide a conductive path between elements or components of a memory array. As used herein, the term “shorting” refers to a relationship between components in which a conductive path is established between the components via the activation of a single intermediary component between the two components in question. For example, a first component shorted to a second component may exchange signals with the second component when a switch between the two components is closed. Thus, shorting may be a dynamic operation that enables the flow of charge between components (or lines) that are in electronic communication. 
The devices discussed herein, including a memory array, may be formed on a semiconductor substrate, such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, etc. In some cases, the substrate is a semiconductor wafer. In other cases, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or epitaxial layers of semiconductor materials on another substrate. The conductivity of the substrate, or sub-regions of the substrate, may be controlled through doping using various chemical species including, but not limited to, phosphorous, boron, or arsenic. Doping may be performed during the initial formation or growth of the substrate, by ion-implantation, or by any other doping means. A switching component or a transistor discussed herein may represent a field-effect transistor (FET) and comprise a three terminal device including a source, drain, and gate. The terminals may be connected to other electronic elements through conductive materials, e.g., metals. The source and drain may be conductive and may comprise a heavily-doped, e.g., degenerate, semiconductor region. The source and drain may be separated by a lightly-doped semiconductor region or channel. If the channel is n-type (i.e., majority carriers are electrons), then the FET may be referred to as an n-type FET. If the channel is p-type (i.e., majority carriers are holes), then the FET may be referred to as a p-type FET. The channel may be capped by an insulating gate oxide. The channel conductivity may be controlled by applying a voltage to the gate. For example, applying a positive voltage or negative voltage to an n-type FET or a p-type FET, respectively, may result in the channel becoming conductive. A transistor may be “on” or “activated” when a voltage greater than or equal to the transistor's threshold voltage is applied to the transistor gate. The transistor may be “off” or “deactivated” when a voltage less than the transistor's threshold voltage is applied to the transistor gate. The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “exemplary” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details to provide an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form to avoid obscuring the concepts of the described examples. In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label. Information and signals described herein may be represented using any of a variety of different technologies and techniques. 
For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. The various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, the described functions can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Also, as used herein, including in the claims, “or” as used in a list of items (for example, a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.” Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, non-transitory computer-readable media can comprise RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. 
For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media. The description herein is provided to enable a person skilled in the art to make or use the disclosure. With the guidance of the present disclosure, various modifications to the described techniques will be apparent to those skilled in the art, and the principles defined herein may be applied to other variations or equivalents without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
117,484
11860732
DETAILED DESCRIPTION Aspects of the present disclosure are directed to redundancy metadata media management at a memory sub-system. A memory sub-system can be a storage device, a memory module, or a combination of a storage device and memory module. Examples of storage devices and memory modules are described below in conjunction withFIG.1. In general, a host system can utilize a memory sub-system that includes one or more memory components, such as memory devices that store data. The host system can provide data to be stored at the memory sub-system and can request data to be retrieved from the memory sub-system. A memory sub-system can utilize one or more memory devices, including any combination of the different types of non-volatile memory devices and/or volatile memory devices, to store the data provided by the host system. In some embodiments, non-volatile memory devices can be provided by negative-and (NAND) type flash memory devices. Other examples of non-volatile memory devices are described below in conjunction withFIG.1. A non-volatile memory device is a package of one or more dice. Each die can include one or more planes. A plane is a portion of a memory device that includes multiple memory cells. Some memory devices can include two or more planes. For some types of non-volatile memory devices (e.g., NAND devices), each plane includes a set of physical blocks. Each block includes a set of pages. “Block” herein shall refer to a set of contiguous or non-contiguous memory pages. An example of a “block” is an “erasable block,” which is the minimal erasable unit of memory, while “page” is a minimal writable unit of memory. Each page includes a set of memory cells. A memory cell is an electronic circuit that stores information. Some types of memory, such as 3D cross-point, can group pages across dice and channels to form management units (MUs) (also referred to as logical units (LUNs)). A MU can correspond to a page, a block, etc. In some instances, a group of MUs that are grouped together for management purposes can be referred to as a super MU (SMU). A memory device can include multiple memory cells arranged in a two-dimensional grid. The memory cells are formed onto a silicon wafer in an array of columns and rows. A memory cell includes a capacitor that holds an electric charge and a transistor that acts as a switch controlling access to the capacitor. Accordingly, the memory cell may be programmed (written to) by applying a certain voltage, which results in an electric charge being held by the capacitor. The memory cells are joined by wordlines, which are conducting lines electrically connected to the control gates of the memory cells, and bitlines, which are conducting lines electrically connected to the drain electrodes of the memory cells. Data operations can be performed by the memory sub-system. The data operations can be host-initiated operations. For example, the host system can initiate a data operation (e.g., write, read, erase, etc.) on a memory sub-system. The host system can send access requests (e.g., write command, read command) to the memory sub-system, such as to store data on a memory device at the memory sub-system and to read data from the memory device on the memory sub-system. 
The data to be read or written, as specified by a host request, is hereinafter referred to as “host data.” A host request can include a logical address (e.g., a logical block address (LBA) and namespace) for the host data, which is the location that the host system associates with the host data. The logical address information (e.g., LBA, namespace) can be part of metadata for the host data. As described above, a die can contain one or more planes. A memory sub-system can use a striping scheme to treat various sets of data as units when performing data operations (e.g., write, read, erase, etc.). A die stripe refers to a collection of planes that are treated as one unit when writing, reading, or erasing data. A controller of a memory device (i.e., a memory sub-system controller, a memory device controller, etc.) can carry out the same operation, in parallel, at each plane of a die stripe. A block stripe is a collection of blocks, at least one from each plane of a die stripe, that are treated as a unit. The blocks in a block stripe can be associated with the same block identifier (e.g., block number) at each respective plane. A page stripe is a set of pages having the same page identifier (e.g., the same page number), across a block stripe, and treated as a unit. A MU stripe is a collection of MUs, at least one from each plane of a die stripe, a block stripe, a page stripe, etc., that are treated as a unit. A super management unit (SMU) refers to a collection or group of MUs that are grouped together for memory management purposes. A host system can initiate a memory access operation (e.g., a programming operation, a read operation, an erase operation, etc.) on a memory sub-system. For example, the host system can transmit a request to a memory sub-system controller, to program data to and/or read data from a memory device of the memory sub-system. Such data is referred to herein as “host data.” The memory sub-system controller can execute one or more operations to access the host data in accordance with the request. As data is accessed at a memory cell of a memory device, the memory cell can deteriorate and eventually become defective. For example, when a host system initiates too many memory access operations for host data stored at a memory device, the memory cells that store the host data, as well as the adjacent memory cells at the memory device, can become corrupted. In some instances, a memory access operation (e.g., a read operation, etc.) performed by a memory sub-system controller to access data at one or more memory pages at each plane of a memory device can fail. Such failure is referred to herein as a multi-plane memory access failure. A memory sub-system can support a redundancy mechanism to protect host data against a memory access failure. For example, the memory sub-system can implement one or more redundancy operations (e.g., redundant array of independent NAND (RAIN) operations) to provide redundancy for the data stored on the memory sub-system. When host data is received from the host system to be programmed to a memory device of the memory sub-system, a memory sub-system controller can generate redundancy metadata (e.g., parity data) based on an exclusive-or (XOR) operation with the received host data and can use the redundancy metadata to reconstruct or recalculate the host data in the event of a failure of a portion of the memory device that is storing host data.
As an example, the memory sub-system controller can generate the redundancy metadata based on an XOR operation applied to host data stored at a particular number of data locations of a management unit (e.g., a page, a block) of the memory sub-system. If a portion of a memory device storing the host data fails and the corresponding data is lost or corrupted, the memory sub-system controller can reconstruct the lost/corrupted data based on an XOR operation among the rest of the host data and the redundancy metadata. A portion of memory at a memory sub-system can be reserved to store redundancy metadata generated for host data that is stored at other portions of memory at the memory sub-system. For example, a memory sub-system controller can allocate one or more MUs of an MU stripe to store redundancy metadata generated for host data programmed to other MUs of the MU stripe. For purposes of explanation, the one or more allocated MUs are referred to herein as redundancy MUs and the other MUs of the MU stripe are referred to as host data MUs. As host systems and memory sub-systems become more advanced and complex, the overall storage capacity of a memory sub-system can be significantly large and/or the size of a unit of data that is accessible to a host system can be significantly small. For example, in some instances, an overall storage capacity of a memory sub-system can include several terabytes (TB) of memory space and a size of a unit of data that is accessible to the host system can correspond to tens of bytes of memory space. As indicated above, a host system can initiate a memory access operation (e.g., a programming operation, etc.) with respect to one unit of host data (e.g., corresponding to tens of bytes of memory space). In some instances, multiple units of host data can be stored at multiple respective host data MUs of a MU stripe. The host system can transmit requests to access a respective unit of host data at different time periods. For example, the host system can transmit a first request to program a first unit of host data at a first time period and a second request to program a second unit of host data at a second time period. Responsive to receiving the first request, the memory sub-system controller can generate redundancy metadata associated with the first host data and store the generated redundancy metadata at a redundancy MU of a respective MU stripe. Responsive to receiving the second request, the memory sub-system controller can generate updated redundancy metadata associated with the first host data and the second host data and store the updated redundancy metadata at the redundancy MU. The memory sub-system controller can continue to generate updated redundancy metadata and store updated redundancy metadata at the redundancy MU until each host data MU of the MU stripe stores host data (i.e., the MU stripe is “closed”). In conventional systems, one or more redundancy MUs for each MU stripe can reside at a particular memory device of the memory sub-system. Accordingly, the memory sub-system controller can program redundancy metadata and updated redundancy metadata to the one or more redundancy MUs multiple times before a respective MU stripe is closed. Additionally, as host data is removed from respective host data MUs and/or new host data is programmed to the respective host data MUs, the memory sub-system controller can update and reprogram the redundancy metadata associated with the MU stripe at the one or more redundancy MUs after the MU stripe is closed. 
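As a rough illustration of the XOR-based redundancy operations described above, the following sketch, which is not part of the disclosed embodiments and uses hypothetical names and a hypothetical MU size, generates parity across the host data MUs of a stripe and then reconstructs a lost MU from the surviving MUs and the parity.

```python
# Minimal sketch of XOR-based redundancy, assuming each management unit (MU)
# is modeled as a fixed-size byte buffer. Names and MU_SIZE are hypothetical.
from functools import reduce

MU_SIZE = 16  # hypothetical MU size in bytes

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length buffers."""
    return bytes(x ^ y for x, y in zip(a, b))

def generate_redundancy_metadata(host_mus: list) -> bytes:
    """Parity for a stripe: XOR across all host data MUs."""
    return reduce(xor_bytes, host_mus, bytes(MU_SIZE))

def reconstruct_lost_mu(surviving_mus: list, parity: bytes) -> bytes:
    """Rebuild a single lost MU by XOR-ing the surviving MUs with the parity."""
    return reduce(xor_bytes, surviving_mus, parity)

if __name__ == "__main__":
    stripe = [bytes([i] * MU_SIZE) for i in range(1, 5)]  # four host data MUs
    parity = generate_redundancy_metadata(stripe)
    survivors = stripe[:2] + stripe[3:]                   # MU at index 2 is lost
    assert reconstruct_lost_mu(survivors, parity) == stripe[2]
```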
As multiple programming operations are performed at the one or more redundancy MUs residing on the particular memory device, the memory cells associated with the redundancy MUs can degrade at a faster rate than memory cells residing at other devices (i.e., that are not allocated to store redundancy metadata). As the memory cells associated with the redundancy MUs degrade, a significant number of memory access errors can occur, causing an overall error rate associated with the memory sub-system to increase. The memory sub-system controller can execute error correction operations to address the significant number of memory access errors, which can consume a significant amount of computing resources (e.g., processor cycles, etc.). Consuming a significant amount of computing resources can cause an overall system latency to increase and an overall system efficiency to decrease. In addition, over time, the memory cells associated with the redundancy MUs can degrade to a point at which data stored at the memory cells is not reliable and cannot be recovered (e.g., via an error correction operation). As the redundancy MUs are allocated to store redundancy metadata generated for host data stored at the memory sub-system, if the redundancy metadata stored at the redundancy MUs is inaccessible, the host data can become unrecoverable in the event of a catastrophic memory failure. Aspects of the present disclosure address the above and other deficiencies by providing a scheme for redundancy metadata media management at a memory sub-system. One example of media management is wear leveling. In accordance with embodiments described herein, wear leveling refers to a technique for evenly distributing data (e.g., redundancy metadata) across a memory sub-system to avoid the concentration of memory cell wear at a particular portion (e.g., a particular memory device, a particular portion of a memory device, etc.) of the memory sub-system. Other media management operations are possible. In some embodiments, a memory sub-system controller can receive a request to program host data to a memory device of a memory sub-system. The host data can be associated with a logical address (e.g., indicated by the received request). The memory sub-system controller can obtain a redundancy factor that corresponds to the logical address associated with the host data. The redundancy factor can be a randomly generated number between zero and a number of super management units (SMUs) associated with the memory sub-system. In some embodiments, the memory sub-system can include multiple fault tolerant stripes across multiple memory devices of the memory sub-system. A fault tolerant stripe refers to a collection of management units (MUs) (e.g., blocks, pages, etc.) at particular regions (e.g., planes) of two or more memory devices that store data that can be recovered by the same one or more data recovery operations executed by the memory sub-system controller. In some embodiments, multiple fault tolerant stripes can span a super management unit (SMU) associated with the memory sub-system. The memory sub-system controller can associate the redundancy factor with each MU of a respective fault tolerant stripe. In some embodiments, the memory sub-system controller can obtain the redundancy factor using a logical-to-physical (L2P) data structure associated with the memory sub-system.
For example, the memory sub-system controller can determine an address associated with a virtual SMU (vSMU) associated with the host data based on the logical address for the host data. The memory sub-system controller can identify an entry of the L2P data structure that corresponds to the determined vSMU and obtain the redundancy factor from the identified entry. The memory sub-system controller can determine a physical address associated with a first set of memory cells of the memory device that is to store the host data and a physical address associated with a second set of memory cells of the memory device that is to store redundancy metadata associated with the host data based on the redundancy factor. In some embodiments, the memory sub-system controller can determine a virtual fault tolerant stripe associated with the host data and a virtual management unit (vMU) associated with the virtual fault tolerant stripe based on the logical address. The memory sub-system controller can provide an indication of the virtual fault tolerant stripe and the redundancy factor as input to a first function and an indication of the vMU and the redundancy factor as input to a second function. The memory sub-system controller can obtain one or more outputs of the first and second functions, which can include an indication of an index associated with the physical fault tolerant stripe that is to store the host data of the request and an indication of an index associated with a physical MU of the physical fault tolerant stripe that is to store the host data. The memory sub-system controller can determine a physical address associated with the first set of memory cells that is to store the host data of the request based on the index associated with the physical fault tolerant stripe and the index associated with the physical MU of the physical fault tolerant stripe. In some embodiments, the memory sub-system controller can determine the physical address associated with the second set of memory cells that is to store the redundancy metadata based on the redundancy factor, the index associated with the virtual fault tolerant stripe (i.e., provided as input to the first function), a number of MUs associated with the physical fault tolerant stripe (i.e., obtained as an output from the first function), a number of memory devices associated with the memory sub-system, and a number of partitions associated with each memory device of the memory sub-system. The memory sub-system controller can program the host data to the first set of memory cells responsive to determining the physical address associated with the first set of memory cells. The memory sub-system controller can program redundancy metadata associated with the host data to the second set of memory cells responsive to determining the physical address associated with the second set of memory cells. After the host data and the redundancy metadata is programmed to the first set of memory cells and the second set of memory cells, respectively, the memory sub-system controller can receive another request to program other host data to a memory device of the memory sub-system. The memory sub-system controller can obtain a redundancy factor associated with the other host data based on a logical address associated with the other host data, as described above. 
If the obtained redundancy factor corresponds to the redundancy factor associated with the host data of the previous request, the memory sub-system controller can determine that redundancy metadata associated with the other host data is to be stored at the second set of memory cells, in accordance with previously described embodiments. If the obtained redundancy factor corresponds to a different redundancy factor than the redundancy factor associated with the host data of the previous request, the memory sub-system controller can determine that the redundancy metadata associated with the host data is to be stored at another set of memory cells (e.g., a third set of memory cells) of another memory device or another portion of a memory device of the memory sub-system. Advantages of the present disclosure include, but are not limited to, providing a scheme that distributes redundancy data across a memory sub-system. As indicated above, embodiments of the present disclosure provide that a memory sub-system can generate and maintain a redundancy factor for each fault tolerant stripe associated with the memory sub-system. The memory sub-system controller can obtain the redundancy factor based on a logical address associated with host data and determine the physical address associated with the portion of memory that is to store the host data and the physical address associated with the portion of memory that is to store redundancy metadata for the host data based on the redundancy factor. As the redundancy factor corresponds to a randomly generated number between zero and the number of SMUs associated with the memory sub-system, the set of MUs of each fault tolerant stripe that are allocated to store redundancy metadata can reside at a different memory device, or a different portion of a memory device, than MUs of other fault tolerant stripes that are allocated to store redundancy metadata. Accordingly, redundancy metadata can be stored across multiple memory devices, or multiple portions of a memory device, for a memory sub-system, which reduces the concentration of programming operations at a single memory device, or a single portion of a memory device. As a result, fewer memory access errors can occur at the memory sub-system and the memory sub-system controller can execute fewer error correction operations. As fewer error correction operations are executed, fewer computing resources (e.g., processing cycles, etc.) are consumed to perform error correction and such computing resources can be made available to perform other processes associated with the memory sub-system. As additional computing resources are made available for other processes, an overall latency of the memory sub-system decreases and an overall efficiency of the memory sub-system increases. In addition, as fewer errors occur at the portions of memory that store redundancy metadata, the likelihood that the redundancy metadata is accessible is significantly higher, which increases the likelihood that host data can be recovered in the event of a catastrophic memory failure. FIG.1illustrates an example computing system100that includes a memory sub-system110in accordance with some embodiments of the present disclosure. The memory sub-system110can include media, such as one or more volatile memory devices (e.g., memory device140), one or more non-volatile memory devices (e.g., memory device130), or a combination of such.
A memory sub-system110can be a storage device, a memory module, or a combination of a storage device and memory module. Examples of a storage device include a solid-state drive (SSD), a flash drive, a universal serial bus (USB) flash drive, an embedded Multi-Media Controller (eMMC) drive, a Universal Flash Storage (UFS) drive, a secure digital (SD) card, and a hard disk drive (HDD). Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), and various types of non-volatile dual in-line memory modules (NVDIMMs). The computing system100can be a computing device such as a desktop computer, laptop computer, network server, mobile device, a vehicle (e.g., airplane, drone, train, automobile, or other conveyance), Internet of Things (IoT) enabled device, embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or such computing device that includes memory and a processing device. The computing system100can include a host system120that is coupled to one or more memory sub-systems110. In some embodiments, the host system120is coupled to multiple memory sub-systems110of different types.FIG.1illustrates one example of a host system120coupled to one memory sub-system110. As used herein, “coupled to” or “coupled with” generally refers to a connection between components, which can be an indirect communicative connection or direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc. The host system120can include a processor chipset and a software stack executed by the processor chipset. The processor chipset can include one or more cores, one or more caches, a memory controller (e.g., NVDIMM controller), and a storage protocol controller (e.g., PCIe controller, SATA controller). The host system120uses the memory sub-system110, for example, to write data to the memory sub-system110and read data from the memory sub-system110. The host system120can be coupled to the memory sub-system110via a physical host interface. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), a double data rate (DDR) memory bus, Small Computer System Interface (SCSI), a dual in-line memory module (DIMM) interface (e.g., DIMM socket interface that supports Double Data Rate (DDR)), etc. The physical host interface can be used to transmit data between the host system120and the memory sub-system110. The host system120can further utilize an NVM Express (NVMe) interface to access components (e.g., memory devices130) when the memory sub-system110is coupled with the host system120by the physical host interface (e.g., PCIe bus). The physical host interface can provide an interface for passing control, address, data, and other signals between the memory sub-system110and the host system120.FIG.1illustrates a memory sub-system110as an example. In general, the host system120can access multiple memory sub-systems via a same communication connection, multiple separate communication connections, and/or a combination of communication connections. The memory devices130,140can include any combination of the different types of non-volatile memory devices and/or volatile memory devices. 
The volatile memory devices (e.g., memory device140) can be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM). Some examples of non-volatile memory devices (e.g., memory device130) include a negative-and (NAND) type flash memory and write-in-place memory, such as a three-dimensional cross-point (“3D cross-point”) memory device, which is a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory cells can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. NAND type flash memory includes, for example, two-dimensional NAND (2D NAND) and three-dimensional NAND (3D NAND). Each of the memory devices130can include one or more arrays of memory cells. One type of memory cell, for example, single level cells (SLC) can store one bit per cell. Other types of memory cells, such as multi-level cells (MLCs), triple level cells (TLCs), quad-level cells (QLCs), and penta-level cells (PLCs) can store multiple bits per cell. In some embodiments, each of the memory devices130can include one or more arrays of memory cells such as SLCs, MLCs, TLCs, QLCs, PLCs or any combination of such. In some embodiments, a particular memory device can include an SLC portion, and an MLC portion, a TLC portion, a QLC portion, or a PLC portion of memory cells. The memory cells of the memory devices130can be grouped as pages that can refer to a logical unit of the memory device used to store data. With some types of memory (e.g., NAND), pages can be grouped to form blocks. Although non-volatile memory components such as a 3D cross-point array of non-volatile memory cells and NAND type flash memory (e.g., 2D NAND, 3D NAND) are described, the memory device130can be based on any other type of non-volatile memory, such as read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide based memories, ferroelectric transistor random-access memory (FeTRAM), ferroelectric random access memory (FeRAM), magneto random access memory (MRAM), Spin Transfer Torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), negative-or (NOR) flash memory, or electrically erasable programmable read-only memory (EEPROM). A memory sub-system controller115(or controller115for simplicity) can communicate with the memory devices130to perform operations such as reading data, writing data, or erasing data at the memory devices130and other such operations. The memory sub-system controller115can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The hardware can include a digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein. The memory sub-system controller115can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or other suitable processor. 
The memory sub-system controller115can include a processing device, which includes one or more processors (e.g., processor117), configured to execute instructions stored in a local memory119. In the illustrated example, the local memory119of the memory sub-system controller115includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory sub-system110, including handling communications between the memory sub-system110and the host system120. In some embodiments, the local memory119can include memory registers storing memory pointers, fetched data, etc. The local memory119can also include read-only memory (ROM) for storing micro-code. While the example memory sub-system110inFIG.1has been illustrated as including the memory sub-system controller115, in another embodiment of the present disclosure, a memory sub-system110does not include a memory sub-system controller115, and can instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory sub-system). In general, the memory sub-system controller115can receive commands or operations from the host system120and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory devices130. The memory sub-system controller115can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical address (e.g., a logical block address (LBA), namespace) and a physical address (e.g., physical block address) that are associated with the memory devices130. The memory sub-system controller115can further include host interface circuitry to communicate with the host system120via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access the memory devices130as well as convert responses associated with the memory devices130into information for the host system120. The memory sub-system110can also include additional circuitry or components that are not illustrated. In some embodiments, the memory sub-system110can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the memory sub-system controller115and decode the address to access the memory devices130. In some embodiments, the memory devices130include local media controllers135that operate in conjunction with memory sub-system controller115to execute operations on one or more memory cells of the memory devices130. An external controller (e.g., memory sub-system controller115) can externally manage the memory device130(e.g., perform media management operations on the memory device130). In some embodiments, memory sub-system110is a managed memory device, which is a raw memory device130having control logic (e.g., local media controller135) on the die and a controller (e.g., memory sub-system controller115) for media management within the same memory device package. An example of a managed memory device is a managed NAND (MNAND) device. 
In one embodiment, the memory sub-system110includes a redundancy metadata manager component113(referred to as redundancy metadata manager113) that can manage redundancy data generated for host data stored at one or more portions of a memory device130,140. In some embodiments, the memory sub-system controller115includes at least a portion of the redundancy metadata manager component113. For example, the memory sub-system controller115can include a processor117(processing device) configured to execute instructions stored in local memory119for performing the operations described herein. In some embodiments, the redundancy metadata manager component113is part of the host system120, an application, or an operating system. Redundancy metadata manager113can be configured to implement a scheme for redundancy metadata media management at memory sub-system110. In some embodiments, host system120can transmit a request to memory sub-system110to program host data to a memory device130,140. Redundancy metadata manager113can obtain a redundancy factor that corresponds to a logical address associated with the host data. In some embodiments, redundancy metadata manager113can obtain the redundancy factor by determining a virtual super management unit (vSMU) associated with the host data based on the logical address and identifying an entry of a logical-to-physical (L2P) data structure that corresponds to the determined vSMU. Redundancy metadata manager113can extract the redundancy factor from the identified entry. If the identified entry does not include an indication of the redundancy factor, redundancy metadata manager113can generate the redundancy factor. Redundancy metadata manager113can determine a first physical address associated with a first set of memory cells of a memory device130,140that is to store the host data based on the obtained redundancy factor. Redundancy metadata manager113can also determine a second physical address associated with a second set of memory cells of a memory device130,140that is to store the redundancy metadata based on the obtained redundancy factor. Further details regarding determining the first physical address and the second physical address are provided herein. Responsive to determining the first physical address and the second physical address, redundancy metadata manager113can program the host data to the first set of memory cells and the redundancy metadata associated with the host data to the second set of memory cells. In some embodiments, redundancy metadata manager113can receive another request to program additional host data to a memory device130,140. Redundancy metadata manager113can determine the redundancy factor associated with the additional host data based on a logical address associated with the additional host data, and can use the determined redundancy factor to determine physical addresses associated with a set of memory cells that is to store the additional host data and another set of memory cells that is to store redundancy metadata associated with the additional host data, as described above. If the redundancy factor obtained for the additional host data corresponds to the redundancy factor obtained for the host data programmed to the first set of memory cells, the determined physical address for the set of memory cells that is to store the redundancy metadata associated with the additional host data can correspond to the second physical address.
If the redundancy factor obtained for the additional host data does not correspond to the redundancy factor obtained for the host data programmed to the first set of memory cells, the determined physical address can correspond to an address associated with another set of memory cells (e.g., a third set of memory cells) of a memory device130,140. In some embodiments, the third set of memory cells can reside at a different memory device130,140from the second set of memory cells (i.e., that stores the redundancy metadata associated with the host data stored at the first set of memory cells). Further details regarding the redundancy metadata manager113are provided herein. FIG.2is a flow diagram of an example method200for redundancy metadata media management at a memory sub-system, in accordance with embodiments of the present disclosure. The method200can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method200is performed by the redundancy metadata manager component113ofFIG.1. In other or similar embodiments, one or more operations of method200is performed by another component of the memory sub-system controller115, or by a component of local media controller135. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. At block210, processing logic receives a request to program host data to a memory device of a memory sub-system. In some embodiments, the memory sub-system can correspond to memory sub-system110illustrated inFIG.3A. As illustrated inFIG.3A, multiple memory devices310can be connected to memory sub-system controller115of memory sub-system110. One or more fault tolerant stripes312can be included across the multiple memory devices310. As indicated above, a fault tolerant stripe refers to a collection of management units (MUs) (e.g., planes, blocks, pages, etc.) at particular portions of two or more memory devices310that store host data that is recoverable by memory sub-system controller115. In some embodiments, each memory device310illustrated inFIG.3A(e.g., memory device310A,310B,310N,310N+1) can correspond to a memory device130,140described with respect toFIG.1. It should be noted that althoughFIG.3Adepicts four memory devices310connected to memory sub-system controller115, embodiments of the present disclosure can be directed to any number of memory devices (e.g., one memory device310, two memory devices310, four memory devices310, eight memory devices310, etc.) connected to any number of memory sub-system controllers115(e.g., one memory sub-system controller115, two memory sub-system controllers115, etc.). 
It should also be noted that although embodiments of the present disclosure may be directed to a fault tolerant stripe across multiple memory devices310connected to memory sub-system controller115, embodiments of the present disclosure can be directed to a fault tolerant stripe across multiple portions of a single memory device310. As illustrated inFIG.3A, multiple fault tolerant stripes312can reside across multiple memory devices310. As described above, each memory device can include one or more MUs314(e.g., blocks, pages, etc.). A plane316at a respective memory device310can refer to a grouping of one or more MUs314residing at a particular region of memory device310. In some embodiments, a memory device310can include at least two planes316. For example, as illustrated inFIG.3A, each memory device310connected to memory sub-system controller can include four planes316that each include a grouping of one or more MUs314. It should be noted that although some embodiments of the disclosure are directed to memory devices310that include four planes316, embodiments of the present disclosure can be applied to memory devices that include any number of planes316(e.g., two planes316, four planes316, eight planes316, etc.). In some embodiments, each fault tolerant stripe312across memory devices310can be associated with a particular stripe identifier (ID) (e.g., a stripe index). For example, a first stripe312A across memory devices310can be associated with a first stripe ID (e.g., a first stripe index), a second stripe312B across memory devices310can be associated with a second stripe ID (e.g., a second stripe index), an nth stripe312N across memory devices310can be associated with an nth stripe ID (e.g., an nth stripe index), and/or an (n+1)th stripe312N+1 across memory devices310can be associated with an (n+1)th stripe ID (e.g., an (n+1)th stripe index). In some embodiments, each MU included in a respective stripe312can be associated with a particular MU ID (e.g., a MU index). For example, first stripe312A can include a first set of MUs that are each associated with a MU ID (e.g., MU-0, MU-1, MU-2, etc.). Second stripe312B can also include a second set of MUs314that are each associated with the MU IDs (e.g., MU-0, MU-1, MU-2, etc.). In some embodiments, a physical address associated with a set of memory cells of a memory device310can correspond to a stripe ID and a MU ID associated with the set of memory cells. For example, the physical address associated with a set of memory cells at MU314A can correspond to an (n+1)th stripe index associated with stripe312N+1 (e.g., S-N+1) and a MU index associated with MU314A (e.g., MU-0, as MU314A is depicted to be the first MU of stripe312N+1). In another example, the physical address associated with a set of memory cells at MU314B can correspond to an nth stripe index associated with stripe312N (e.g., S-N) and a MU index associated with MU314B (e.g., MU-0, as MU314B is depicted to be the first MU of stripe312B). In yet another example, the physical address associated with a set of memory cells at MU314C can correspond to the (n+1)th stripe index and a MU index associated with MU314C (e.g., MU-4). As indicated above, a super management unit (SMU) refers to a collection or grouping of MUs314for the purpose of memory management. In some embodiments, a SMU can include MUs314associated with a fault tolerant stripe (e.g., stripe312A). In other or similar embodiments, a SMU can include MUs314associated with two or more fault tolerant stripes312.
For example, in some embodiments, a SMU can include MUs314associated with a first fault tolerant stripe312A and MUs314associated with a second fault tolerant stripe312B. In some embodiments, each SMU of memory sub-system110can be associated with a particular SMU ID (e.g., a SMU index, a SMU address, etc.), in accordance with previously described embodiments. As indicated above, memory sub-system controller115can receive a request to program host data to a memory device310of memory sub-system110. The host data can be associated with a logical address (e.g., a logical page address, a logical block address, etc.). The logical address can be included with the request to program the host data, in some embodiments. In some embodiments, the logical address can correspond to a virtual SMU, a virtual fault tolerant stripe, and/or a virtual MU. In some embodiments, redundancy metadata manager component113can determine an identifier and/or an address associated with the virtual SMU, the virtual fault tolerant stripe, and/or the virtual MU. Further details regarding determining the virtual SMU, the virtual fault tolerant stripe, and/or the virtual MU are provided below. Referring back toFIG.2, at block212, processing logic obtains a redundancy factor that corresponds to the logical address associated with the host data. As indicated above, the redundancy factor can be a randomly generated number between zero and a number of SMUs associated with memory sub-system110. In some embodiments, each MU314of a respective fault tolerant stripe312across memory devices310can be associated with the same redundancy factor. In some embodiments, processing logic can obtain the redundancy factor using a logical-to-physical (L2P) data structure, such as L2P data structure322ofFIG.3A. As illustrated inFIG.3A, L2P data structure322can be stored at local memory119of memory sub-system controller115. In other or similar embodiments, L2P data structure322can be stored at another portion of memory sub-system110(e.g., at one or more portions of memory devices310). In some embodiments, L2P data structure322can include one or more entries that are configured to store a mapping between an address or an identifier (e.g., an index) associated with a virtual SMU and an address or an identifier associated with a physical SMU (i.e., residing at one or more portions of memory devices310).FIG.4depicts an example L2P data structure322, in accordance with some embodiments of the present disclosure. As illustrated inFIG.4, L2P data structure322can include one or more entries410. Each entry can include a virtual SMU field412that is configured to store an indication of an address or an identifier associated with a virtual SMU. Each entry410can also include a physical SMU field414that is configured to store an indication of an address or identifier associated with a physical SMU (i.e., residing at a memory device310). In some embodiments, redundancy metadata manager113(or another component of memory sub-system115) can generate a mapping between an address or identifier associated with a particular virtual SMU and an address or identifier associated with a particular physical SMU during an initialization of the memory sub-system110. For example, during an initialization of memory sub-system110, redundancy metadata manager113can determine an address associated with each physical SMU associated with memory sub-system115and can generate a mapping between the determined address and an identifier for a virtual SMU.
Redundancy metadata manager113can store the generated mapping at data structure322. In other or similar embodiments, redundancy metadata manager113(or another component of memory sub-system115) can generate a mapping between an address or identifier associated with a particular virtual SMU and an address or identifier associated with a particular physical SMU during a runtime of the memory sub-system110. For example, memory sub-system controller115can make MUs associated with a physical SMU available to store host data (i.e., open the SMU). Responsive to detecting that the SMU is open, redundancy metadata manager113can obtain an address or identifier associated with the SMU. Host system120can transmit a request to store host data to memory sub-system110, as described above. Redundancy metadata manager113can determine an address or an identifier associated with a virtual SMU based on a logical address associated with the host data. For example, redundancy metadata manager113can determine the address or identifier associated with the virtual SMU based on the logical address associated with the host data, which can be represented as LA, and a number of MUs included in a virtual SMU, which can be expressed as variable m. In some embodiments, redundancy metadata manager113can determine the value of m based on pre-configured or experimental data that is obtained by memory sub-system controller115before or during initialization of memory sub-system110. For illustrative purposes, the address or identifier associated with the virtual SMU, which can be expressed as LSA, can be represented as LA/m. Responsive to determining the address or identifier associated with the virtual SMU, redundancy metadata manager113can generate a mapping between the address or identifier associated with the physical SMU and the address or identifier associated with the virtual SMU and can store an indication of the logical address at an entry410of data structure322. As illustrated inFIG.4, each entry of data structure322can include a redundancy factor field416. The redundancy factor field416is configured to store an indication of a redundancy factor associated with a respective virtual SMU and/or physical SMU. As indicated above, in some embodiments, redundancy metadata manager113can obtain a redundancy factor associated with a logical address for host data using data structure322. In response to receiving a request to program host data, redundancy metadata manager113can determine an address or an identifier associated with a virtual SMU based on the logical address, as described above. Redundancy metadata manager113can identify an entry of data structure322that corresponds to the determined address or identifier and can determine whether the redundancy factor field416of the identified entry includes an indication of a redundancy factor. In response to determining that field416does not include an indication of the redundancy factor, redundancy metadata manager113can generate the redundancy factor by selecting a random number between 0 and a number of SMUs associated with memory sub-system110(which can be expressed as nSMU). For illustrative purposes, the redundancy factor, which can be expressed as RF, can be represented as rand(nSMU), where rand( ) refers to a random number generation function. Responsive to generating the redundancy factor, redundancy metadata manager113can add an indication of the redundancy factor to the entry associated with the virtual SMU.
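The lookup-or-generate behavior described above for the redundancy factor field416can be sketched as follows; the entry layout, the table, and the constants standing in for m and nSMU are illustrative assumptions rather than the disclosed implementation.

```python
# Illustrative sketch of the L2P entry layout of FIG. 4 and the
# lookup-or-generate behavior for the redundancy factor. M and N_SMU are
# hypothetical stand-ins for m and nSMU.
import random
from dataclasses import dataclass
from typing import Dict, Optional

M = 64       # hypothetical number of MUs per virtual SMU (m)
N_SMU = 128  # hypothetical number of SMUs in the memory sub-system (nSMU)

@dataclass
class L2PEntry:
    physical_smu: int                        # physical SMU field 414
    redundancy_factor: Optional[int] = None  # redundancy factor field 416

l2p_table: Dict[int, L2PEntry] = {}          # keyed by virtual SMU field 412

def virtual_smu(logical_address: int) -> int:
    """LSA = LA / m (integer division), per the description above."""
    return logical_address // M

def obtain_redundancy_factor(logical_address: int) -> int:
    """Return the redundancy factor for the vSMU, generating it on first use."""
    vsmu = virtual_smu(logical_address)
    # Identity vSMU-to-physical-SMU mapping here, purely for illustration.
    entry = l2p_table.setdefault(vsmu, L2PEntry(physical_smu=vsmu))
    if entry.redundancy_factor is None:
        entry.redundancy_factor = random.randrange(N_SMU)  # RF = rand(nSMU)
    return entry.redundancy_factor
```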
Referring toFIG.2, at block214, processing logic determines a first physical address associated with a first set of memory cells of the memory device based on the redundancy factor. In some embodiments, processing logic determines the first physical address based on an identifier or an address for a virtual fault tolerant stripe associated with the host data and/or an identifier or address for a virtual MU associated with the host data. Further details regarding determining the first physical address, the identifier or the address for the virtual fault tolerant stripe, and the identifier or the address for the virtual MU associated with the host data are provided in further detail with respect toFIG.5. At block216, processing logic determines a second physical address associated with a second set of memory cells of the memory device based on the redundancy factor. In some embodiments, processing logic determines the second physical address based on an identifier or an address for the virtual fault tolerant stripe, a number of MUs314associated with the fault tolerant stripe, a number of memory devices310associated with the memory sub-system110, and/or a number of partitions associated with each memory device310associated with memory sub-system110. FIG.5illustrates an example logical address to physical address translation500, in accordance with some embodiments of the present disclosure. As indicated above, host data of a request received by memory sub-system controller115can be associated with a logical address510, which can be expressed as LA. In some embodiments, redundancy metadata manager113can determine an identifier or an address for a logical fault tolerant stripe (e.g., logical stripe index512), and an identifier or an address for a logical MU (e.g., logical MU index514), based on logical address510. For illustrative purposes, logical stripe index512can be expressed as LSI and logical MU index514can be expressed as LMI. In some embodiments, LSI can be represented as (LA % m)/s, where m represents a number of MUs in a virtual SMU, as indicated above, and s represents a number of MUs in a virtual fault tolerant stripe. In some embodiments, redundancy metadata manager113can obtain the values of m and/or s from local memory119(e.g., one or more registers of local memory119). Memory sub-system controller115can obtain the values of m and/or s based on pre-configured or experimental data before or during initialization of memory sub-system110. In additional or alternative embodiments, LMI can be represented as (LA % m) % s. In some embodiments, redundancy metadata manager113can provide logical stripe index512and a redundancy factor516determined for the host data of the request, as described above, as input to a first function518. The first function518can be configured to determine an identifier or an address associated with a physical stripe312across memory devices310(i.e., physical stripe index520) based on a given identifier or address for a virtual fault tolerant stripe and a given redundancy factor. Redundancy metadata manager113can obtain one or more outputs of first function518and can determine physical stripe index520based on the one or more obtained outputs. For illustrative purposes, physical stripe index520can be expressed as PSI and can be represented as (LSI+[RF/s]) % (m′/s′), where RF represents the redundancy factor, m′ represents a number of MUs in a physical SMU and s′ represents a number of MUs in a physical stripe312.
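The decomposition of the logical address and the first function518can be sketched as follows; the parameter values are hypothetical, and reading [RF/s] as floor (integer) division is an assumption.

```python
# Sketch of the logical index decomposition and the first function 518 using
# the expressions given above. Parameter values are hypothetical examples.
M = 64        # MUs per virtual SMU (m)
S = 8         # MUs per virtual fault tolerant stripe (s)
M_PRIME = 64  # MUs per physical SMU (m')
S_PRIME = 8   # MUs per physical fault tolerant stripe (s')

def logical_stripe_index(la: int) -> int:
    """LSI = (LA % m) / s."""
    return (la % M) // S

def logical_mu_index(la: int) -> int:
    """LMI = (LA % m) % s."""
    return (la % M) % S

def physical_stripe_index(lsi: int, rf: int) -> int:
    """First function 518: PSI = (LSI + [RF / s]) % (m' / s')."""
    return (lsi + rf // S) % (M_PRIME // S_PRIME)
```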
Redundancy metadata manager113can obtain the values of m′ and/or s′ from local memory119(e.g., one or more registers of local memory119). Memory sub-system controller115can obtain the values of m′ and/or s′ based on pre-configured or experimental data before or during initialization of memory sub-system110, as described above. In additional or alternative embodiments, redundancy metadata manager113can provide logical MU index514and redundancy factor516as input to a second function522. The second function522can be configured to determine an identifier or an address associated with a physical MU314of a physical stripe312(i.e., physical MU index524) based on a given identifier or address for a virtual MU and a given redundancy factor. Redundancy metadata manager113can obtain one or more outputs of second function522and can determine physical MU index524based on the one or more obtained outputs. For illustrative purposes, physical MU index524can be expressed as PMI and can be represented as (LMI+RMI+1)% s′, wherein RMI represents an identifier or an address (e.g., an index) associated with a set of memory cells that is to store redundancy metadata associated with host data of the request. Further details regarding determining a value of RMI are provided below. As indicated above, a physical address (e.g., physical address526) associated with a set of memory cells of memory device310can correspond to an identifier or address for a fault tolerant stripe and an identifier or address for a MU associated with the set of memory cells. Accordingly, redundancy metadata manager113can determine physical address526based on physical stripe index520and physical MU index524. Redundancy metadata manager113can further determine physical address526based on an identifier or address for a physical SMU associated with the set of memory cells (i.e., physical SMU index520). In some embodiments, redundancy metadata manager113can determine the identifier or address for the physical SMU based on the physical SMU field414of an entry410of data structure322that corresponds to the virtual SMU associated with the host data, as described above. For illustrative purposes, physical SMU index520can be expressed as PSA, physical stripe index520can be expressed as PSI, and physical MU index can be expressed as PMI. Physical address526can be represented as (PSA*m′)+(PSI*s′)+PMI, where m′ represents a number of MUs in a physical SMU and s′ represents a number of MUs in a physical stripe312. As indicated above, redundancy metadata manager113can determine a physical address associated with a second set of memory cells that are to store redundancy metadata associated with the host data. In some embodiments, the physical address associated with the second set of memory cells can correspond to an index associated with the second set of memory cells, which can be expressed as RMI, as indicated above. For illustrative purposes, RMI can be expressed, in some embodiments, as: ⌊(LSI+RF % s)/(d*p)⌋ % s′, where d represents a number of memory devices310(e.g., die) associated with memory sub-system110and p represents a number of partitions per memory device310. Redundancy metadata manager113can obtain the identifier or the address for the virtual fault tolerant stripe (e.g., logical stripe index512or LSI) as described above.
In some embodiments, redundancy metadata manager113can obtain the number of MUs314associated with the fault tolerant stripe (i.e., s), the number of memory devices310associated with the memory sub-system110(i.e., d), and/or the number of partitions associated with each memory device310associated with memory sub-system110(i.e., p) from local memory119(e.g., one or more registers of local memory119). Memory sub-system controller115can obtain the number of MUs314associated with the fault tolerant stripe, the number of memory devices310associated with the memory sub-system110, and/or the number of partitions associated with each memory device310associated with memory sub-system110based on pre-configured or experimental data before or during initialization of memory sub-system110, as described above. Referring back toFIG.2, at block218, processing logic programs the host data to the first set of memory cells. At block220, processing logic programs the redundancy metadata associated with the host data to the second set of memory cells. In some embodiments, the host data of the request can be the first host data that is programmed to MUs314of a particular fault tolerant stripe312. In such embodiments, the redundancy metadata associated with the host data can be a copy of the host data. Accordingly, processing logic can program a copy of the host data to the second set of memory cells. In other or similar embodiments, the host data is not the first host data that is programmed to MUs314of the particular fault tolerant stripe312. In such embodiments, processing logic can generate the redundancy data associated with the host data based on the host data of the request and additional host data already residing on one or more MUs314of the particular fault tolerant stripe312and store the redundancy data at the second set of memory cells, as described herein. FIG.3Bdepicts host data and redundancy metadata programmed to memory cells associated with a respective fault tolerant stripe312(e.g., stripe312A), in accordance with some embodiments of the disclosure. As illustrated inFIG.3B, host data of the request can be programmed to a first set of memory cells332corresponding to one or more first MUs of a first fault tolerant stripe312A. As also illustrated inFIG.3B, redundancy metadata associated with the host data can be programmed to a second set of memory cells334corresponding to one or more second MUs of the first fault tolerant stripe312A. After the host data and redundancy metadata are programmed to memory sub-system110, memory sub-system controller115can receive requests from host system120to program additional host data to a memory device310of memory sub-system110. In some embodiments, the logical address associated with the additional host data can be associated with the same virtual SMU and/or virtual fault tolerant stripe as the logical address for the host data programmed to memory cells332. Redundancy metadata manager113can determine, based on the logical address associated with the additional host data, that redundancy metadata associated with the additional host data is to be programmed to the second set of memory cells334, in accordance with embodiments described herein. Redundancy metadata manager113can generate updated redundancy metadata associated with the host data associated with fault tolerant stripe312A and program the updated redundancy metadata at the second set of memory cells334, in accordance with embodiments provided below.
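Combining the expressions given above, the redundancy MU index, the second function522, and the assembly of physical address526can be sketched as follows; the parameter values are again hypothetical and are repeated so the sketch runs on its own.

```python
# Sketch of the redundancy MU index, the second function 522, and the
# assembly of physical address 526, per the expressions given above.
S = 8         # MUs per virtual fault tolerant stripe (s)
S_PRIME = 8   # MUs per physical fault tolerant stripe (s')
M_PRIME = 64  # MUs per physical SMU (m')
D = 4         # memory devices (dice) in the sub-system (d)
P = 2         # partitions per memory device (p)

def redundancy_mu_index(lsi: int, rf: int) -> int:
    """RMI = floor((LSI + RF % s) / (d * p)) % s'."""
    return ((lsi + rf % S) // (D * P)) % S_PRIME

def physical_mu_index(lmi: int, rmi: int) -> int:
    """Second function 522: PMI = (LMI + RMI + 1) % s'."""
    return (lmi + rmi + 1) % S_PRIME

def physical_address(psa: int, psi: int, pmi: int) -> int:
    """Physical address 526 = (PSA * m') + (PSI * s') + PMI."""
    return (psa * M_PRIME) + (psi * S_PRIME) + pmi
```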
FIG.6is a flow diagram of another example method600for redundancy metadata media management at a memory sub-system, in accordance with some embodiments of the present disclosure. The method600can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method600is performed by the redundancy metadata manager component113ofFIG.1. In other or similar embodiments, one or more operations of method600are performed by another component of the memory sub-system controller, or by a component of local media controller135. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. At block610, processing logic can receive a first request to program first host data to a memory device. At block612, processing logic can receive a second request to program second host data to the memory device. Processing logic can receive the first and second requests in accordance with previously described embodiments. In some embodiments, the first host data can be associated with a first logical address (e.g., indicated in the first request) and the second host data can be associated with a second logical address (e.g., indicated in the second request). The first logical address can correspond to a first set of memory cells of a fault tolerant stripe (e.g., fault tolerant stripe312) of a memory sub-system (e.g., memory sub-system110). The second logical address can correspond to a second set of memory cells of the fault tolerant stripe. In some embodiments, processing logic can receive the first request and determine a physical address associated with the first set of memory cells based on the first logical address, in accordance with embodiments described with respect toFIG.2. Processing logic can also determine a physical address of a third set of memory cells that is to store redundancy metadata associated with the first host data based on the first logical address, as described with respect toFIG.2. Responsive to determining the physical address associated with the first set of memory cells and the physical address associated with the third set of memory cells, processing logic can program the first host data to the first set of memory cells and redundancy metadata associated with the first host data to the third set of memory cells, as described above. In some embodiments, processing logic can program the first host data and the redundancy metadata associated with the first host data before the second request to program the second host data is received. At block614, processing logic can determine, based on a redundancy factor that corresponds to the first logical address associated with the first host data and the second logical address associated with the second host data, that redundancy metadata associated with the first host data and the second host data is to be stored at a particular set of memory cells.
In some embodiments, processing logic can obtain a redundancy factor that corresponds to the second logical address. For example, processing logic can determine a vSMU associated with the second host data, in accordance with previously described embodiments. As indicated above, the second logical address can correspond to the same fault tolerant stripe312that stores the first host data and the redundancy metadata associated with the first host data. Accordingly, the vSMU associated with the second host data can correspond to the vSMU associated with the first host data. Processing logic can identify an entry410of data structure322that corresponds to the vSMU associated with the second host data, in accordance with previously described embodiments and can extract, from the identified entry410, an indication of the redundancy factor associated with the second host data. As the vSMU associated with the second host data corresponds to the vSMU associated with the first host data, the redundancy factor associated with the second host data can be the redundancy factor associated with the first host data. Processing logic can determine a physical address associated with the second set of memory cells (i.e., the memory cells to store the second host data) based on the redundancy factor, in accordance with previously described embodiments. For example, processing logic can determine a logical stripe index512and a logical MU index512associated with the second host data, as described above. Processing logic can provide the logical stripe index512and the redundancy factor (e.g., redundancy factor516) as input to first function518and can determine physical stripe index520associated with the second set of memory cells based on one or more outputs of first function518. Processing logic can provide the logical MU index512and redundancy factor516as input to second function522and can determine physical MU index524associated with the second set of memory cells based on one or more outputs of second function522. Processing logic can determine the physical address526associated with the second set of memory cells based on physical stripe index520, physical MU index524, and physical SMU index520(i.e., obtained from the identified entry410of data structure522), in accordance with previously described embodiments. Processing logic can program the second host data to the second set of memory cells, in accordance with previously described embodiments. Processing logic can also determine the physical address associated with the third set of memory cells (i.e., the memory cells to store redundancy metadata associated with the second host data) based on the redundancy factor, an identifier or an address for a virtual fault tolerant stripe associated with the second host data, a number of MUs associated with the fault tolerant stripe, a number of memory devices310associated with memory sub-system110, and/or a number of partitions associated with each memory device310associated with memory sub-system110, as described above. As indicated above, the redundancy factor associated with the second host data can be the redundancy factor associated with the first host data. In addition, the identifier or the address for the virtual fault tolerant stripe associated with the second host data can be the identifier or the address for the virtual fault tolerant stripe associated with the first host data. 
Accordingly, processing logic can determine, based on the redundancy factor, that redundancy metadata associated with the second host data is to be stored at the same set of memory cells that store the redundancy metadata associated with the first host data (e.g., the third set of memory cells). At block616, processing logic can generate redundancy metadata associated with the first host data and the second host data. In some embodiments, processing logic can obtain the first host data from the first set of memory cells and execute a redundancy metadata operation based on the first host data and the second host data. For example, processing logic can execute an XOR operation based on the first host data and the second host data to generate redundancy metadata associated with the first host data and the second host data. At block618, processing logic can program the generated redundancy metadata to the third set of memory cells. In some embodiments, the third set of memory cells can store redundancy metadata associated with the first host data, as described above. Processing logic can replace the redundancy metadata associated with the first host data at the third set of memory cells with the redundancy metadata associated with the first host data and the second host data (i.e., generated at block616). FIG.3Cdepicts first host data, second host data, and redundancy metadata associated with the first and second host data programmed to memory cells associated with a respective fault tolerant stripe312(e.g., stripe312A), in accordance with some embodiments of the present disclosure. As illustrated inFIG.3C, first host data can be programmed to a set of memory cells332corresponding to one or more first MUs of a first fault tolerant stripe312A, as described above. As also illustrated inFIG.3C, second host data can be programmed to a set of memory cells336corresponding to one or more second MUs of first fault tolerant stripe312A. Before the second request to program the second host data to memory sub-system110is received, memory cells334can store redundancy metadata associated with the first host data (i.e., as illustrated inFIG.3B). After (or before) the second host data is programmed to memory cells336, redundancy metadata manager113can generate updated redundancy metadata that includes redundancy metadata associated with the first host data and the second host data, as described above. Redundancy metadata manager113can program the updated redundancy metadata to memory cells334, as illustrated inFIG.3C. In some embodiments, memory sub-system controller115can program host data to memory cells associated with each host data MU of a respective fault tolerant stripe, in accordance with embodiments described herein. As host data is programmed to memory cells associated with the respective fault tolerant stripe, redundancy metadata manager113can update redundancy metadata associated with the fault tolerant stripe and can store the updated redundancy metadata at the set of memory cells of the fault tolerant stripe that is allocated to store redundancy metadata, in accordance with embodiments provided herein. After each host data MU of the fault tolerant stripe stores host data (i.e., no memory cells of the fault tolerant stripe are available to store host data), memory sub-system controller115can “close” the fault tolerant stripe and “open” another fault tolerant stripe to store incoming host data.
In response to receiving a request to store host data to the memory sub-system110, redundancy metadata manager113can obtain a redundancy factor corresponding to a logical address of the host data, as described above. As the host data is to be stored at memory cells of the newly opened fault tolerant stripe, the redundancy factor corresponding to the logical address can be different from the redundancy factor corresponding to the logical addresses associated with host data programmed to the “closed” fault tolerant stripe. Accordingly, redundancy metadata manager113can identify a different set of memory cells (e.g., at a different memory device310or a different portion of memory device310) that is to store redundancy metadata associated with the incoming host data. FIG.3Ddepicts additional host data programmed to memory cells associated with a different fault tolerant stripe (e.g., stripe312B), in accordance with embodiments of the present disclosure. As illustrated inFIG.3D, memory sub-system controller115can program host data received after stripe312A is closed at memory cells338. Redundancy metadata manager113can determine, based on a logical address associated with the host data, that redundancy metadata associated with the host data is to be stored at memory cells340. As illustrated inFIG.3D, memory cells340can reside at a different memory device (e.g., memory device310N) than the memory device that includes memory cells334, which are configured to store redundancy metadata for stripe312A. FIG.7illustrates an example machine of a computer system700within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed. In some embodiments, the computer system700can correspond to a host system (e.g., the host system120ofFIG.1) that includes, is coupled to, or utilizes a memory sub-system (e.g., the memory sub-system110ofFIG.1) or can be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to the redundancy metadata manager component113ofFIG.1). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The example computer system700includes a processing device702, a main memory704(e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or RDRAM, etc.), a static memory706(e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system718, which communicate with each other via a bus730.
Processing device702represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device702can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device702is configured to execute instructions726for performing the operations and steps discussed herein. The computer system700can further include a network interface device708to communicate over the network720. The data storage system718can include a machine-readable storage medium724(also known as a computer-readable medium) on which is stored one or more sets of instructions726or software embodying any one or more of the methodologies or functions described herein. The instructions726can also reside, completely or at least partially, within the main memory704and/or within the processing device702during execution thereof by the computer system700, the main memory704and the processing device702also constituting machine-readable storage media. The machine-readable storage medium724, data storage system718, and/or main memory704can correspond to the memory sub-system110ofFIG.1. In one embodiment, the instructions726include instructions to implement functionality corresponding to a redundancy metadata manager component (e.g., the redundancy metadata manager component113ofFIG.1). While the machine-readable storage medium724is shown in an example embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein. The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory components, etc. In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
DETAILED DESCRIPTION In the following description, numerous details are set forth, such as data storage device configurations, controller operations, and the like, in order to provide an understanding of one or more aspects of the present disclosure. It will be readily apparent to one skilled in the art that these specific details are merely exemplary and not intended to limit the scope of this application. In particular, the functions associated with the memory device may be performed by hardware (e.g., analog or digital circuits), a combination of hardware and software (e.g., program code or firmware, stored in a non-transitory computer-readable medium, that is executed by processing or control circuitry), or any other suitable means. The following description is intended solely to give a general idea of various aspects of the disclosure, and does not limit the scope of the disclosure in any way. FIG.1is a block diagram of one example of a system100that includes a data storage device102. In some implementations, the data storage device102is a flash memory device. For example, the data storage device102is a Secure Digital SD® card, a microSD® card, or another similar type of data storage device. The data storage device102illustrated inFIG.1includes a non-volatile memory104and a controller106. The data storage device102is coupled to a host device108. The host device108is configured to provide data110(for example, user data) to the data storage device102to be stored, for example, in the non-volatile memory104. The host device108is, for example, a smart phone, a music player, a video player, a gaming console, an e-book reader, a personal digital assistance device, a tablet, a notebook computer, or another similar device. The non-volatile memory104of the data storage device102is coupled to the controller106. In some implementations, the non-volatile memory104is NAND flash memory. The non-volatile memory104illustrated inFIG.1includes a plurality of memory units112A-112N (for example, flash memory units). Each of the plurality of memory units112A-112N includes a plurality of storage elements. For example, inFIG.1, the memory unit112A includes a representative storage element114. In some implementations, the storage element114is a multi-level cell flash memory, such as a 2 levels cell (“SLC”), a 4 levels cell(“MLC”), an 8 levels cell (“TLC”), a 16 levels cell (“QLC”), or a flash memory cell having a larger number of bits per cell (for example, between five and ten bits per cell). In some implementations, the plurality of memory units112A-112N are included in a word line or page of a multi-level cell flash memory. In other implementations, the plurality of memory units112A-112N are spread across multiple word lines or pages of a multi-level cell flash memory. The controller106illustrated inFIG.1includes a host interface116, a memory interface118, a controller circuit120, and an ECC engine122. The controller106is illustrated inFIG.1in a simplified form. One skilled in the art would recognize that a controller for a non-volatile memory would include additional modules or components other than those specifically illustrated inFIG.1. Additionally, although the data storage device102is illustrated inFIG.1as including the controller106and modules for performing, for example, ECC, in other implementations, the controller106is instead located within the host device108or is otherwise separate from the data storage device102. 
As a result, ECC and other flash translation layer (“FTL”) operations that would normally be performed by the controller106(for example, wear leveling, bad block management, data scrambling, garbage collection, address mapping, etc.) can be performed by the host device108or another device that connects to the data storage device102. The controller106is configured to send data to, and receive data and instructions from, the host device108via the host interface116. The host interface116enables the host device108to, for example, read from the non-volatile memory104and to write to the non-volatile memory104using any suitable communication protocol. Suitable communication protocols include, for example, the Universal Flash Storage (“UFS”) Host Controller Interface specification, the Secure Digital (“SD”) Host Controller specification, etc. The controller106is also configured to send data and commands to (e.g., the memory operation134), and receive data from, the non-volatile memory104with the memory interface118. As an illustrative example, the controller106is configured to send data and a write command to instruct the non-volatile memory104to store data in a particular memory location in the non-volatile memory104. The controller106is also configured to send a read command to the non-volatile memory104to read data from a particular memory location in the non-volatile memory104. In some examples, the controller106is coupled to the non-volatile memory104with a bus132in combination with the memory interface118. The bus132may include multiple distinct channels to enable the controller120to communicate with each of the one or more memory units112in parallel with, and independently of, communication with the other memory dies103. The controller circuit120illustrated inFIG.1includes an processor124(for example, a microprocessor, a microcontroller, a field-programmable gate array [“FPGA” ] semiconductor, an application specific integrated circuit [“ASIC” ], or another suitable programmable device) and a non-transitory computer readable medium or memory126(for example, including random access memory [“RAM” ] and read only memory [“ROM” ]). The processor124is operatively connected to the various modules within the controller circuit120, the controller106, and the data storage device102. For example, firmware is loaded in a ROM of the memory126as computer executable instructions. Those computer executable instructions are capable of being retrieved from the memory126and executed by the processor124to control the operation of the controller circuit120and perform the processes described herein (for example, data alignment and ECC). In some implementations, one or more modules of the controller circuit120correspond to separate hardware components within the controller circuit120. In other implementations, one or more modules of the controller circuit120correspond to software stored within the memory126and executed by the processor124. The memory126is configured to store data used by the controller circuit120during operation. The ECC engine122is configured to receive data to be stored in the non-volatile memory104. The ECC engine122is configured to encode data using an ECC encoding scheme. In some implementations, the ECC encoding scheme is a Reed Solomon encoding scheme, a Bose-Chaudhuri-Hocquenghem (“BCH”) encoding scheme, a low-density parity check (“LDPC”) encoding scheme, or another suitable encoding scheme. The ECC engine122illustrated inFIG.1includes a decoder128and an encoder130. 
The decoder128is configured to decode data that is read from the non-volatile memory104. For example, the decoder128is configured to decode a codeword read from the non-volatile memory104. The codeword may include, for example, the 4*k data bits and the 4*m parity bits, described in greater detail below. The decoder128is configured to detect and correct bit errors that are present in the data read from the non-volatile memory104. The decoder128corrects bit errors present in the data read from the non-volatile memory104up to an error correction capability of the implemented ECC scheme. In some implementations, the ECC engine122is included in the controller circuit120. As previously stated, coding schemes described herein use the expected error rate of each programming state induced by the cell voltage distribution (CVD) in a memory device.FIG.2illustrates a graph200of an example CVD of a QLC memory. A plurality of memory cell states S0-S15are shown by legend205. The graph200includes an x-axis210representing the gate voltage values of the plurality of memory cell states S0-S15. The graph200includes a y-axis215representing the bit distribution for each of the plurality of memory cell states S0-S15. In some implementations, one or more of the memory cell states S0-S15may overlap another one of the memory cell states S0-S15at a given gate voltage. Such an overlap results in a cell error rate (CER) for each overlap. Graph200includes fifteen CERs for the fifteen overlaps between the sixteen memory cell states S0-S15. Additionally, each memory cell state S0-S15in graph200is not symmetrical, and may have a different range of gate voltage or bit distribution. In some ideal implementations, each memory cell state S0-S15is identical. To tailor LDPC to match the CVD and its memory error model (defined by the CERs), what is herein referred to as a “memory matching transform” (i.e., a m2transform) is applied to the memory model, ensuring that the errors that are introduced by each read threshold affect only a subset of the coded bits.FIGS.3A-3Billustrate an example memory matching transform for a QLC memory. As illustrated inFIG.3A, the QLC memory has a plurality of memory cell states300, where each memory cell state300stores n=4 user data bits, with a total of 2^n=16 states (beginning with state 0 and ending with state 15). Each memory cell state300has a lower page302, a middle page304, an upper page306, and a top page308, each page configured to store a bit. In the example ofFIG.3A, only one bit is changed when moving from left to right through the states. For example, state 0 stores the bits [1 1 1 1], while state 1 stores the bits [1 1 1 0] in which only the bit in the top page308has changed. State 2 stores the bits [1 0 1 0] in which only the bit in the middle page304has changed. To read each memory cell state300, t=2^n−1=15 read thresholds (i.e., one read threshold per CER) are required. Hence, the memory matching transform converts every n=4 user data pages into t=15 transformed data pages320. The transform fromFIG.3AtoFIG.3Boccurs vertically. For example, while state 0 stores the bits [1 1 1 1], the transformed page 0 stores the bits [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]. State 1 stores the bits [1 1 1 0], while transformed page 1 stores the bits [0 1 1 1 1 1 1 1 1 1 1 1 1 1 1]. With the transformed data pages320, any cell error introduced by the j'th read threshold will introduce a bit error only at the j'th transformed page320. Each transformed page has a CER value340.
For example, consider a cell programmed to the first of the plurality of memory cell states300(i.e., state 0) such that the values of the transformed pages corresponding to this specific cell are [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]. However, during reading, a cell error is introduced by the 1stread threshold, resulting in reading the cell's state as the second of the plurality of memory cell states300(i.e., state 1) and thus the corresponding transformed page values read out are [0 1 1 1 1 1 1 1 1 1 1 1 1 1 1]. Hence, a bit error is only introduced in the 1stpage of the 15 transformed pages. In another example, consider a cell programmed to the fourth of the plurality of memory cell states300(i.e., state 3) such that the values of the transformed pages corresponding to this specific cell are [0 0 0 1 1 1 1 1 1 1 1 1 1 1 1]. However, during reading, a cell error is introduced by the 3rdread threshold, resulting in reading the cell's state as the third of the plurality of memory cell states300(i.e., state 2) and thus the corresponding transformed page values read out are [0 0 1 1 1 1 1 1 1 1 1 1 1 1 1]. Hence, a bit error is only introduced in the 3rdpage of the 15 transformed pages. Accordingly, as a result of the memory matching transform, the bit error rate introduced to each page of the transformed pages320corresponds to the cell error rate associated with the corresponding read threshold. Additionally, an appropriate protection level may be assigned to each transformed page corresponding to the expected CER of the corresponding read threshold. Different memory matching transforms may also be used in place of the transform shown byFIGS.3A-3B. For example, a page level matching transform may be used. The overall bit error rate (BER) of the original n=4 pages (i.e., the Lower page302, the Middle page304, the Upper page306, and the Top page308) and the overall BER of the transformed pages320is the same. Specifically: BER_Lower+BER_Middle+BER_Upper+BER_Top=BER_t1+BER_t2+ . . . +BER_t15. On average the BER observed by each transformed page320is n/t=4/15 that of an original page. Accordingly, the overall ECC redundancy required for protecting the transformed pages320is the same as the ECC redundancy required for protecting the n=4 original data pages. The transformed pages320are encoded using an LDPC encoder (i.e., a memory matched LDPC encoder, a m2LDPC encoder) which accounts for the error rates of the transformed pages320. An amount of parity bits is assigned to each of the transformed pages320according to their expected BER. Accordingly, parity bits may be symmetrically assigned should each transformed page320have the same expected BER. In some examples, when the transformed pages320have varying BERs, the parity bits may be asymmetrically assigned. In other examples, a single LDPC code may be used, where within the underlying bipartite graph representing the LDPC code, a different degree spectrum (i.e., a different allocation of parity check equations per bit) is used for each of the transformed pages320. The degree spectrum then corresponds to the respective transformed page's expected BER. For example, a subset of bits expected to exhibit a higher BER, such as those influenced by a more error prone read threshold, participate in more parity check equations than a subset of bits expected to exhibit a lower comparative BER. FIG.4illustrates an example process400for the memory matched LDPC encoding and decoding method. The process400may be performed by the ECC engine122.
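The two read-error examples above follow directly from the structure of the transform: under the mapping ofFIGS.3A-3B, a cell in state s contributes a 0 to transformed page j when s is at or above the j'th threshold and a 1 otherwise, so confusing two adjacent states flips exactly one transformed bit. The sketch below is an illustrative encoding of that observation together with a quick self-check; it is not the patented encoder, and the 0/1 convention and names are assumptions consistent with the examples above.

```c
#include <stdio.h>

#define NUM_STATES      16   /* 2^n states for n = 4 bits per cell        */
#define NUM_THRESHOLDS  15   /* t = 2^n - 1 read thresholds / pages       */

/* Bit contributed by a cell in state 'state' to transformed page j
 * (j = 1..15): 1 for states below threshold j, 0 at or above it.          */
static int transformed_bit(int state, int j)
{
    return state < j ? 1 : 0;
}

int main(void)
{
    /* A read error at threshold j confuses states j-1 and j; verify that
     * this flips only the bit of transformed page j.                      */
    for (int j = 1; j < NUM_STATES; j++) {
        int flipped = 0;
        for (int page = 1; page <= NUM_THRESHOLDS; page++)
            flipped += transformed_bit(j - 1, page) != transformed_bit(j, page);
        printf("threshold %2d: %d transformed bit(s) differ\n", j, flipped);
    }
    return 0;
}
```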
In the example ofFIG.4, n=4 user data pages (i.e., the Lower page302, the Middle page304, the Upper page306, and the Top page308) are provided as input data, each page comprising k bits. The input data of 4*k bits are provided to memory matching transform encoding block405. The memory matching transform block encoding 405 transforms the input data to t=15 transformed pages (e.g., the transformed pages320) of k bits. The transformed 15*k bits are encoded by a memory matched LDPC encoder410, which produces n*m=4*m parity bits. In parallel to the memory matching transform encoding block405and the memory matched LDPC encoder410, the 4*k original user bits are stored in the memory104(i.e., the QLC data pages) with the 4*m parity bits, such that there are k+m bits per page. The transformed pages are not stored directly in the memory104, as a memory cell can store only n pages, and cannot directly store t=2n−1 pages. Rather, during a read operation, the transformed pages are inferred from the read cell voltages (Vt) via an inverse memory matching transform. For example, the cell's state, the values of the logical pages (e.g., Lower page302, Middle page304, Upper page306, and Top page308), and the values of the transformed pages320can be inferred from the read cell's Vt. The read operation retrieves a quantized version of the cells' Vt (denoted inFIG.5as Y), at any required resolution. In one embodiment, a read is performed at the nominal read thresholds shown inFIG.2to retrieve the Hard Bit (HB) pages and infer the cells' states. In another embodiment, a higher read resolution (determining the cell's Vt more finely) to retrieve Soft Bits (SB) in addition to the Hard Bits is used for assigning reliabilities to the read values. Each Hard Bit corresponds to the logical pages (e.g., Lower page302, Middle page304, Upper page306, and Top page308) read from the memory104. Each Soft Bit indicates how close the cell's Vt is to a read threshold, as illustrated inFIG.8. Accordingly, the Soft Bit indicates how likely the Hard Bit is incorrect (e.g., the likelihood of a read error). The combination of the cell's read Hard Bits and Soft Bits is denoted as Y and corresponds to a quantized version of the cell's Vt. The read cell value Y can be used to determine the probability that the cell was programmed to different states s=0, 1, . . . , 2n−1. The probability that a certain value Y was read from a cell that was programmed to state s, denoted as Pr(Y|s), may be computed based on the CVD model. During a read operation, decoding is performed on the read cell values Y in m2LDPC decoder415, in order to correct the errors introduced by memory104and recover the user data that was stored in the memory104. The m2LDPC decoding operation applies a unique iterative message passing decoding algorithm. The m2LDPC message passing algorithm can be viewed as operating over a tri-partite graph (as opposed to conventional LDPC message passing decoding which operates over a bi-partite graph), having “cell nodes,” “bit nodes,” and “check nodes,” with appropriate message computation rules. The messages exchanged over the edges of the tri-partite graph convey estimations on the codeword bits, which improve iteratively. A common metric used for conveying the bit estimations is Log-Likelihood-Ratio (LLR). 
The LLR for a bit b, given observation V, is defined according to Equation (1): LLR_b(V)=log[Pr(b=0|V)/Pr(b=1|V)]  (1), where log base 2 is assumed for calculations described herein, but bases of other values may be used. An example tri-partite graph500is illustrated inFIG.5. The tri-partite graph500includes cell nodes505, bit nodes510, and check nodes515. The cell nodes505comprise k information cells (“storing” the original n*k=4*k information bits) and m parity cells (“storing” the n*m=4*m parity bits). Each cell node505is connected to t=15 bit nodes, corresponding to the t=15 transformed information bits that the cell indirectly stores. Each parity cell node is connected to n=4 bit nodes, corresponding to the n=4 parity bits that the cell directly stores. The operation performed between the information cell nodes505and their corresponding bit nodes510is an “inflation” operation, “inflating” from the n=4 read bits to the t=15 transformed bits. The inflation operation corresponds to the memory matching transform at the log likelihood ratio (LLR) level. The memory matching transform performed between the cell nodes505and the bit nodes510computes LLR estimations for the transformed bits, denoted as T messages. The LLR message from information cell node f to bit node v is computed according to the following Expression (2a): T_fv=log[Σ_{s=i..15} 2^(Σ_{j=1..s, j≠i} Q_{v_j f})·P(Y|s) / Σ_{s=0..i−1} 2^(Σ_{j=1..s} Q_{v_j f})·P(Y|s)]  (2a). The LLR message from parity cell node f to bit node v is computed according to the following Expression (2b): T_fv=log[Σ_{s: s(v)=0} 2^(Σ_{j=1..n, v_j≠v} (1−s(v_j))·Q_{v_j f})·P(Y|s) / Σ_{s: s(v)=1} 2^(Σ_{j=1..n, v_j≠v} (1−s(v_j))·Q_{v_j f})·P(Y|s)]  (2b), where s(v) is the value of bit v in state s. The LLR message from bit node v to cell node f is computed according to Expression (3): Q_vf=Σ_{c′∈N(v)} R_{c′v}  (3). A conventional LDPC message passing decoding is performed between the bit nodes510and the check nodes515by exchanging bit-to-check messages (denoted as Q messages) and check-to-bit messages (denoted as R messages). The check nodes impose parity check constraints on the bit nodes connected to them (as guaranteed by the encoding operation, which generated the parity bits, such that the parity check constraints were met). The LLR message from bit node v to check node c (conveying the current bit estimation) is computed according to Expression (4): Q_vc=T_fv+Σ_{c′∈N(v)\c} R_{c′v}  (4). The LLR message from check node c to bit node v (conveying an updated bit estimation based on the fact that the bits connected to the check nodes should satisfy a parity check constraint) is computed according to Expression (5): R_cv=φ^−1(Σ_{v′∈N(c)\v} φ(Q_{v′c}))  (5), where φ(x)=φ^−1(x)=(sign(x), −log2(tanh(x/2))). In the example ofFIG.5, each check node515is connected to four bit nodes510. In other examples, each check node is connected to more or fewer than four bit nodes510. The four bit nodes510that connect to a given check node515add to 0 (modulo 2). If there is an error, the check node515instead adds to 1 (modulo 2). The error can then be detected and corrected when the check node515adds to 1. In some examples, the memory matching transforms may also be based on the current bit estimation, conveyed by the Q messages. Returning toFIG.4, 4*(k+m) “hard bits” and 4*(k+m) “soft bits” are input into the memory matched LDPC decoder415.
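Expressions (4) and (5) have the familiar sum-product form, and a simplified sketch of those two updates is shown below. It uses natural-log LLRs (for which φ is exactly its own inverse), splits φ into separate sign and magnitude handling, and omits the cell-node T-message computation of Expressions (2a)-(2b), so it illustrates the message rules rather than the full m2LDPC decoder; all names and array layouts are hypothetical.

```c
#include <math.h>
#include <stddef.h>

/* phi(x) = -ln(tanh(x/2)) for x > 0; with natural logs phi is its own
 * inverse, which is the property Expression (5) relies on.                */
static double phi(double x)
{
    if (x < 1e-12)          /* clamp tiny inputs to keep tanh() away from 0 */
        x = 1e-12;
    return -log(tanh(x / 2.0));
}

/* Check-to-bit message R_cv (Expression (5)): combine all bit-to-check
 * messages Q_v'c arriving at check node c except the one from bit v.      */
static double check_to_bit(const double *q, size_t degree, size_t exclude)
{
    double mag = 0.0;
    double sgn = 1.0;
    for (size_t i = 0; i < degree; i++) {
        if (i == exclude)
            continue;
        sgn *= (q[i] < 0.0) ? -1.0 : 1.0;
        mag += phi(fabs(q[i]));
    }
    return sgn * phi(mag);  /* phi doubles as its own inverse here */
}

/* Bit-to-check message Q_vc (Expression (4)): the cell message T_fv plus
 * all incoming check messages R_c'v except the one from check node c.     */
static double bit_to_check(double t_fv, const double *r, size_t degree, size_t exclude)
{
    double sum = t_fv;
    for (size_t i = 0; i < degree; i++)
        if (i != exclude)
            sum += r[i];
    return sum;
}
```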
A “soft” memory matching transform decoding block420decodes the provided hard bits and soft bits (indicating a quantized read cell's Vt value Y) and outputs the 15*k transformed pages and the 4*m parity bits in LLR form (hence this is a “soft” transform, conveying soft estimations of the transformed information bits and parity bits). The soft memory matching transform corresponds to the iterative message passing operation performed between the cell nodes and the bit nodes. A LDPC decoder block425updates the LLR estimations of the bits by utilizing the fact that the 15*k transformed bits and the 4*m parity bits should satisfy a set of parity check constraints. This corresponds to the message passing operation performed between the bit nodes and the check nodes. Several iterations may be performed between the memory matching transform block420and the LDPC decoder block425, until the bit estimations of the 15*k transformed bits and the 4*m parity bits satisfy all the parity check constraints, indicating that the decoding operation converged to a valid codeword. Once the decoding operation has converged to a valid codeword (indicating that the decoder425corrected all the errors within the 15*k transformed pages), the decoder425retrieves the original 4*k bits from the 15*k transformed pages (by applying the inverse hard memory matching transform) to output the corrected input data. FIG.6illustrates a QLC memory with a plurality of memory cell states600. The plurality of memory cell states600have a Gaussian distribution, where the standard deviation (σ) of the state 0 (i.e., the Er state) is some value α times greater than that of the other states. The LDPC correction capability of the QLC memory ofFIG.6is illustrated inFIG.7for various α values.FIG.7provides a graph700illustrating the decoder failure probability as a function of the CER. First curve705and third curve715correspond to a conventional LDPC coding method. Second curve710and fourth curve720correspond to the memory matched LDPC coding (also referred to as “m2LDPC”) described herein. Additionally, first curve705and second curve710correspond to α=4, while the third curve715and fourth curve720correspond to α=1. As α increases and the memory model becomes more asymmetric (i.e., increased variance between the error rates of different read thresholds), the effectiveness of the memory matched LDPC coding increases. In addition to the benefit of improved correction capability, the use of the proposed memory matched LDPC coding has several additional benefits. When using m2LDPC codes, the errors introduced by each read threshold affect a unique subset of bits (i.e. a specific transformed page associated with that read threshold). By estimating the Bit Error Rate (BER) of each transformed page, the error rate associated to each read threshold may be estimated. The error rate of a transformed page may be estimated by comparing the initial read bit values of the transformed page to the corrected transformed page bit values post decoding, assuming the decoder has converged. Alternatively, the number of unsatisfied parity check equations associated with the initial read bits of the transformed page can be used for BER estimation. The higher the number of unsatisfied parity check constraints (aka Syndrome Weight), the higher the expected BER. This approach may be applied even if decoding fails (as its based solely on the initial read bit values). 
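Both estimators described above reduce to simple counting, as the sketch below illustrates: one compares the initially read bits of a transformed page with the corrected bits after a successful decode, and the other uses the fraction of unsatisfied parity checks touching that page (its syndrome weight) as a proxy that is available even when decoding fails. The array layouts and names are illustrative assumptions, and the syndrome-based value is only a relative indicator of BER rather than a calibrated estimate.

```c
#include <stddef.h>

/* Estimate the BER of one transformed page by comparing its initially read
 * bit values with the corrected values after the decoder has converged.   */
static double page_ber_from_decode(const unsigned char *read_bits,
                                   const unsigned char *corrected_bits,
                                   size_t num_bits)
{
    size_t flips = 0;
    for (size_t i = 0; i < num_bits; i++)
        if (read_bits[i] != corrected_bits[i])
            flips++;
    return num_bits ? (double)flips / (double)num_bits : 0.0;
}

/* Proxy estimate from the syndrome weight: the fraction of parity checks
 * involving this page that are unsatisfied on the initial read. Usable
 * even when decoding fails, since it needs only the read bit values.      */
static double page_ber_from_syndrome(size_t unsatisfied_checks, size_t total_checks)
{
    return total_checks ? (double)unsatisfied_checks / (double)total_checks : 0.0;
}
```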
Estimating the BER of each of the read thresholds can be used for estimating the underlying memory error model or CVD associated with the read memory Word Line of memory104. This in turn can be used for assessing the memory health. It may also be used for computing accurate LLR values for the decoder based on more accurate estimation of P(Y|s). Another application of the ability to estimate the BER per read threshold is for adjusting and tuning the read thresholds. An optimal read threshold may be found as the read threshold which minimizes its associated BER estimation. A directional BER, indicating how many bits have flipped from 0 to 1 (BER0→1) and how many bits have flipped from 1 to 0 (BER1→0) may also be estimated per read threshold by comparing the initial transformed page bit values before decoding to their values after decoding. The directional BER may provide an indication to which direction the read threshold needs to be adjusted for minimizing the BER and for balancing BER0→1and BER1→0. The embodiments described so far are based on LDPC coding with iterative message passing decoding. However, the memory matched coding concept may be applied to other coding schemes as well. A memory matching transform may be applied to the user data and then any given coding scheme (BCH, RS, or other suitable coding scheme) may be applied to each of the transformed pages. During read, an inverse memory matching transform may be applied. The inverse memory matching transform may be a hard transform (on the read bits) or a soft transform (outputting LLRs for a soft decoder). After the inverse memory matching transform, decoding may be applied to correct the errors across the transformed pages. With regard to the processes, systems, methods, heuristics, etc. described herein, it should be understood that, although the steps of such processes, etc. have been described as occurring according to a certain ordered sequence, such processes could be practiced with the described steps performed in an order other than the order described herein. It further should be understood that certain steps could be performed simultaneously, that other steps could be added, or that certain steps described herein could be omitted. In other words, the descriptions of processes herein are provided for the purpose of illustrating certain embodiments, and should in no way be construed so as to limit the claims. Accordingly, it is to be understood that the above description is intended to be illustrative and not restrictive. Many embodiments and applications other than the examples provided would be apparent upon reading the above description. The scope should be determined, not with reference to the above description, but should instead be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. It is anticipated and intended that future developments will occur in the technologies discussed herein, and that the disclosed systems and methods will be incorporated into such future embodiments. In sum, it should be understood that the application is capable of modification and variation. All terms used in the claims are intended to be given their broadest reasonable constructions and their ordinary meanings as understood by those knowledgeable in the technologies described herein unless an explicit indication to the contrary is made herein. In particular, use of the singular articles such as “a,” “the,” “said,” etc. 
should be read to recite one or more of the indicated elements unless a claim recites an explicit limitation to the contrary. The Abstract is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.
DETAILED DESCRIPTION OF THE EMBODIMENTS Various example embodiments will be described more fully hereinafter with reference to the accompanying drawings, in which some example embodiments are shown. FIG.1is a block diagram illustrating a memory system according to example embodiments. Referring toFIG.1, a memory system20may include a memory controller100(e.g., an external memory controller) and a semiconductor memory device200. The memory controller100may control the overall operation of the memory system20. The memory controller100may control the overall data exchange between an external host and the semiconductor memory device200. For example, the memory controller100may write data in the semiconductor memory device200or read data from the semiconductor memory device200in response to one or more requests from the host. In addition, the memory controller100may issue operation commands to the semiconductor memory device200for controlling the semiconductor memory device200. The memory controller100may transmit a command CMD and an address (signal) ADDR to the semiconductor memory device200and may exchange main data MD with the semiconductor memory device200. In addition, the memory controller100may transmit, to the semiconductor memory device200, a poison flag PF indicating whether the main data MD corresponds to poisoned data. The poisoned data may be data including an error or a bit in which an error occurs. The semiconductor memory device200may transmit, to the memory controller100, the poison flag PF indicating whether the main data MD to be transmitted to the memory controller100corresponds to poisoned data. In example embodiments, the semiconductor memory device200is a memory device including a plurality of dynamic (volatile) memory cells such as a dynamic random access memory (DRAM), double data rate 7 (DDR7) synchronous DRAM (SDRAM), but example embodiments are not limited thereto. The memory controller100may include a central processing unit (CPU)110and a poison flag generator130and the semiconductor memory device200may include a control logic circuit210, a memory cell array (MCA)300and an on-die (OD) error correction code (ECC) engine400. The CPU110may control overall operation of the memory controller100. The poison flag generator130may generate the poison flag PF under control of the CPU110. The memory cell array300may store the main data MD and may selectively store the poison flag PF. The control logic circuit210may control the on-die ECC engine400based on the command CMD and the address ADDR and may provide a poison mode signal to the on-die ECC engine400. The on-die ECC engine400, based on an ECC stored therein and in a write operation, may perform an ECC encoding operation on the main data MD received from the memory controller100to generate first parity data, may selectively replace a portion of the first parity data with the poison flag PF received from the memory controller100to generate second parity data based on a poison mode signal, may provide the main data MD to a normal cell region in a target page of the memory cell array300, and may provide the first parity data to a parity cell region in the target page or provide the poison flag PF and the second parity data to the parity cell region. As used herein, the term “selectively replace x” or similar terms may mean to replace x based on the poison mode signal. That is, x may be replaced or not based on whether the poison mode signal indicates a normal mode or a poison mode. 
The on-die ECC engine400, in a read operation and in response to the poison mode signal designating a poison mode, may receive the main data MD read from the normal cell region in the target page, may receive the poison flag PF and the second parity data read from the parity cell region in the target page, and may perform an ECC decoding operation on the main data MD and the poison flag PF based on the second parity data using the ECC to correct an error bit in the main data MD and the poison flag PF. The on-die ECC engine400may perform the ECC encoding operation to store the poison flag PF in the parity cell region and may perform the ECC decoding operation on the main data MD and the poison flag PF to protect the poison flag PF by using the ECC as a single error correction/double error detection (SECDED) code in a normal mode and by using the ECC as a single error correction (SEC) code in the poison mode. That is, the on-die ECC engine400may perform the ECC encoding operation and the ECC decoding operation based on the ECC which is used as a different code in the normal mode and the poison mode based on the poison mode signal. That is, a different code is used in normal mode for the ECC encoding operation and ECC decoding operation as compared to the code used if in poison mode. FIG.2is a block diagram illustrating an example of the memory controller inFIG.1according to example embodiments. Referring toFIG.2, the memory controller100may include the CPU110a data buffer120, the poison flag generator130, an ECC engine140, a command (CMD) buffer180and an address buffer190. The CPU110receives a request REQ and a data DTA from a host device (not shown), and provides the data DTA to the data buffer120and the poison flag generator130. The data buffer120buffers the data DTA to provide a first main data MD1to the semiconductor memory device200(not shown inFIG.2). The poison flag generator130generates the poison flag PF indicating whether the first main data MD1corresponds to poisoned data based on the data DTA and transmits the poison flag PF to the semiconductor memory device200(not shown inFIG.2). The ECC engine140, in a read operation of the semiconductor memory device200, receives a second main data MD2and selectively receives the poison flag PF from the semiconductor memory device200, performs an ECC decoding on the second main data MD2and the poison flag PF, corrects an error bit in the second main data MD2or the poison flag PF, and provides a corrected main data C MD2or the poison flag PF to the CPU110. The command buffer180stores the command CMD corresponding to the request REQ and transmits the command CMD to the semiconductor memory device200under control of the CPU110. The address buffer190stores the address ADDR and transmits the address ADDR to the semiconductor memory device200under control of the CPU110. FIG.3is a block diagram illustrating an example of the semiconductor memory device in the memory system ofFIG.1according to example embodiments. Referring toFIG.3, a semiconductor memory device200may include the control logic circuit210, an address register220, a bank control logic230, a row address multiplexer240, a column address latch250, a row decoder260, a column decoder270, the memory cell array300, a sense amplifier unit285, an input/output (I/O) gating circuit290, a refresh counter245, the on-die ECC engine400and a data input/output (I/O) buffer295. The memory cell array300may include first through eighth bank arrays310˜380. 
The row decoder260may include first through eighth bank row decoders260a˜260hrespectively coupled to the first through eighth bank arrays310˜380, the column decoder270may include first through eighth bank column decoders270a˜270hrespectively coupled to the first through eighth bank arrays310˜380, and the sense amplifier unit285may include first through eighth bank sense amplifiers285a˜285hrespectively coupled to the first through eighth bank arrays310˜380. The first through eighth bank arrays310˜380, the first through eighth bank row decoders260a˜260h, the first through eighth bank column decoders270a˜270hand first through eighth bank sense amplifiers285a˜285hmay form first through eighth banks. Each of the first through eighth bank arrays310˜380may include a plurality of memory cells MC coupled to word-lines WL and bit-lines BTL. The address register220may receive an address ADDR including a bank address BANK_ADDR, a row address ROW_ADDR and a column address COL_ADDR and the command CMD from the memory controller100. The address register220may provide the received bank address BANK_ADDR to the bank control logic230, may provide the received row address ROW_ADDR to the row address multiplexer240, and may provide the received column address COL_ADDR to the column address latch250. The bank control logic230may generate bank control signals in response to the bank address BANK_ADDR. One of the first through eighth bank row decoders260a˜260hcorresponding to the bank address BANK_ADDR may be activated in response to the bank control signals, and one of the first through eighth bank column decoders270a˜270hcorresponding to the bank address BANK_ADDR may be activated in response to the bank control signals. The row address multiplexer240may receive the row address ROW_ADDR from the address register220, and may receive a refresh row address REF_ADDR from the refresh counter245. The row address multiplexer240may selectively output one of the row address ROW_ADDR the refresh row address REF_ADDR as a row address RA. The row address RA that is output from the row address multiplexer240may be applied to the first through eighth bank row decoders260a˜260h. The activated one of the first through eighth bank row decoders260a˜260hmay decode the row address RA that is output from the row address multiplexer240, and may activate a word-line corresponding to the row address RA. For example, the activated bank row decoder may apply a word-line driving voltage to the word-line corresponding to the row address RA. The column address latch250may receive the column address COL_ADDR from the address register220, and may temporarily store the received column address COL_ADDR. In some embodiments, in a burst mode, the column address latch250may generate column addresses COL_ADDR′ that increment from the received column address COL_ADDR. The column address latch250may apply the temporarily stored or generated column address COL_ADDR′ to the first through eighth bank column decoders270a˜270h. The activated one of the first through eighth bank column decoders270a˜270hmay decode the column address COL_ADDR that is output from the column address latch250, and may control the I/O gating circuit290in order to output data corresponding to the column address COL_ADDR. The I/O gating circuit290may include a circuitry for gating input/output data. 
The I/O gating circuit290may further include read data latches for storing data that is output from the first through eighth bank arrays310˜380, and write drivers for writing data to the first through eighth bank arrays310˜380. Codeword CW read from one bank array of the first through eighth bank arrays310˜380may be sensed by a sense amplifier coupled to the one bank array from which the data is to be read, and may be stored in the read data latches. The codeword CW stored in the read data latches is ECC-decoded by the on-die ECC engine400and the main data MD may be provided to the data I/O buffer295in a normal mode and the main data MD and the poison flag PF may be provided to the data I/O buffer295in a poison mode. The data I/O buffer295may transmit the main data MD to the memory controller100in the normal mode and may transmit the main data MD and the poison flag PF to the memory controller100in the poison mode. The main data MD to be written in one bank array of the first through eighth bank arrays310˜380may be provided to the data I/O buffer295along with the poison flag PF from the memory controller100. The data I/O buffer295may provide the main data MD and the poison flag PF to the on-die ECC engine400. The on-die ECC engine400, in the normal mode based on a poison mode signal PMS, may perform an ECC encoding operation on the main data MD to generate first parity data and may write a codeword CW including the main data MD and the first parity data in a target page in one of the first through eighth bank arrays310˜380through the I/O gating circuit290. The on-die ECC engine400, in the poison mode based on the poison mode signal PMS, may perform an ECC encoding operation on the main data MD to generate first parity data, may replace a portion of bits of the first parity data with the poison flag PF to generate second parity data, and may write a codeword CW including the main data MD, the second parity data, and the poison flag PF in a target page in one of the first through eighth bank arrays310˜380through the I/O gating circuit290. The control logic circuit210may control operations of the semiconductor memory device200. For example, the control logic circuit210may generate control signals for the semiconductor memory device200in order to perform a write operation or a read operation. The control logic circuit210may include a command decoder211that decodes the command CMD received from the memory controller100and a mode register212that sets an operation mode of the semiconductor memory device200. For example, the command decoder211may generate the control signals corresponding to the command CMD by decoding a write enable signal, a row address strobe signal, a column address strobe signal, a chip select signal, etc. The control logic circuit210may generate a first control signal CTL1to control the I/O gating circuit290, a second control signal CTL2to control the on-die ECC engine400and the poison mode signal PMS designating one of the normal mode and the poison mode and may provide the poison mode signal PMS to the on-die ECC engine400. When the mode register212is set to the poison mode based on the command (or a mode register set command), the control logic circuit210may output the poison mode signal PMS with a first logic level in response to the mode register212being set to the poison mode. FIG.4illustrates an example of the first bank array in the semiconductor memory device ofFIG.3. 
Referring toFIG.4, the first bank array310includes a plurality of word-lines WL0˜WLm−1 (where m is an even number equal to or greater than two), a plurality of bit-lines BTL0˜BTLn−1 (where n is an even number equal to or greater than two), and a plurality of memory cells MCs at intersections between the word-lines WL0˜WLm−1 and the bit-lines BTL0˜BTLn−1. The bit-lines BTL0˜BTLn−1 may extend in a first direction D1and the word-lines WL˜WLm−1 may extend in a second direction D2. Each of the memory cells MCs includes an access (cell) transistor coupled to one of the word-lines WL0˜WLm−1 and one of the bit-lines BTL0˜BTLn−1 and a storage (cell) capacitor coupled to the cell transistor. That is, each of the memory cells MCs has a DRAM cell structure. In addition, the memory cells MCs may have different arrangement depending on whether the memory cells MCs are coupled to an even word-line (for example, WL0) or an odd word-line (for example, WL1). That is, a bit-line coupled to adjacent memory cells may be different depending on whether a word-line selected by an access address is an even word-line or an odd word-line. FIG.5illustrates a portion of the semiconductor memory device200ofFIG.3in a write operation of a normal mode. InFIG.5, the control logic circuit210, the first bank array310, the I/O gating circuit290, and the on-die ECC engine400are illustrated. Referring toFIG.5, the first bank array310includes a normal cell array NCA and a parity cell array PCA. The normal cell array NCA may be referred to as a normal cell region and the parity cell array PCA may be referred to a parity cell region. The normal cell array NCA includes a plurality of first memory blocks MB0˜MB15, i.e.,311˜313, and the parity cell array PCA includes at least a second memory block314. The first memory blocks311˜313are memory blocks determining a memory capacity of the semiconductor memory device200. The second memory block314is for ECC and/or redundancy repair. Since the second memory block314for ECC and/or redundancy repair is used for ECC, data line repair, and block repair to repair ‘fail’ cells generated in the first memory blocks311˜313, the second memory block314is also referred to as an EDB block. In each of the first memory blocks311˜313, a plurality of first memory cells are arranged in rows and columns. In the second memory block314, a plurality of second memory cells are arranged in rows and columns. The first memory cells connected to intersections of the word-lines WL and the bit-lines BTL may be volatile memory cells. The second memory cells connected to intersections of the word-lines WL and bit-lines RBTL may be volatile memory cells. The I/O gating circuit290includes a plurality of switching circuits291a-291drespectively connected to the first memory blocks311˜313and the second memory block314. The on-die ECC engine400may be connected to the switching circuits291a-291dthrough first data lines GIO and second data lines EDBIO. The control logic circuit210may receive the command CMD and the address ADDR and may decode the command CMD to provide the first control signal CTL1for controlling the switching circuits291a-291dto the I/O gating circuit290and provide the second control signal CTL2for controlling the on-die ECC engine400and the poison mode signal PMS to the on-die ECC engine400. 
When the command CMD is a write command and the mode register212(seeFIG.3) in the control logic circuit is set to the normal mode, the control logic circuit210provides the second control signal CTL2and the poison mode signal PMS designating the normal mode to the on-die ECC engine400and the on-die ECC engine400performs the ECC encoding on the main data MD to generate first parity data PRT1based on the second control signal CTL2and the poison mode signal PMS and provides the I/O gating circuit290with the codeword CW including the main data MD and the first parity data PRT1. The control logic circuit210provides the first control signal CTL1to the I/O gating circuit290such that the codeword CW is to be stored in a target page in the first bank array310. FIG.6illustrates a portion of the semiconductor memory device ofFIG.3in a read operation of the normal mode. InFIG.6, description of elements repeated with respect toFIG.5will be omitted. Referring toFIG.6, when the command CMD is a read command to designate a read operation and the mode register212in the control logic circuit is set to the normal mode, the control logic circuit210provides the first control signal CTL1to the I/O gating circuit290such that a (read) codeword RCW stored in the target page in the first bank array310is provided to the on-die ECC engine400. The codeword RCW includes the main data MD and the first parity data PRT1. In one embodiment, the on-die ECC engine400performs an ECC decoding operation on the read codeword RCW to correct one error bit and detect two error bits in the read codeword RCW and outputs the corrected main data C_MD. FIG.7illustrates a portion of the semiconductor memory device ofFIG.3in a write operation of a poison mode. InFIG.7, description of elements repeated with respect toFIG.5will be omitted. Referring toFIG.7, when the command CMD is a write command and the mode register212in the control logic circuit is set to the poison mode, the control logic circuit210provides the second control signal CTL2and the poison mode signal PMS designating the poison mode to the on-die ECC engine400. The on-die ECC engine400performs the ECC encoding on the main data MD to generate first parity data based on the second control signal CTL2and the poison mode signal PMS, replaces a portion of the first parity data with the poison flag PF to generate second parity data PRT2, and provides the I/O gating circuit290with the codeword CW including the main data MD, the poison flag PF and the second parity data PRT2. The control logic circuit210provides the first control signal CTL1to the I/O gating circuit290such that the codeword CW is to be stored in a target page in the first bank array310. The poison flag PF and the second parity data PRT2may be stored in the parity cell region PCA in the target page. FIG.8illustrates a portion of the semiconductor memory device ofFIG.3in a read operation of the poison mode. InFIG.8, description of elements repeated with respect toFIG.7will be omitted. Referring toFIG.8, when the command CMD is a read command to designate a read operation and the mode register212in the control logic circuit is set to the poison mode, the control logic circuit210provides the first control signal CTL1to the I/O gating circuit290such that a (read) codeword RCW stored in the target page in the first bank array310is provided to the on-die ECC engine400. The codeword RCW includes the main data MD, the poison flag PF and the second parity data PRT2. 
The on-die ECC engine400performs ECC decoding operation on the main data MD and the poison flag PF using the second parity data PRT2to correct one error bit in the main data MD and the poison flag PF and outputs the corrected main data C_MD and the poison flag PF. FIG.9is a block diagram illustrating an example of the on-die ECC engine in the semiconductor memory device ofFIG.3according to example embodiments. Referring toFIG.9, the on-die ECC engine400may include an (ECC) memory410, an ECC encoder420, a selective poison flag injector430and an ECC decoder440. The ECC memory410stores an ECC415. The ECC415may be represented by a parity check matrix (e.g., a data format/structure of the ECC415may be a parity check matrix) or a H matrix, and may include a plurality of column vectors corresponding to data bits in the main data (e.g., MD) and a plurality of parity vectors corresponding to parity bits in the first parity data (e.g., PRT1). The ECC encoder420is connected to the ECC memory410, and may perform ECC encoding operation on the main data MD using the ECC415stored in the ECC memory410to generate the first parity data PRT1in a write operation of the semiconductor memory device200. The ECC encoder420may provide the first parity data PRT1to the selective poison flag injector430. The selective poison flag injector430may receive the first parity data PRT1, the poison flag PF and the poison mode signal PMS, may selectively replace a portion of bits in the first parity data PRT1with the poison flag PF based on the poison mode signal PMS to generate the second parity data PRT2, and may output the second parity data PRT2and the poison flag PF. Therefore, the selective poison flag injector430may output the first parity data PRT1in response to the poison mode signal PMS designating the normal mode and may output the second parity data PRT2and the poison flag PF in response to the poison mode signal PMS designating the poison mode. Therefore, the on-die ECC engine400may provide the target page of the bank array with the codeword CW including the main data MD and the first parity data PRT1in the normal mode and provide the target page of the bank array with the codeword CW including the main data MD, the second parity data PRT2and the poison flag PF in the poison mode. The selective poison flag injector430may provide the ECC decoder440with a first parity bit PB1that is replaced with the poison flag PF. The ECC decoder440is connected to the ECC memory410. The ECC decoder440may receive the codeword CW including the main data MD and the first parity data PRT1in a read operation of the normal mode based on the poison mode signal PMS, may perform ECC decoding operation on the main data MD based on the first parity data PRT1using the ECC415to correct an error bit in the main data MD and/or detect two error bits in the main data MD, and may output corrected main data C_MD. The ECC decoder440may receive the codeword CW including the main data MD, the second parity data PRT2and the poison flag PF in a read operation of the poison mode based on the poison mode signal PMS, may perform ECC decoding operation on the main data MD and the poison flag PF based on the second parity data PRT2and the first parity bit PB1using the ECC415to correct an error bit in the main data MD and the poison flag PF, and may output corrected main data C_MD and a corrected poison flag C_PF. 
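Putting the FIG.9 pieces together, the selective poison flag injector only touches the parity field: in the normal mode the first parity data passes through unchanged, while in the poison mode the first parity bit PB1 is replaced by the poison flag and the parity bits tied to the changed parity vector (PB4 and PB5 in the FIG.13 example) are XOR-folded with the flag. The sketch below is only a minimal illustration of that data path; the 16-bit parity itself is produced here by a stand-in checksum rather than by the SECDED/SEC parity-check matrices described next, and only the replacement positions follow the text.

```python
from typing import List

Bits = List[int]  # each element is 0 or 1

def toy_parity(main_data: Bits, width: int = 16) -> Bits:
    """Stand-in for the ECC encoder420: fold the data into `width` parity bits.
    Real hardware derives these bits from the parity-check matrix (FIGS. 10-12)."""
    parity = [0] * width
    for i, bit in enumerate(main_data):
        parity[i % width] ^= bit
    return parity

def inject_poison_flag(prt1: Bits, pf: int, poison_mode: bool) -> Bits:
    """Selective poison flag injector behaviour (FIG. 13).
    Normal mode: pass PRT1 through. Poison mode: PB1 <- PF, PB4 <- PB4^PF, PB5 <- PB5^PF."""
    if not poison_mode:
        return list(prt1)
    out = list(prt1)
    out[0] = pf      # PB1 is replaced by the poison flag
    out[3] ^= pf     # PB4 is folded with the flag (changed parity vector PV1')
    out[4] ^= pf     # PB5 is folded with the flag
    return out

def extract_poison_flag(stored_parity: Bits) -> int:
    """On the read side of the poison mode, the flag rides in the PB1 position."""
    return stored_parity[0]

md = [1, 0, 1, 1] * 64                  # 256 data bits
prt1 = toy_parity(md)                   # first parity data
codeword_parity = inject_poison_flag(prt1, pf=1, poison_mode=True)
assert extract_poison_flag(codeword_parity) == 1
```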
Although it is described with reference toFIG.9that the ECC memory410is connected to the ECC encoder420and the ECC decoder440, in example embodiments, the ECC memory410may be implemented with XOR gates within the ECC encoder420and/or the ECC decoder440. FIG.10illustrates relationships of the ECC and the first parity data used in on-die ECC engine ofFIG.9according to example embodiments. InFIG.10, it is assumed that the main data MD includes a plurality of sub data units SDU1˜SDUx, and the first parity data PRT1includes x-bit parity bits PB1˜PBx. InFIG.10, it is assumed that x is a natural number equal to or greater than eight. Referring toFIG.10, the ECC415may be divided into a plurality of code groups CG1˜CGx and PCG corresponding to the plurality of sub data units SDU1˜SDUx and the first parity data PRT1. The code group PCG may include a plurality of column vectors PV1˜PVx corresponding to parity bits PB1˜PBx of the first parity data PRT1. The code group CG1may include a plurality of column vectors CV11˜CV1k(k is an integer equal to or greater than four) corresponding to data bits in the sub data unit SDU1, the code group CG2may include a plurality of column vectors CV21˜CV2kcorresponding to data bits in the sub data unit SDU2, and the code group CGx may include a plurality of column vectors CVx1˜CVxk corresponding to data bits in the sub data unit SDUx. FIG.11illustrates an example of the ECC inFIG.10according to example embodiments. InFIG.11, it is assumed that the main data MD includes 256-bit data bits d1˜d256and the first parity data PRT1may include parity bits PB1˜PB16. That is, it is assumed that x is sixteen inFIG.10. Referring toFIG.11, the data bits d1˜d256of the main data MD may be divided into first through sixteenth sub data units SDU1˜SDU16. Each of the first through sixteenth sub data units SDU1˜SDU16includes 16-bit data bits. The sub data unit SDU1includes data bits d1˜d16, the sub data unit SDU2includes data bits d17˜d32, the sub data unit SDU3includes data bits d33˜d48, the sub data unit SDU4includes data bits d49˜d64, and so on, with the sub data unit SDU16including data bits d241˜d256. An ECCa (i.e., the parity check matrix) includes first through sixteenth code groups CG1˜CG16corresponding to the first through sixteenth sub data units SDU1˜SDU16and the code group PCG corresponding to the parity bits PB1˜PB16. The first through sixteenth code groups CG1˜CG16include column vectors CV11˜CV116, CV211˜CV216, CV31˜CV316, CV41˜CV416, . . . , CV161˜CV1616and the code group PCG includes column vectors PV1-PV16. FIG.12illustrates that the on-die ECC engine modifies an ECC used in the normal mode to provide a modified ECC used in the poison mode. InFIG.12, it is illustrated as an example for convenience of explanation that the on-die ECC engine400modifies a (7, 5) single error correction/double error detection (SECDED) code ECCb used in the normal mode to provide a modified (8, 4) single error correction (SEC) code ECCb_M used in the poison mode. Referring toFIG.12, the (7, 5) SECDED code ECCb may include column vectors CV1, CV2, CV3, CV4, CV5, CV6and CV7corresponding to data bits d1, d2, d3, d4, d5, d6and d7and parity vectors PV1, PV2, PV3, PV4and PV5corresponding to parity bits PB1, PB2, PB3, PB4and PB5. Each of the column vectors CV1, CV2, CV3, CV4, CV5, CV6and CV7may include odd number of elements having a first logic level (a logic high level) and the (7, 5) SECDED code ECCb corresponds to an odd parity. 
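Before walking through how the encoder uses ECCb below, the column-vector view of FIGS.10 and11 can be illustrated directly: each data bit owns one column of the parity-check matrix, and a parity bit PBj is the XOR of all data bits whose column vector has a 1 in row j, so that the systematic codeword satisfies the parity checks. The small example below uses a made-up set of odd-weight column vectors in the spirit of the (7, 5) code ECCb; it is not the actual matrix of FIG.12.

```python
from typing import List

# Hypothetical odd-weight column vectors CV1..CV7 (5 rows = 5 parity bits).
# These values are illustrative only, not the ECCb matrix of FIG. 12.
COLUMN_VECTORS: List[List[int]] = [
    [1, 1, 1, 0, 0],  # CV1
    [1, 1, 0, 1, 0],  # CV2
    [1, 1, 0, 0, 1],  # CV3
    [1, 0, 1, 1, 0],  # CV4
    [1, 0, 1, 0, 1],  # CV5
    [1, 0, 0, 1, 1],  # CV6
    [0, 1, 1, 1, 0],  # CV7
]

def encode_parity(data_bits: List[int]) -> List[int]:
    """PBj = XOR of the data bits whose column vector has a 1 in row j."""
    rows = len(COLUMN_VECTORS[0])
    parity = [0] * rows
    for bit, column in zip(data_bits, COLUMN_VECTORS):
        if bit:
            for j in range(rows):
                parity[j] ^= column[j]
    return parity

def syndrome(data_bits: List[int], parity_bits: List[int]) -> List[int]:
    """A non-zero syndrome means the received word is not a valid codeword."""
    recomputed = encode_parity(data_bits)
    return [a ^ b for a, b in zip(recomputed, parity_bits)]

d = [1, 0, 1, 1, 0, 0, 1]
p = encode_parity(d)
assert syndrome(d, p) == [0, 0, 0, 0, 0]     # clean word
d[2] ^= 1                                    # flip one data bit
assert syndrome(d, p) == COLUMN_VECTORS[2]   # syndrome equals that bit's column
```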
The ECC encoder420inFIG.9, in the normal mode, performs an ECC encoding operation on the data bits d1, d2, d3, d4, d5, d6and d7using the (7, 5) SECDED code ECCb to generate the parity bits PB1, PB2, PB3, PB4and PB5and performs an ECC decoding operation on the data bits d1, d2, d3, d4, d5, d6and d7based on the parity bits PB1, PB2, PB3, PB4and PB5using the (7, 5) SECDED code ECCb to correct one error bit and detect two error bits in the data bits d1, d2, d3, d4, d5, d6and d7. When the mode register212is set to the poison mode, the control logic circuit210inFIG.3may don't-care process (i.e., ignore) a first row, among a plurality of rows of the SECDED code ECCb, that is associated with the first parity bit PB1to be replaced with the poison flag PF. The control logic circuit210inFIG.3may also change elements of the first parity vector PV1associated with the first parity bit PB1such that the changed elements of the first parity vector PV1are not overlapped with elements of each of the column vectors CV1, CV2, CV3, CV4, CV5, CV6and CV7and elements of each of the parity vectors PV2, PV3, PV4and PV5to generate a modified (8, 4) SEC code ECCb_M including the changed parity vector PV1′. The parity vectors PV2, PV3, PV4and PV5(the parity vectors except the first parity vector PV1) may be referred to as second parity vectors. The ECC encoder420inFIG.9, in the poison mode, performs an ECC encoding operation on the data bits d1, d2, d3, d4, d5, d6and d7and the poison flag PF using the modified SEC code ECCb_M to generate the parity bits PB2, PB3, PB4and PB5and performs an ECC decoding operation on the data bits d1, d2, d3, d4, d5, d6and d7based on the parity bits PB2, PB3, PB4and PB5using the modified SEC code ECCb_M to correct one error bit in the data bits d1, d2, d3, d4, d5, d6and d7and the poison flag PF. FIG.13is a circuit diagram illustrating an example of the selective poison flag injector in the on-die ECC engine ofFIG.9according to example embodiments. InFIG.13, it is assumed that the main data MD inFIG.9includes 256 bits and the first parity data PRT1includes 16-bit parity bits PB1˜PB16. Referring toFIG.13, the selective poison flag injector430may include a plurality of multiplexers431,432and433and a plurality of XOR gates434and435. The multiplexer431may output one of the first parity bit PB1and the poison flag PF to replace the first parity bit PB1as a selected bit SPB1in response to the poison mode signal PMS. The XOR gate434may perform an XOR operation on the poison flag PF and the parity bit PB4associated with a changed parity vector and the multiplexer432may output one of an output of the XOR gate434and the parity bit PB4as a selected bit SPB4in response to the poison mode signal PMS. The XOR gate435may perform an XOR operation on the poison flag PF and the parity bit PB5associated with the changed parity vector and the multiplexer433may output one of an output of the XOR gate435and the parity bit PB5as a selected bit SPB5in response to the poison mode signal PMS. In response to the poison mode signal PMS designating the normal mode, the multiplexer431outputs the first parity bit PB1, the multiplexer432outputs the parity bit PB4, the multiplexer433outputs the parity bit PB5, and thus the selective poison flag injector430outputs the first parity data PRT1in the normal mode. 
In response to the poison mode signal PMS designating the poison mode, the multiplexer431outputs the poison flag PF, the multiplexer432outputs the output of the XOR gate434, the multiplexer433outputs the output of XOR gate435, and thus the selective poison flag injector430outputs the second parity data PRT2which includes the poison flag PF and in which logic levels of the parity bits PB4and PB5are selectively inverted based on a logic level of the poison flag PF, in the poison mode. That is, the selective poison flag injector430, in the poison mode, may replace the first parity bit PB1with the poison flag PF and may replace second parity bits PB4and PB5associated with the changed first parity vector PV1′ among the plurality of parity vectors with a result of performing an XOR operation on each of the second parity bits PB4and PB5and the poison flag PF. The selective poison flag injector430may provide the first parity bit PB1to the ECC decoder440. FIG.14illustrates an example of the ECC decoder in the on-die ECC engine ofFIG.9according to example embodiments. Referring toFIG.14, the ECC decoder440may include a syndrome generation circuit450, and modified syndrome generator460, an error locator470and a data corrector480. The syndrome generation circuit450may include a check bit generator451and a syndrome generator453. The check bit generator451generates check bits CHB based on a read main data RMD by performing an XOR array operation and the syndrome generator453generates a syndrome SDR by comparing corresponding bits of the check bits CHB and the first parity data PRT1or the second parity data PRT2and the poison flag PF. The modified syndrome generator460may generate a modified syndrome MSDR by selectively replacing a portion of syndrome bits of the syndrome SDR with the first parity bit PB1based on the poison mode signal PMS. The modified syndrome generator460may generate the modified syndrome MSDR by maintaining the syndrome bits of the syndrome SDR in the normal mode and by replacing the portion of syndrome bits of the syndrome SDR with the first parity bit PB1in the poison mode. The error locator470may detect an error in the read main data RMD or detect an error in the read main data RMD and the poison flag PF based on the modified syndrome MSDR and may output an error vector EV indicating a position of the detected error. The data corrector480may receive the read main data RMD, may correct one error bit and/or detect two error bit in the read main data RMD based on the error vector EV and may output the corrected main data C_MD, in the normal mode. The data corrector480may receive the read main data RMD and the poison flag PF, may correct one error bit in the read main data RMD and the poison flag PF based on the error vector EV and may output the corrected main data C_MD and the poison flag PF or the corrected main data C_MD and the corrected poison flag C_PF in the poison mode. FIG.15is a circuit diagram illustrating an example of the modified syndrome generator in the ECC decoder ofFIG.14according to example embodiments. InFIG.15, it is assumed that the main data MD inFIG.9includes 256 bits and the syndrome SDR inFIG.14includes 16-bit syndrome bits SB1˜SB16. Referring toFIG.15, the modified syndrome generator460may include a plurality of multiplexers461,462and463and a plurality of XOR gates464and465. 
The multiplexer461may output one of a first syndrome bit SB1and the first parity bit PB1to replace the first syndrome bit SB1as a selected bit SSB1in response to the poison mode signal PMS. The XOR gate464may perform an XOR operation on the first parity bit PB1and the syndrome bit SB4associated with a changed parity vector and the multiplexer462may output one of an output of the XOR gate464and the syndrome bit SB4as a selected bit SSB4in response to the poison mode signal PMS. The XOR gate465may perform an XOR operation on the first parity bit PB1and the syndrome bit SB5associated with the changed parity vector and the multiplexer463may output one of an output of the XOR gate465and the syndrome bit SB5as a selected bit SSB5in response to the poison mode signal PMS. In response to the poison mode signal PMS designating the normal mode, the multiplexer461outputs the first syndrome bit SB1, the multiplexer462outputs the syndrome bit SB4, the multiplexer463outputs the syndrome bit SB5, and thus, the modified syndrome generator460outputs the syndrome SDR as the modified syndrome MSDR in the normal mode. In response to the poison mode signal PMS designating the poison mode, the multiplexer461outputs the first parity bit PB1, the multiplexer462outputs the output of the XOR gate464, the multiplexer463outputs the output of XOR gate465, and thus the modified syndrome generator460outputs the modified syndrome MSDR which includes the first parity bit PB1and in which logic levels of the syndrome bits SB4and SB5are selectively inverted based on a logic level of the first parity bit PB1, in the poison mode. That is, the modified syndrome generator460, in the poison mode, may replace the first syndrome bit SB1with the first parity bit PB1and may replace second syndrome bits SB4and SB5associated with the changed first parity vector PV1′, among the plurality of parity vectors with a result of performing an XOR operation on each of the second syndrome bits SB4and SB5and the first parity bit PB1. FIG.16illustrates that the semiconductor memory device ofFIG.3performs a write operation in a normal mode. Referring toFIGS.3,9and16, when the command CMD is a write command and the mode register212is set to the normal mode, the ECC encoder420performs an ECC encoding on the main data MD to generate the first parity data PRT1as a reference numeral522indicates and provides the first parity data PRT1to the selective poison flag injector (SPFI)430. Because the poison mode signal PMS has a second logic level (a logic low level, as indicated by the reference PMS (‘L’)) in response to the normal mode, the selective poison flag injector430maintains the first parity data PRT1and the I/O gating circuit290writes the main data MD and the first parity data PRT1in a target page TPG in the first bank array310as a reference numeral524indicates. FIG.17illustrates that the semiconductor memory device ofFIG.3performs a write operation in a poison mode. Referring toFIGS.3,9and17, when the command CMD is a write command and the mode register212is set to the poison mode, the ECC encoder420performs an ECC encoding on the main data MD to generate the first parity data PRT1as a reference numeral522indicates and provides the first parity data PRT1to the selective poison flag injector430. 
Because the poison mode signal PMS has a first logic level (a logic high level, as indicated by the reference PMS (‘H’)) in response to the poison mode, the selective poison flag injector430replaces a portion of the first parity data PRT1with the poison flag PF to output the second parity data PRT2and the poison flag PF. The I/O gating circuit290writes the main data MD and the second parity data PRT2and the poison flag PF in a target page TPG in the first bank array310as a reference numeral525indicates. The first parity data PRT1may include 16 bits and the second parity data PRT2may include 15 bits. FIG.18illustrates that the semiconductor memory device ofFIG.3performs a read operation in a normal mode. Referring toFIGS.3,9and18, when the command CMD is a read command and the mode register212is set to the normal mode (as indicated by the reference PMS (‘L’)), the ECC decoder440reads a codeword RCW including the main data MD and the first parity data PRT1from a target page in the first bank array310as a reference numeral531indicates, performs an ECC decoding on the main data MD based on the first parity data PRT1, corrects an error bit EB in the main data MD, and outputs a corrected main data MD′ as a reference numeral532indicates. FIG.19illustrates that the semiconductor memory device ofFIG.3performs a read operation in a poison mode. Referring toFIGS.3,9and19, when the command CMD is a read command and the mode register212is set to the poison mode (as indicated by the reference PMS (‘H’)), the ECC decoder440reads a codeword RCW including the main data MD, the second parity data PRT2and the poison flag PF from a target page in the first bank array310as a reference numeral533indicates, performs an ECC decoding operation on the main data MD and the poison flag PF based on the second parity data PRT2, corrects an error bit EB in the main data MD, and outputs a corrected main data MD′ and the poison flag PF as a reference numeral534indicates. The ECC decoder440performs an ECC decoding operation by using the poison flag PF and the first parity bit PB1that is replaced. FIG.20is a flow chart illustrating a method of operating a semiconductor memory device andFIG.21is a sequence chart illustrating a method of operating a semiconductor memory device. InFIGS.20and21, it is assumed that the mode register212in the control logic circuit210is set to the poison mode. Referring toFIGS.1through21, the memory controller100transmits the main data MD and the poison flag PF to the semiconductor memory device200(DRAM) (operation S105), and the semiconductor memory device200receives the main data MD and the poison flag PF (operation S110). The poison flag PF may indicate whether the main data MD corresponds to poisoned data. The on-die ECC engine400performs an ECC encoding operation on the main data MD to generate the first parity data PRT1(operation S120), and provides the first parity data PRT1to the selective poison flag injector430. In response to the poison mode signal PMS having a first logic level, the selective poison flag injector430replaces a portion of the first parity data PRT1with the poison flag PF to generate the second parity data PRT2(operation S130), and outputs the second parity data PRT2and the poison flag PF. The I/O gating circuit290stores the main data MD, the poison flag PF, and the second parity data PRT2in a target page (operation S140). 
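The write half of the FIG.20/FIG.21 flow just listed (operations S105 through S140) can be lined up as a short sequence: the memory controller sends MD and PF, the device generates parity, injects the flag into the parity field, and stores the result in the target page. The sketch below is only a data-flow illustration under assumptions already noted earlier: the parity is again produced by a stand-in checksum rather than the real SEC code, and the indices 0, 3 and 4 stand for PB1, PB4 and PB5 of FIG.13. The read-back half (operations S150 through S170) is described next and simply reverses this path.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Bits = List[int]

@dataclass
class ToyDevice:
    """Very small stand-in for the poison-mode write/read path (S110-S170)."""
    poison_mode: bool = True
    pages: Dict[int, Tuple[Bits, Bits]] = field(default_factory=dict)

    def _encode(self, md: Bits) -> Bits:
        # Stand-in for the on-die ECC encoder; real hardware uses the SEC code.
        parity = [0] * 16
        for i, b in enumerate(md):
            parity[i % 16] ^= b
        return parity

    def write(self, page: int, md: Bits, pf: int) -> None:
        prt1 = self._encode(md)                      # S120: generate first parity data
        if self.poison_mode:                         # S130: replace part of PRT1 with PF
            prt1[0], prt1[3], prt1[4] = pf, prt1[3] ^ pf, prt1[4] ^ pf
        self.pages[page] = (list(md), prt1)          # S140: store MD and parity (+ PF)

    def read(self, page: int) -> Tuple[Bits, int]:
        md, parity = self.pages[page]                # S150: read back the target page
        return list(md), parity[0]                   # S160/S170 (poison mode): MD and PF

device = ToyDevice(poison_mode=True)
device.write(page=0, md=[1, 0, 1, 1] * 64, pf=1)     # S105/S110: controller sends MD + PF
_, recovered_pf = device.read(page=0)
assert recovered_pf == 1
```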
The I/O gating circuit290reads the main data MD, the poison flag PF and the second parity data PRT2from the target page based on a read command from the memory controller (operation S150) and provides the main data MD, the poison flag PF and the second parity data PRT2to the on-die ECC engine400. The on-die ECC engine400performs an ECC decoding operation on the main data MD and the poison flag PF based on the second parity data PRT2to correct an error bit in the main data MD and the poison flag PF (operation S160). The semiconductor memory device200transmits the main data MD and the poison flag PF to the memory controller (operation S170). Accordingly, in the semiconductor memory device and the memory system according to example embodiments, the on-die ECC engine performs an ECC encoding on the main data to generate the first parity data using an ECC, replaces a portion of the first parity data with the poison flag to generate the second parity data in the poison mode, stores the poison flag and the second parity data in a parity cell region, and performs an ECC decoding operation on the main data and the poison flag based on the second parity data to protect the poison flag. That is, the semiconductor memory device may store the poison flag as meta data in the parity cell region without size overhead and the on-die ECC engine may perform the ECC encoding operation based on the ECC which is used as a different code in the normal mode and the poison mode based on the poison mode signal. That is, a different code is used in the normal mode for the ECC encoding operation as compared to the code used if in poison mode. FIG.22is a block diagram illustrating an example of a memory module according to example embodiments. Referring toFIG.22, a memory module600includes a register clock driver (RCD)690disposed (or mounted) in a circuit board601, a plurality of semiconductor memory devices201a˜201e,202a˜202e,203a˜203e, and204a˜204e, a plurality of data buffers641˜645and651˜655, module resistance units660and670, a serial presence detection (SPD) chip680, and a power management integrated circuit (PMIC)685. Here, the circuit board601, which may be a printed circuit board, may extend in a second direction D2, perpendicular to a first direction D1, between a first edge portion603and a second edge portion605. The first edge portion603and the second edge portion605may extend in the first direction D1. The RCD690may be on or near a center of the circuit board601. The plurality of semiconductor memory devices201a˜201e,202a˜202e,203a˜203e, and204a˜204emay be arranged in a plurality of rows between the RCD690and the first edge portion603and between the RCD690and the second edge portion605. In this case, the semiconductor memory devices201a˜201eand202a˜202emay be arranged along a plurality of rows between the RCD690and the first edge portion603. The semiconductor memory devices203a˜203e, and204a˜204emay be arranged along a plurality of rows between the RCD690and the second edge portion605. A portion of the semiconductor memory devices201a˜201eand202a˜202emay be an error correction code (ECC) memory device. The ECC memory device may perform an ECC encoding operation to generate parity bits about data to be written at memory cells of the plurality of semiconductor memory devices201a˜201e,202a˜202e,203a˜203e, and204a˜204e, and an ECC decoding operation to correct an error occurring in the data read from the memory cells. 
Each of the plurality of semiconductor memory devices201a˜201e,202a˜202e,203a˜203e, and204a˜204emay be coupled to a corresponding one of the data buffers641˜645and651˜655through a data transmission line for receiving/transmitting main data MD and a poison flag PF. The poison flag PF may have a corresponding level and may indicate whether the main data MD which each of the plurality of semiconductor memory devices201a˜201e,202a˜202e,203a˜203e, and204a˜204ereceives corresponds to poisoned data. Each of the plurality of semiconductor memory devices201a˜201e,202a˜202e,203a˜203e, and204a˜204emay employ the semiconductor memory device ofFIG.3. Therefore, each of the plurality of semiconductor memory devices201a˜201e,202a˜202e,203a˜203e, and204a˜204emay include a memory cell array, a control logic circuit and an on-die ECC engine. The memory cell array may include a normal cell region and a parity cell region and each of the plurality of semiconductor memory devices201a˜201e,202a˜202e,203a˜203e, and204a˜204emay be set to one of a normal mode and a poison mode, individually. When some of the plurality of semiconductor memory devices201a˜201e,202a˜202e,203a˜203e, and204a˜204eare set to the poison mode, some of the plurality of semiconductor memory devices201a˜201e,202a˜202e,203a˜203e, and204a˜204emay store the poison flag PF in the parity cell region as meta data. The RCD690may provide a command/address signal (e.g., CA) to the semiconductor memory devices201a˜201ethrough a command/address transmission line661and may provide a command/address signal to the semiconductor memory devices202a˜202ethrough a command/address transmission line663. In addition, the RCD690may provide a command/address signal to the semiconductor memory devices203a˜203ethrough a command/address transmission line671and may provide a command/address signal to the semiconductor memory devices204a˜204ethrough a command/address transmission line673. The command/address transmission lines661and663may be connected in common to the module resistance unit660adjacent to the first edge portion603, and the command/address transmission lines671and673may be connected in common to the module resistance unit670adjacent to the second edge portion605. Each of the module resistance units660and670may include a termination resistor Rtt/2 connected to a termination voltage Vtt. In this case, an arrangement of the module resistance units660and670may reduce the number of the module resistance units, thus reducing an area where termination resistors are disposed. The SPD chip680may be adjacent to the RCD690and the PMIC685may be between the semiconductor memory device203eand the second edge portion605. The PMIC685may generate a power supply voltage VDD based on the input voltage VIN and may provide the power supply voltage VDD to the semiconductor memory devices201a˜201e,202a˜202e,203a˜203e, and204a˜204e. The RCD690may control the semiconductor memory devices201a˜201e,202a˜202e,203a˜203e, and204a˜204eand the PMIC685under control of the memory controller100. The RCD690may receive an address ADDR, a command CMD, and a clock signal CK from the memory controller100. The SPD chip680may be a programmable read only memory (e.g., EEPROM). The SPD chip680may include initial information or device information DI of the memory module600. 
In example embodiments, the SPD chip680may include the initial information or the device information DI such as a module form, a module configuration, a storage capacity, a module type, an execution environment, or the like of the memory module600. When a memory system including the memory module600is booted up, a host may read the device information DI from the SPD chip680and may recognize the memory module600based on the device information DI. The host may control the memory module600based on the device information DI from the SPD chip680. For example, the host may recognize a type of the semiconductor memory devices201a˜201e,202a˜202e,203a˜203e, and204a˜204eincluded in the memory module600based on the device information DI from the SPD chip680. In example embodiments, the SPD chip680may communicate with the host through a serial bus. For example, the host may exchange a signal with the SPD chip680through the serial bus. The SPD chip680may also communicate with the RCD690through the serial bus. The serial bus may include at least one of 2-line serial buses such as an inter-integrated circuit (I2C), a system management bus (SMBus), a power management bus (PMBus), an intelligent platform management interface (IPMI), a management component transport protocol (MCTP), or the like. FIG.23is a block diagram illustrating a semiconductor memory device according to example embodiments. Referring toFIG.23, a semiconductor memory device700may include at least one buffer die710and a group die720providing a soft error analyzing and correcting function in a stacked chip structure. The group die720may include a plurality of memory dies720-1to720-u(u is a natural number greater than two) which are stacked on the at least one buffer die710and convey data through a plurality of through silicon via (TSV) lines. Each of the memory dies720-1to720-umay include a cell core722and an on-die ECC engine724and the cell core722may include a plurality of volatile memory cells coupled to a plurality of word-lines and a plurality of bit-lines. The on-die ECC engine724may employ the on-die ECC engine400ofFIG.9. Therefore, the on-die ECC engine724performs an ECC encoding on main data to generate first parity data using an ECC, replaces a portion of the first parity data with a poison flag to generate second parity data in a poison mode, stores the poison flag and the second parity data in a parity cell region, and performs an ECC decoding operation on the main data and the poison flag based on the second parity data to protect the poison flag. That is, the semiconductor memory device700may store the poison flag as meta data in the parity cell region without size overhead and the on-die ECC engine724may perform the ECC encoding operation based on the ECC which is used as a different code in the normal mode and the poison mode based on the poison mode signal. That is, a different code is used in normal mode for the ECC encoding operation as compared to the code used if in poison mode. The at least one buffer die710may include a via ECC engine712which corrects a transmission error using the transmission parity bits when a transmission error is detected from the transmission data received through the TSV lines and generates error-corrected data. The semiconductor memory device700may be a stack chip type memory device or a stacked memory device which conveys data and control signals through the TSV lines. The TSV lines may also be called through electrodes. 
A transmission error which occurs at the transmission data may be due to noise which occurs at the TSV lines. Since data fail due to the noise occurring at the TSV lines may be distinguishable from data fail due to a false operation of the memory die, it may be regarded as soft data fail (or a soft error). The soft data fail may be generated due to transmission fail on a transmission path, and may be detected and remedied by an ECC operation. With the above description, a data TSV line group732which is formed at one memory die720-umay include TSV lines L1to Lu, and a parity TSV line group734may include TSV lines L10to Lv. The TSV lines L1to Lu of the data TSV line group732and the parity TSV lines L10to Lv of the parity TSV line group734may be connected to micro bumps MCB which are correspondingly formed among the memory dies720-1to720-u. Each of the memory dies720-1to720-umay include DRAM cells each including at least one access transistor and one storage capacitor. The semiconductor memory device700may have a three-dimensional (3D) chip structure or a 2.5D chip structure to communicate with a memory controller through a data bus B10. The at least one buffer die710may be connected with the memory controller through the data bus B10. The via ECC engine712may determine whether a transmission error occurs at the transmission data received through the data TSV line group732, based on the transmission parity bits received through the parity TSV line group734. When a transmission error is detected, the via ECC engine712may correct the transmission error on the transmission data using the transmission parity bits. When the transmission error is uncorrectable, the via ECC engine712may output information indicating occurrence of an uncorrectable data error. FIG.24is a diagram illustrating a semiconductor package including the stacked memory device according to example embodiments. Referring toFIG.24, a semiconductor package900may include one or more stacked memory devices910and a GPU920(graphic processing unit) and the GPU920may include a memory controller925. The stacked memory devices910and the GPU920may be on an interposer930(such as via mounting), and the interposer on which the stacked memory devices910and the GPU920are on may be on a package substrate940(such as by mounting). The package substrate940may be on solder balls950(such as by mounting). The memory controller925may employ the memory controller100inFIG.1. Each of the stacked memory devices910may be implemented in various forms, and may be a memory device in a high bandwidth memory (HBM) form in which a plurality of layers are stacked. Accordingly, each of the stacked memory devices910may include a buffer die and a plurality of memory dies, and each of the plurality of memory dies may include a memory cell array and an on-die ECC engine and the buffer die may include a via ECC engine. The plurality of stacked memory devices910may be on the interposer930(such as by mounting), and the GPU920may communicate with the plurality of stacked memory devices910. For example, each of the stacked memory devices910and the GPU920may include a physical region, and communication may be performed between the stacked memory devices910and the GPU920through the physical regions. 
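Returning briefly to the transmission check performed by the via ECC engine712described above, the buffer die can recompute parity over the bits received on the data TSV line group and compare it with the transmission parity bits received on the parity TSV line group. The sketch below is a rough, detection-only illustration (the grouping of lines and the simple even parity are assumptions; the actual via ECC engine can also correct errors and report uncorrectable ones):

```python
from typing import List, Tuple

def transmission_parity(data_bits: List[int], groups: int = 4) -> List[int]:
    """One even-parity bit per group of data TSV lines (the grouping is an assumption)."""
    parity = [0] * groups
    for i, bit in enumerate(data_bits):
        parity[i % groups] ^= bit
    return parity

def check_transmission(received_data: List[int],
                       received_parity: List[int]) -> Tuple[bool, List[int]]:
    """Return (ok, per-group mismatch), as a buffer-die transmission check might."""
    mismatch = [a ^ b for a, b in zip(transmission_parity(received_data), received_parity)]
    return all(m == 0 for m in mismatch), mismatch

sent = [1, 0, 0, 1, 1, 1, 0, 1]
parity = transmission_parity(sent)
garbled = list(sent)
garbled[3] ^= 1                      # noise on one TSV line
ok, where = check_transmission(garbled, parity)
assert not ok and where[3 % 4] == 1
```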
As mentioned above, in the semiconductor memory device according to example embodiments, the on-die ECC engine performs an ECC encoding on the main data to generate the first parity data using an ECC, replaces a portion of the first parity data with the poison flag to generate the second parity data in the poison mode, stores the poison flag and the second parity data in a parity cell region, and performs an ECC decoding operation on the main data and the poison flag based on the second parity data to protect the poison flag. That is, the semiconductor memory device may store the poison flag as meta data in the parity cell region without size overhead and the on-die ECC engine may perform the ECC encoding operation based on the ECC which is used as a different code in the normal mode and the poison mode based on the poison mode signal. That is, a different code is used in normal mode for the ECC encoding operation as compared to the code used if in poison mode. The present disclosure may be applied to semiconductor memory devices and memory systems employing the ECC. The foregoing is illustrative of example embodiments and is not to be construed as limiting thereof. Although a few example embodiments have been described, those skilled in the art will readily appreciate that many modifications are possible in the example embodiments without materially departing from the novel teachings and advantages of the present disclosure. Accordingly, all such modifications are intended to be included within the scope of the present disclosure as defined in the claims.
DETAILED DESCRIPTION OF THE INVENTION FIG.1is a schematic block diagram of an embodiment of a distributed computing system10that includes a user device12and/or a user device14, a distributed storage and/or task (DST) processing unit16, a distributed storage and/or task network (DSTN) managing unit18, a DST integrity processing unit20, and a distributed storage and/or task network (DSTN) module22. The components of the distributed computing system10are coupled via a network24, which may include one or more wireless and/or wire lined communication systems; one or more private intranet systems and/or public internet systems; and/or one or more local area networks (LAN) and/or wide area networks (WAN). The DSTN module22includes a plurality of distributed storage and/or task (DST) execution units36that may be located at geographically different sites (e.g., one in Chicago, one in Milwaukee, etc.). Each of the DST execution units is operable to store dispersed error encoded data and/or to execute, in a distributed manner, one or more tasks on data. The tasks may be a simple function (e.g., a mathematical function, a logic function, an identify function, a find function, a search engine function, a replace function, etc.), a complex function (e.g., compression, human and/or computer language translation, text-to-voice conversion, voice-to-text conversion, etc.), multiple simple and/or complex functions, one or more algorithms, one or more applications, etc. Each of the user devices12-14, the DST processing unit16, the DSTN managing unit18, and the DST integrity processing unit20include a computing core26and may be a portable computing device and/or a fixed computing device. A portable computing device may be a social networking device, a gaming device, a cell phone, a smart phone, a personal digital assistant, a digital music player, a digital video player, a laptop computer, a handheld computer, a tablet, a video game controller, and/or any other portable device that includes a computing core. A fixed computing device may be a personal computer (PC), a computer server, a cable set-top box, a satellite receiver, a television set, a printer, a fax machine, home entertainment equipment, a video game console, and/or any type of home or office computing equipment. User device12and DST processing unit16are configured to include a DST client module34. With respect to interfaces, each interface30,32, and33includes software and/or hardware to support one or more communication links via the network24indirectly and/or directly. For example, interface30supports a communication link (e.g., wired, wireless, direct, via a LAN, via the network24, etc.) between user device14and the DST processing unit16. As another example, interface32supports communication links (e.g., a wired connection, a wireless connection, a LAN connection, and/or any other type of connection to/from the network24) between user device12and the DSTN module22and between the DST processing unit16and the DSTN module22. As yet another example, interface33supports a communication link for each of the DSTN managing unit18and DST integrity processing unit20to the network24. The distributed computing system10is operable to support dispersed storage (DS) error encoded data storage and retrieval, to support distributed task processing on received data, and/or to support distributed task processing on stored data. 
In general and with respect to DS error encoded data storage and retrieval, the distributed computing system10supports three primary operations: storage management, data storage and retrieval (an example of which will be discussed with reference toFIGS.20-26), and data storage integrity verification. In accordance with these three primary functions, data can be encoded, distributedly stored in physically different locations, and subsequently retrieved in a reliable and secure manner. Such a system is tolerant of a significant number of failures (e.g., up to a failure level, which may be greater than or equal to a pillar width minus a decode threshold minus one) that may result from individual storage device failures and/or network equipment failures without loss of data and without the need for a redundant or backup copy. Further, the system allows the data to be stored for an indefinite period of time without data loss and does so in a secure manner (e.g., the system is very resistant to attempts at hacking the data). The second primary function (i.e., distributed data storage and retrieval) begins and ends with a user device12-14. For instance, if a second type of user device14has data40to store in the DSTN module22, it sends the data40to the DST processing unit16via its interface30. The interface30functions to mimic a conventional operating system (OS) file system interface (e.g., network file system (NFS), flash file system (FFS), disk file system (DFS), file transfer protocol (FTP), web-based distributed authoring and versioning (WebDAV), etc.) and/or a block memory interface (e.g., small computer system interface (SCSI), internet small computer system interface (iSCSI), etc.). In addition, the interface30may attach a user identification code (ID) to the data40. To support storage management, the DSTN managing unit18performs DS management services. One such DS management service includes the DSTN managing unit18establishing distributed data storage parameters (e.g., vault creation, distributed storage parameters, security parameters, billing information, user profile information, etc.) for a user device12-14individually or as part of a group of user devices. For example, the DSTN managing unit18coordinates creation of a vault (e.g., a virtual memory block) within memory of the DSTN module22for a user device, a group of devices, or for public access and establishes per vault dispersed storage (DS) error encoding parameters for a vault. The DSTN managing unit18may facilitate storage of DS error encoding parameters for each vault of a plurality of vaults by updating registry information for the distributed computing system10. The facilitating includes storing updated registry information in one or more of the DSTN module22, the user device12, the DST processing unit16, and the DST integrity processing unit20. The DS error encoding parameters (e.g., or dispersed storage error coding parameters) include data segmenting information (e.g., how many segments data (e.g., a file, a group of files, a data block, etc.) is divided into), segment security information (e.g., per segment encryption, compression, integrity checksum, etc.), error coding information (e.g., pillar width, decode threshold, read threshold, write threshold, etc.), slicing information (e.g., the number of encoded data slices that will be created for each data segment); and slice security information (e.g., per encoded data slice encryption, compression, integrity checksum, etc.). 
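As a rough illustration of how such per-vault dispersed storage error coding parameters hang together (the specific numbers below are assumptions, not values from this description), a registry entry can be modelled as a small record with the usual ordering constraint that the decode threshold does not exceed the read threshold, the write threshold, or the pillar width:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DsErrorCodingParams:
    """Per-vault dispersed storage error coding parameters (illustrative only)."""
    pillar_width: int        # number of encoded data slices per data segment
    decode_threshold: int    # minimum slices needed to rebuild a segment
    read_threshold: int      # slices requested on a read
    write_threshold: int     # slices that must be confirmed on a write
    segment_size: int        # bytes per data segment

    def __post_init__(self):
        if not (0 < self.decode_threshold <= self.read_threshold
                <= self.write_threshold <= self.pillar_width):
            raise ValueError("thresholds must satisfy decode <= read <= write <= width")

# Hypothetical vault: tolerates pillar_width - decode_threshold = 6 lost slices.
params = DsErrorCodingParams(pillar_width=16, decode_threshold=10,
                             read_threshold=11, write_threshold=13,
                             segment_size=1 << 20)
print(params)
```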
The DSTN managing unit18creates and stores user profile information (e.g., an access control list (ACL)) in local memory and/or within memory of the DSTN module22. The user profile information includes authentication information, permissions, and/or the security parameters. The security parameters may include encryption/decryption scheme, one or more encryption keys, key generation scheme, and/or data encoding/decoding scheme. The DSTN managing unit18creates billing information for a particular user, a user group, a vault access, public vault access, etc. For instance, the DSTN managing unit18tracks the number of times a user accesses a private vault and/or public vaults, which can be used to generate a per-access billing information. In another instance, the DSTN managing unit18tracks the amount of data stored and/or retrieved by a user device and/or a user group, which can be used to generate a per-data-amount billing information. Another DS management service includes the DSTN managing unit18performing network operations, network administration, and/or network maintenance. Network operations includes authenticating user data allocation requests (e.g., read and/or write requests), managing creation of vaults, establishing authentication credentials for user devices, adding/deleting components (e.g., user devices, DST execution units, and/or DST processing units) from the distributed computing system10, and/or establishing authentication credentials for DST execution units36. Network administration includes monitoring devices and/or units for failures, maintaining vault information, determining device and/or unit activation status, determining device and/or unit loading, and/or determining any other system level operation that affects the performance level of the system10. Network maintenance includes facilitating replacing, upgrading, repairing, and/or expanding a device and/or unit of the system10. To support data storage integrity verification within the distributed computing system10, the DST integrity processing unit20performs rebuilding of ‘bad’ or missing encoded data slices. At a high level, the DST integrity processing unit20performs rebuilding by periodically attempting to retrieve/list encoded data slices, and/or slice names of the encoded data slices, from the DSTN module22. For retrieved encoded slices, they are checked for errors due to data corruption, outdated version, etc. If a slice includes an error, it is flagged as a ‘bad’ slice. For encoded data slices that were not received and/or not listed, they are flagged as missing slices. Bad and/or missing slices are subsequently rebuilt using other retrieved encoded data slices that are deemed to be good slices to produce rebuilt slices. The rebuilt slices are stored in memory of the DSTN module22. Note that the DST integrity processing unit20may be a separate unit as shown, it may be included in the DSTN module22, it may be included in the DST processing unit16, and/or distributed among the DST execution units36. To support distributed task processing on received data, the distributed computing system10has two primary operations: DST (distributed storage and/or task processing) management and DST execution on received data (an example of which will be discussed with reference toFIGS.3-19). With respect to the storage portion of the DST management, the DSTN managing unit18functions as previously described. 
With respect to the tasking processing of the DST management, the DSTN managing unit18performs distributed task processing (DTP) management services. One such DTP management service includes the DSTN managing unit18establishing DTP parameters (e.g., user-vault affiliation information, billing information, user-task information, etc.) for a user device12-14individually or as part of a group of user devices. Another DTP management service includes the DSTN managing unit18performing DTP network operations, network administration (which is essentially the same as described above), and/or network maintenance (which is essentially the same as described above). Network operations include, but are not limited to, authenticating user task processing requests (e.g., valid request, valid user, etc.), authenticating results and/or partial results, establishing DTP authentication credentials for user devices, adding/deleting components (e.g., user devices, DST execution units, and/or DST processing units) from the distributed computing system, and/or establishing DTP authentication credentials for DST execution units. To support distributed task processing on stored data, the distributed computing system10has two primary operations: DST (distributed storage and/or task) management and DST execution on stored data. With respect to the DST execution on stored data, if the second type of user device14has a task request38for execution by the DSTN module22, it sends the task request38to the DST processing unit16via its interface30. An example of DST execution on stored data will be discussed in greater detail with reference toFIGS.27-39. With respect to the DST management, it is substantially similar to the DST management to support distributed task processing on received data. FIG.2is a schematic block diagram of an embodiment of a computing core26that includes a processing module50, a memory controller52, main memory54, a video graphics processing unit55, an input/output (TO) controller56, a peripheral component interconnect (PCI) interface58, an IO interface module60, at least one IO device interface module62, a read only memory (ROM) basic input output system (BIOS)64, and one or more memory interface modules. The one or more memory interface module(s) includes one or more of a universal serial bus (USB) interface module66, a host bus adapter (HBA) interface module68, a network interface module70, a flash interface module72, a hard drive interface module74, and a DSTN interface module76. The DSTN interface module76functions to mimic a conventional operating system (OS) file system interface (e.g., network file system (NFS), flash file system (FFS), disk file system (DFS), file transfer protocol (FTP), web-based distributed authoring and versioning (WebDAV), etc.) and/or a block memory interface (e.g., small computer system interface (SCSI), internet small computer system interface (iSCSI), etc.). The DSTN interface module76and/or the network interface module70may function as the interface30of the user device14ofFIG.1. Further note that the IO device interface module62and/or the memory interface modules may be collectively or individually referred to as IO ports. FIG.3is a diagram of an example of the distributed computing system performing a distributed storage and task processing operation. 
The distributed computing system includes a DST (distributed storage and/or task) client module34(which may be in user device14and/or in DST processing unit16ofFIG.1), a network24, a plurality of DST execution units 1-n that includes two or more DST execution units36ofFIG.1(which form at least a portion of DSTN module22ofFIG.1), a DST managing module (not shown), and a DST integrity verification module (not shown). The DST client module34includes an outbound DST processing section80and an inbound DST processing section82. Each of the DST execution units 1-n includes a controller86, a processing module84, memory88, a DT (distributed task) execution module90, and a DST client module34. In an example of operation, the DST client module34receives data92and one or more tasks94to be performed upon the data92. The data92may be of any size and of any content, where, due to the size (e.g., greater than a few Terabytes), the content (e.g., secure data, etc.), and/or task(s) (e.g., MIPS intensive), distributed processing of the task(s) on the data is desired. For example, the data92may be one or more digital books, a copy of a company's emails, a large-scale Internet search, a video security file, one or more entertainment video files (e.g., television programs, movies, etc.), data files, and/or any other large amount of data (e.g., greater than a few Terabytes). Within the DST client module34, the outbound DST processing section80receives the data92and the task(s)94. The outbound DST processing section80processes the data92to produce slice groupings96. As an example of such processing, the outbound DST processing section80partitions the data92into a plurality of data partitions. For each data partition, the outbound DST processing section80dispersed storage (DS) error encodes the data partition to produce encoded data slices and groups the encoded data slices into a slice grouping96. In addition, the outbound DST processing section80partitions the task94into partial tasks98, where the number of partial tasks98may correspond to the number of slice groupings96. The outbound DST processing section80then sends, via the network24, the slice groupings96and the partial tasks98to the DST execution units 1-n of the DSTN module22ofFIG.1. For example, the outbound DST processing section80sends slice group 1 and partial task 1 to DST execution unit 1. As another example, the outbound DST processing section80sends slice group #n and partial task #n to DST execution unit #n. Each DST execution unit performs its partial task98upon its slice group96to produce partial results102. For example, DST execution unit #1 performs partial task #1 on slice group #1 to produce a partial result #1, for results. As a more specific example, slice group #1 corresponds to a data partition of a series of digital books and the partial task #1 corresponds to searching for specific phrases, recording where the phrase is found, and establishing a phrase count. In this more specific example, the partial result #1 includes information as to where the phrase was found and includes the phrase count. Upon completion of generating their respective partial results102, the DST execution units send, via the network24, their partial results102to the inbound DST processing section82of the DST client module34. The inbound DST processing section82processes the received partial results102to produce a result104. 
Continuing with the specific example of the preceding paragraph, the inbound DST processing section82combines the phrase count from each of the DST execution units36to produce a total phrase count. In addition, the inbound DST processing section82combines the ‘where the phrase was found’ information from each of the DST execution units36within their respective data partitions to produce ‘where the phrase was found’ information for the series of digital books. In another example of operation, the DST client module34requests retrieval of stored data within the memory of the DST execution units36(e.g., memory of the DSTN module). In this example, the task94is to retrieve data stored in the memory of the DSTN module. Accordingly, the outbound DST processing section80converts the task94into a plurality of partial tasks98and sends the partial tasks98to the respective DST execution units 1-n. In response to the partial task98of retrieving stored data, a DST execution unit36identifies the corresponding encoded data slices100and retrieves them. For example, DST execution unit #1 receives partial task #1 and retrieves, in response thereto, retrieved slices #1. The DST execution units36send their respective retrieved slices100to the inbound DST processing section82via the network24. The inbound DST processing section82converts the retrieved slices100into data92. For example, the inbound DST processing section82de-groups the retrieved slices100to produce encoded slices per data partition. The inbound DST processing section82then DS error decodes the encoded slices per data partition to produce data partitions. The inbound DST processing section82de-partitions the data partitions to recapture the data92. FIG.4is a schematic block diagram of an embodiment of an outbound distributed storage and/or task (DST) processing section80of a DST client module34ofFIG.1coupled to a DSTN module22ofFIG.1(e.g., a plurality of n DST execution units36) via a network24. The outbound DST processing section80includes a data partitioning module110, a dispersed storage (DS) error encoding module112, a grouping selector module114, a control module116, and a distributed task control module118. In an example of operation, the data partitioning module110partitions data92into a plurality of data partitions120. The number of partitions and the size of the partitions may be selected by the control module116via control160based on the data92(e.g., its size, its content, etc.), a corresponding task94to be performed (e.g., simple, complex, single step, multiple steps, etc.), DS encoding parameters (e.g., pillar width, decode threshold, write threshold, segment security parameters, slice security parameters, etc.), capabilities of the DST execution units36(e.g., processing resources, availability of processing resources, etc.), and/or as may be inputted by a user, system administrator, or other operator (human or automated). For example, the data partitioning module110partitions the data92(e.g., 100 Terabytes) into 100,000 data segments, each being 1 Gigabyte in size. Alternatively, the data partitioning module110partitions the data92into a plurality of data segments, where some of the data segments are of a different size, are of the same size, or a combination thereof. The DS error encoding module112receives the data partitions120in a serial manner, a parallel manner, and/or a combination thereof. 
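The partitioning step just described can be sketched minimally. The helper below is an illustrative stand-in for the data partitioning module 110, not its actual implementation; fixed-size partitions are assumed for simplicity, whereas the control module 116 may also select differing sizes.

    def partition_data(data: bytes, partition_size: int):
        """Split data into fixed-size partitions; the last one may be shorter."""
        return [data[i:i + partition_size]
                for i in range(0, len(data), partition_size)]

    # Example: 1 MB of data split into 64 KB partitions (a small stand-in for,
    # e.g., 100 Terabytes split into 1 Gigabyte partitions).
    partitions = partition_data(bytes(1024 * 1024), 64 * 1024)
    assert len(partitions) == 16

Each partition produced this way is then handed to the DS error encoding module 112, whose per-partition processing is described next.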
For each data partition120, the DS error encoding module112DS error encodes the data partition120in accordance with control information160from the control module116to produce encoded data slices122. The DS error encoding includes segmenting the data partition into data segments, segment security processing (e.g., encryption, compression, watermarking, integrity check (e.g., CRC), etc.), error encoding, slicing, and/or per slice security processing (e.g., encryption, compression, watermarking, integrity check (e.g., CRC), etc.). The control information160indicates which steps of the DS error encoding are active for a given data partition and, for active steps, indicates the parameters for the step. For example, the control information160indicates that the error encoding is active and includes error encoding parameters (e.g., pillar width, decode threshold, write threshold, read threshold, type of error encoding, etc.). The grouping selector module114groups the encoded slices122of a data partition into a set of slice groupings96. The number of slice groupings corresponds to the number of DST execution units36identified for a particular task94. For example, if five DST execution units36are identified for the particular task94, the grouping selector module groups the encoded slices122of a data partition into five slice groupings96. The grouping selector module114outputs the slice groupings96to the corresponding DST execution units36via the network24. The distributed task control module118receives the task94and converts the task94into a set of partial tasks98. For example, the distributed task control module118receives a task to find where in the data (e.g., a series of books) a phrase occurs and a total count of the phrase usage in the data. In this example, the distributed task control module118replicates the task94for each DST execution unit36to produce the partial tasks98. In another example, the distributed task control module118receives a task to find where in the data a first phrase occurs, where in the data a second phrase occurs, and a total count for each phrase usage in the data. In this example, the distributed task control module118generates a first set of partial tasks98for finding and counting the first phrase and a second set of partial tasks for finding and counting the second phrase. The distributed task control module118sends respective first and/or second partial tasks98to each DST execution unit36. FIG.5is a logic diagram of an example of a method for outbound distributed storage and task (DST) processing that begins at step126where a DST client module receives data and one or more corresponding tasks. The method continues at step128where the DST client module determines a number of DST units to support the task for one or more data partitions. For example, the DST client module may determine the number of DST units to support the task based on the size of the data, the requested task, the content of the data, a predetermined number (e.g., user indicated, system administrator determined, etc.), available DST units, capability of the DST units, and/or any other factor regarding distributed task processing of the data. The DST client module may select the same DST units for each data partition, may select different DST units for the data partitions, or a combination thereof. The method continues at step130where the DST client module determines processing parameters of the data based on the number of DST units selected for distributed task processing. 
The processing parameters include data partitioning information, DS encoding parameters, and/or slice grouping information. The data partitioning information includes a number of data partitions, size of each data partition, and/or organization of the data partitions (e.g., number of data blocks in a partition, the size of the data blocks, and arrangement of the data blocks). The DS encoding parameters include segmenting information, segment security information, error encoding information (e.g., dispersed storage error encoding function parameters including one or more of pillar width, decode threshold, write threshold, read threshold, generator matrix), slicing information, and/or per slice security information. The slice grouping information includes information regarding how to arrange the encoded data slices into groups for the selected DST units. As a specific example, if the DST client module determines that five DST units are needed to support the task, then it determines that the error encoding parameters include a pillar width of five and a decode threshold of three. The method continues at step132where the DST client module determines task partitioning information (e.g., how to partition the tasks) based on the selected DST units and data processing parameters. The data processing parameters include the processing parameters and DST unit capability information. The DST unit capability information includes the number of DT (distributed task) execution units, execution capabilities of each DT execution unit (e.g., MIPS capabilities, processing resources (e.g., quantity and capability of microprocessors, CPUs, digital signal processors, co-processor, microcontrollers, arithmetic logic circuitry, and/or any other analog and/or digital processing circuitry), availability of the processing resources, memory information (e.g., type, size, availability, etc.)), and/or any information germane to executing one or more tasks. The method continues at step134where the DST client module processes the data in accordance with the processing parameters to produce slice groupings. The method continues at step136where the DST client module partitions the task based on the task partitioning information to produce a set of partial tasks. The method continues at step138where the DST client module sends the slice groupings and the corresponding partial tasks to respective DST units. FIG.6is a schematic block diagram of an embodiment of the dispersed storage (DS) error encoding module112of an outbound distributed storage and task (DST) processing section. The DS error encoding module112includes a segment processing module142, a segment security processing module144, an error encoding module146, a slicing module148, and a per slice security processing module150. Each of these modules is coupled to a control module116to receive control information160therefrom. In an example of operation, the segment processing module142receives a data partition120from a data partitioning module and receives segmenting information as the control information160from the control module116. The segmenting information indicates how the segment processing module142is to segment the data partition120. For example, the segmenting information indicates how many rows to segment the data based on a decode threshold of an error encoding scheme, indicates how many columns to segment the data into based on a number and size of data blocks within the data partition120, and indicates how many columns to include in a data segment152. 
The segment processing module142segments the data120into data segments152in accordance with the segmenting information. The segment security processing module144, when enabled by the control module116, secures the data segments152based on segment security information received as control information160from the control module116. The segment security information includes data compression, encryption, watermarking, integrity check (e.g., cyclic redundancy check (CRC), etc.), and/or any other type of digital security. For example, when the segment security processing module144is enabled, it may compress a data segment152, encrypt the compressed data segment, and generate a CRC value for the encrypted data segment to produce a secure data segment154. When the segment security processing module144is not enabled, it passes the data segments152to the error encoding module146or is bypassed such that the data segments152are provided to the error encoding module146. The error encoding module146encodes the secure data segments154in accordance with error correction encoding parameters received as control information160from the control module116. The error correction encoding parameters (e.g., also referred to as dispersed storage error coding parameters) include identifying an error correction encoding scheme (e.g., forward error correction algorithm, a Reed-Solomon based algorithm, an online coding algorithm, an information dispersal algorithm, etc.), a pillar width, a decode threshold, a read threshold, a write threshold, etc. For example, the error correction encoding parameters identify a specific error correction encoding scheme, specifies a pillar width of five, and specifies a decode threshold of three. From these parameters, the error encoding module146encodes a data segment154to produce an encoded data segment156. The slicing module148slices the encoded data segment156in accordance with the pillar width of the error correction encoding parameters received as control information160. For example, if the pillar width is five, the slicing module148slices an encoded data segment156into a set of five encoded data slices. As such, for a plurality of encoded data segments156for a given data partition, the slicing module outputs a plurality of sets of encoded data slices158. The per slice security processing module150, when enabled by the control module116, secures each encoded data slice158based on slice security information received as control information160from the control module116. The slice security information includes data compression, encryption, watermarking, integrity check (e.g., CRC, etc.), and/or any other type of digital security. For example, when the per slice security processing module150is enabled, it compresses an encoded data slice158, encrypts the compressed encoded data slice, and generates a CRC value for the encrypted encoded data slice to produce a secure encoded data slice122. When the per slice security processing module150is not enabled, it passes the encoded data slices158or is bypassed such that the encoded data slices158are the output of the DS error encoding module112. Note that the control module116may be omitted and each module stores its own parameters. FIG.7is a diagram of an example of a segment processing of a dispersed storage (DS) error encoding module. 
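Before the segmenting example of FIG. 7, the FIG. 6 encoding pipeline described above can be summarized in a short sketch. All stage functions below are caller-supplied placeholders standing in for the modules and the control information 160; the interleaved slicing shown here is a simplification of the systematic slicing illustrated in FIGS. 7-8.

    def ds_error_encode(partition, segmenter, error_encoder, pillar_width,
                        segment_securer=None, slice_securer=None):
        """Sketch of the FIG. 6 pipeline: segment, optionally secure each segment,
        error encode, slice into pillar_width slices, optionally secure each slice."""
        out = []
        for seg in segmenter(partition):
            if segment_securer:
                seg = segment_securer(seg)       # e.g., compress, encrypt, CRC
            encoded = error_encoder(seg)         # e.g., Reed-Solomon, width 5, threshold 3
            slices = [encoded[i::pillar_width] for i in range(pillar_width)]
            if slice_securer:
                slices = [slice_securer(s) for s in slices]
            out.append(slices)
        return out

    # Toy usage: 8-byte segments, a trivial 2-byte 'parity' stand-in, 5 pillars.
    groups = ds_error_encode(b"example partition data",
                             segmenter=lambda p: [p[i:i + 8] for i in range(0, len(p), 8)],
                             error_encoder=lambda s: s + bytes(2),
                             pillar_width=5)

The segmenting stage of this pipeline is the subject of the FIG. 7 example that follows.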
In this example, a segment processing module142receives a data partition120that includes 45 data blocks (e.g., d1-d45), receives segmenting information (i.e., control information160) from a control module, and segments the data partition120in accordance with the control information160to produce data segments152. Each data block may be of the same size as other data blocks or of a different size. In addition, the size of each data block may be a few bytes to megabytes of data. As previously mentioned, the segmenting information indicates how many rows to segment the data partition into, indicates how many columns to segment the data partition into, and indicates how many columns to include in a data segment. In this example, the decode threshold of the error encoding scheme is three; as such the number of rows to divide the data partition into is three. The number of columns for each row is set to 15, which is based on the number and size of data blocks. The data blocks of the data partition are arranged in rows and columns in a sequential order (i.e., the first row includes the first 15 data blocks; the second row includes the second 15 data blocks; and the third row includes the last 15 data blocks). With the data blocks arranged into the desired sequential order, they are divided into data segments based on the segmenting information. In this example, the data partition is divided into 8 data segments; the first 7 include 2 columns of three rows and the last includes 1 column of three rows. Note that the first row of the 8 data segments is in sequential order of the first 15 data blocks; the second row of the 8 data segments in sequential order of the second 15 data blocks; and the third row of the 8 data segments in sequential order of the last 15 data blocks. Note that the number of data blocks, the grouping of the data blocks into segments, and size of the data blocks may vary to accommodate the desired distributed task processing function. FIG.8is a diagram of an example of error encoding and slicing processing of the dispersed error encoding processing the data segments ofFIG.7. In this example, data segment 1 includes 3 rows with each row being treated as one word for encoding. As such, data segment 1 includes three words for encoding: word 1 including data blocks d1 and d2, word 2 including data blocks d16 and d17, and word 3 including data blocks d31 and d32. Each of data segments 2-7 includes three words where each word includes two data blocks. Data segment 8 includes three words where each word includes a single data block (e.g., d15, d30, and d45). In operation, an error encoding module146and a slicing module148convert each data segment into a set of encoded data slices in accordance with error correction encoding parameters as control information160. More specifically, when the error correction encoding parameters indicate a unity matrix Reed-Solomon based encoding algorithm, 5 pillars, and decode threshold of 3, the first three encoded data slices of the set of encoded data slices for a data segment are substantially similar to the corresponding word of the data segment. 
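The row/column segmenting of the FIG. 7 example, and the systematic property just noted, can be reproduced in a few lines. The block labels and list layout below are illustrative only and are not the stored representation used by the segment processing module 142.

    # 45 data blocks d1..d45 (labels only); decode threshold 3 -> 3 rows of 15 columns.
    blocks = [f"d{i}" for i in range(1, 46)]
    rows = [blocks[0:15], blocks[15:30], blocks[30:45]]

    # Divide the rows into 8 data segments: the first 7 take 2 columns each,
    # the last takes the remaining 1 column, as in the FIG. 7 example.
    segments, col = [], 0
    for width in [2] * 7 + [1]:
        segments.append([row[col:col + width] for row in rows])
        col += width

    # Data segment 1 holds word 1 = [d1, d2], word 2 = [d16, d17], word 3 = [d31, d32].
    assert segments[0] == [["d1", "d2"], ["d16", "d17"], ["d31", "d32"]]
    # With the systematic (unity-matrix) encoding described above, the first three
    # encoded slices of a segment mirror these three words; the remaining two
    # slices carry error-correction data.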
For instance, when the unity matrix Reed-Solomon based encoding algorithm is applied to data segment 1, the content of the first encoded data slice (DS1_d1&2) of the first set of encoded data slices (e.g., corresponding to data segment 1) is substantially similar to content of the first word (e.g., d1 & d2); the content of the second encoded data slice (DS1_d16&17) of the first set of encoded data slices is substantially similar to content of the second word (e.g., d16 & d17); and the content of the third encoded data slice (DS1_d31&32) of the first set of encoded data slices is substantially similar to content of the third word (e.g., d31 & d32). The content of the fourth and fifth encoded data slices (e.g., ES1_1 and ES1_2) of the first set of encoded data slices includes error correction data based on the first-third words of the first data segment. With such an encoding and slicing scheme, retrieving any three of the five encoded data slices allows the data segment to be accurately reconstructed. The encoding and slicing of data segments 2-7 yield sets of encoded data slices similar to the set of encoded data slices of data segment 1. For instance, the content of the first encoded data slice (DS2_d3&4) of the second set of encoded data slices (e.g., corresponding to data segment 2) is substantially similar to content of the first word (e.g., d3 & d4); the content of the second encoded data slice (DS2_d18&19) of the second set of encoded data slices is substantially similar to content of the second word (e.g., d18 & d19); and the content of the third encoded data slice (DS2_d33&34) of the second set of encoded data slices is substantially similar to content of the third word (e.g., d33 & d34). The content of the fourth and fifth encoded data slices (e.g., ES2_1 and ES2_2) of the second set of encoded data slices includes error correction data based on the first-third words of the second data segment. FIG.9is a diagram of an example of grouping selection processing of an outbound distributed storage and task (DST) processing in accordance with group selection information as control information160from a control module. Encoded slices for data partition122are grouped in accordance with the control information160to produce slice groupings96. In this example, a grouping selector module114organizes the encoded data slices into five slice groupings (e.g., one for each DST execution unit of a distributed storage and task network (DSTN) module). As a specific example, the grouping selector module114creates a first slice grouping for a DST execution unit #1, which includes first encoded slices of each of the sets of encoded slices. As such, the first DST execution unit receives encoded data slices corresponding to data blocks 1-15 (e.g., encoded data slices of contiguous data). The grouping selector module114also creates a second slice grouping for a DST execution unit #2, which includes second encoded slices of each of the sets of encoded slices. As such, the second DST execution unit receives encoded data slices corresponding to data blocks 16-30. The grouping selector module114further creates a third slice grouping for DST execution unit #3, which includes third encoded slices of each of the sets of encoded slices. As such, the third DST execution unit receives encoded data slices corresponding to data blocks 31-45. The grouping selector module114creates a fourth slice grouping for DST execution unit #4, which includes fourth encoded slices of each of the sets of encoded slices. 
As such, the fourth DST execution unit receives encoded data slices corresponding to first error encoding information (e.g., encoded data slices of error coding (EC) data). The grouping selector module114further creates a fifth slice grouping for DST execution unit #5, which includes fifth encoded slices of each of the sets of encoded slices. As such, the fifth DST execution unit receives encoded data slices corresponding to second error encoding information. FIG.10is a diagram of an example of converting data92into slice groups that expands on the preceding figures. As shown, the data92is partitioned in accordance with a partitioning function164into a plurality of data partitions (1-x, where x is an integer greater than 4). Each data partition (or chunkset of data) is encoded and grouped into slice groupings as previously discussed by an encoding and grouping function166. For a given data partition, the slice groupings are sent to distributed storage and task (DST) execution units. From data partition to data partition, the ordering of the slice groupings to the DST execution units may vary. For example, the slice groupings of data partition #1 are sent to the DST execution units such that the first DST execution unit receives first encoded data slices of each of the sets of encoded data slices, which corresponds to a first continuous data chunk of the first data partition (e.g., refer toFIG.9), a second DST execution unit receives second encoded data slices of each of the sets of encoded data slices, which corresponds to a second continuous data chunk of the first data partition, etc. For the second data partition, the slice groupings may be sent to the DST execution units in a different order than it was done for the first data partition. For instance, the first slice grouping of the second data partition (e.g., slice group 2_1) is sent to the second DST execution unit; the second slice grouping of the second data partition (e.g., slice group 2_2) is sent to the third DST execution unit; the third slice grouping of the second data partition (e.g., slice group 2_3) is sent to the fourth DST execution unit; the fourth slice grouping of the second data partition (e.g., slice group 2_4, which includes first error coding information) is sent to the fifth DST execution unit; and the fifth slice grouping of the second data partition (e.g., slice group 2_5, which includes second error coding information) is sent to the first DST execution unit. The pattern of sending the slice groupings to the set of DST execution units may vary in a predicted pattern, a random pattern, and/or a combination thereof from data partition to data partition. In addition, from data partition to data partition, the set of DST execution units may change. For example, for the first data partition, DST execution units 1-5 may be used; for the second data partition, DST execution units 6-10 may be used; for the third data partition, DST execution units 3-7 may be used; etc. As is also shown, the task is divided into partial tasks that are sent to the DST execution units in conjunction with the slice groupings of the data partitions. FIG.11is a schematic block diagram of an embodiment of a DST (distributed storage and/or task) execution unit that includes an interface169, a controller86, memory88, one or more DT (distributed task) execution modules90, and a DST client module34. 
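Before turning to the execution unit detail of FIG. 11, the varying assignment of slice groupings to DST execution units described for FIG. 10 can be sketched. The rotation below is only one possible 'predicted pattern' and is purely illustrative; a random pattern or a changing set of units would serve equally well.

    def assign_groupings(partition_index, slice_groupings, unit_ids):
        """Rotate which DST execution unit receives which slice grouping
        from one data partition to the next."""
        n = len(unit_ids)
        shift = partition_index % n
        return {unit_ids[(g + shift) % n]: slice_groupings[g] for g in range(n)}

    units = ["EU1", "EU2", "EU3", "EU4", "EU5"]
    groupings = ["group_1", "group_2", "group_3", "group_4", "group_5"]
    # Partition #1: EU1 gets group 1, ..., EU5 gets group 5.
    print(assign_groupings(0, groupings, units))
    # Partition #2: the assignment is rotated, e.g., group 1 now goes to EU2.
    print(assign_groupings(1, groupings, units))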
The memory88is of sufficient size to store a significant number of encoded data slices (e.g., thousands of slices to hundreds-of-millions of slices) and may include one or more hard drives and/or one or more solid-state memory devices (e.g., flash memory, DRAM, etc.). In an example of storing a slice group, the DST execution module receives a slice grouping96(e.g., slice group #1) via interface169. The slice grouping96includes, per partition, encoded data slices of contiguous data or encoded data slices of error coding (EC) data. For slice group #1, the DST execution module receives encoded data slices of contiguous data for partitions #1 and #x (and potentially others between 3 and x) and receives encoded data slices of EC data for partitions #2 and #3 (and potentially others between 3 and x). Examples of encoded data slices of contiguous data and encoded data slices of error coding (EC) data are discussed with reference toFIG.9. The memory88stores the encoded data slices of slice groupings96in accordance with memory control information174it receives from the controller86. The controller86(e.g., a processing module, a CPU, etc.) generates the memory control information174based on a partial task(s)98and distributed computing information (e.g., user information (e.g., user ID, distributed computing permissions, data access permission, etc.), vault information (e.g., virtual memory assigned to user, user group, temporary storage for task processing, etc.), task validation information, etc.). For example, the controller86interprets the partial task(s)98in light of the distributed computing information to determine whether a requestor is authorized to perform the task98, is authorized to access the data, and/or is authorized to perform the task on this particular data. When the requestor is authorized, the controller86determines, based on the task98and/or another input, whether the encoded data slices of the slice grouping96are to be temporarily stored or permanently stored. Based on the foregoing, the controller86generates the memory control information174to write the encoded data slices of the slice grouping96into the memory88and to indicate whether the slice grouping96is permanently stored or temporarily stored. With the slice grouping96stored in the memory88, the controller86facilitates execution of the partial task(s)98. In an example, the controller86interprets the partial task98in light of the capabilities of the DT execution module(s)90. The capabilities include one or more of MIPS capabilities, processing resources (e.g., quantity and capability of microprocessors, CPUs, digital signal processors, co-processor, microcontrollers, arithmetic logic circuitry, and/or any other analog and/or digital processing circuitry), availability of the processing resources, etc. If the controller86determines that the DT execution module(s)90have sufficient capabilities, it generates task control information176. The task control information176may be a generic instruction (e.g., perform the task on the stored slice grouping) or a series of operational codes. In the former instance, the DT execution module90includes a co-processor function specifically configured (fixed or programmed) to perform the desired task98. In the latter instance, the DT execution module90includes a general processor topology where the controller stores an algorithm corresponding to the particular task98. 
In this instance, the controller86provides the operational codes (e.g., assembly language, source code of a programming language, object code, etc.) of the algorithm to the DT execution module90for execution. Depending on the nature of the task98, the DT execution module90may generate intermediate partial results102that are stored in the memory88or in a cache memory (not shown) within the DT execution module90. In either case, when the DT execution module90completes execution of the partial task98, it outputs one or more partial results102. The partial results102may also be stored in memory88. If, when the controller86is interpreting whether capabilities of the DT execution module(s)90can support the partial task98, the controller86determines that the DT execution module(s)90cannot adequately support the task98(e.g., does not have the right resources, does not have sufficient available resources, available resources would be too slow, etc.), it then determines whether the partial task98should be fully offloaded or partially offloaded. If the controller86determines that the partial task98should be fully offloaded, it generates DST control information178and provides it to the DST client module34. The DST control information178includes the partial task98, memory storage information regarding the slice grouping96, and distribution instructions. The distribution instructions instruct the DST client module34to divide the partial task98into sub-partial tasks172, to divide the slice grouping96into sub-slice groupings170, and identify other DST execution units. The DST client module34functions in a similar manner as the DST client module34ofFIGS.3-10to produce the sub-partial tasks172and the sub-slice groupings170in accordance with the distribution instructions. The DST client module34receives DST feedback168(e.g., sub-partial results), via the interface169, from the DST execution units to which the task was offloaded. The DST client module34provides the sub-partial results to the DST execution unit, which processes the sub-partial results to produce the partial result(s)102. If the controller86determines that the partial task98should be partially offloaded, it determines what portion of the task98and/or slice grouping96should be processed locally and what should be offloaded. For the portion that is being locally processed, the controller86generates task control information176as previously discussed. For the portion that is being offloaded, the controller86generates DST control information178as previously discussed. When the DST client module34receives DST feedback168(e.g., sub-partial results) from the DST executions units to which a portion of the task was offloaded, it provides the sub-partial results to the DT execution module90. The DT execution module90processes the sub-partial results with the sub-partial results it created to produce the partial result(s)102. The memory88may be further utilized to retrieve one or more of stored slices100, stored results104, partial results102when the DT execution module90stores partial results102and/or results104in the memory88. For example, when the partial task98includes a retrieval request, the controller86outputs the memory control174to the memory88to facilitate retrieval of slices100and/or results104. FIG.12is a schematic block diagram of an example of operation of a distributed storage and task (DST) execution unit storing encoded data slices and executing a task thereon. 
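Before the storage-and-execution example of FIG. 12, the local-versus-offload decision described above can be sketched as follows. The effort units and thresholds are illustrative assumptions, not the controller 86's actual criteria (which also weigh resource types, availability, and speed).

    def plan_partial_task(task_cost, local_capacity):
        """Decide whether a partial task is run locally, fully offloaded, or split.
        task_cost and local_capacity are abstract effort units (e.g., MIPS-seconds)."""
        if local_capacity >= task_cost:
            return {"local": task_cost, "offloaded": 0}       # run entirely locally
        if local_capacity <= 0:
            return {"local": 0, "offloaded": task_cost}       # fully offload
        # Partially offload: keep what fits locally, hand the remainder to other
        # DST execution units as sub-partial tasks and sub-slice groupings.
        return {"local": local_capacity, "offloaded": task_cost - local_capacity}

    print(plan_partial_task(task_cost=100, local_capacity=40))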
To store the encoded data slices of a partition 1 of slice grouping 1, a controller86generates write commands as memory control information174such that the encoded slices are stored in desired locations (e.g., permanent or temporary) within memory88. Once the encoded slices are stored, the controller86provides task control information176to a distributed task (DT) execution module90. As a first step of executing the task in accordance with the task control information176, the DT execution module90retrieves the encoded slices from memory88. The DT execution module90then reconstructs contiguous data blocks of a data partition. As shown for this example, reconstructed contiguous data blocks of data partition 1 include data blocks 1-15 (e.g., d1-d15). With the contiguous data blocks reconstructed, the DT execution module90performs the task on the reconstructed contiguous data blocks. For example, the task may be to search the reconstructed contiguous data blocks for a particular word or phrase, identify where in the reconstructed contiguous data blocks the particular word or phrase occurred, and/or count the occurrences of the particular word or phrase on the reconstructed contiguous data blocks. The DST execution unit continues in a similar manner for the encoded data slices of other partitions in slice grouping 1. Note that with using the unity matrix error encoding scheme previously discussed, if the encoded data slices of contiguous data are uncorrupted, the decoding of them is a relatively straightforward process of extracting the data. If, however, an encoded data slice of contiguous data is corrupted (or missing), it can be rebuilt by accessing other DST execution units that are storing the other encoded data slices of the set of encoded data slices of the corrupted encoded data slice. In this instance, the DST execution unit having the corrupted encoded data slices retrieves at least three encoded data slices (of contiguous data and of error coding data) in the set from the other DST execution units (recall for this example, the pillar width is 5 and the decode threshold is 3). The DST execution unit decodes the retrieved data slices using the DS error encoding parameters to recapture the corresponding data segment. The DST execution unit then re-encodes the data segment using the DS error encoding parameters to rebuild the corrupted encoded data slice. Once the encoded data slice is rebuilt, the DST execution unit functions as previously described. FIG.13is a schematic block diagram of an embodiment of an inbound distributed storage and/or task (DST) processing section82of a DST client module coupled to DST execution units of a distributed storage and task network (DSTN) module via a network24. The inbound DST processing section82includes a de-grouping module180, a DS (dispersed storage) error decoding module182, a data de-partitioning module184, a control module186, and a distributed task control module188. Note that the control module186and/or the distributed task control module188may be separate modules from corresponding ones of outbound DST processing section or may be the same modules. In an example of operation, the DST execution units have completed execution of corresponding partial tasks on the corresponding slice groupings to produce partial results102. The inbound DST processing section82receives the partial results102via the distributed task control module188. The inbound DST processing section82then processes the partial results102to produce a final result, or results104. 
For example, if the task was to find a specific word or phrase within data, the partial results102indicate where in each of the prescribed portions of the data the corresponding DST execution units found the specific word or phrase. The distributed task control module188combines the individual partial results102for the corresponding portions of the data into a final result104for the data as a whole. In another example of operation, the inbound DST processing section82is retrieving stored data from the DST execution units (i.e., the DSTN module). In this example, the DST execution units output encoded data slices100corresponding to the data retrieval requests. The de-grouping module180receives retrieved slices100and de-groups them to produce encoded data slices per data partition122. The DS error decoding module182decodes, in accordance with DS error encoding parameters, the encoded data slices per data partition122to produce data partitions120. The data de-partitioning module184combines the data partitions120into the data92. The control module186controls the conversion of retrieved slices100into the data92using control signals190to each of the modules. For instance, the control module186provides de-grouping information to the de-grouping module180, provides the DS error encoding parameters to the DS error decoding module182, and provides de-partitioning information to the data de-partitioning module184. FIG.14is a logic diagram of an example of a method that is executable by a distributed storage and task (DST) client module regarding inbound DST processing. The method begins at step194where the DST client module receives partial results. The method continues at step196where the DST client module retrieves the task corresponding to the partial results. For example, the partial results include header information that identifies the requesting entity, which correlates to the requested task. The method continues at step198where the DST client module determines result processing information based on the task. For example, if the task were to identify a particular word or phrase within the data, the result processing information would indicate to aggregate the partial results for the corresponding portions of the data to produce the final result. As another example, if the task were to count the occurrences of a particular word or phrase within the data, the result processing information would indicate to add the partial results to produce the final results. The method continues at step200where the DST client module processes the partial results in accordance with the result processing information to produce the final result or results. FIG.15is a diagram of an example of de-grouping selection processing of an inbound distributed storage and task (DST) processing section of a DST client module. In general, this is an inverse process of the grouping module of the outbound DST processing section ofFIG.9. Accordingly, for each data partition (e.g., partition #1), the de-grouping module retrieves the corresponding slice grouping from the DST execution units (EU) (e.g., DST 1-5). 
As shown, DST execution unit #1 provides a first slice grouping, which includes the first encoded slices of each of the sets of encoded slices (e.g., encoded data slices of contiguous data of data blocks 1-15); DST execution unit #2 provides a second slice grouping, which includes the second encoded slices of each of the sets of encoded slices (e.g., encoded data slices of contiguous data of data blocks 16-30); DST execution unit #3 provides a third slice grouping, which includes the third encoded slices of each of the sets of encoded slices (e.g., encoded data slices of contiguous data of data blocks 31-45); DST execution unit #4 provides a fourth slice grouping, which includes the fourth encoded slices of each of the sets of encoded slices (e.g., first encoded data slices of error coding (EC) data); and DST execution unit #5 provides a fifth slice grouping, which includes the fifth encoded slices of each of the sets of encoded slices (e.g., second encoded data slices of error coding (EC) data). The de-grouping module de-groups the slice groupings (e.g., received slices100) using a de-grouping selector180controlled by a control signal190as shown in the example to produce a plurality of sets of encoded data slices (e.g., retrieved slices for a partition into sets of slices122). Each set corresponds to a data segment of the data partition. FIG.16is a schematic block diagram of an embodiment of a dispersed storage (DS) error decoding module182of an inbound distributed storage and task (DST) processing section. The DS error decoding module182includes an inverse per slice security processing module202, a de-slicing module204, an error decoding module206, an inverse segment security module208, a de-segmenting processing module210, and a control module186. In an example of operation, the inverse per slice security processing module202, when enabled by the control module186, unsecures each encoded data slice122based on slice de-security information received as control information190(e.g., the complement of the slice security information discussed with reference toFIG.6) received from the control module186. The slice de-security information includes data decompression, decryption, de-watermarking, integrity check (e.g., CRC verification, etc.), and/or any other type of digital security. For example, when the inverse per slice security processing module202is enabled, it verifies integrity information (e.g., a CRC value) of each encoded data slice122, it decrypts each verified encoded data slice, and decompresses each decrypted encoded data slice to produce sliced encoded data158. When the inverse per slice security processing module202is not enabled, it passes the encoded data slices122as the sliced encoded data158or is bypassed such that the retrieved encoded data slices122are provided as the sliced encoded data158. The de-slicing module204de-slices the sliced encoded data158into encoded data segments156in accordance with a pillar width of the error correction encoding parameters received as control information190from the control module186. For example, if the pillar width is five, the de-slicing module204de-slices a set of five encoded data slices into an encoded data segment156. The error decoding module206decodes the encoded data segments156in accordance with error correction decoding parameters received as control information190from the control module186to produce secure data segments154. 
The error correction decoding parameters include identifying an error correction encoding scheme (e.g., forward error correction algorithm, a Reed-Solomon based algorithm, an information dispersal algorithm, etc.), a pillar width, a decode threshold, a read threshold, a write threshold, etc. For example, the error correction decoding parameters identify a specific error correction encoding scheme, specify a pillar width of five, and specify a decode threshold of three. The inverse segment security processing module208, when enabled by the control module186, unsecures the secured data segments154based on segment security information received as control information190from the control module186. The segment security information includes data decompression, decryption, de-watermarking, integrity check (e.g., CRC, etc.) verification, and/or any other type of digital security. For example, when the inverse segment security processing module208is enabled, it verifies integrity information (e.g., a CRC value) of each secure data segment154, it decrypts each verified secured data segment, and decompresses each decrypted secure data segment to produce a data segment152. When the inverse segment security processing module208is not enabled, it passes the decoded data segment154as the data segment152or is bypassed. The de-segment processing module210receives the data segments152and receives de-segmenting information as control information190from the control module186. The de-segmenting information indicates how the de-segment processing module210is to de-segment the data segments152into a data partition120. For example, the de-segmenting information indicates how the rows and columns of data segments are to be rearranged to yield the data partition120. FIG.17is a diagram of an example of de-slicing and error decoding processing of a dispersed error decoding module. A de-slicing module204receives at least a decode threshold number of encoded data slices158for each data segment in accordance with control information190and provides encoded data156. In this example, a decode threshold is three. As such, each set of encoded data slices158is shown to have three encoded data slices per data segment. The de-slicing module204may receive three encoded data slices per data segment because an associated distributed storage and task (DST) client module requested retrieving only three encoded data slices per segment or selected three of the retrieved encoded data slices per data segment. As shown, which is based on the unity matrix encoding previously discussed with reference toFIG.8, an encoded data slice may be a data-based encoded data slice (e.g., DS1_d1&d2) or an error code based encoded data slice (e.g., ES3_1). An error decoding module206decodes the encoded data156of each data segment in accordance with the error correction decoding parameters of control information190to produce secured segments154. In this example, data segment 1 includes 3 rows with each row being treated as one word for encoding. As such, data segment 1 includes three words: word 1 including data blocks d1 and d2, word 2 including data blocks d16 and d17, and word 3 including data blocks d31 and d32. Each of data segments 2-7 includes three words where each word includes two data blocks. Data segment 8 includes three words where each word includes a single data block (e.g., d15, d30, and d45). FIG.18is a diagram of an example of de-segment processing of an inbound distributed storage and task (DST) processing. 
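Before the de-segmenting example of FIG. 18, the de-slicing behavior of FIGS. 16-17 can be sketched. The data structures and the choice of the lowest-numbered pillars below are illustrative assumptions; any decode threshold number of slices per segment suffices.

    def de_slice(received_slices, decode_threshold=3):
        """Group received slices by segment and keep a decode-threshold subset
        per segment (pillar width 5, threshold 3 in the FIG. 17 example).
        received_slices: iterable of (segment_id, pillar_id, payload)."""
        by_segment = {}
        for seg_id, pillar_id, payload in received_slices:
            by_segment.setdefault(seg_id, {})[pillar_id] = payload
        segments = {}
        for seg_id, pillars in by_segment.items():
            if len(pillars) < decode_threshold:
                raise ValueError(f"segment {seg_id}: not enough slices to decode")
            # Any decode_threshold slices suffice; take the lowest-numbered pillars here.
            chosen = dict(sorted(pillars.items())[:decode_threshold])
            segments[seg_id] = chosen   # handed to the error decoding module next
        return segments

    # Toy usage: only 3 of the 5 pillars for segment 1 were retrieved.
    received = [(1, p, f"slice_{p}") for p in (1, 2, 4)]
    print(de_slice(received))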
In this example, a de-segment processing module210receives data segments152(e.g., 1-8) and rearranges the data blocks of the data segments into rows and columns in accordance with de-segmenting information of control information190to produce a data partition120. Note that the number of rows is based on the decode threshold (e.g., 3 in this specific example) and the number of columns is based on the number and size of the data blocks. The de-segmenting module210converts the rows and columns of data blocks into the data partition120. Note that each data block may be of the same size as other data blocks or of a different size. In addition, the size of each data block may be a few bytes to megabytes of data. FIG.19is a diagram of an example of converting slice groups into data92within an inbound distributed storage and task (DST) processing section. As shown, the data92is reconstructed from a plurality of data partitions (1-x, where x is an integer greater than 4). Each data partition (or chunk set of data) is decoded and re-grouped using a de-grouping and decoding function212and a de-partition function214from slice groupings as previously discussed. For a given data partition, the slice groupings (e.g., at least a decode threshold per data segment of encoded data slices) are received from DST execution units. From data partition to data partition, the ordering of the slice groupings received from the DST execution units may vary as discussed with reference toFIG.10. FIG.20is a diagram of an example of a distributed storage and/or retrieval within the distributed computing system. The distributed computing system includes a plurality of distributed storage and/or task (DST) processing client modules34(one shown) coupled to a distributed storage and/or task processing network (DSTN) module, or multiple DSTN modules, via a network24. The DST client module34includes an outbound DST processing section80and an inbound DST processing section82. The DSTN module includes a plurality of DST execution units. Each DST execution unit includes a controller86, memory88, one or more distributed task (DT) execution modules90, and a DST client module34. In an example of data storage, the DST client module34has data92that it desires to store in the DSTN module. The data92may be a file (e.g., video, audio, text, graphics, etc.), a data object, a data block, an update to a file, an update to a data block, etc. In this instance, the outbound DST processing module80converts the data92into encoded data slices216as will be further described with reference toFIGS.21-23. The outbound DST processing module80sends, via the network24, the encoded data slices216to the DST execution units for storage as further described with reference toFIG.24. In an example of data retrieval, the DST client module34issues a retrieve request to the DST execution units for the desired data92. The retrieve request may address each DST execution unit storing encoded data slices of the desired data, address a decode threshold number of DST execution units, address a read threshold number of DST execution units, or address some other number of DST execution units. In response to the request, each addressed DST execution unit retrieves its encoded data slices100of the desired data and sends them to the inbound DST processing section82, via the network24. When, for each data segment, the inbound DST processing section82receives at least a decode threshold number of encoded data slices100, it converts the encoded data slices100into a data segment. 
The inbound DST processing section82aggregates the data segments to produce the retrieved data92. FIG.21is a schematic block diagram of an embodiment of an outbound distributed storage and/or task (DST) processing section80of a DST client module coupled to a distributed storage and task network (DSTN) module (e.g., a plurality of DST execution units) via a network24. The outbound DST processing section80includes a data partitioning module110, a dispersed storage (DS) error encoding module112, a grouping selector module114, a control module116, and a distributed task control module118. In an example of operation, the data partitioning module110is by-passed such that data92is provided directly to the DS error encoding module112. The control module116coordinates the by-passing of the data partitioning module110by outputting a bypass220message to the data partitioning module110. The DS error encoding module112receives the data92in a serial manner, a parallel manner, and/or a combination thereof. The DS error encoding module112DS error encodes the data in accordance with control information160from the control module116to produce encoded data slices218. The DS error encoding includes segmenting the data92into data segments, segment security processing (e.g., encryption, compression, watermarking, integrity check (e.g., CRC, etc.)), error encoding, slicing, and/or per slice security processing (e.g., encryption, compression, watermarking, integrity check (e.g., CRC, etc.)). The control information160indicates which steps of the DS error encoding are active for the data92and, for active steps, indicates the parameters for the step. For example, the control information160indicates that the error encoding is active and includes error encoding parameters (e.g., pillar width, decode threshold, write threshold, read threshold, type of error encoding, etc.). The grouping selector module114groups the encoded slices218of the data segments into pillars of slices216. The number of pillars corresponds to the pillar width of the DS error encoding parameters. In this example, the distributed task control module118facilitates the storage request. FIG.22is a schematic block diagram of an example of a dispersed storage (DS) error encoding module112for the example ofFIG.21. The DS error encoding module112includes a segment processing module142, a segment security processing module144, an error encoding module146, a slicing module148, and a per slice security processing module150. Each of these modules is coupled to a control module116to receive control information160therefrom. In an example of operation, the segment processing module142receives data92and receives segmenting information as control information160from the control module116. The segmenting information indicates how the segment processing module is to segment the data. For example, the segmenting information indicates the size of each data segment. The segment processing module142segments the data92into data segments152in accordance with the segmenting information. The segment security processing module144, when enabled by the control module116, secures the data segments152based on segment security information received as control information160from the control module116. The segment security information includes data compression, encryption, watermarking, integrity check (e.g., CRC, etc.), and/or any other type of digital security. 
For example, when the segment security processing module144is enabled, it compresses a data segment152, encrypts the compressed data segment, and generates a CRC value for the encrypted data segment to produce a secure data segment. When the segment security processing module144is not enabled, it passes the data segments152to the error encoding module146or is bypassed such that the data segments152are provided to the error encoding module146. The error encoding module146encodes the secure data segments in accordance with error correction encoding parameters received as control information160from the control module116. The error correction encoding parameters include identifying an error correction encoding scheme (e.g., forward error correction algorithm, a Reed-Solomon based algorithm, an information dispersal algorithm, etc.), a pillar width, a decode threshold, a read threshold, a write threshold, etc. For example, the error correction encoding parameters identify a specific error correction encoding scheme, specify a pillar width of five, and specify a decode threshold of three. From these parameters, the error encoding module146encodes a data segment to produce an encoded data segment. The slicing module148slices the encoded data segment in accordance with a pillar width of the error correction encoding parameters. For example, if the pillar width is five, the slicing module slices an encoded data segment into a set of five encoded data slices. As such, for a plurality of data segments, the slicing module148outputs a plurality of sets of encoded data slices as shown within encoding and slicing function222as described. The per slice security processing module150, when enabled by the control module116, secures each encoded data slice based on slice security information received as control information160from the control module116. The slice security information includes data compression, encryption, watermarking, integrity check (e.g., CRC, etc.), and/or any other type of digital security. For example, when the per slice security processing module150is enabled, it may compress an encoded data slice, encrypt the compressed encoded data slice, and generate a CRC value for the encrypted encoded data slice to produce a secure encoded data slice. When the per slice security processing module150is not enabled, it passes the encoded data slices or is bypassed such that the encoded data slices218are the output of the DS error encoding module112. FIG.23is a diagram of an example of converting data92into pillar slice groups utilizing encoding, slicing and pillar grouping function224for storage in memory of a distributed storage and task network (DSTN) module. As previously discussed, the data92is encoded and sliced into a plurality of sets of encoded data slices; one set per data segment. The grouping selector module organizes the sets of encoded data slices into pillars of data slices. In this example, the DS error encoding parameters include a pillar width of 5 and a decode threshold of 3. As such, for each data segment, 5 encoded data slices are created. The grouping selector module takes the first encoded data slice of each of the sets and forms a first pillar, which may be sent to the first DST execution unit. Similarly, the grouping selector module creates the second pillar from the second slices of the sets; the third pillar from the third slices of the sets; the fourth pillar from the fourth slices of the sets; and the fifth pillar from the fifth slices of the sets. 
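The pillar grouping of FIG. 23 can be sketched directly. The slice labels below are placeholders; the point is simply that pillar i collects the i-th slice of every set of encoded data slices.

    def group_into_pillars(sets_of_slices):
        """Form pillar groups from per-segment slice sets: pillar i holds the
        i-th slice of every set (pillar width 5 in the FIG. 23 example)."""
        pillar_width = len(sets_of_slices[0])
        return [[s[i] for s in sets_of_slices] for i in range(pillar_width)]

    # Three data segments, five slices each (labels only).
    sets_of_slices = [[f"seg{seg}_slice{p}" for p in range(1, 6)] for seg in range(1, 4)]
    pillars = group_into_pillars(sets_of_slices)
    # pillars[0] == ['seg1_slice1', 'seg2_slice1', 'seg3_slice1'] -> sent to the first DST execution unit.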
FIG.24is a schematic block diagram of an embodiment of a distributed storage and/or task (DST) execution unit that includes an interface169, a controller86, memory88, one or more distributed task (DT) execution modules90, and a DST client module34. A computing core26may be utilized to implement the one or more DT execution modules90and the DST client module34. The memory88is of sufficient size to store a significant number of encoded data slices (e.g., thousands of slices to hundreds-of-millions of slices) and may include one or more hard drives and/or one or more solid-state memory devices (e.g., flash memory, DRAM, etc.). In an example of storing a pillar of slices216, the DST execution unit receives, via interface169, a pillar of slices216(e.g., pillar #1 slices). The memory88stores the encoded data slices216of the pillar of slices in accordance with memory control information174it receives from the controller86. The controller86(e.g., a processing module, a CPU, etc.) generates the memory control information174based on distributed storage information (e.g., user information (e.g., user ID, distributed storage permissions, data access permission, etc.), vault information (e.g., virtual memory assigned to user, user group, etc.), etc.). Similarly, when retrieving slices, the DST execution unit receives, via interface169, a slice retrieval request. The memory88retrieves the slice in accordance with memory control information174it receives from the controller86. The memory88outputs the slice100, via the interface169, to a requesting entity. FIG.25is a schematic block diagram of an example of operation of an inbound distributed storage and/or task (DST) processing section82for retrieving dispersed error encoded data92. The inbound DST processing section82includes a de-grouping module180, a dispersed storage (DS) error decoding module182, a data de-partitioning module184, a control module186, and a distributed task control module188. Note that the control module186and/or the distributed task control module188may be separate modules from corresponding ones of an outbound DST processing section or may be the same modules. In an example of operation, the inbound DST processing section82is retrieving stored data92from the DST execution units (i.e., the DSTN module). In this example, the DST execution units output encoded data slices corresponding to data retrieval requests from the distributed task control module188. The de-grouping module180receives pillars of slices100and de-groups them in accordance with control information190from the control module186to produce sets of encoded data slices218. The DS error decoding module182decodes, in accordance with the DS error encoding parameters received as control information190from the control module186, each set of encoded data slices218to produce data segments, which are aggregated into retrieved data92. The data de-partitioning module184is by-passed in this operational mode via a bypass signal226of control information190from the control module186. FIG.26is a schematic block diagram of an embodiment of a dispersed storage (DS) error decoding module182of an inbound distributed storage and task (DST) processing section. The DS error decoding module182includes an inverse per slice security processing module202, a de-slicing module204, an error decoding module206, an inverse segment security module208, and a de-segmenting processing module210. 
The dispersed error decoding module182is operable to de-slice and decode encoded slices per data segment218utilizing a de-slicing and decoding function228to produce a plurality of data segments that are de-segmented utilizing a de-segment function230to recover data92. In an example of operation, the inverse per slice security processing module202, when enabled by the control module186via control information190, unsecures each encoded data slice218based on slice de-security information (e.g., the compliment of the slice security information discussed with reference toFIG.6) received as control information190from the control module186. The slice de-security information includes data decompression, decryption, de-watermarking, integrity check (e.g., CRC verification, etc.), and/or any other type of digital security. For example, when the inverse per slice security processing module202is enabled, it verifies integrity information (e.g., a CRC value) of each encoded data slice218, it decrypts each verified encoded data slice, and decompresses each decrypted encoded data slice to produce slice encoded data. When the inverse per slice security processing module202is not enabled, it passes the encoded data slices218as the sliced encoded data or is bypassed such that the retrieved encoded data slices218are provided as the sliced encoded data. The de-slicing module204de-slices the sliced encoded data into encoded data segments in accordance with a pillar width of the error correction encoding parameters received as control information190from a control module186. For example, if the pillar width is five, the de-slicing module de-slices a set of five encoded data slices into an encoded data segment. Alternatively, the encoded data segment may include just three encoded data slices (e.g., when the decode threshold is 3). The error decoding module206decodes the encoded data segments in accordance with error correction decoding parameters received as control information190from the control module186to produce secure data segments. The error correction decoding parameters include identifying an error correction encoding scheme (e.g., forward error correction algorithm, a Reed-Solomon based algorithm, an information dispersal algorithm, etc.), a pillar width, a decode threshold, a read threshold, a write threshold, etc. For example, the error correction decoding parameters identify a specific error correction encoding scheme, specify a pillar width of five, and specify a decode threshold of three. The inverse segment security processing module208, when enabled by the control module186, unsecures the secured data segments based on segment security information received as control information190from the control module186. The segment security information includes data decompression, decryption, de-watermarking, integrity check (e.g., CRC, etc.) verification, and/or any other type of digital security. For example, when the inverse segment security processing module is enabled, it verifies integrity information (e.g., a CRC value) of each secure data segment, it decrypts each verified secured data segment, and decompresses each decrypted secure data segment to produce a data segment152. When the inverse segment security processing module208is not enabled, it passes the decoded data segment152as the data segment or is bypassed. The de-segmenting processing module210aggregates the data segments152into the data92in accordance with control information190from the control module186. 
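The unsecure ordering just described for the inverse per slice security processing (verify the integrity value, then decrypt, then decompress) can be illustrated with standard-library primitives. The sketch below is only a toy: the XOR "cipher" stands in for whatever encryption the slice security information actually selects, and the CRC placement is an assumption.

```python
import zlib

# Toy sketch of the per slice security round trip: compress, encrypt, append a
# CRC on the outbound side; verify the CRC, decrypt, decompress on the inbound
# side. The XOR cipher is only a stand-in for a real encryption function.

def _xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def secure_slice(raw_slice: bytes, key: bytes) -> bytes:
    encrypted = _xor(zlib.compress(raw_slice), key)               # compress, then encrypt
    return encrypted + zlib.crc32(encrypted).to_bytes(4, "big")   # append integrity value

def unsecure_slice(secured: bytes, key: bytes) -> bytes:
    payload, crc = secured[:-4], int.from_bytes(secured[-4:], "big")
    if zlib.crc32(payload) != crc:                                # verify integrity first
        raise ValueError("slice integrity check failed")
    return zlib.decompress(_xor(payload, key))                    # decrypt, then decompress

assert unsecure_slice(secure_slice(b"encoded data slice", b"slc"), b"slc") == b"encoded data slice"
```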
FIG.27is a schematic block diagram of an example of a distributed storage and task processing network (DSTN) module that includes a plurality of distributed storage and task (DST) execution units (#1 through #n, where, for example, n is an integer greater than or equal to three). Each of the DST execution units includes a DST client module34, a controller86, one or more DT (distributed task) execution modules90, and memory88. In this example, the DSTN module stores, in the memory of the DST execution units, a plurality of DS (dispersed storage) encoded data (e.g., 1 through n, where n is an integer greater than or equal to two) and stores a plurality of DS encoded task codes (e.g., 1 through k, where k is an integer greater than or equal to two). The DS encoded data may be encoded in accordance with one or more examples described with reference toFIGS.3-19(e.g., organized in slice groupings) or encoded in accordance with one or more examples described with reference toFIGS.20-26(e.g., organized in pillar groups). The data that is encoded into the DS encoded data may be of any size and/or of any content. For example, the data may be one or more digital books, a copy of a company's emails, a large-scale Internet search, a video security file, one or more entertainment video files (e.g., television programs, movies, etc.), data files, and/or any other large amount of data (e.g., greater than a few Terabytes). The tasks that are encoded into the DS encoded task code may be a simple function (e.g., a mathematical function, a logic function, an identify function, a find function, a search engine function, a replace function, etc.), a complex function (e.g., compression, human and/or computer language translation, text-to-voice conversion, voice-to-text conversion, etc.), multiple simple and/or complex functions, one or more algorithms, one or more applications, etc. The tasks may be encoded into the DS encoded task code in accordance with one or more examples described with reference toFIGS.3-19(e.g., organized in slice groupings) or encoded in accordance with one or more examples described with reference toFIGS.20-26(e.g., organized in pillar groups). In an example of operation, a DST client module of a user device or of a DST processing unit issues a DST request to the DSTN module. The DST request may include a request to retrieve stored data, or a portion thereof, may include a request to store data that is included with the DST request, may include a request to perform one or more tasks on stored data, may include a request to perform one or more tasks on data included with the DST request, etc. In the cases where the DST request includes a request to store data or to retrieve data, the client module and/or the DSTN module processes the request as previously discussed with reference to one or more ofFIGS.3-19(e.g., slice groupings) and/or20-26(e.g., pillar groupings). In the case where the DST request includes a request to perform one or more tasks on data included with the DST request, the DST client module and/or the DSTN module process the DST request as previously discussed with reference to one or more ofFIGS.3-19. In the case where the DST request includes a request to perform one or more tasks on stored data, the DST client module and/or the DSTN module processes the DST request as will be described with reference to one or more ofFIGS.28-39. In general, the DST client module identifies data and one or more tasks for the DSTN module to execute upon the identified data. 
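The four kinds of DST request listed above can be pictured as a small data structure. The field names below are illustrative assumptions, not a defined message format.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class DstRequestType(Enum):
    RETRIEVE_STORED_DATA = auto()
    STORE_INCLUDED_DATA = auto()
    TASK_ON_STORED_DATA = auto()
    TASK_ON_INCLUDED_DATA = auto()

@dataclass
class DstRequest:
    request_type: DstRequestType
    data_id: Optional[str] = None           # identifies stored DS encoded data
    task_ids: tuple = ()                    # identifies stored DS encoded task code(s)
    included_data: Optional[bytes] = None   # carried with store/task-on-included-data requests

request = DstRequest(DstRequestType.TASK_ON_STORED_DATA, data_id="data 2",
                     task_ids=("task 1", "task 2", "task 3"))
```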
The DST request may be for a one-time execution of the task or for an on-going execution of the task. As an example of the latter, as a company generates daily emails, the DST request may be to search new emails daily for inappropriate content and, if found, record the content, the email sender(s), the email recipient(s), and the email routing information, notify human resources of the identified email, etc. FIG.28is a schematic block diagram of an example of a distributed computing system performing tasks on stored data. In this example, two distributed storage and task (DST) client modules 1-2 are shown: the first may be associated with a user device and the second may be associated with a DST processing unit or a high priority user device (e.g., high priority clearance user, system administrator, etc.). Each DST client module includes a list of stored data234and a list of task codes236. The list of stored data234includes one or more entries of data identifying information, where each entry identifies data stored in the DSTN module22. The data identifying information (e.g., data ID) includes one or more of a data file name, a data file directory listing, DSTN addressing information of the data, a data object identifier, etc. The list of tasks236includes one or more entries of task code identifying information, where each entry identifies task codes stored in the DSTN module22. The task code identifying information (e.g., task ID) includes one or more of a task file name, a task file directory listing, DSTN addressing information of the task, another type of identifier to identify the task, etc. As shown, the list of data234and the list of tasks236are each smaller in number of entries for the first DST client module than the corresponding lists of the second DST client module. This may occur because the user device associated with the first DST client module has fewer privileges in the distributed computing system than the device associated with the second DST client module. Alternatively, this may occur because the user device associated with the first DST client module serves fewer users than the device associated with the second DST client module and is restricted by the distributed computing system accordingly. As yet another alternative, this may occur through no restraints by the distributed computing system; it may simply be that the operator of the user device associated with the first DST client module has selected fewer data and/or fewer tasks than the operator of the device associated with the second DST client module. In an example of operation, the first DST client module selects one or more data entries238and one or more tasks240from its respective lists (e.g., selected data ID and selected task ID). The first DST client module sends its selections to a task distribution module232. The task distribution module232may be within a stand-alone device of the distributed computing system, may be within the user device that contains the first DST client module, or may be within the DSTN module22. Regardless of the task distribution module's location, it generates DST allocation information242from the selected task ID240and the selected data ID238. The DST allocation information242includes data partitioning information, task execution information, and/or intermediate result information. The task distribution module232sends the DST allocation information242to the DSTN module22.
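As a simple illustration of a DST client module's view in FIG.28, its lists and the selection it forwards to the task distribution module might look as follows. The dictionary layout and function name are assumptions made only for this sketch.

```python
# Sketch of two DST client modules' lists and a validated selection.
client_module_1 = {
    "list_of_data": ["data 1", "data 2"],
    "list_of_tasks": ["task 1", "task 2"],
}
client_module_2 = {                                        # larger lists, e.g., more privileges
    "list_of_data": [f"data {n}" for n in range(1, 10)],
    "list_of_tasks": [f"task {k}" for k in range(1, 8)],
}

def select(client: dict, data_id: str, task_id: str) -> dict:
    """Validate a selection against the client's own lists before sending it on."""
    if data_id not in client["list_of_data"] or task_id not in client["list_of_tasks"]:
        raise PermissionError("selection is not in this DST client module's lists")
    return {"selected_data_id": data_id, "selected_task_id": task_id}

selection = select(client_module_1, "data 2", "task 1")   # sent to the task distribution module
```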
Note that one or more examples of the DST allocation information will be discussed with reference to one or more ofFIGS.29-39. The DSTN module22interprets the DST allocation information242to identify the stored DS encoded data (e.g., DS error encoded data 2) and to identify the stored DS error encoded task code (e.g., DS error encoded task code 1). In addition, the DSTN module22interprets the DST allocation information242to determine how the data is to be partitioned and how the task is to be partitioned. The DSTN module22also determines whether the selected DS error encoded data238needs to be converted from pillar grouping to slice grouping. If so, the DSTN module22converts the selected DS error encoded data into slice groupings and stores the slice grouping DS error encoded data by overwriting the pillar grouping DS error encoded data or by storing it in a different location in the memory of the DSTN module22(i.e., does not overwrite the pillar grouping DS encoded data). The DSTN module22partitions the data and the task as indicated in the DST allocation information242and sends the portions to selected DST execution units of the DSTN module22. Each of the selected DST execution units performs its partial task(s) on its slice groupings to produce partial results. The DSTN module22collects the partial results from the selected DST execution units and provides them, as result information244, to the task distribution module. The result information244may be the collected partial results, one or more final results as produced by the DSTN module22from processing the partial results in accordance with the DST allocation information242, or one or more intermediate results as produced by the DSTN module22from processing the partial results in accordance with the DST allocation information242. The task distribution module232receives the result information244and provides one or more final results104therefrom to the first DST client module. The final result(s)104may be result information244or a result(s) of the task distribution module's processing of the result information244. In concurrence with processing the selected task of the first DST client module, the distributed computing system may process the selected task(s) of the second DST client module on the selected data(s) of the second DST client module. Alternatively, the distributed computing system may process the second DST client module's request subsequent to, or preceding, that of the first DST client module. Regardless of the ordering and/or parallel processing of the DST client module requests, the second DST client module provides its selected data238and selected task240to a task distribution module232. If the task distribution module232is a separate device of the distributed computing system or within the DSTN module, the task distribution modules232coupled to the first and second DST client modules may be the same module. The task distribution module232processes the request of the second DST client module in a similar manner as it processed the request of the first DST client module. FIG.29is a schematic block diagram of an embodiment of a task distribution module232facilitating the example ofFIG.28. The task distribution module232includes a plurality of tables it uses to generate distributed storage and task (DST) allocation information242for selected data and selected tasks received from a DST client module. 
The tables include data storage information248, task storage information250, distributed task (DT) execution module information252, and task⇔sub-task mapping information246. The data storage information table248includes a data identification (ID) field260, a data size field262, an addressing information field264, distributed storage (DS) information266, and may further include other information regarding the data, how it is stored, and/or how it can be processed. For example, DS encoded data #1 has a data ID of 1, a data size of AA (e.g., a byte size of a few Terabytes or more), addressing information of Addr_1_AA, and DS parameters of 3/5; SEG_1; and SLC_1. In this example, the addressing information may be a virtual address corresponding to the virtual address of the first storage word (e.g., one or more bytes) of the data and information on how to calculate the other addresses, may be a range of virtual addresses for the storage words of the data, physical addresses of the first storage word or the storage words of the data, may be a list of slice names of the encoded data slices of the data, etc. The DS parameters may include identity of an error encoding scheme, decode threshold/pillar width (e.g., 3/5 for the first data entry), segment security information (e.g., SEG_1), per slice security information (e.g., SLC_1), and/or any other information regarding how the data was encoded into data slices. The task storage information table250includes a task identification (ID) field268, a task size field270, an addressing information field272, distributed storage (DS) information274, and may further include other information regarding the task, how it is stored, and/or how it can be used to process data. For example, DS encoded task #2 has a task ID of 2, a task size of XY, addressing information of Addr_2_XY, and DS parameters of 3/5; SEG 2; and SLC_2. In this example, the addressing information may be a virtual address corresponding to the virtual address of the first storage word (e.g., one or more bytes) of the task and information on how to calculate the other addresses, may be a range of virtual addresses for the storage words of the task, physical addresses of the first storage word or the storage words of the task, may be a list of slices names of the encoded slices of the task code, etc. The DS parameters may include identity of an error encoding scheme, decode threshold/pillar width (e.g., 3/5 for the first data entry), segment security information (e.g., SEG_2), per slice security information (e.g., SLC_2), and/or any other information regarding how the task was encoded into encoded task slices. Note that the segment and/or the per-slice security information include a type of encryption (if enabled), a type of compression (if enabled), watermarking information (if enabled), and/or an integrity check scheme (if enabled). The task ⇔sub-task mapping information table246includes a task field256and a sub-task field258. The task field256identifies a task stored in the memory of a distributed storage and task network (DSTN) module and the corresponding sub-task fields258indicates whether the task includes sub-tasks and, if so, how many and if any of the sub-tasks are ordered. In this example, the task ⇔sub-task mapping information table246includes an entry for each task stored in memory of the DSTN module (e.g., task 1 through task k). 
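As an illustration, the two storage tables just described might be held as simple keyed records. The layout below is an assumption that reuses the example entries for DS encoded data #1 and DS encoded task #2.

```python
# Dictionary rendering of the data storage and task storage information tables.
data_storage_info = {
    1: {"data_size": "AA", "addressing": "Addr_1_AA",
        "ds_params": {"decode_threshold": 3, "pillar_width": 5,
                      "segment_security": "SEG_1", "slice_security": "SLC_1"}},
}

task_storage_info = {
    2: {"task_size": "XY", "addressing": "Addr_2_XY",
        "ds_params": {"decode_threshold": 3, "pillar_width": 5,
                      "segment_security": "SEG_2", "slice_security": "SLC_2"}},
}
```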
In particular, this example indicates that task 1 includes 7 sub-tasks; task 2 does not include sub-tasks, and task k includes r number of sub-tasks (where r is an integer greater than or equal to two). The DT execution module table252includes a DST execution unit ID field276, a DT execution module ID field278, and a DT execution module capabilities field280. The DST execution unit ID field276includes the identity of DST units in the DSTN module. The DT execution module ID field278includes the identity of each DT execution unit in each DST unit. For example, DST unit 1 includes three DT executions modules (e.g., 1_1, 1_2, and 1_3). The DT execution capabilities field280includes identity of the capabilities of the corresponding DT execution unit. For example, DT execution module 1_1 includes capabilities X, where X includes one or more of MIPS capabilities, processing resources (e.g., quantity and capability of microprocessors, CPUs, digital signal processors, co-processor, microcontrollers, arithmetic logic circuitry, and/or any other analog and/or digital processing circuitry), availability of the processing resources, memory information (e.g., type, size, availability, etc.), and/or any information germane to executing one or more tasks. From these tables, the task distribution module232generates the DST allocation information242to indicate where the data is stored, how to partition the data, where the task is stored, how to partition the task, which DT execution units should perform which partial task on which data partitions, where and how intermediate results are to be stored, etc. If multiple tasks are being performed on the same data or different data, the task distribution module factors such information into its generation of the DST allocation information. FIG.30is a diagram of a specific example of a distributed computing system performing tasks on stored data as a task flow318. In this example, selected data92is data 2 and selected tasks are tasks 1, 2, and 3. Task 1 corresponds to analyzing translation of data from one language to another (e.g., human language or computer language); task 2 corresponds to finding specific words and/or phrases in the data; and task 3 corresponds to finding specific translated words and/or phrases in translated data. In this example, task 1 includes 7 sub-tasks: task 1_1—identify non-words (non-ordered); task 1_2—identify unique words (non-ordered); task 1_3—translate (non-ordered); task 1_4—translate back (ordered after task 1_3); task 1_5—compare to ID errors (ordered after task 1-4); task 1_6—determine non-word translation errors (ordered after task 1_5 and 1_1); and task 1_7—determine correct translations (ordered after 1_5 and 1_2). The sub-task further indicates whether they are an ordered task (i.e., are dependent on the outcome of another task) or non-order (i.e., are independent of the outcome of another task). Task 2 does not include sub-tasks and task 3 includes two sub-tasks: task 3_1 translate; and task 3_2 find specific word or phrase in translated data. In general, the three tasks collectively are selected to analyze data for translation accuracies, translation errors, translation anomalies, occurrence of specific words or phrases in the data, and occurrence of specific words or phrases on the translated data. 
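The sub-task ordering of task 1 can be expressed as a dependency map, where an empty dependency list marks a non-ordered (independent) sub-task. The structure below is an illustrative assumption, not the format of the task ⇔ sub-task mapping information table.

```python
# Task 1 sub-task ordering from FIG. 30 as a dependency map.
task_1_subtasks = {
    "1_1": {"does": "identify non-words",                    "after": []},
    "1_2": {"does": "identify unique words",                 "after": []},
    "1_3": {"does": "translate",                             "after": []},
    "1_4": {"does": "translate back",                        "after": ["1_3"]},
    "1_5": {"does": "compare to ID errors",                  "after": ["1_4"]},
    "1_6": {"does": "determine non-word translation errors", "after": ["1_5", "1_1"]},
    "1_7": {"does": "determine correct translations",        "after": ["1_5", "1_2"]},
}

def runnable_now(done: set) -> list:
    """Sub-tasks whose ordering constraints are satisfied (candidates for parallel execution)."""
    return [s for s, info in task_1_subtasks.items()
            if s not in done and all(d in done for d in info["after"])]

assert set(runnable_now(set())) == {"1_1", "1_2", "1_3"}   # the non-ordered sub-tasks run first
```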
Graphically, the data92is translated306into translated data282; is analyzed for specific words and/or phrases300to produce a list of specific words and/or phrases286; is analyzed for non-words302(e.g., not in a reference dictionary) to produce a list of non-words290; and is analyzed for unique words316included in the data92(i.e., how many different words are included in the data) to produce a list of unique words298. Each of these tasks is independent of each other and can therefore be processed in parallel if desired. The translated data282is analyzed (e.g., sub-task 3_2) for specific translated words and/or phrases304to produce a list of specific translated words and/or phrases288. The translated data282is translated back308(e.g., sub-task 1_4) into the language of the original data to produce re-translated data284. These two tasks are dependent on the translate task (e.g., task 1_3) and thus must be ordered after the translation task, which may be in a pipelined ordering or a serial ordering. The re-translated data284is then compared310with the original data92to find words and/or phrases that did not translate (one way and/or the other) properly to produce a list of incorrectly translated words294. As such, the comparing task (e.g., sub-task 1_5)310is ordered after the translation306and re-translation tasks308(e.g., sub-tasks 1_3 and 1_4). The list of words incorrectly translated294is compared312to the list of non-words290to identify words that were not properly translated because the words are non-words to produce a list of errors due to non-words292. In addition, the list of words incorrectly translated294is compared314to the list of unique words298to identify unique words that were properly translated to produce a list of correctly translated words296. The comparison may also identify unique words that were not properly translated to produce a list of unique words that were not properly translated. Note that each list of words (e.g., specific words and/or phrases, non-words, unique words, translated words and/or phrases, etc.,) may include the word and/or phrase, how many times it is used, where in the data it is used, and/or any other information requested regarding a word and/or phrase. FIG.31is a schematic block diagram of an example of a distributed storage and task processing network (DSTN) module storing data and task codes for the example ofFIG.30. As shown, DS encoded data 2 is stored as encoded data slices across the memory (e.g., stored in memories88) of DST execution units 1-5; the DS encoded task code 1 (of task 1) and DS encoded task 3 are stored as encoded task slices across the memory of DST execution units 1-5; and DS encoded task code 2 (of task 2) is stored as encoded task slices across the memory of DST execution units 3-7. As indicated in the data storage information table and the task storage information table ofFIG.29, the respective data/task has DS parameters of 3/5 for their decode threshold/pillar width; hence spanning the memory of five DST execution units. FIG.32is a diagram of an example of distributed storage and task (DST) allocation information242for the example ofFIG.30. The DST allocation information242includes data partitioning information320, task execution information322, and intermediate result information324. 
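A hedged sketch of how these three parts of the DST allocation information might be represented follows. The dictionary layout is an assumption, and the example values anticipate the FIG.32 entries detailed in the following paragraphs.

```python
# Illustrative representation of the DST allocation information (not the table format).
dst_allocation_info = {
    "data_partitioning": {
        "data_id": "data 2",
        "partitions": "2_1 through 2_z",
        "convert_pillar_to_slice_grouping": True,
    },
    "task_execution": [
        {"task": "1_1", "ordered_after": [],      "data_partitions": "2_1 through 2_z",
         "dt_execution_modules": ["1_1", "2_1", "3_1", "4_1", "5_1"]},
        {"task": "1_4", "ordered_after": ["1_3"], "data_partitions": "R1-3_1 through R1-3_z",
         "dt_execution_modules": ["1_1", "2_1", "3_1", "4_1", "5_1",
                                  "1_2", "2_2", "6_1", "7_1", "7_2"]},
    ],
    "intermediate_results": [
        {"name": "R1-1", "responsible_dst_unit": 1,
         "scratch_pad": "DST execution unit 1 memory",
         "result_storage": "DST execution units 1-5"},
    ],
}
```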
The data partitioning information320includes the data identifier (ID), the number of partitions to split the data into, address information for each data partition, and whether the DS encoded data has to be transformed from pillar grouping to slice grouping. The task execution information322includes tabular information having a task identification field326, a task ordering field328, a data partition field ID330, and a set of DT execution modules332to use for the distributed task processing per data partition. The intermediate result information324includes tabular information having a name ID field334, an ID of the DST execution unit assigned to process the corresponding intermediate result336, a scratch pad storage field338, and an intermediate result storage field340. Continuing with the example ofFIG.30, where tasks 1-3 are to be distributedly performed on data 2, the data partitioning information includes the ID of data 2. In addition, the task distribution module determines whether the DS encoded data 2 is in the proper format for distributed computing (e.g., was stored as slice groupings). If not, the task distribution module indicates that the DS encoded data 2 format needs to be changed from the pillar grouping format to the slice grouping format, which will be done by the DSTN module. In addition, the task distribution module determines the number of partitions to divide the data into (e.g., 2_1 through 2_z) and addressing information for each partition. The task distribution module generates an entry in the task execution information section for each sub-task to be performed. For example, task 1_1 (e.g., identify non-words on the data) has no task ordering (i.e., is independent of the results of other sub-tasks), is to be performed on data partitions 2_1 through 2_z by DT execution modules 1_1, 2_1, 3_1, 4_1, and 5_1. For instance, DT execution modules 1_1, 2_1, 3_1, 4_1, and 5_1 search for non-words in data partitions 2_1 through 2_z to produce task 1_1 intermediate results (R1-1, which is a list of non-words). Task 1_2 (e.g., identify unique words) has similar task execution information as task 1_1 to produce task 1_2 intermediate results (R1-2, which is the list of unique words). Task 1_3 (e.g., translate) includes task execution information as being non-ordered (i.e., is independent), having DT execution modules 1_1, 2_1, 3_1, 4_1, and 5_1 translate data partitions 2_1 through 2_4 and having DT execution modules 1_2, 2_2, 3_2, 4_2, and 5_2 translate data partitions 2_5 through 2_z to produce task 1_3 intermediate results (R1-3, which is the translated data). In this example, the data partitions are grouped, where different sets of DT execution modules perform a distributed sub-task (or task) on each data partition group, which allows for further parallel processing. Task 1_4 (e.g., translate back) is ordered after task 1_3 and is to be executed on task 1_3's intermediate result (e.g., R1-3_1) (e.g., the translated data). DT execution modules 1_1, 2_1, 3_1, 4_1, and 5_1 are allocated to translate back task 1_3 intermediate result partitions R1-3_1 through R1-3_4 and DT execution modules 1_2, 2_2, 6_1, 7_1, and 7_2 are allocated to translate back task 1_3 intermediate result partitions R1-3_5 through R1-3_zto produce task 1-4 intermediate results (R1-4, which is the translated back data). Task 1_5 (e.g., compare data and translated data to identify translation errors) is ordered after task 1_4 and is to be executed on task 1_4's intermediate results (R4-1) and on the data. 
DT execution modules 1_1, 2_1, 3_1, 4_1, and 5_1 are allocated to compare the data partitions (2_1 through 2_z) with partitions of task 1_4 intermediate results (R1-4_1 through R1-4_z) to produce task 1_5 intermediate results (R1-5, which is the list of words translated incorrectly). Task 1_6 (e.g., determine non-word translation errors) is ordered after tasks 1_1 and 1_5 and is to be executed on tasks 1_1's and 1_5's intermediate results (R1-1 and R1-5). DT execution modules 1_1, 2_1, 3_1, 4_1, and 5_1 are allocated to compare the partitions of task 1_1 intermediate results (R1-1_1 through R1-1_z) with partitions of task 1_5 intermediate results (R1-5_1 through R1-5_z) to produce task 1_6 intermediate results (R1-6, which is the list of translation errors due to non-words). Task 1_7 (e.g., determine words correctly translated) is ordered after tasks 1_2 and 1_5 and is to be executed on tasks 1_2's and 1_5's intermediate results (R1-2 and R1-5). DT execution modules 1_2, 2_2, 3_2, 4_2, and 5_2 are allocated to compare the partitions of task 1_2 intermediate results (R1-2_1 through R1-2_z) with partitions of task 1_5 intermediate results (R1-5_1 through R1-5_z) to produce task 1_7 intermediate results (R1-7, which is the list of correctly translated words). Task 2 (e.g., find specific words and/or phrases) has no task ordering (i.e., is independent of the results of other sub-tasks), and is to be performed on data partitions 2_1 through 2_z by DT execution modules 3_1, 4_1, 5_1, 6_1, and 7_1. For instance, DT execution modules 3_1, 4_1, 5_1, 6_1, and 7_1 search for specific words and/or phrases in data partitions 2_1 through 2_z to produce task 2 intermediate results (R2, which is a list of specific words and/or phrases). Task 3_2 (e.g., find specific translated words and/or phrases) is ordered after task 1_3 (e.g., translate) and is to be performed on partitions R1-3_1 through R1-3_z by DT execution modules 1_2, 2_2, 3_2, 4_2, and 5_2. For instance, DT execution modules 1_2, 2_2, 3_2, 4_2, and 5_2 search for specific translated words and/or phrases in the partitions of the translated data (R1-3_1 through R1-3_z) to produce task 3_2 intermediate results (R3-2, which is a list of specific translated words and/or phrases). For each task, the intermediate result information indicates which DST unit is responsible for overseeing execution of the task and, if needed, processing the partial results generated by the set of allocated DT execution units. In addition, the intermediate result information indicates a scratch pad memory for the task and where the corresponding intermediate results are to be stored. For example, for intermediate result R1-1 (the intermediate result of task 1_1), DST unit 1 is responsible for overseeing execution of task 1_1 and coordinates storage of the intermediate result as encoded intermediate result slices stored in memory of DST execution units 1-5. In general, the scratch pad is for storing non-DS encoded intermediate results and the intermediate result storage is for storing DS encoded intermediate results. FIGS.33-38are schematic block diagrams of the distributed storage and task network (DSTN) module performing the example ofFIG.30. InFIG.33, the DSTN module accesses the data92and partitions it into a plurality of partitions 1-z in accordance with distributed storage and task network (DST) allocation information.
For each data partition, the DSTN identifies a set of its DT (distributed task) execution modules90to perform the task (e.g., identify non-words (i.e., not in a reference dictionary) within the data partition) in accordance with the DST allocation information. From data partition to data partition, the set of DT execution modules90may be the same, different, or a combination thereof (e.g., some data partitions use the same set while other data partitions use different sets). For the first data partition, the first set of DT execution modules (e.g., 1_1, 2_1, 3_1, 4_1, and 5_1 per the DST allocation information ofFIG.32) executes task 1_1 to produce a first partial result102of non-words found in the first data partition. The second set of DT execution modules (e.g., 1_1, 2_1, 3_1, 4_1, and 5_1 per the DST allocation information ofFIG.32) executes task 1_1 to produce a second partial result102of non-words found in the second data partition. The sets of DT execution modules (as per the DST allocation information) perform task 1_1 on the data partitions until the “z” set of DT execution modules performs task 1_1 on the “zth” data partition to produce a “zth” partial result102of non-words found in the “zth” data partition. As indicated in the DST allocation information ofFIG.32, DST execution unit 1 is assigned to process the first through “zth” partial results to produce the first intermediate result (R1-1), which is a list of non-words found in the data. For instance, each set of DT execution modules90stores its respective partial result in the scratchpad memory of DST execution unit 1 (which is identified in the DST allocation or may be determined by DST execution unit 1). A processing module of DST execution 1 is engaged to aggregate the first through “zth” partial results to produce the first intermediate result (e.g., R1_1). The processing module stores the first intermediate result as non-DS error encoded data in the scratchpad memory or in another section of memory of DST execution unit 1. DST execution unit 1 engages its DST client module to slice grouping based DS error encode the first intermediate result (e.g., the list of non-words). To begin the encoding, the DST client module determines whether the list of non-words is of a sufficient size to partition (e.g., greater than a Terra-Byte). If yes, it partitions the first intermediate result (R1-1) into a plurality of partitions (e.g., R1-1_1 through R1-1_m). If the first intermediate result is not of sufficient size to partition, it is not partitioned. For each partition of the first intermediate result, or for the first intermediate result, the DST client module uses the DS error encoding parameters of the data (e.g., DS parameters of data 2, which includes 3/5 decode threshold/pillar width ratio) to produce slice groupings. The slice groupings are stored in the intermediate result memory (e.g., allocated memory in the memories of DST execution units 1-5). InFIG.34, the DSTN module is performing task 1_2 (e.g., find unique words) on the data92. To begin, the DSTN module accesses the data92and partitions it into a plurality of partitions 1-z in accordance with the DST allocation information or it may use the data partitions of task 1_1 if the partitioning is the same. For each data partition, the DSTN identifies a set of its DT execution modules to perform task 1_2 in accordance with the DST allocation information. 
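The aggregate, size-check, partition, and slice-grouping-encode pattern just described for the first intermediate result recurs for every intermediate result in FIGS.33-38. A minimal sketch follows; the one-terabyte threshold, the names, and the encode placeholder are assumptions of this sketch.

```python
# Recurring intermediate-result handling: aggregate the partial results,
# partition only if the result exceeds a size threshold, then DS error encode
# each piece (placeholder) to produce slice groupings.
ONE_TERABYTE = 2**40

def handle_intermediate_result(partial_results: list, ds_encode,
                               max_partition_size: int = ONE_TERABYTE) -> list:
    aggregated = b"".join(partial_results)                       # e.g., the list of non-words
    if len(aggregated) > max_partition_size:
        partitions = [aggregated[i:i + max_partition_size]
                      for i in range(0, len(aggregated), max_partition_size)]
    else:
        partitions = [aggregated]                                # not of sufficient size: not partitioned
    return [ds_encode(p) for p in partitions]                    # slice groupings per partition

slice_groupings = handle_intermediate_result([b"partial 1", b"partial 2"], ds_encode=lambda p: p)
assert len(slice_groupings) == 1   # small result: a single, un-partitioned encoding
```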
From data partition to data partition, the set of DT execution modules may be the same, different, or a combination thereof. For the data partitions, the allocated set of DT execution modules executes task 1_2 to produce partial results (e.g., 1st through "zth") of unique words found in the data partitions. As indicated in the DST allocation information ofFIG.32, DST execution unit 1 is assigned to process the first through "zth" partial results102of task 1_2 to produce the second intermediate result (R1-2), which is a list of unique words found in the data92. The processing module of DST execution 1 is engaged to aggregate the first through "zth" partial results of unique words to produce the second intermediate result. The processing module stores the second intermediate result as non-DS error encoded data in the scratchpad memory or in another section of memory of DST execution unit 1. DST execution unit 1 engages its DST client module to slice grouping based DS error encode the second intermediate result (e.g., the list of unique words). To begin the encoding, the DST client module determines whether the list of unique words is of a sufficient size to partition (e.g., greater than a terabyte). If yes, it partitions the second intermediate result (R1-2) into a plurality of partitions (e.g., R1-2_1 through R1-2_m). If the second intermediate result is not of sufficient size to partition, it is not partitioned. For each partition of the second intermediate result, or for the second intermediate result, the DST client module uses the DS error encoding parameters of the data (e.g., DS parameters of data 2, which includes 3/5 decode threshold/pillar width ratio) to produce slice groupings. The slice groupings are stored in the intermediate result memory (e.g., allocated memory in the memories of DST execution units 1-5). InFIG.35, the DSTN module is performing task 1_3 (e.g., translate) on the data92. To begin, the DSTN module accesses the data92and partitions it into a plurality of partitions 1-z in accordance with the DST allocation information or it may use the data partitions of task 1_1 if the partitioning is the same. For each data partition, the DSTN identifies a set of its DT execution modules to perform task 1_3 in accordance with the DST allocation information (e.g., DT execution modules 1_1, 2_1, 3_1, 4_1, and 5_1 translate data partitions 2_1 through 2_4 and DT execution modules 1_2, 2_2, 3_2, 4_2, and 5_2 translate data partitions 2_5 through 2_z). For the data partitions, the allocated set of DT execution modules90executes task 1_3 to produce partial results102(e.g., 1st through "zth") of translated data. As indicated in the DST allocation information ofFIG.32, DST execution unit 2 is assigned to process the first through "zth" partial results of task 1_3 to produce the third intermediate result (R1-3), which is translated data. The processing module of DST execution 2 is engaged to aggregate the first through "zth" partial results of translated data to produce the third intermediate result. The processing module stores the third intermediate result as non-DS error encoded data in the scratchpad memory or in another section of memory of DST execution unit 2. DST execution unit 2 engages its DST client module to slice grouping based DS error encode the third intermediate result (e.g., translated data). To begin the encoding, the DST client module partitions the third intermediate result (R1-3) into a plurality of partitions (e.g., R1-3_1 through R1-3_y).
For each partition of the third intermediate result, the DST client module uses the DS error encoding parameters of the data (e.g., DS parameters of data 2, which includes 3/5 decode threshold/pillar width ratio) to produce slice groupings. The slice groupings are stored in the intermediate result memory (e.g., allocated memory in the memories of DST execution units 2-6 per the DST allocation information). As is further shown inFIG.35, the DSTN module is performing task 1_4 (e.g., retranslate) on the translated data of the third intermediate result. To begin, the DSTN module accesses the translated data (from the scratchpad memory or from the intermediate result memory and decodes it) and partitions it into a plurality of partitions in accordance with the DST allocation information. For each partition of the third intermediate result, the DSTN identifies a set of its DT execution modules90to perform task 1_4 in accordance with the DST allocation information (e.g., DT execution modules 1_1, 2_1, 3_1, 4_1, and 5_1 are allocated to translate back partitions R1-3_1 through R1-3_4 and DT execution modules 1_2, 2_2, 6_1, 7_1, and 7_2 are allocated to translate back partitions R1-3_5 through R1-3_z). For the partitions, the allocated set of DT execution modules executes task 1_4 to produce partial results102(e.g., 1stthrough “zth”) of re-translated data. As indicated in the DST allocation information ofFIG.32, DST execution unit 3 is assigned to process the first through “zth” partial results of task 1_4 to produce the fourth intermediate result (R1-4), which is retranslated data. The processing module of DST execution 3 is engaged to aggregate the first through “zth” partial results of retranslated data to produce the fourth intermediate result. The processing module stores the fourth intermediate result as non-DS error encoded data in the scratchpad memory or in another section of memory of DST execution unit 3. DST execution unit 3 engages its DST client module to slice grouping based DS error encode the fourth intermediate result (e.g., retranslated data). To begin the encoding, the DST client module partitions the fourth intermediate result (R1-4) into a plurality of partitions (e.g., R1-4_1 through R1-4_z). For each partition of the fourth intermediate result, the DST client module uses the DS error encoding parameters of the data (e.g., DS parameters of data 2, which includes 3/5 decode threshold/pillar width ratio) to produce slice groupings. The slice groupings are stored in the intermediate result memory (e.g., allocated memory in the memories of DST execution units 3-7 per the DST allocation information). InFIG.36, a distributed storage and task network (DSTN) module is performing task 1_5 (e.g., compare) on data92and retranslated data ofFIG.35. To begin, the DSTN module accesses the data92and partitions it into a plurality of partitions in accordance with the DST allocation information or it may use the data partitions of task 1_1 if the partitioning is the same. The DSTN module also accesses the retranslated data from the scratchpad memory, or from the intermediate result memory and decodes it, and partitions it into a plurality of partitions in accordance with the DST allocation information. The number of partitions of the retranslated data corresponds to the number of partitions of the data. 
For each pair of partitions (e.g., data partition 1 and retranslated data partition 1), the DSTN identifies a set of its DT execution modules90to perform task 1_5 in accordance with the DST allocation information (e.g., DT execution modules 1_1, 2_1, 3_1, 4_1, and 5_1). For each pair of partitions, the allocated set of DT execution modules executes task 1_5 to produce partial results102(e.g., 1stthrough “zth”) of a list of incorrectly translated words and/or phrases. As indicated in the DST allocation information ofFIG.32, DST execution unit 1 is assigned to process the first through “zth” partial results of task 1_5 to produce the fifth intermediate result (R1-5), which is the list of incorrectly translated words and/or phrases. In particular, the processing module of DST execution 1 is engaged to aggregate the first through “zth” partial results of the list of incorrectly translated words and/or phrases to produce the fifth intermediate result. The processing module stores the fifth intermediate result as non-DS error encoded data in the scratchpad memory or in another section of memory of DST execution unit 1. DST execution unit 1 engages its DST client module to slice grouping based DS error encode the fifth intermediate result. To begin the encoding, the DST client module partitions the fifth intermediate result (R1-5) into a plurality of partitions (e.g., R1-5_1 through R1-5_z). For each partition of the fifth intermediate result, the DST client module uses the DS error encoding parameters of the data (e.g., DS parameters of data 2, which includes 3/5 decode threshold/pillar width ratio) to produce slice groupings. The slice groupings are stored in the intermediate result memory (e.g., allocated memory in the memories of DST execution units 1-5 per the DST allocation information). As is further shown inFIG.36, the DSTN module is performing task 1_6 (e.g., translation errors due to non-words) on the list of incorrectly translated words and/or phrases (e.g., the fifth intermediate result R1-5) and the list of non-words (e.g., the first intermediate result R1-1). To begin, the DSTN module accesses the lists and partitions them into a corresponding number of partitions. For each pair of partitions (e.g., partition R1-1_1 and partition R1-5_1), the DSTN identifies a set of its DT execution modules90to perform task 1_6 in accordance with the DST allocation information (e.g., DT execution modules 1_1, 2_1, 3_1, 4_1, and 5_1). For each pair of partitions, the allocated set of DT execution modules executes task 1_6 to produce partial results102(e.g., 1stthrough “zth”) of a list of incorrectly translated words and/or phrases due to non-words. As indicated in the DST allocation information ofFIG.32, DST execution unit 2 is assigned to process the first through “zth” partial results of task 1_6 to produce the sixth intermediate result (R1-6), which is the list of incorrectly translated words and/or phrases due to non-words. In particular, the processing module of DST execution 2 is engaged to aggregate the first through “zth” partial results of the list of incorrectly translated words and/or phrases due to non-words to produce the sixth intermediate result. The processing module stores the sixth intermediate result as non-DS error encoded data in the scratchpad memory or in another section of memory of DST execution unit 2. DST execution unit 2 engages its DST client module to slice grouping based DS error encode the sixth intermediate result. 
To begin the encoding, the DST client module partitions the sixth intermediate result (R1-6) into a plurality of partitions (e.g., R1-6_1 through R1-6_z). For each partition of the sixth intermediate result, the DST client module uses the DS error encoding parameters of the data (e.g., DS parameters of data 2, which includes 3/5 decode threshold/pillar width ratio) to produce slice groupings. The slice groupings are stored in the intermediate result memory (e.g., allocated memory in the memories of DST execution units 2-6 per the DST allocation information). As is still further shown inFIG.36, the DSTN module is performing task 1_7 (e.g., correctly translated words and/or phrases) on the list of incorrectly translated words and/or phrases (e.g., the fifth intermediate result R1-5) and the list of unique words (e.g., the second intermediate result R1-2). To begin, the DSTN module accesses the lists and partitions them into a corresponding number of partitions. For each pair of partitions (e.g., partition R1-2_1 and partition R1-5_1), the DSTN identifies a set of its DT execution modules90to perform task 1_7 in accordance with the DST allocation information (e.g., DT execution modules 1_2, 2_2, 3_2, 4_2, and 5_2). For each pair of partitions, the allocated set of DT execution modules executes task 1_7 to produce partial results102(e.g., 1stthrough “zth”) of a list of correctly translated words and/or phrases. As indicated in the DST allocation information ofFIG.32, DST execution unit 3 is assigned to process the first through “zth” partial results of task 1_7 to produce the seventh intermediate result (R1-7), which is the list of correctly translated words and/or phrases. In particular, the processing module of DST execution 3 is engaged to aggregate the first through “zth” partial results of the list of correctly translated words and/or phrases to produce the seventh intermediate result. The processing module stores the seventh intermediate result as non-DS error encoded data in the scratchpad memory or in another section of memory of DST execution unit 3. DST execution unit 3 engages its DST client module to slice grouping based DS error encode the seventh intermediate result. To begin the encoding, the DST client module partitions the seventh intermediate result (R1-7) into a plurality of partitions (e.g., R1-7_1 through R1-7_z). For each partition of the seventh intermediate result, the DST client module uses the DS error encoding parameters of the data (e.g., DS parameters of data 2, which includes 3/5 decode threshold/pillar width ratio) to produce slice groupings. The slice groupings are stored in the intermediate result memory (e.g., allocated memory in the memories of DST execution units 3-7 per the DST allocation information). InFIG.37, the distributed storage and task network (DSTN) module is performing task 2 (e.g., find specific words and/or phrases) on the data92. To begin, the DSTN module accesses the data and partitions it into a plurality of partitions 1-z in accordance with the DST allocation information or it may use the data partitions of task 1_1 if the partitioning is the same. For each data partition, the DSTN identifies a set of its DT execution modules90to perform task 2 in accordance with the DST allocation information. From data partition to data partition, the set of DT execution modules may be the same, different, or a combination thereof. 
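Stepping back from the partition-by-partition mechanics, tasks 1_6 and 1_7 reduce to simple set comparisons over the intermediate-result lists. The sketch below is only a whole-list illustration with made-up values; the actual system compares corresponding partition pairs.

```python
# Tasks 1_6 and 1_7 as set operations over the intermediate results.
incorrectly_translated = {"hte", "colour", "recieve"}      # R1-5 (illustrative values)
non_words = {"hte", "recieve"}                             # R1-1
unique_words = {"hte", "colour", "recieve", "storage"}     # R1-2

errors_due_to_non_words = incorrectly_translated & non_words   # task 1_6 -> R1-6
correctly_translated = unique_words - incorrectly_translated   # task 1_7 -> R1-7

assert errors_due_to_non_words == {"hte", "recieve"}
assert correctly_translated == {"storage"}
```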
For the data partitions, the allocated set of DT execution modules executes task 2 to produce partial results102(e.g., 1stthrough “zth”) of specific words and/or phrases found in the data partitions. As indicated in the DST allocation information ofFIG.32, DST execution unit 7 is assigned to process the first through “zth” partial results of task 2 to produce task 2 intermediate result (R2), which is a list of specific words and/or phrases found in the data. The processing module of DST execution 7 is engaged to aggregate the first through “zth” partial results of specific words and/or phrases to produce the task 2 intermediate result. The processing module stores the task 2 intermediate result as non-DS error encoded data in the scratchpad memory or in another section of memory of DST execution unit 7. DST execution unit 7 engages its DST client module to slice grouping based DS error encode the task 2 intermediate result. To begin the encoding, the DST client module determines whether the list of specific words and/or phrases is of a sufficient size to partition (e.g., greater than a Terra-Byte). If yes, it partitions the task 2 intermediate result (R2) into a plurality of partitions (e.g., R2_1 through R2_m). If the task 2 intermediate result is not of sufficient size to partition, it is not partitioned. For each partition of the task 2 intermediate result, or for the task 2 intermediate results, the DST client module uses the DS error encoding parameters of the data (e.g., DS parameters of data 2, which includes 3/5 decode threshold/pillar width ratio) to produce slice groupings. The slice groupings are stored in the intermediate result memory (e.g., allocated memory in the memories of DST execution units 1-4, and 7). InFIG.38, the distributed storage and task network (DSTN) module is performing task 3 (e.g., find specific translated words and/or phrases) on the translated data (R1-3). To begin, the DSTN module accesses the translated data (from the scratchpad memory or from the intermediate result memory and decodes it) and partitions it into a plurality of partitions in accordance with the DST allocation information. For each partition, the DSTN identifies a set of its DT execution modules to perform task 3 in accordance with the DST allocation information. From partition to partition, the set of DT execution modules may be the same, different, or a combination thereof. For the partitions, the allocated set of DT execution modules90executes task 3 to produce partial results102(e.g., 1stthrough “zth”) of specific translated words and/or phrases found in the data partitions. As indicated in the DST allocation information ofFIG.32, DST execution unit 5 is assigned to process the first through “zth” partial results of task 3 to produce task 3 intermediate result (R3), which is a list of specific translated words and/or phrases found in the translated data. In particular, the processing module of DST execution 5 is engaged to aggregate the first through “zth” partial results of specific translated words and/or phrases to produce the task 3 intermediate result. The processing module stores the task 3 intermediate result as non-DS error encoded data in the scratchpad memory or in another section of memory of DST execution unit 7. DST execution unit 5 engages its DST client module to slice grouping based DS error encode the task 3 intermediate result. 
To begin the encoding, the DST client module determines whether the list of specific translated words and/or phrases is of a sufficient size to partition (e.g., greater than a Terra-Byte). If yes, it partitions the task 3 intermediate result (R3) into a plurality of partitions (e.g., R3_1 through R3_m). If the task 3 intermediate result is not of sufficient size to partition, it is not partitioned. For each partition of the task 3 intermediate result, or for the task 3 intermediate results, the DST client module uses the DS error encoding parameters of the data (e.g., DS parameters of data 2, which includes 3/5 decode threshold/pillar width ratio) to produce slice groupings. The slice groupings are stored in the intermediate result memory (e.g., allocated memory in the memories of DST execution units 1-4, 5, and 7). FIG.39is a diagram of an example of combining result information into final results104for the example ofFIG.30. In this example, the result information includes the list of specific words and/or phrases found in the data (task 2 intermediate result), the list of specific translated words and/or phrases found in the data (task 3 intermediate result), the list of non-words found in the data (task 1 first intermediate result R1-1), the list of unique words found in the data (task 1 second intermediate result R1-2), the list of translation errors due to non-words (task 1 sixth intermediate result R1-6), and the list of correctly translated words and/or phrases (task 1 seventh intermediate result R1-7). The task distribution module provides the result information to the requesting DST client module as the results104. FIG.40Ais a schematic block diagram of an embodiment of a data obfuscation system that includes an encryptor350, a deterministic function352, a key masking function354, a combiner356, an encoder358, and a dispersed storage network (DSN) memory360. The DSN memory360includes at least one set of storage units. The encryptor350encrypts data362using an encryption key364to produce encrypted data366in accordance with an encryption function. The key364is obtained from at least one of a local memory, received in a message, generated based on a random number, and deterministically generated from at least part of the data362. The deterministic function352performs a deterministic function on the encrypted data366using a password368to produce transformed data370, where the transformed data370has a same number of bits as the encryption key364. The password368includes any private sequence of information (e.g., alphanumeric digits). The password368may be obtained by one or more of a lookup, receiving from a user interface input, retrieving from the DSN memory, and performing a user device query. The deterministic function352may be based on one or more of a hashing function, a hash based message authentication code function, a mask generating function, a concatenation function, a sponge function, and a key generation function. The method of operation of the deterministic function is described in greater detail with reference toFIGS.40B-40D. The key masking function354masks the key364using the transformed data370to produce a masked key372, where the masked key372includes the same number of bits as the key364. The masking may include at least one of a logical mathematical function, a deterministic function, and an encryption function. For example, the masking includes performing an exclusiveOR (XOR) logical function on the key364and the transformed data370to produce the masked key372. 
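The deterministic function and key masking just described can be sketched with standard-library primitives, using the FIG.40B variant (an HMAC of the encrypted data keyed by the password) and an XOR mask. The 32-byte key length, the truncation of the HMAC output to the key length, and the variable names are assumptions of this sketch.

```python
import hmac, hashlib, os

def transform(encrypted_data: bytes, password: bytes, key_len: int) -> bytes:
    """Transformed data with the same number of bits as the encryption key."""
    return hmac.new(password, encrypted_data, hashlib.sha256).digest()[:key_len]

def mask_key(key: bytes, transformed: bytes) -> bytes:
    # Exclusive OR of the key and the transformed data produces the masked key.
    return bytes(k ^ t for k, t in zip(key, transformed))

key = os.urandom(32)
encrypted_data = b"...encrypted data from the encryptor..."   # placeholder ciphertext
masked = mask_key(key, transform(encrypted_data, b"my password", len(key)))
# Only a holder of the password (and the encrypted data) can undo the mask:
assert mask_key(masked, transform(encrypted_data, b"my password", len(key))) == key
```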
The combiner356combines the encrypted data366and the masked key372to produce a secure package374. The combining may include at least one of pre-appending, post-appending, inserting, and interleaving. The encoder358performs a dispersed storage error coding function on the secure package374to produce one or more sets of slices376in accordance with dispersed storage error coding function parameters for storage in the DSN memory360. FIG.40Bis a schematic block diagram of an embodiment of a deterministic function module352that includes a hash based message authentication code function module (HMAC)378. The HMAC function378performs a hash based message authentication code function on encrypted data366using a password368as a key of the HMAC to produce transformed data370. FIG.40Cis a schematic block diagram of another embodiment of a deterministic function module352that includes a concatenation function380and a hashing function382. The concatenation function380concatenates encrypted data366and a password368to produce an intermediate result. For example, the concatenation function380combines the encrypted data366and the password368by appending the password368to the encrypted data366to produce the intermediate result. The hashing function382performs a deterministic hashing algorithm on the intermediate result to produce transformed data370. Alternatively, a mask generating function may be utilized as the hashing function382. FIG.40Dis a schematic block diagram of another embodiment of a deterministic function module352that includes a hashing function382, a key generation function384, and a sub-key masking function386. The hashing function382performs a deterministic hashing algorithm on encrypted data366to produce a hash of the encrypted data. Alternatively, a mask generating function may be utilized as the hashing function382. The key generation function384generates an intermediate key based on a password368, where the intermediate key includes a same number of bits as the encryption key utilized in the system ofFIG.40A. The key generation function384includes at least one of a key derivation function, a hashing function, and a mask generating function. The sub-key masking function386may include at least one of a logical mathematical function, a deterministic function, and an encryption function. For example, the sub-key masking includes performing an exclusiveOR (XOR) logical function on the intermediate key and the hash of the encrypted data to produce transformed data370. FIG.40Eis a flowchart illustrating an example of obfuscating data. The method begins at step388where a processing module (e.g., of a dispersed storage processing module) encrypts data using a key to produce encrypted data. The method continues at step390where the processing module performs a deterministic function on the encrypted data and a password to produce transformed data. The method continues at step392where the processing module masks the key utilizing a masking function based on the transformed data to produce a masked key. For example, the processing module performs an exclusiveOR function on the key and the transformed data to produce the masked key. The method continues at step394where the processing module combines (e.g., pre-append, post-append, insert, interleave, etc.) the encrypted data and the masked key to produce a secure package. The method continues at step396where the processing module encodes the secure package to produce a set of encoded data slices using a dispersed storage error coding function. 
The method continues at step398where the processing module outputs the set of encoded data slices. For example, the processing module outputs the set of encoded data slices to a dispersed storage network memory for storage therein. As another example, the processing module outputs the set of encoded data slices to a communication network for transmission to one or more receiving entities. FIG.40Fis a schematic block diagram of an embodiment of a data de-obfuscation system that includes a dispersed storage network (DSN) memory360, a decoder400, a de-combiner402, a deterministic function352, a key de-masking function404, and a decryptor406. The decoder400obtains (e.g., retrieves, receives) one or more sets of encoded data slices376from the DSN memory360. The decoder400decodes the one or more sets of encoded data slices376using a dispersed storage error coding function in accordance with dispersed storage error coding function parameters to reproduce at least one secure package374. For example, the decoder decodes a first set of encoded data slices to produce a first secure package. For each secure package374, the de-combiner402de-combines the secure package374to reproduce encrypted data366and a masked key372. The de-combining includes at least one of de-appending, un-inserting, and de-interleaving in accordance with a de-combining scheme. The deterministic function352performs a deterministic function on the encrypted data366using a password368to reproduce transformed data370, where the transformed data370has a same number of bits as a recovered encryption key364. The password368includes any private sequence of information and is substantially identical to a password368of a complementary encoder. The key de-masking function404de-masks the masked key372using the transformed data370to produce the recovered key364, where the recovered key364includes a same number of bits as the masked key372. The de-masking may include at least one of a logical mathematical function, a deterministic function, and an encryption function. For example, the de-masking includes performing an exclusiveOR (XOR) logical function on the masked key and the transformed data to produce the recovered key. The decryptor406decrypts the encrypted data366using the recovered key364to reproduce data362in accordance with a decryption function. FIG.40Gis a flowchart illustrating an example of de-obfuscating data, which includes similar steps toFIG.40E. The method begins at step408where a processing module (e.g., of a dispersed storage processing module) obtains a set of encoded data slices. The obtaining includes at least one of retrieving and receiving. For example, the processing module receives the set of encoded data slices from a dispersed storage network memory. As another example, the processing module receives the set of encoded data slices from a communication network. The method continues at step410where the processing module decodes the set of encoded data slices to reproduce a secure package using a dispersed storage error coding function and in accordance with dispersed storage error coding function parameters. The method continues at step412where the processing module de-combines the secure package to produce encrypted data and a masked key. For example, the processing module partitions the secure package to produce the encrypted data and the masked key in accordance with a partitioning scheme. 
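As a non-limiting illustration of the obfuscation flow of FIGS. 40A-40E and the complementary de-obfuscation of FIGS. 40F and 40G (whose remaining steps follow below), the following Python sketch uses the standard hmac and hashlib modules for the deterministic function (the HMAC variant of FIG. 40B) and an exclusive OR for the key masking and de-masking; the toy_encrypt keystream cipher is merely a stand-in for the encryptor and decryptor, and the dispersed storage error encoding of the secure package into slices is omitted.

```python
import hashlib
import hmac
import os

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Stand-in for the encryptor/decryptor: XOR with a keystream derived from the key.
    stream = hashlib.shake_256(key).digest(len(data))
    return bytes(d ^ s for d, s in zip(data, stream))

def obfuscate(data: bytes, password: bytes) -> bytes:
    key = os.urandom(32)                               # key generated from a random number
    encrypted = toy_encrypt(data, key)                 # encryptor
    transformed = hmac.new(password, encrypted,        # deterministic function (HMAC keyed
                           hashlib.sha256).digest()    # by the password), same length as key
    masked_key = bytes(k ^ t for k, t in zip(key, transformed))  # key masking (XOR)
    secure_package = encrypted + masked_key            # combiner (post-append)
    return secure_package                              # next: DS error encode into slices

def de_obfuscate(secure_package: bytes, password: bytes) -> bytes:
    encrypted, masked_key = secure_package[:-32], secure_package[-32:]   # de-combiner
    transformed = hmac.new(password, encrypted, hashlib.sha256).digest() # deterministic function
    key = bytes(m ^ t for m, t in zip(masked_key, transformed))          # key de-masking (XOR)
    return toy_encrypt(encrypted, key)                 # keystream XOR is symmetric

package = obfuscate(b"example data", b"secret password")
assert de_obfuscate(package, b"secret password") == b"example data"
```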
The method continues with step390ofFIG.40Ewhere the processing module performs a deterministic function on encrypted data and a password to reproduce transformed data. The method continues at step414where the processing module de-masks the masked key utilizing a de-masking function based on the transformed data to reproduce a recovered key. For example, the processing module performs an exclusiveOR function on the masked key and the transformed data to produce the recovered key. The method continues at step416where the processing module decrypts the encrypted data using the recovered key to reproduce data. FIG.41Ais a schematic block diagram of an embodiment of a dispersed storage system that includes a client module420, a dispersed storage (DS) processing module422, and a DS unit set424. The DS unit set424includes a set of DS units426utilized to access slices stored in the set of DS units426. The DS unit426may be implemented using the distribute storage and task (DST) execution unit36ofFIG.1. The client module420may be implemented utilizing at least one of a user device, a distributed storage and task (DST) client module, a DST processing unit, a DST execution unit, and a DS processing unit. The DS processing module422may be implemented utilizing at least one of a DST client module, a DST processing unit, a DS processing unit, a user device, a DST execution unit, and a DS unit. The system is operable to facilitate storage of one or more queue entries of a queue in the DS unit set424. In an example of operation, the client module420generates a write queue entry request428where the write queue entry request428includes one or more of a queue entry, a queue name, and an entry number. The client module420may utilize the entry number to facilitate ordering of two or more queue entries. The client module420outputs the write queue entry request428to the DS processing module422. The DS processing module422encodes the queue entry using a dispersed storage error coding function to produce a set of queue entry slices432. For each DS unit426of the DS unit set424, the DS processing module422generates a write request430and outputs the write request430to the DS unit426to facilitate storage of the queue entry slices by the DS unit set424. The write request430includes one or more of a queue entry slice432of the set of queue entry slices and a slice name434corresponding to the queue entry slice432. The DS processing module422generates the slice name434based on the write queue entry request428. The slice name434includes a slice index field436and a vault source name field438. The slice index field436includes a slice index entry that corresponds to a pillar number of a set of pillar numbers associated with a pillar width dispersal parameter utilized in the dispersed storage error coding function. The vault source name field438includes a queue vault identifier (ID) field440and a queue entry ID field442. The queue vault ID440includes an identifier of a vault of the dispersed storage system associated with the queue (e.g., a vault associated with the client module420). The DS processing module422generates a queue vault ID entry for the queue vault ID field440by a one or more of a dispersed storage network registry lookup based on an identifier of a requesting entity associated with the write queue entry request428, receiving the queue vault ID, and generating a new queue vault ID when a new queue name is requested (e.g., not previously utilized in the dispersed storage network). 
The queue entry ID field442includes a queue name field444, a DS processing module ID field446, a client ID field448, and a timestamp field450. The DS processing module422generates a queue name entry for the queue name field444based on the queue name of the write queue entry request428. The DS processing module422generates a DS processing module ID entry for the DS processing module ID field446as an identifier associated with the DS processing module422by at least one of a lookup, receiving, and generating when the ID has not been assigned so far. The DS processing module422generates a client ID entry for the client ID field448as an identifier associated with the client module420(e.g., requesting entity) by at least one of a lookup, extracting from the write queue entry request428, initiating a query, and receiving. The DS processing module422generates a timestamp entry for the timestamp field450as at least one of a current timestamp, the entry number of the write queue entry request428(e.g., when provided), and a combination of the current timestamp and the entry number. In an implementation example, the slice name is 48 bytes, the queue entry ID field is 24 bytes, the queue name field is 8 bytes, the DS processing module ID is 4 bytes, the client ID field is 4 bytes, and the timestamp field is 8 bytes. FIG.41Bis a flowchart illustrating an example of storing a queue entry. The method begins at step452where a processing module (e.g., of a dispersed storage (DS) processing module) receives a write queue entry request. The request includes one or more of a requesting entity identifier (ID), a queue entry, a queue name, and an entry number. The method continues at step454where the processing module identifies a queue vault ID. The identifying may be based on one or more of the requesting entity ID, the queue name, and a look up. For example, the processing module accesses a queue directory utilizing the queue name to identify the queue vault ID. The method continues at step456where the processing module identifies a DS processing module ID associated with processing of the write queue entry request. The identifying may be based on one or more of generating a new ID, extracting from the request, a lookup, initiating a query, and receiving the identifier. The method continues at step458where the processing module identifies a client ID associated with the requesting entity. The identifying may be based on one or more of extracting from the request, a lookup, initiating a query, and receiving the identifier. The method continues at step460where the processing module generates a timestamp. The generating includes at least one of obtaining a real-time time value and utilizing the entry number of the write queue entry request when provided. The method continues at step462where the processing module generates a set of slice names based on one or more of the queue vault ID, the DS processing module ID, the client ID, and the timestamp. For example, the processing module generates a slice name of the set of slice names to include a slice index corresponding to a slice to be associated with the slice name, the queue vault ID, the queue name of the write queue entry request, the DS processing module ID, the client ID, and the timestamp as depicted inFIG.41A. The method continues at step464where the processing module encodes the queue entry of the write queue entry request using a dispersed storage error coding function to produce a set of queue entry slices. 
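As a non-limiting illustration of the slice name layout of FIG. 41A and the generation of step 462 above, the following Python sketch packs the fields into a 48 byte slice name; the widths of the queue name, DS processing module ID, client ID, and timestamp fields follow the implementation example above, while the split of the remaining 24 bytes between the slice index field and the queue vault ID field is an assumption made here purely for illustration, as are the example input values.

```python
import struct
import time

def build_queue_slice_name(slice_index: int, queue_vault_id: bytes,
                           queue_name: bytes, ds_module_id: int,
                           client_id: int, timestamp: int = None) -> bytes:
    # Assumed split of the first 24 bytes: 8-byte slice index, 16-byte queue vault ID.
    # The remaining 24 bytes are the queue entry ID: 8-byte queue name, 4-byte DS
    # processing module ID, 4-byte client ID, 8-byte timestamp (or entry number).
    if timestamp is None:
        timestamp = time.time_ns()
    name = struct.pack(
        ">Q16s8sIIQ",
        slice_index,                        # pillar number within the set
        queue_vault_id.ljust(16, b"\x00"),  # vault associated with the queue
        queue_name.ljust(8, b"\x00"),       # queue name from the write request
        ds_module_id,                       # DS processing module ID
        client_id,                          # requesting client ID
        timestamp,                          # orders entries within the queue
    )
    assert len(name) == 48
    return name

# One slice name per pillar of the set of queue entry slices (pillar width 5 here):
names = [build_queue_slice_name(i, b"vaultA", b"queue1", 7, 42) for i in range(5)]
```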
The method continues at step466where the processing module generates a set of write requests that includes the set of queue entry slices and the set of slice names. The method continues at step468where the processing module outputs the set of write requests to a set of DS units to facilitate storage of the set of queue entry slices. FIG.42Ais a schematic block diagram of another embodiment of a dispersed storage system that includes a dispersed storage (DS) processing module422and a DS unit426. The DS unit426includes a controller470, a queue memory472, and a main memory474. The queue memory472and the main memory474may be implemented utilizing one or more memory devices. Each memory device of the one or more memory devices may be implemented utilizing at least one of solid-state memory device, a magnetic disk drive, and an optical disk drive. The queue memory472may be implemented with memory technology to provide improved performance (e.g., lower access latency, higher bandwidth) as compared to the main memory474. For example, the queue memory472is implemented utilizing dynamic random access memory (DRAM) to be utilized for storage of small sets of small encoded queue slices and/or lock slices. The main memory474may be implemented with other memory technology to provide improved cost (e.g., lowered cost) as compared to the queue memory. For example, the main memory474is implemented utilizing magnetic disk memory technology to be utilized for storage of large sets of large encoded data slices. The DS processing module422generates a write request430that includes a queue entry slice432and a slice name434for outputting to the DS unit426. The controller470receives the write request430and determines whether to utilize queue memory472or main memory474for storage of the queue entry slice432of the write request430. The determining may be based on one or more of a queue entry slice identifier, a requesting entity identifier, and matching the slice name434to a queue entry slice name address range. When the controller470determines to utilize the queue memory472, the controller470stores the queue entry slice432in the queue memory472. The method of operation is discussed in greater detail with reference toFIG.42B. FIG.42Bis a flowchart illustrating an example of accessing data. The method begins at step476where a processing module (e.g., of a dispersed storage (DS) unit) receives a slice access request. The method continues at step478where the processing module identifies a slice type to produce an identified slice type. The slice type includes at least one of a queue entry slice, a lock slice, an index node slice, and a data node. The identifying may be based on one or more of mapping a slice name of the slice access request to an address range associated with a slice type of a plurality of slice types, extracting a slice type indicator from the request, and analyzing an encoded data slice of the request. The method continues at step480where the processing module selects a memory type based on the identified slice type to produce a selected memory type. For example, the processing module selects a queue memory when the identified slice type is a queue entry slice. As another example, the processing module selects a main memory when the identified slice type is not a queue entry slice and not a lock entry slice. The method continues at step482where the processing module selects a memory based on the selected memory type to produce a selected memory. 
The selecting may be based on one or more of available memory capacity, a slice size indicator, a memory reliability indicator, and a memory size threshold level. For example, the processing module selects a tenth queue memory device of the queue memory when the tenth queue memory has available memory capacity greater than the slice size of a queue entry slice for storage and first through ninth queue memory devices are full for a write request. Alternatively, the processing module may select another memory type to identify a memory of the other memory type when all memory devices of the selected memory type are unavailable for a request. For example, the processing module selects a second main memory device of the main memory when all queue memory devices of the queue memory are full and the slice type is a queue entry slice for a write request. The method continues at step484where the processing module facilitates the slice access request utilizing the selected memory. For example, when the slice access request is a write request, the processing module stores a received slice of the request in the selected memory. As another example, when the slice access request is a read request, the processing module retrieves a slice from the selected memory and outputs the retrieved slice to a requesting entity. In various embodiments, a method is presented for execution by a processing system that includes a processing circuit. The method includes receiving a write request to store a data object; identifying object parameters associated with the data object; selecting a memory type based on the identified object parameters; selecting a selected memory based on the memory type; and facilitating storage of the data object in the selected memory, wherein the data object is dispersed error encoded. In various embodiments, the object parameters include a size indicator associated with the data object, such as when a data segment is dispersed error encoded into a plurality of data slices. The object parameters can also include a temporary storage identifier associated with the data object that, for example, identifies the data object as a queue entry. The memory type can include a temporary storage, such as a queue memory device. The temporary storage can be implemented via a solid state memory device that has a lower access latency compared to other memory devices associated with at least one other memory type. The memory type can further include a main memory space that is implemented via a random access memory space. FIG.43Ais a schematic block diagram of another embodiment of a dispersed storage system that includes a legacy data storage system486, a dispersed storage (DS) processing module422, and a DS unit set424. The DS unit set424includes a set of DS units426utilized to access slices stored in the set of DS units426. The legacy data storage system486may be implemented utilizing any one of a variety of industry-standard storage technologies. The DS processing module422may be implemented utilizing at least one of a distributed storage and task (DST) client module, a DST processing unit, a DS processing unit, a user device, a DST execution unit, and a DS unit. The system is operable to facilitate migration of data from the legacy data storage system486to the DS unit set424. The legacy data storage system486provides object information488and data objects492to the DS processing module422.
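Returning briefly to the slice type based memory selection of FIGS. 42A and 42B, the following non-limiting Python sketch illustrates routing of a write by slice type and the fallback to main memory when all queue memory devices are full; slice names are reduced to integers here, and the reserved slice name range, device counts, and capacities are illustrative assumptions rather than fixed parameters of the system. The description of the migration system of FIG. 43A continues below.

```python
# Illustrative routing of a slice access request by slice type (queue entry/lock
# slices go to the low-latency queue memory, everything else to main memory).

QUEUE_RANGE = (0x1000, 0x2000)   # assumed slice-name range reserved for queue/lock slices

class MemoryDevice:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = {}
    def free(self) -> int:
        return self.capacity - sum(len(v) for v in self.store.values())

queue_memory = [MemoryDevice(64 * 1024) for _ in range(10)]       # e.g., DRAM devices
main_memory = [MemoryDevice(16 * 1024 * 1024) for _ in range(4)]  # e.g., magnetic disk devices

def select_memory(slice_name: int, slice_data: bytes) -> MemoryDevice:
    is_queue_slice = QUEUE_RANGE[0] <= slice_name < QUEUE_RANGE[1]
    pool = queue_memory if is_queue_slice else main_memory
    for device in pool:                       # first device with sufficient capacity
        if device.free() >= len(slice_data):
            return device
    for device in main_memory:                # fallback when the selected type is full
        if device.free() >= len(slice_data):
            return device
    raise RuntimeError("no memory device available")

def handle_write(slice_name: int, slice_data: bytes) -> None:
    select_memory(slice_name, slice_data).store[slice_name] = slice_data
```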
The object information488includes one or more of object names of the data objects492stored in the legacy data storage system486and object sizes corresponding to the data objects492. The processing module422receives the object information488and the data objects492from the legacy storage system486and stores at least some of the object information488in a dispersed index in the DS unit set424. The dispersed index includes a plurality of index nodes and a plurality of leaf nodes where each of the plurality of index nodes and the plurality of leaf nodes are stored as a set of encoded index slices490in the DS unit set424. Each leaf node of the dispersed index includes at least one entry corresponding to a data object492stored in the DS unit set424, where the entry includes an index key associated with the data object492. The plurality of index nodes provide a hierarchical structure to the dispersed index to identify a leaf node associated with the data object492based on the index key (e.g., searching through the hierarchy of index nodes based on comparing the index key to minimum index keys of each index node). The storing in the dispersed index includes generating the index key associated with the corresponding data object492for each portion of the object information488and adding/modifying an entry of the dispersed index to include one or more of the index key, the object name, the object size, and an index entry state. The index entry state includes an indication of a migration state with regards to migrating the data object492from the legacy data storage system486to the DS unit set424. The indication of migration state includes one of to be moved, moving, and moved. For example, the indication of migration state indicates to be moved when the data object492has been identified for migration from the legacy data storage system486to the DS unit set424when the moving has not been initiated. The DS processing module422initializes the index entry state to indicate to be moved. The initializing includes encoding a corresponding leaf node to produce a set of index slices490and outputting the set of index slices490to the DS unit set424. The DS processing module422encodes the data object492to produce data slices494and outputs the data slices494to the DS unit set424for storage. The DS processing module422updates the index entry state for the data object492to indicate the moving state (e.g., and not the to be moved state). When storage in the DS unit set424of a threshold number (e.g., a write threshold) of data slices494has been confirmed, the DS processing module422issues a delete request496to the legacy data storage system to delete the data object492from the legacy data storage system486. When deletion of the data object492from the legacy data storage system486has been confirmed, the DS processing module422updates the index entry state for the data object492to indicate the moved state. The DS processing module422detects confirmation of deletion of the data object from the legacy data storage system486when receiving a favorable delete response498from the legacy data storage system486with regards to the data object492. The method of operation is discussed in greater detail with reference toFIG.43B. FIG.43Bis a flowchart illustrating an example of migrating data. The method begins at step500where a processing module (e.g., a dispersed storage (DS) processing module) receives object information for a data object (e.g., from a legacy data storage system). 
The receiving may include outputting an object information request, receiving the data object, receiving the object information, receiving a migration request, and initiating a query. The method continues at step502where the processing module stores the object information in a dispersed index where the data object is associated with a to-be-moved index entry state. The storing includes establishing an index key of the data object based on one or more of the data object, a data object size indicator, and a data object identifier of the data object and modifying/updating a leaf node entry of a leaf node corresponding to the data object to include the index key, the object information, and an index entry state to indicate to be moved. The method continues at step504where the processing module encodes the data object to produce data slices for storage in a set of DS units. The encoding includes encoding the data object using a dispersed storage error coding function to produce a plurality of encoded data slices, generating a plurality of slice names corresponding to the plurality of encoded data slices, generating a plurality of write slice requests that includes a plurality of slice names and the plurality of encoded data slices, and outputting the plurality of write slice requests to the DS unit set. The method continues at step506where the processing module updates the dispersed index to indicate that the index entry state for the data object has changed to moving. For example, the processing module retrieves the leaf node (e.g., retrieves a set of index slices from the set of DS units, decodes the set of index slices to reproduce the leaf node), updates the index entry state to indicate moving to produce a modified leaf node, and stores the modified leaf node in the set of DS units (e.g., encodes the leaf node to produce a set of index slices, outputs the set of index slices to the set of DS units for storage). When storage is confirmed, the method continues at step508where the processing module outputs a delete data object request to the legacy data storage system. For example, the processing module receives at least a write threshold number of favorable write slice responses from the set of DS units, generates the delete data object request to include the data object identifier, and outputs the delete data object request to the legacy data storage system. When deletion of the data object is confirmed, the method continues at step510where the processing module updates the dispersed index to indicate that the index entry state for the data object has changed to moved. For example, the processing module receives a delete data response from the legacy data storage system indicating that the deletion of the data object is confirmed, retrieves the leaf node, updates the index entry state to indicate moved to produce a further modified leaf node, and stores the further modified leaf node in the set of DS units. FIG.44Ais a schematic block diagram of another embodiment of a dispersed storage system that includes a legacy data storage system486, a dispersed storage (DS) processing module422, and a DS unit set424. The DS unit set424includes a set of DS units426utilized to access slices stored in the set of DS units426. The legacy data storage system486may be implemented utilizing any one of a variety of industry-standard storage technologies. 
The DS processing module422may be implemented utilizing at least one of a distributed storage and task (DST) client module, a DST processing unit, a DS processing unit, a user device, a DST execution unit, and a DS unit. The system is operable to facilitate accessing migrating data while the data is being migrated from the legacy data storage system486to the DS unit set424. The DS processing module422receives a data access request512(e.g., from a client module, from a user device, from a requesting entity) where the data access request512includes at least one of a read request, a write request, a delete request, and a list request. The DS processing module422processes the data access request512, generates a data access response514based on the processing, and outputs the data access response514(e.g., to the client module, to the user device, to the requesting entity). In an example of processing the data access request512, the data access request includes the read request such that the DS processing module422receives the data access request512to read a data object. Having received the read requests, the DS processing module422accesses a dispersed index to identify an index entry state corresponding to the data object. The accessing includes generating a set of index slice requests520corresponding to a leaf node of the dispersed index associated with the data object, outputting the set of index slice requests520to the DS unit set424, receiving at least a decode threshold number of index slice responses522, and decoding the at least the decode threshold number of index slice responses522to reproduce the leaf node containing the index entry state corresponding to the data object. When the state indicates moved, the DS processing module422retrieves the data object from the DS unit set424(e.g., issuing data slice access requests524to the DS unit set424, receiving data slice access responses526, and decoding the data slice access responses526to reproduce the data object). When the state does not indicate moved, the DS processing module422retrieves the data object from the legacy data storage system486(e.g., issuing a data object request516to the legacy data storage system486and receiving a data object response518that includes the data object). In another example of processing the data access request512, the DS processing module422receives a data access request512to write another data object. The DS processing module422accesses the dispersed index to identify a dispersed storage network (DSN) address associated with storage of the other data object (e.g., retrieves the leaf node associated with the data object to produce the DSN address). The DS processing module422stores the other data object in the DS unit set424utilizing the DSN address (e.g., issuing data slice access requests524that includes slice names based on the DSN address and encoded data slices produced from encoding the other data object using a dispersed storage error coding function). In another example of processing the data access request512, the DS processing module422receives a data access request512to delete the data object. The DS processing module422accesses the dispersed index to determine the index entry state corresponding to the data object. When the index entry state indicates moved, the DS processing module422facilitates deletion of the data object from the DS unit set424(e.g., issuing data slice access requests524that includes delete requests to the DS unit set424). 
When the index entry state indicates moving, the DS processing module422facilitates deletion of the data object from the DS unit set424and from the legacy data storage system486(e.g., issuing another data object request516that includes a delete data object request to the legacy data storage system486). When the index entry state indicates to be moved, the DS processing module422facilitates deletion of the data object from the legacy data storage system486. In yet another example of processing the data access request512, the DS processing module422receives a data access request512to list data. The request to list data may include one or more data object names and/or a DSN address range. The DS processing module422accesses the dispersed index to identify one or more DSN addresses associated with the one or more data object names of the request to list data. The DS processing module422facilitates issuing a series of data slice access requests524that includes a series of list requests to the DS unit set424for slices associated with the one or more DSN addresses and/or the DSN address range. The DS processing module422receives data slice access responses526that includes list responses. The DS processing module422issues data object requests516to the legacy data storage system486where the data object requests516includes list requests for the data objects. The DS processing module422receives data object responses518that includes list data object responses. The DS processing module422aggregates list responses from the legacy data storage system486and the DS unit set424to produce a compiled list response. The DS processing module422issues a data access response514to a requesting entity, where the data access response514includes the compiled list response. FIG.44Bis a flowchart illustrating an example of accessing migrating data. The method begins at step528where a processing module (e.g., a dispersed storage (DS) processing module) receives a data access request from a requesting entity. When the data access request includes a read request, the method branches to step530. When the data access request includes a delete request, the method branches to step538, when the data access request includes a list request, the method continues to step546. When the data access request includes the read request, the method continues at step530where the processing module determines an index entry state corresponding to a data object of the request (e.g., retrieve a leaf node of a dispersed index corresponding to the data object to extract the index entry state). When the index entry state indicates moved, the method continues at step532where the processing module retrieves the data object from a dispersed storage network (DSN). The retrieving includes generating data slice access requests, receiving data slice access responses, and decoding data slices of the data slice access responses to reproduce the data object. The method branches to step536. When the index entry state does not indicate moved (e.g., indicates to be moved or moving), the method continues at step534where the processing module retrieves the data object from the legacy data storage system. The retrieving includes generating a data object request, outputting the data object request to the legacy data storage system, and receiving a data object response from the legacy data storage system that includes the data object. The method continues at step536where the processing module outputs a data access response that includes the data object. 
When the data access request includes the delete request, the method continues at step538where the processing module determines the index entry state corresponding to the data object of the request. When the index entry state indicates to be moved, the method continues at step544where the processing module deletes the data object from the legacy data storage system (e.g., issues data object requests that includes a delete request to the legacy data storage system). When the index entry state indicates moved, the method continues at step540where the processing module deletes the data object from the DSN (e.g., issues delete data access slice requests to the DSN). When the index entry state indicates moving, the method continues at step542where the processing module deletes the data object from the legacy data storage system and the DSN. When the data access request includes the list request, the method continues at step546where the processing module identifies a DSN address of the data object (e.g., based on an index lookup using a data object identifier of the request). The method continues at step548where the processing module performs a listing function for the data object with the DSN to produce DSN listing results (e.g., issuing list data slice access requests, receiving list data slice access responses to produce the DSN listing results). The method continues at step550where the processing module performs a listing function for the data object with the legacy data storage system to produce legacy system listing results (e.g., issuing a list data object request to the legacy data storage system, receiving a list data object response to produce the legacy system listing results). The method continues at step552where the processing module combines the DSN listing results and the legacy system listing results to produce a compiled list response. The combining includes at least one of appending, concatenating, interleaving, and sorting. The method continues at step554where the processing module outputs a data access response that includes the compiled list response to the requesting entity. FIG.45Ais a schematic block diagram of another embodiment of a dispersed storage system that includes one or more dispersed storage (DS) unit sets556and424, a scanning module558, and a rebuilding module560. Each DS unit set556and424includes a set of DS units426. In a first embodiment, as illustrated, the one or more DS unit sets556and424are implemented as two separate sets of DS units426. Alternatively, in another embodiment, the one or more DS unit sets are implemented as a common DS unit set (e.g., DS unit set424). The scanning module558and rebuilding module560may be implemented utilizing one or more of a user device, a server, a processing module, a computer, a DS processing unit, a DS processing module, a DS unit, a distributed storage and task (DST) processing unit, a DST processing module, a DST client module, and a DST execution unit. For example, the scanning module558is implemented in a first DST execution unit and the rebuilding module560is implemented in a second DST execution unit. As another example, the scanning module558and the rebuilding module560are implemented utilizing a common DST execution unit. The system functions to detect one or more stored slices in error (e.g., missing and/or corrupted slices that should be stored in one or more DS units of a first DS unit set556) and to remedy (e.g., rebuild) the one or more stored slices in error. 
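Before turning to the rebuilding of FIGS. 45A and 45B, the following non-limiting Python sketch summarizes the index entry state transitions of FIG. 43B together with the state based routing of read, delete, and list requests of FIG. 44B described above; the dispersed index is modeled as a plain dictionary, the write threshold value is only an example, and the write_slices, legacy_delete, dsn_read, and similar callables are placeholders for the slice access and legacy data object requests.

```python
# Illustrative migration states ("to be moved", "moving", "moved") and request
# routing while an object migrates. All callables are supplied by the caller.

WRITE_THRESHOLD = 12   # example: confirmed slices required before deleting the original

def migrate_object(name, data, index, write_slices, legacy_delete):
    index[name] = {"size": len(data), "state": "to be moved"}   # step 502
    confirmed = write_slices(name, data)                        # step 504 (encode + write)
    index[name]["state"] = "moving"                             # step 506
    if confirmed >= WRITE_THRESHOLD and legacy_delete(name):    # steps 508-510
        index[name]["state"] = "moved"
    return index[name]["state"]

def handle_read(name, index, dsn_read, legacy_read):
    # Read from the DSN only once the object has fully moved; otherwise the
    # authoritative copy is still in the legacy data storage system.
    return dsn_read(name) if index[name]["state"] == "moved" else legacy_read(name)

def handle_delete(name, index, dsn_delete, legacy_delete):
    state = index[name]["state"]
    if state in ("moved", "moving"):
        dsn_delete(name)
    if state in ("moving", "to be moved"):
        legacy_delete(name)

def handle_list(prefix, dsn_list, legacy_list):
    # Compile a single response from both systems (concatenate and sort here).
    return sorted(set(dsn_list(prefix)) | set(legacy_list(prefix)))
```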
The scanning module558functions to detect the one or more stored slices in error and the rebuilding module560functions to remedy the one or more stored slices in error. The scanning module558communicates identities of the one or more stored slices in error to the rebuilding module560by utilizing entries of one or more dispersed queues stored in the second DS unit set424. In an example of operation, the scanning module558detects the one or more stored slices in error and updates the dispersed queue with an entry pertaining to at least one stored slice in error. The scanning module558functions to detect the one or more stored slices in error through a series of steps. A first step includes generating a set of list slice requests562that include a range of slice names to be scanned associated with the first DS unit set556. A second step includes outputting the set of list slice requests562to the first DS unit set556. A third step includes comparing list slice responses564from the first DS unit set556to identify one or more slice names associated with the one or more stored slices in error. For example, the scanning module558identifies a slice name that is not listed in a list slice response564from a DS unit426of the DS unit set556when slice names of a set of slice names that are associated with the slice name are received via other list slice responses564from other DS units426of the DS unit set556. Having identified the one or more stored slices in error, the scanning module558updates the one or more dispersed queues by sending write queue entry requests566to the second DS unit set424through a series of steps. A first step includes determining a number of slice errors per set of encoded data slices that includes the slice error. A second step includes generating a queue entry that includes one or more of the slice name, the number of slice errors, a rebuilding task indicator, and identity of the set of slice names that are associated with the slice name (e.g., a source name). A third step includes identifying a dispersed queue of the one or more dispersed queues based on the number of slice errors. A fourth step includes storing the queue entry in the identified dispersed queue associated with the second DS unit set424. The storing includes encoding the queue entry to produce a set of entry slices, identifying a rebuilding dispersed queue, generating a set of entry slice names for the queue entry, generating a set of write slice requests that includes the set of entry slices and the set of entry slice names, and outputting the set of write slice requests to the second DS unit set424. With the queue entry in place, the rebuilding module560remedies the one or more stored slices in error through a series of steps. A first step includes retrieving a queue entry from a dispersed queue of the one or more dispersed queues where the dispersed queue is associated with a highest number of slice errors.
The retrieving includes outputting a set of queue entry requests568to the second DS unit set424that includes a set of list requests associated with a slice name range of a highest priority queue entry (e.g., oldest), receiving a set of queue entry responses that includes a set of list responses, identifying a set of slice names associated with the queue entry (e.g., lowest slice names of a range of slice names associated with a first in first out (FIFO) approach), generating and outputting a set of delete read slice requests that includes the set of slice names to the second DS unit set424, receiving at least a decode threshold number of queue entry responses570that includes entry slices, and decoding the at least a decode threshold number of entry slices to reproduce the queue entry. A second step to remedy the one or more stored slices in error includes extracting the slice name of the slice in error from the queue entry. A third step includes facilitating rebuilding of the slice in error (e.g., directly rebuilding, issuing a rebuilding request to another rebuilding module). When directly rebuilding, the rebuilding module560outputs, to the first DS unit set556, at least a decode threshold number of read slice requests572that includes slice names associated with the slice in error, receives at least a decode threshold number of read slice responses574that includes slices associated with the slice in error, decodes the slices associated with the slice in error to produce a recovered data segment, and encodes the recovered data segment to produce a rebuilt slice. A fourth step includes generating and outputting, to the first DS unit set556, a write slice request576that includes the slice name of the slice in error and the rebuilt slice. A fifth step includes receiving a write slice response578that includes status of writing the rebuilt slice (e.g., succeeded/failed). When the status of writing the rebuilt slice is favorable (e.g., succeeded), the rebuilding module560generates and outputs, to the second DS unit set424, a set of queue entry requests568that includes a set of commit requests associated with the delete read requests previously output to the second DS unit set424with regards to retrieving the queue entry. Such a set of requests completes deletion of the queue entry to remove the queue entry from the dispersed queue since the slice in error has been successfully rebuilt. FIG.45Bis a flowchart illustrating an example of generating a rebuilding task queue entry. The method begins at step580where a processing module (e.g., of scanning module) identifies a slice name of a slice in error of a set of slices stored in a set of dispersed storage (DS) units. The identifying includes generating and outputting, to the set of DS units, a set of list slice requests to include a slice name range to be scanned for errors, receiving list slice responses, and identifying the slice name of the slice in error based on a comparison of list slice responses. The method continues at step582where the processing module identifies a number of slice errors of the set of slices (e.g., counting). The method continues at step584where the processing module generates a queue entry that includes the slice name of the slice in error, a rebuilding task indicator (e.g., a rebuilding opcode), identity of the set of slices (e.g., the source name of the common set of slices), and the number of slice errors. 
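As a non-limiting illustration of steps 580 through 584 above and the per-error-count queue selection of steps 586 and 588 below, the following Python sketch detects missing slice names from list slice responses, builds a rebuilding queue entry, and places it in a queue keyed by the number of slice errors; in the system described above each dispersed queue would itself be stored as encoded queue entry slices in a set of DS units, whereas here the queues are ordinary in-memory lists and the entry fields are simplified.

```python
from collections import defaultdict

# Illustrative in-memory stand-in for the dispersed rebuilding queues, keyed by
# the number of slice errors so the most damaged sets are serviced first.
rebuild_queues = defaultdict(list)

def scan_set(source_name, expected_slice_names, listed_names_per_unit):
    # Compare list slice responses: a slice is in error when its name is absent
    # although associated names of its set were reported by other DS units.
    present = set().union(*listed_names_per_unit) if listed_names_per_unit else set()
    missing = [name for name in expected_slice_names if name not in present]
    for slice_name in missing:
        rebuild_queues[len(missing)].append({
            "slice_name": slice_name,       # the slice in error
            "task": "rebuild",              # rebuilding task indicator
            "source_name": source_name,     # identity of the set of slices
            "slice_errors": len(missing),   # number of errors in this set
        })

def next_rebuild_entry():
    # The rebuilding module services the queue with the highest error count first,
    # first-in first-out within that queue.
    non_empty = [count for count, queue in rebuild_queues.items() if queue]
    return rebuild_queues[max(non_empty)].pop(0) if non_empty else None
```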
The method continues at step586where the processing module identifies a rebuilding dispersed queue based on the number of slice errors. The identifying may include one or more of a lookup (e.g., a queue list by number of slice errors), a query, and receiving. The method continues at step588where the processing module facilitates storing the queue entry in the identified rebuilding queue in another set of DS units. Alternatively, the processing module facilitates storage of the queue entry in the identified rebuilding queue in the set of DS units. The facilitating storage of the queue entry in the identified rebuilding queue includes a series of steps. A first step includes generating a set of queue entry slice names based on one or more of a queue vault identifier, a queue name associated with the identified rebuilding queue, a DS processing module identifier associated with the processing module, a client identifier based on a vault lookup, and a current timestamp. A second step includes encoding the queue entry using a dispersed storage error coding function to produce a set of queue entry slices. A third step includes generating a set of write slice requests that includes the set of queue entry slices and the set of queue entry slice names. A fourth step includes outputting the set of write slice requests to the other set of DS units when utilizing the other set of DS units for storage of the queue entry. In addition, a rebuilding module may remove a queue entry from a queue associated with a highest number of missing slices first to facilitate rebuilding of the slice in error. When completing rebuilding of the slice in error, the rebuilding module facilitates deletion of the queue entry from the queue. FIG.46is a flowchart illustrating another example of generating a rebuilding task queue entry, that includes similar steps toFIG.45B. The method begins with steps580,582, and584ofFIG.45Bwhere a processing module (e.g., of a scanning module) identifies a slice name of a slice in error of a set of slices stored in a set of dispersed storage (DS) units, identifies a number of slice errors of the set of slices, and generates a queue entry that includes the slice name of the slice in error, a rebuilding task indicator, identity of the set of slices, and the number of slice errors. The method continues at step590where the processing module generates a vault source name based on the number of slice errors. The generating includes at least one of including a queue vault identifier (ID), a queue name to include the number of slice errors, a DS processing module ID, a client ID, and a timestamp of a current real-time. The method continues at step592where the processing module facilitates storing the queue entry in another set of DS units using the vault source name. The facilitating includes generating a set of slice names using the vault source name, encoding the queue entry to produce a set of queue entry slices, generating a set of write slice requests that includes the set of queue entry slices and the set of slice names, and outputting the set of write slice requests to the other set of DS units. In addition, a rebuilding module may remove the queue entry that is associated with a highest number of slices in error by generating a vault source name with a higher order queue name. FIGS.47A-B, E-H are schematic block diagrams of embodiments of a dispersed storage network (DSN) illustrating examples of steps of storing data. 
The DSN includes the user device14, the distributed storage and task (DST) processing unit16, and the network24ofFIG.1; and a set of DST execution units 1-n, where each DST execution unit may be implemented with the DST execution unit36ofFIG.1. The user device14includes a computing core26ofFIG.2. The DST processing unit16includes the DST client module34ofFIG.3. The DST client module34includes the DST processing module80ofFIG.3and a request module600. The request module600may be implemented utilizing a processing module84ofFIG.3. Each DST execution unit includes the processing module84and the memory88ofFIG.3. FIG.47Aillustrates initial steps of the examples of the steps of storing the data. As a specific example, the request module600receives, from the user device14, a request602to store data A in the DSN. Having received the request602, the request module600determines, for the request602, dispersed storage error encoding parameters for encoding the data into sets of encoded data slices. The dispersed storage error encoding parameters includes a per set decode threshold, a per set write threshold, and a per set total number. The per set decode threshold indicates a number of encoded data slices of a set of encoded data slices required to construct a corresponding segment of the data (e.g., where the data is divided into segments), the per set write threshold indicates a number of encoded data slices of the set of encoded data slices that are to be stored for a successful storage operation, and the per set total number indicates the number of encoded data slices in the set of encoded data slices (e.g., a pillar width number). For example, the request module600determines the dispersed storage error encoding parameters by determining a vault based on at least one of the request602and the user device14, and determining the per set decode threshold, the per set write threshold, and the per set total number based on information regarding the vault (e.g., extracting parameters from a registry associated with the vault). Having determined the dispersed storage error encoding parameters, the request module600determines whether the request602includes a desired write reliability indication. The desired write reliability indication indicates a desired level of write reliability that meets or exceeds the per set write threshold. For example, desired write reliability indication includes a value in a range between the per set write threshold and the per set total number. For instance, the desired level of write reliability indication indicates 14 slices when the decode threshold is 10, the write threshold is 12, and the total number is 16. More parameter examples are discussed in greater detail with reference toFIG.47C. When the request602does not include the desired write reliability indication, the DST processing module80executes storage of the sets of encoded data slices in accordance with the dispersed storage error encoding parameters and may subsequently send storage reliability information to the user device14indicating how many encoded data slices per set of encoded data slices were successfully stored. As a specific example, the DST processing module80encodes the data using a dispersed storage error coding function in accordance with the dispersed storage error encoding parameters to produce the sets of encoded data slices. For instance, the DST processing module80encodes a first data segment of the data A to produce slices A-1-1, A-2-1, through A-n-1. 
The DST processing module80issues, via the network24, one or more sets of write slice requests604to the set of DST execution units 1-n as write slice requests 1-n, where the one or more sets of write slice requests604includes the sets of encoded data slices. For each DST execution unit, the processing module84stores a corresponding encoded data slice in the memory88of the DST execution unit. When the request602includes the desired write reliability indication, the DST processing module80executes the storage of the sets of encoded data slices in accordance with the dispersed storage error encoding parameters and subsequently determines whether the storage of the sets of encoded data slices is meeting the desired write reliability indication. The determining whether the storage of the sets of encoded data slices is meeting the desired write reliability indication is discussed in greater detail with reference toFIG.47B. FIG.47Billustrates further steps of the examples of the steps of storing the data. As a specific example, while executing storage of the sets of encoded data slices in accordance with the dispersed storage error encoding parameters, the DST processing module80determines whether the storage of the sets of encoded data slices is meeting the desired write reliability indication. For example, while the DST processing module80executes the storage of the sets of encoded data slices in accordance with the dispersed storage error encoding parameters, the DST processing module80enters a loop in which the DST processing module80determines whether the storage of one of the sets of encoded data slices is meeting the desired write reliability indication. For instance, the DST processing module80receives write slice responses606, via the network24, that include write slice responses of the write slice responses 1-n from the set of DST execution units 1-n. Each write slice response indicates whether a corresponding encoded data slice was successfully stored in an associated DST execution unit. The DST processing module80indicates that the one set of encoded data slices is meeting the desired write reliability indication when a number of favorable (e.g., indicating successful storage) write slice responses606is greater than or equal to the value of the desired write reliability indication. When the storage of the one of the sets of encoded data slices is not meeting the desired write reliability indication, the DST processing module80flags the one of the sets of encoded data slices and determines whether the one of the sets of encoded data slices is a last set of the sets of encoded data slices (e.g., for all segments). When the storage of the one of the sets of encoded data slices is meeting the desired write reliability indication, the DST processing module80determines whether the one of the sets of encoded data slices is the last set of the sets of encoded data slices. When the one of the sets of encoded data slices is not the last set of the sets of encoded data slices, the DST processing module80repeats the loop for another one of the sets of encoded data slices. When the one of the sets of encoded data slices is the last set, the DST processing module80exits the loop. When exiting the loop, the DST processing module80compiles a list of the sets of encoded data slices, of all the sets of encoded data slices, that did not meet the desired write reliability indication to produce a list of sets.
Having produced the list of sets, the DST processing module80determines a storage compliance process for the list of sets and executes the storage compliance process for the sets of encoded data slices based on the list of sets. The determining and execution of the storage compliance process is discussed in greater detail with reference toFIGS.47D and47G. When the storage of the set of encoded data slices is meeting the desired write reliability indication, the request module600indicates that the set of encoded data slices met the desired write reliability indication by issuing storage of reliability information608with regards to data A to the user device14. The reliability information608includes one or more of a number of encoded data slices stored for each segment, an estimated storage reliability level for each data segment, an estimated storage reliability level for all data segments, a net stored indicator, a stored indicator, a stored with low reliability indicator, a stored with desired reliability indicator, and a stored with high reliability indicator. The user device14may delete data A based on the storage of reliability information608. For example, the user device14deletes data A from the computing core26when the storage reliability information indicates that each data segment was stored with the desired write reliability indication. FIG.47Cis a diagram illustrating an example of a dispersed storage (DS) parameters table610that includes entries of a desired level field612and corresponding entries of parameter sets of a decode threshold field614, a write threshold field616, a desired threshold field618, and a total number field620. The entries of the desired level field612corresponds to names of candidate levels of the desired write reliability indication. For example, the candidate levels includes names of a range from highest to lowest. A parameter set of entries of the decode threshold field614, the write threshold field616, the desired threshold field618, and the total number field620corresponds to one of the candidate levels. For example, the highest desired level612corresponds to a parameter set that includes a decode threshold entry of 10, a write threshold of 12, a desired threshold value of 16, and a total number of 16. As such, when the highest desired level is selected, the desired write reliability indication is met only when a value of the desired threshold is 16. For instance, all 16 encoded data slices of a set of 16 encoded data slices were successfully stored to achieve the desired write reliability indication. As another example, the medium desired level612corresponds to another parameter set that includes the decode threshold entry of 10, the write threshold of 12, a desired threshold value of 14, and the total number of 16. As such, when the medium desired level is selected, the desired write reliability indication is met when the value of the desired threshold is 14 or more. For instance, the desired write reliability indication is achieved when 14 or more encoded data slices of the set of 16 encoded data slices were successfully stored. FIG.47Dis a diagram illustrating an example of a storage compliance table622that includes entries of the desired level field612ofFIG.47C, an actual stored field624, and a compliance process field626. An entry of the storage compliance table622may be utilized (e.g., by the user device14, by the DST processing module80) to determine the storage compliance process. 
As a specific example, a delete original compliance process626is selected when the highest desired level612is selected (e.g., requiring at least 16 successfully stored encoded data slices per set) and the number of encoded data slices actually stored is 16 (e.g., corresponding to an entry of 16 in the actual stored624field). As such, the user device may delete the data being stored in the DSN. As another specific example, a re-store compliance process626is selected when the highest desired level612is selected and the number of encoded data slices actually stored is 15. As such, the storage compliance process includes retrying storage of the sets of encoded data slices that were not successfully stored (e.g., missing one slice) during the execution of storage. As yet another example, a retrying slice compliance process626is selected when the medium-high desired level612is selected (e.g., requiring at least 15 successfully stored encoded data slices per set) and the number of encoded data slices actually stored is 14. As such, the storage compliance process includes initiating a storage unit retry process for encoded data slices of the set of encoded data slices that were not successfully stored (e.g., 2 slices) during the execution of storage. As a further example, a re-store segment compliance process626is selected when the medium-high desired level612is selected (e.g., requiring at least 15 successfully stored encoded data slices per set) and the number of encoded data slices actually stored is 13. As such, the storage compliance process includes initiating a storage unit retry process for the set of encoded data slices that were not successfully stored (e.g., all 16 slices) during the execution of storage. As a still further example, a rebuild slice compliance process626is selected when the medium-high desired level612is selected (e.g., requiring at least 15 successfully stored encoded data slices per set) and the number of encoded data slices actually stored is 14. As such, the storage compliance process includes initiating a rebuilding process for encoded data slices of the set of encoded data slices that were not successfully stored (e.g., 4 slices) during the execution of storage. FIG.47Eillustrates further steps of the examples of the steps of storing the data. As a specific example, the request module600receives, from a user device14, a request602to store data B in the DSN. The request module600determines, for the request602to store data B, dispersed storage error encoding parameters for encoding the data B into sets of encoded data slices. The dispersed storage error encoding parameters includes the per set decode threshold, the per set write threshold, and the per set total number. Having determined the parameters, the request module600determines whether the request602includes the desired write reliability indication. The DST processing module80encodes data B to produce the sets of encoded data slices and executes storage of the sets of encoded data slices in accordance with the dispersed storage error encoding parameters (e.g., issuing one or more sets of write slice requests604, via the network24, that includes write slice requests 1-n to the set of DST execution units 1-n). FIG.47Fillustrates further steps of the examples of the steps of storing the data. 
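Before turning to the FIG.47F steps below, the compliance-process selection just illustrated for FIG.47D might be expressed as a table-driven lookup, as in the following sketch. Only a subset of the example entries from the preceding paragraph is shown, the process labels are informal stand-ins for the named processes, and the overlapping medium-high examples are omitted.

# Sketch of a table-driven compliance process lookup in the spirit of FIG.47D
# (illustrative labels, not the table's actual contents).
COMPLIANCE_TABLE = {
    ("highest", 16): "delete_original",       # all slices stored; original may be deleted
    ("highest", 15): "re_store",              # retry storage of the missing slice
    ("medium_high", 14): "retry_slice",       # storage unit retry for the missing slices
    ("medium_high", 13): "re_store_segment",  # retry storage of the whole set
}

def select_compliance_process(desired_level, actually_stored):
    """Return the storage compliance process for a (level, stored count) pair,
    or None when the table has no matching entry."""
    return COMPLIANCE_TABLE.get((desired_level, actually_stored))

print(select_compliance_process("medium_high", 13))  # prints 're_store_segment'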
As a specific example, when the request includes the desired write reliability indication, while executing storage of the sets of encoded data slices in accordance with the dispersed storage error encoding parameters, the DST processing module80determines whether the storage of the sets of encoded data slices is meeting the desired write reliability indication. For example, the DST processing module80receives write slice responses606, via the network24, that includes write slice responses of the write slice responses 1-n from the DST execution units 1-n, and determines whether the level of the desired write reliability indication is being met. As a more specific example, the DST processing module80enters a loop where the DST processing module80determines whether the storage of one of the sets of encoded data slices is meeting the desired write reliability indication. When the storage of the one of the sets of encoded data slices is not meeting the desired write reliability indication, the DST processing module80flags the one of the sets of encoded data slices and determines whether the one of the sets of encoded data slices is a last set of the sets of encoded data slices. When the one of the sets of encoded data slices is not the last set of the sets of encoded data slices, the DST processing module80repeats the loop for another one of the sets of encoded data slices. When the one of the sets of encoded data slices is the last set of encoded data slices, the DST processing module80exits the loop. Having exited the loop, the DST processing module80compiles a list of the sets of encoded data slices that did not meet the desired write reliability indication to produce the list of sets. When storage of the set of encoded data slices of the sets of encoded data slices is not meeting the desired write reliability indication, the DST processing module80determines a storage compliance process for the set of encoded data slices to meet the desired write reliability indication. For example, the DST processing module80determines the storage compliance process for the list of sets. Having determined the storage compliance process, the DST processing module80executes the storage compliance process for the set of encoded data slices. For example, the DST processing module80executes the storage compliance process for the sets of encoded data slices based on the list of sets. As a specific example of executing the storage compliance process, the request module600sends a message that includes the storage reliability information608of data B to the user device14indicating that storage of the set of encoded data slices met the per set write threshold but did not meet the desired write reliability indication. FIG.47Gillustrates further steps of the examples of the steps of storing the data. As a specific example, continuing the steps ofFIG.47F, the request module600receives a store data request602for data B (e.g., a response to the storage reliability information608) from the user device14requesting a storage retry of at least the encoded data slices of the set of encoded data slices that were not successfully stored during the execution of storage. Having received the storage request602, the DST processing module80retries storage of the encoded data slices of the set of encoded data slices that were not successfully stored during the execution of storage. 
For example, the DST processing module80issues a set of write slice requests604, via the network24, that includes a corresponding set of write slice requests 1-n to the set of DST execution units 1-n. As another specific example of executing the storage compliance process, the DST processing module80initiates a rebuilding process for encoded data slices of the set of encoded data slices that were not successfully stored during the execution of storage. As yet another specific example of executing the storage compliance process, the DST processing module80initiates a storage unit retry process for encoded data slices of the set of encoded data slices that were not successfully stored during the execution of storage. FIG.47Hillustrates further steps of the examples of the steps of storing the data. As a specific example, when the request602to re-store at least a portion of data B includes the desired write reliability indication, while executing storage of the sets of encoded data slices in accordance with the dispersed storage error encoding parameters, the DST processing module80determines whether the storage of the sets of encoded data slices is meeting the desired write reliability indication. For example, the DST processing module80receives write slice responses606, via the network24, that includes write slice responses of the write slice responses 1-n from the DST execution units 1-n, and determines whether the level of the desired write reliability indication is being met. When storage of the set of encoded data slices is meeting the desired write reliability indication, the request module600issues the storage reliability information608to the user device14to indicate that the set of encoded data slices met the desired write reliability indication. The user device14may delete data B from the computing core26when receiving the indication that the set of encoded data slices met the desired write reliability indication. FIG.47Iis a flowchart illustrating an example of achieving storage compliance. The method begins at step630where a processing module (e.g., of a distributed storage and task (DST) client module) receives, from a device (e.g., a user device), a request to store data in a dispersed storage network (DSN). The method continues at step632where the processing module determines, for the request, dispersed storage error encoding parameters for encoding the data into sets of encoded data slices. The dispersed storage error encoding parameters include a per set decode threshold, a per set write threshold, and a per set total number. The per set decode threshold indicates a number of encoded data slices of a set of encoded data slices required to construct a corresponding segment of the data, the per set write threshold indicates a number of encoded data slices of the set of encoded data slices that are to be stored for a successful storage operation, and the per set total number indicates the number of encoded data slices in the set of encoded data slices. As a specific example, the processing module determines the dispersed storage error encoding parameters by determining a vault based on at least one of the request and the device, and determining the per set decode threshold, the per set write threshold, and the per set total number based on information regarding the vault. The method continues at step634where the processing module determines whether the request includes a desired write reliability indication.
The desired write reliability indication indicates a desired level of write reliability that meets or exceeds the per set write threshold. The desired write reliability indication includes a value in a range between the per set write threshold and the per set total number. As a specific example, the desired write reliability indication indicates a level of 14 encoded data slices when the write threshold is 12 and the total number is 16. When the request includes the desired write reliability indication, the method branches to step638. When the request does not include the desired write reliability indication, the method continues to step636. When the request does not include the desired write reliability indication, the method continues at step636where the processing module executes storage of the sets of encoded data slices in accordance with the dispersed storage error encoding parameters. As a specific example, the processing module issues sets of write slice requests to the DSN memory, where the sets of write slice requests includes the sets of encoded data slices, receives write slice responses regarding status of storage of the sets of encoded data slices, and issues a status message to the device indicating status of storage of the sets of encoded data slices (e.g., successful with regards to the write threshold, not successful with regards to the write threshold, number of encoded data slices successfully stored per set of encoded data slices, an estimated storage reliability level). When the request includes the desired write reliability indication, the method continues at step638where the processing module executes storage of the sets of encoded data slices and while executing storage of the sets of encoded data slices in accordance with the dispersed storage error encoding parameters, determines whether the storage of the sets of encoded data slices is meeting the desired write reliability indication. The method branches to step642when the storage is not meeting the desired write reliability indication. The method continues to step640when the storage is meeting the desired write reliability indication. As a specific example, while executing the storage of the sets of encoded data slices in accordance with the dispersed storage error encoding parameters, the processing module enters a loop that includes determining whether the storage of one of the sets of encoded data slices is meeting the desired write reliability indication. When the storage of the one of the sets of encoded data slices is not meeting the desired write reliability indication, the processing module flags the one of the sets of encoded data slices and determines whether the one of the sets of encoded data slices is a last set of the sets of encoded data slices. Alternatively, when storage of the one of the sets of encoded data slices is meeting the desired write reliability indication, the processing module determines whether the one of the sets of encoded data slices is the last set of the sets of encoded data slices. When the one of the sets of encoded data slices is not the last set of encoded data slices, the processing module repeats the loop for another one of the sets of encoded data slices. When the one of the sets of encoded data slices is the last set of encoded data slices, the processing module exits the loop. 
When exiting the loop, the processing module compiles a list of the sets of encoded data slices of all the sets of encoded data slices that did not meet the desired write reliability indication to produce a list of sets. When storage of the set of encoded data slices is meeting the desired write reliability indication, the method continues at step640where the processing module indicates that the set of encoded data slices met the desired write reliability indication. When storage of the set of encoded data slices is not meeting the desired write reliability indication, the method continues at step642where the processing module determines a storage compliance process for the set of encoded data slices to meet the desired write reliability indication. As a specific example, the processing module determines the storage compliance process for the list of sets. Having determined the storage compliance process, the method continues at step644where the processing module executes the storage compliance process for the set(s) of encoded data slices based on the list of sets. As a specific example of executing the storage compliance process, the processing module initiates a rebuilding process for encoded data slices of the set of encoded data slices that were not successfully stored during the execution of storage. For instance, the processing module issues a rebuilding request to a rebuilding entity that includes identification of the set of encoded data slices that were not successfully stored during the execution of storage. As another instance, the processing module retrieves at least a decode threshold number of encoded data slices of the set of encoded data slices that were not successfully stored, decodes the at least a decode threshold number of encoded data slices to reproduce a data segment, encodes the data segment using the dispersed storage error coding function to reproduce the set of encoded data slices, and stores the set of encoded data slices in the DSN memory. As another specific example of executing the storage compliance process, the processing module initiates a storage unit retry process for encoded data slices of the set of encoded data slices that were not successfully stored during the execution of storage. For instance, the processing module issues a redundant write slice request to a corresponding storage unit of the DSN memory for each encoded data slice of the set of encoded data slices that were not successfully stored. As yet another specific example of executing the storage compliance process, the processing module sends a message to the device indicating that storage of the set of encoded data slices met the per set write threshold but did not meet the desired write reliability indication. The processing module receives a response from the device requesting a storage retry of at least the encoded data slices of the set of encoded data slices that were not successfully stored during the execution of storage. Having received the response, the processing module retries storage of the encoded data slices of the set of encoded data slices that were not successfully stored during the execution of storage. For instance, the processing module encodes a portion of the data using the dispersed storage error coding function to reproduce the set of encoded data slices that were not successfully stored. Having reproduced the set of encoded data slices, the processing module sends the encoded data slices of the set of encoded data slices to the DSN memory for storage.
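By way of a non-limiting illustration of the rebuild-style compliance process described above, the following sketch retrieves at least a decode threshold number of slices, decodes them back into the data segment, re-encodes the segment, and reproduces the slices that were missing. The encode and decode callables are placeholders supplied by the caller; a real system would use a dispersed storage error coding function (e.g., an erasure code), which is not implemented here.

# Illustrative rebuild sketch (hypothetical names; the codec functions are
# stand-ins for a dispersed storage error coding function).
def rebuild_missing_slices(stored_slices, missing_indices, decode, encode,
                           decode_threshold):
    """stored_slices maps slice index -> slice data for successfully stored
    slices; returns {index: rebuilt slice} for each index in missing_indices."""
    if len(stored_slices) < decode_threshold:
        raise ValueError("not enough slices to rebuild the data segment")
    segment = decode(stored_slices)   # reproduce the data segment
    full_set = encode(segment)        # reproduce the complete set of slices
    return {i: full_set[i] for i in missing_indices}

# Toy usage with trivial stand-in codec functions (replication, not erasure coding).
def toy_decode(slices):
    return next(iter(slices.values()))

def toy_encode(segment):
    return [segment] * 5

rebuilt = rebuild_missing_slices({0: b"seg", 1: b"seg", 2: b"seg"}, [3, 4],
                                 toy_decode, toy_encode, decode_threshold=3)
print(rebuilt)  # {3: b'seg', 4: b'seg'}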
FIG.48Ais a schematic block diagram of another embodiment of a dispersed storage system that includes a dispersed storage (DS) processing module422and a DS unit set424. The DS unit set424includes a set of DS units426utilized to access slices stored in the set of DS units426. The DS processing module422may be implemented utilizing at least one of a distributed storage and task (DST) client module, a DST processing unit, a DS processing unit, a user device, a DST execution unit, and a DS unit. Alternatively, another DS processing module may be utilized to store data in the DS unit set424as a plurality of encoded data slices associated with a plurality of slice names. The system is operable to facilitate deletion of data in the DS unit set424. The DS processing module422identifies a data object stored locally (e.g., in a cache memory of the DS processing module422) where the locally stored data object is associated with the plurality of sets of encoded data slices stored in the DS unit set424. The DS processing module422determines a threshold number (e.g., greater than a read threshold number) of slice names corresponding to at least a set of encoded data slices of the plurality of sets of encoded data slices corresponding to the locally stored data object. The threshold number of slice names may be associated with preferred DS units of the set of DS units426, where the preferred DS units are associated with preferred performance levels (e.g., more available processing capacity) as compared to other DS units of the DS unit set. The DS processing module422generates a threshold number of watch requests650that includes the threshold number of slice names. The DS processing module422outputs the threshold number of watch requests650to corresponding DS units of the DS unit set424. Each DS unit426of the corresponding DS units generates a watch response652with regards to availability of a corresponding encoded data slice. For example, the DS unit426generates the watch response652to indicate that the encoded data slice is visible when the DS unit426has received a write slice request and a commit request with regards to the encoded data slice. The DS unit426outputs the watch response652to the DS processing module422. The DS processing module422receives watch responses652from the DS unit set424. The DS processing module422determines whether to delete the locally stored data object based on the watch responses. For example, the DS processing module422determines to delete the locally stored object when a threshold number of favorable (e.g., encoded data slice is visible) watch responses652have been received. The method of operation is discussed in greater detail with reference toFIG.48B. FIG.48Bis a flowchart illustrating an example of deleting data. The method begins with step654where a processing module (e.g., of a dispersed storage (DS) processing module) identifies a data object that is cached locally for deletion. The identifying may be based on one or more of a memory utilization level indicator, an error message, a request, an expiration time, and a storage age indicator. The method continues at step656where the processing module identifies a threshold number of slice names corresponding to encoded data slices stored at a corresponding threshold number of DS units corresponding to some of the data object.
The identifying includes at least one of selecting the threshold number of DS units based on one or more of a round-robin selection scheme, a DS unit activity indicator, an error message, and a predetermination. The identifying further includes generating one or more sets of slice names corresponding to the data object based on a data object identifier and selecting one or more subsets of slice names where each subset includes a threshold number of slice names. The method continues at step658where the processing module generates a threshold number of watch requests that includes the threshold number of slice names. The method continues at step660where the processing module outputs the threshold number of watch requests to the threshold number of DS units where each DS unit of the threshold number of DS units generates and outputs a watch response to indicate whether status of a corresponding encoded data slice has changed from not visible to visible. The watch response includes a slice name and a visibility status indicator. When receiving a threshold number of favorable (e.g., including visibility status indicator indicating that an associated encoded data slice is visible) watch responses, the method continues at step662where the processing module deletes the data object. FIG.49Ais a schematic block diagram of another embodiment of a dispersed storage system that includes a dispersed storage (DS) processing module422and a DS unit set424. The DS unit set424includes a set of DS units426utilized to access slices stored in the set of DS units426. The DS processing module422may be implemented utilizing at least one of a distributed storage and task (DST) client module, a DST processing unit, a DS processing unit, a user device, a DST execution unit, and a DS unit. The system is operable to facilitate access of data in the DS unit set424. The DS processing module422stores data as a plurality of encoded data slices in the DS unit set424and retrieves at least some of the encoded data slices from the DS unit set424to reproduce the data. The DS processing module422issues one or more sets of slice access requests664to the DS unit set424to store the encoded data slices in the DS unit set424. A slice access request664may include one or more of a request type indicator, a slice name666, and an encoded data slice. For example, the slice access request664includes a write slice request type (e.g., write, read, delete, list), the encoded data slice, and the slice name666corresponding to the encoded data slice when storing the encoded data slice in a DS unit426of the DS unit set424. The DS processing module422issues another one or more sets of slice access requests664to the DS unit set424to retrieve the encoded data slices from the DS unit set424where a slice access request664of the other one or more sets of slice access requests664includes a read slice request type and the slice name666corresponding to the encoded data slice associated with the retrieving. The DS processing module422receives a slice access response668from one or more DS units426of the DS unit set424in response to the slice access request664that includes the read slice request type. The slice access response668includes one or more of the request type indicator, the slice name666, the encoded data slice, and a status code. The status code indicates status of a requested operation of a slice access request. For example, the status code indicates whether the requested operation was successful. 
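Looping back briefly to the watch-based deletion of FIGS.48A-48B before the slice name layout is detailed, the favorable-response check that gates step 662 might be sketched as follows; the pair representation of a watch response and the function name are illustrative assumptions.

# Illustrative sketch of the deletion check (hypothetical representation):
# watch_responses is an iterable of (slice name, visible) pairs returned by
# the DS units; the locally cached data object may be deleted once at least
# `threshold` responses report the corresponding encoded data slice as visible.
def can_delete_cached_object(watch_responses, threshold):
    favorable = sum(1 for _name, visible in watch_responses if visible)
    return favorable >= threshold

responses = [("slice-0", True), ("slice-1", True),
             ("slice-2", False), ("slice-3", True)]
print(can_delete_cached_object(responses, threshold=3))  # prints True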
The slice name666utilized in the slice access request664and the slice access response668includes a slice index field670and a vault source name field672. The slice index field670includes a slice index entry that corresponds to a pillar number of a set of pillar numbers associated with a pillar width dispersal parameter utilized in a dispersed storage error coding function to encode data to produce encoded data slices. The vault source name field672includes a source name field674and a segment ID field676. The segment ID field676includes a segment ID entry corresponding to each data segment of the plurality of data segments that comprise the data. The source name field674includes a vault identifier (ID) field678, a generation field680, and an object ID field682. The vault ID field678includes a vault ID entry that identifies a vault of the dispersed storage system associated with the requesting entity. The generation field680includes a generation entry corresponding to a generation of a data set associated with the vault. Multiple generations of data may be utilized for the vault to distinguish major divisions of a large amount of data. The object ID field682includes an object ID entry that is associated with a data name corresponding to the data. As a specific example of storing the data, the DS processing module422receives the data and the data name associated with the data. The DS processing module422segments the data to produce a plurality of data segments in accordance with a segmentation scheme. The DS processing module422generates a set of slice names666for each data segment of the plurality of data segments. The generating includes a series of steps. In a first step, the DS processing module422identifies a vault ID based on the request. For example, the DS processing module performs a registry lookup to identify the vault ID based on a requesting entity ID associated with the request. In a second step, the DS processing module422generates a generation field entry based on utilization of other generation entries associated with the vault ID. The DS processing module selects a generation ID associated with a generation that is not yet full and is just greater than a previous generation ID corresponding to a generation that is full. For example, the DS processing module422accesses a generation utilization list of a registry to identify a fullness level associated with each potential generation ID to identify a generation ID that is not full and is just one generation ID larger than a previous generation ID that is full. In a third step, the DS processing module422generates an object ID entry. The generating includes at least one of generating an object ID based on a random number, performing a deterministic function (e.g., a hashing function) on the data name to produce the object ID entry, and performing the deterministic function on at least a portion of the data to produce the object ID entry. In a fourth step, for each data segment, the DS processing module422generates a set of slice names666where each slice name666includes a slice index entry corresponding to a pillar number of the slice name, the vault ID entry, the generation entry, the object ID entry, and a segment number corresponding to the data segment (e.g., starting at zero and increasing by one for each data segment). 
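The slice name layout just described might be represented as in the following sketch; the field types are illustrative assumptions, since the text names the fields but not a concrete encoding. The storing example continues after the sketch.

# Illustrative slice name structure (hypothetical Python representation).
from dataclasses import dataclass

@dataclass(frozen=True)
class SliceName:
    slice_index: int  # pillar number within the set (slice index field)
    vault_id: int     # identifies the vault associated with the requesting entity
    generation: int   # generation of the data set within the vault
    object_id: int    # derived from the data name, e.g., via a deterministic function
    segment_id: int   # identifies the data segment within the data object

def slice_names_for_segment(pillar_width, vault_id, generation, object_id, segment_id):
    """Generate one slice name per pillar for a single data segment."""
    return [SliceName(i, vault_id, generation, object_id, segment_id)
            for i in range(pillar_width)]

names = slice_names_for_segment(pillar_width=4, vault_id=7, generation=1,
                                object_id=0xABCD, segment_id=0)
print(names[2])  # SliceName(slice_index=2, vault_id=7, generation=1, ...)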
Having generated the set of slice names666, the DS processing module422encodes the plurality of data segments to produce a plurality of sets of encoded data slices using a dispersed storage error coding function. The DS processing module422generates one or more sets of slice access requests664that includes the sets of slice names666and the plurality of sets of encoded data slices. The DS processing module422outputs the one or more sets of slice access requests664to the DS unit set424. As a specific example of retrieving the data, the DS processing module422receives the data name associated with the data for retrieval. The DS processing module422obtains one or more source names associated with the data. The obtaining includes identifying the vault ID (e.g., based on a registry lookup, a dispersed storage network (DSN) index lookup based on the data name), identifying the object ID (e.g., a DSN index lookup based on the data name, performing a deterministic function on the data name), and selecting one or more generation field entries. The selecting of the one or more generation field entries includes identifying a fullness level associated with each viable generation field entry and selecting the one or more generation field entries based on the fullness levels of each of the one or more generation field entries. For example, the DS processing module422accesses the generation utilization list and selects generation field entries associated with each generation that is full and a next generation ID that is not full where the next generation ID is one greater than a greatest generation ID of generation IDs associated with full generations. For each generation of the one or more generation field entries, the DS processing module422generates the one or more source names that includes the generation, the vault ID, and the object ID. Having generated the one or more source names, the DS processing module422generates, for each source name, one or more sets of slice names that includes the source name, a slice index, and a segment ID of the at least one data segment of the plurality of data segments. The DS processing module422generates one or more sets of slice access requests664that includes read slice requests and the one or more sets of slice names. The DS processing module422outputs the one or more sets of slice access requests664to the DS unit set424. The DS processing module422receives the slice access responses668from the DS unit set424and decodes the received slice access responses668using the dispersed storage error coding function to reproduce the data. The decoding includes utilizing at least a decode threshold number of favorable (e.g., successfully retrieved an encoded data slice) slice access responses668corresponding to a common data segment of a common generation. Alternatively, the DS processing module422attempts to retrieve a first data segment using multiple generation IDs to identify one generation ID of the multiple generation IDs associated with storage of the data for utilization of the one generation ID in retrieval of subsequent data segments. The method of operation is discussed in greater detail with reference toFIG.49B. FIG.49Bis a flowchart illustrating another example of accessing data. The method begins with step684where a processing module (e.g., a dispersed storage (DS) processing module) identifies a data object access within a dispersed storage network (DSN). 
The identifying may be based on receiving a request that includes one or more of a data name, a requester identifier (ID), a vault ID, an object ID, a source name, and data. The method continues at step686where the processing module identifies a vault ID based on the data object. For example, the processing module performs a vault ID lookup based on the data name. As another example, the processing module performs the vault ID lookup based on the requester ID. The method continues at step688where the processing module obtains an object ID based on the data object. The obtaining further includes obtaining the object ID based on one or more of a request type, the data, receiving the object ID, and the data name. For example, for a read request, the processing module accesses a DSN index (e.g., a directory) to retrieve the object ID based on the data name. As another example, for a write request, the processing module generates the object ID based on a deterministic function applied to the data name. The method continues at step690where the processing module selects at least one generation ID based on generation status. The generation status indicates availability of one or more generation IDs (e.g., full or not full). The selecting further includes selecting the generation ID based on one or more of the request type and the generation status. For example, for a read request, the processing module selects each generation ID associated with a generation status that indicates that the generation is full and selects a generation ID that is one greater than a largest generation ID of one or more generation IDs that are full, if any. As another example, for a write request, the processing module selects a lowest generation ID associated with a generation that is not full. For each generation ID, the method continues at step692where the processing module generates at least one set of slice names using the vault ID, generation ID, and the object ID. For each set of slice names, the method continues at step694where the processing module generates a set of slice access requests that includes the set of slice names. The generating may further be based on the request type. For example, for a write request, the processing module includes a set of slice names and includes a set of encoded data slices that are encoded, using a dispersed storage error coding function, from a corresponding data segment of the data. As another example, for a read request, the processing module includes a set of slice names. The method continues at step696where the processing module accesses the DSN utilizing the set of slice access requests. The accessing includes outputting the set of slice access requests to the DSN. The accessing may further be based on the request type. For example, for the read request type, the processing module receives slice access responses and decodes favorable slice access responses using the dispersed storage error coding function to reproduce one or more data segments of the data. As another example, for the write request type, the processing module confirms storage of the data object when receiving a write threshold number of favorable slice access responses from the DSN for each data segment of a plurality of data segments of the data. As may be used herein, the terms “substantially” and “approximately” provides an industry-accepted tolerance for its corresponding term and/or relativity between items. 
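The generation selection rule described above for read and write requests might look like the following sketch, under a hypothetical dictionary representation of the generation status (generation ID mapped to True when that generation is full); the function name and example values are illustrative.

# Illustrative generation selection sketch (hypothetical representation).
def select_generations(request_type, generation_status):
    full = sorted(g for g, is_full in generation_status.items() if is_full)
    not_full = sorted(g for g, is_full in generation_status.items() if not is_full)
    if request_type == "read":
        # All full generations plus the next generation after the largest full one.
        selected = list(full)
        if full:
            selected.append(full[-1] + 1)
        elif not_full:
            selected.append(not_full[0])
        return selected
    if request_type == "write":
        # Lowest generation that is not yet full.
        return [not_full[0]] if not_full else []
    raise ValueError("unsupported request type")

status = {1: True, 2: True, 3: False, 4: False}
print(select_generations("read", status))   # prints [1, 2, 3]
print(select_generations("write", status))  # prints [3]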
Such an industry-accepted tolerance ranges from less than one percent to fifty percent and corresponds to, but is not limited to, component values, integrated circuit process variations, temperature variations, rise and fall times, and/or thermal noise. Such relativity between items ranges from a difference of a few percent to magnitude differences. As may also be used herein, the term(s) “operably coupled to”, “coupled to”, and/or “coupling” includes direct coupling between items and/or indirect coupling between items via an intervening item (e.g., an item includes, but is not limited to, a component, an element, a circuit, and/or a module) where, for indirect coupling, the intervening item does not modify the information of a signal but may adjust its current level, voltage level, and/or power level. As may further be used herein, inferred coupling (i.e., where one element is coupled to another element by inference) includes direct and indirect coupling between two items in the same manner as “coupled to”. As may even further be used herein, the term “operable to” or “operably coupled to” indicates that an item includes one or more of power connections, input(s), output(s), etc., to perform, when activated, one or more its corresponding functions and may further include inferred coupling to one or more other items. As may still further be used herein, the term “associated with”, includes direct and/or indirect coupling of separate items and/or one item being embedded within another item. As may be used herein, the term “compares favorably”, indicates that a comparison between two or more items, signals, etc., provides a desired relationship. For example, when the desired relationship is that signal1has a greater magnitude than signal2, a favorable comparison may be achieved when the magnitude of signal1is greater than that of signal2or when the magnitude of signal2is less than that of signal1. As may also be used herein, the terms “processing module”, “processing circuit”, and/or “processing unit” may be a single processing device or a plurality of processing devices. Such a processing device may be a microprocessor, micro-controller, digital signal processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on hard coding of the circuitry and/or operational instructions. The processing module, module, processing circuit, and/or processing unit may be, or further include, memory and/or an integrated memory element, which may be a single memory device, a plurality of memory devices, and/or embedded circuitry of another processing module, module, processing circuit, and/or processing unit. Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, and/or any device that stores digital information. Note that if the processing module, module, processing circuit, and/or processing unit includes more than one processing device, the processing devices may be centrally located (e.g., directly coupled together via a wired and/or wireless bus structure) or may be distributedly located (e.g., cloud computing via indirect coupling via a local area network and/or a wide area network). 
Further note that if the processing module, module, processing circuit, and/or processing unit implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory and/or memory element storing the corresponding operational instructions may be embedded within, or external to, the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry. Still further note that, the memory element may store, and the processing module, module, processing circuit, and/or processing unit executes, hard coded and/or operational instructions corresponding to at least some of the steps and/or functions illustrated in one or more of the Figures. Such a memory device or memory element can be included in an article of manufacture. The present invention has been described above with the aid of method steps illustrating the performance of specified functions and relationships thereof. The boundaries and sequence of these functional building blocks and method steps have been arbitrarily defined herein for convenience of description. Alternate boundaries and sequences can be defined so long as the specified functions and relationships are appropriately performed. Any such alternate boundaries or sequences are thus within the scope and spirit of the claimed invention. Further, the boundaries of these functional building blocks have been arbitrarily defined for convenience of description. Alternate boundaries could be defined as long as the certain significant functions are appropriately performed. Similarly, flow diagram blocks may also have been arbitrarily defined herein to illustrate certain significant functionality. To the extent used, the flow diagram block boundaries and sequence could have been defined otherwise and still perform the certain significant functionality. Such alternate definitions of both functional building blocks and flow diagram blocks and sequences are thus within the scope and spirit of the claimed invention. One of average skill in the art will also recognize that the functional building blocks, and other illustrative blocks, modules and components herein, can be implemented as illustrated or by discrete components, application specific integrated circuits, processors executing appropriate software and the like or any combination thereof. The present invention may have also been described, at least in part, in terms of one or more embodiments. An embodiment of the present invention is used herein to illustrate the present invention, an aspect thereof, a feature thereof, a concept thereof, and/or an example thereof. A physical embodiment of an apparatus, an article of manufacture, a machine, and/or of a process that embodies the present invention may include one or more of the aspects, features, concepts, examples, etc. described with reference to one or more of the embodiments discussed herein. Further, from figure to figure, the embodiments may incorporate the same or similarly named functions, steps, modules, etc. that may use the same or different reference numbers and, as such, the functions, steps, modules, etc. may be the same or similar functions, steps, modules, etc. or different ones. 
While the transistors in the above described figure(s) is/are shown as field effect transistors (FETs), as one of ordinary skill in the art will appreciate, the transistors may be implemented using any type of transistor structure including, but not limited to, bipolar, metal oxide semiconductor field effect transistors (MOSFET), N-well transistors, P-well transistors, enhancement mode, depletion mode, and zero voltage threshold (VT) transistors. Unless specifically stated to the contrary, signals to, from, and/or between elements in a figure of any of the figures presented herein may be analog or digital, continuous time or discrete time, and single-ended or differential. For instance, if a signal path is shown as a single-ended path, it also represents a differential signal path. Similarly, if a signal path is shown as a differential path, it also represents a single-ended signal path. While one or more particular architectures are described herein, other architectures can likewise be implemented that use one or more data buses not expressly shown, direct connectivity between elements, and/or indirect coupling between other elements as recognized by one of average skill in the art. The term “module” is used in the description of the various embodiments of the present invention. A module includes a processing module, a functional block, hardware, and/or software stored on memory for performing one or more functions as may be described herein. Note that, if the module is implemented via hardware, the hardware may operate independently and/or in conjunction with software and/or firmware. As used herein, a module may contain one or more sub-modules, each of which may be one or more modules. While particular combinations of various functions and features of the present invention have been expressly described herein, other combinations of these features and functions are likewise possible. The present invention is not limited by the particular examples disclosed herein and expressly incorporates these other combinations.
232,652
11860736
DETAILED DESCRIPTION Aspects of the present disclosure introduce techniques for resumable logical map B+ tree node deletion. A snapshot hierarchy may include a plurality of snapshots connected in a branch tree structure. Each snapshot contained in the snapshot hierarchy may have a parent-child relationship with one or more snapshots in the snapshot hierarchy. A parent snapshot may be a snapshot created during a first backup session, while a child snapshot may be a snapshot created from the parent snapshot in a subsequent backup session, and accordingly, be linked to the parent snapshot. The child snapshot may be created to capture the differences from the parent snapshot when data is modified. The child snapshot then becomes a parent snapshot during an additional backup session where another child snapshot is created, thereby creating the hierarchy of snapshots. Metadata for each of the snapshots in the snapshot hierarchy may be maintained in several compute nodes of a copy-on-write (COW) B+ tree mapping architecture. As mentioned, COW techniques may be used to build up metadata mapping tables for each snapshot in the snapshot hierarchy. When a COW approach is taken and a new child snapshot logical map is to be created, instead of copying the entire B+ tree of the parent snapshot, the logical map created for the child snapshot shares metadata of the logical map B+ tree created for the parent snapshot. Accordingly, one or more nodes of the B+ tree belonging to the parent snapshot may be shared with a child snapshot. Overall, the COW B+ tree may be a single COW B+ tree built up for logical maps of each of the snapshots. Each snapshot may have one root node in the COW B+ tree for its logical map. When a snapshot is deleted, nodes exclusively owned by the snapshot are removed from the logical map B+ tree. To verify a node is exclusively owned by the snapshot, the system verifies that the node is not shared with a parent or child snapshot. The system can efficiently verify that the node is not shared with a parent snapshot when a sequence number (SN) associated with the node is equal to or larger than a minimum SN assigned to the snapshot. Alternatively, the system may verify the node is not shared with the parent by traversing the logical map B+ tree to confirm that a logical block address (LBA) associated with the node is not found in a parent snapshot. Similarly, to verify the node is not shared with a child snapshot, the system can traverse the logical map B+ tree to confirm that the LBA associated with the node is not found in a child snapshot. In deleting a snapshot, the system may traverse the logical map B+ tree in a top-down manner to delete the nodes exclusively owned by the snapshot and skip nodes that are not exclusively owned by the snapshot. With a top-down traversal, the system may first process a root node associated with the snapshot. The system proceeds to process an index node and the leaf nodes pointed to by the index node, before moving to a next index node, and so on. An exclusively owned leaf node may be deleted when the leaf node is processed. An exclusively owned index node may be deleted when all of its exclusively owned leaf nodes have been deleted. A root node may be deleted when all of its exclusively owned index nodes have been deleted. One approach for resuming logical map B+ tree deletion is to persistently store identifier information (e.g., a physical address) of processed nodes during logical map B+ tree deletion.
In crash recovery, the logical map B+ tree deletion is restarted and the system skips nodes identified in the persisted list of processed nodes. This may avoid accessing of dangling pointers; however, this approach has high overhead in persisting identifier information of every processed node. The system also needs to filter out invalid child nodes for each node in the lookup path when restarting the logical map B+ tree deletion, which may be inefficient. According to aspects of the present disclosure, a node path cursor is updated and persisted as the system traverses the logical map B+ tree. The node path cursor stores information about a latest processed node, instead of all processed nodes. The node path cursor may be an array of the physical address (e.g., PBA) of a parent node and a pointer index of the child node being processed. The node path cursor may be stored as tuples of <parent node physical address, child node index>. When a node is deleted, the node path cursor may be set to the physical address of the deleted node and a value indicating the deletion. The value may be a value different than a child node index, such as −1. Thus, when a node is deleted, the node path cursor may be stored as the tuple <node physical address, −1>. In some embodiments, the node path cursor may contain an array of tuples including a tuple for each traversed layer of the logical map B+ tree. For example, after deleting a leaf node the node path cursor may be set to <PBA of root node, pointer to index node>, <PBA of index node, pointer to leaf node>, <PBA of leaf node, −1>. In some embodiments, the node path cursor includes the physical address of the root node and a pointer to a child node per logical map B+ tree layer. For example, after deleting the leaf node the node path cursor may be set to <PBA of root node, pointer to index node>, <pointer to leaf node>, <−1>. The node path cursor may be persisted (i.e., stored to a persistent non-volatile storage) when a node is deleted. In some embodiments, the node path cursor is persisted when a transaction is committed. Committing a transaction may refer to actually executing a change to the data in persistent storage. For snapshot deletion, committing a transaction may refer to deleting a page, associated with a node, from storage. In some embodiments, the node path cursor is held in memory and lazily flushed to the persistent storage location. For example, a transaction may not be committed until a threshold number of nodes (e.g., a commit threshold) are to be deleted in a transaction. The node path cursor may be kept in memory until the commit threshold is reached and a transaction is committed. In this way, the I/O cost to update the node path cursor can be amortized over multiple transactions. When a crash or other failure occurs, the system uses the node path cursor to resume snapshot deletion starting from the last processed node indicated by the node path cursor. Thus, accessing of dangling nodes is avoided because the system will assume the previous nodes have already been processed and will begin from the next node after the node indicated in the node path cursor. In addition, because the size of the node path cursor is small, the overhead is reduced. Though certain aspects described herein are described with respect to snapshot B+ trees, the aspects may be applicable to any suitable ordered data structure. FIG.1is a diagram illustrating an example computing environment100in which embodiments may be practiced.
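Before turning to the computing environment of FIG.1, the node path cursor just described might be sketched as follows. The class name, the JSON persistence format, and the example physical addresses are illustrative assumptions; an actual implementation would persist the cursor alongside the transaction commit and batch the flushes against a commit threshold, as noted above.

# Illustrative node path cursor sketch (hypothetical types and persistence).
import json
import os
import tempfile

DELETED = -1  # sentinel child index marking a deleted node

class NodePathCursor:
    def __init__(self, path):
        self.path = path   # persistent location of the cursor
        self.entries = []  # one (parent PBA, child index) tuple per traversed level

    def record(self, parent_pba, child_index):
        self.entries.append((parent_pba, child_index))

    def mark_deleted(self, node_pba):
        self.entries.append((node_pba, DELETED))

    def persist(self):
        # In practice this would be flushed lazily when a transaction commits.
        with open(self.path, "w") as f:
            json.dump(self.entries, f)

    def load(self):
        if os.path.exists(self.path):
            with open(self.path) as f:
                self.entries = [tuple(e) for e in json.load(f)]
        return self.entries

# Example: record the path root -> index node -> leaf node, then delete the leaf.
cursor = NodePathCursor(os.path.join(tempfile.gettempdir(), "node_path_cursor.json"))
cursor.record(parent_pba=0x100, child_index=2)  # root node -> third index node
cursor.record(parent_pba=0x2A0, child_index=0)  # index node -> first leaf node
cursor.mark_deleted(node_pba=0x3B0)             # leaf node deleted
cursor.persist()
print(NodePathCursor(cursor.path).load())       # cursor survives a restart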
As shown, computing environment100may include a distributed object-based datastore, such as a software-based “virtual storage area network” (VSAN) environment, VSAN116, that leverages the commodity local storage housed in or directly attached (hereinafter, use of the term “housed” or “housed in” may be used to encompass both housed in or otherwise directly attached) to host(s)102of a host cluster101to provide an aggregate object storage to virtual machines (VMs)105running on the host(s)102. The local commodity storage housed in the hosts102may include combinations of solid state drives (SSDs) or non-volatile memory express (NVMe) drives, magnetic or spinning disks or slower/cheaper SSDs, or other types of storages. Additional details of VSAN are described in U.S. Pat. No. 10,509,708, the entire contents of which are incorporated by reference herein for all purposes, and U.S. patent application Ser. No. 17/181,476, the entire contents of which are incorporated by reference herein for all purposes. As described herein, VSAN116is configured to store virtual disks of VMs105as data blocks in a number of physical blocks, each physical block having a PBA that indexes the physical block in storage. VSAN module108may create an “object” for a specified data block by backing it with physical storage resources of an object store118(e.g., based on a defined policy). VSAN116may be a two-tier datastore, storing the data blocks in both a smaller, but faster, performance tier and a larger, but slower, capacity tier. The data in the performance tier may be stored in a first object (e.g., a data log that may also be referred to as a MetaObj120) and when the size of data reaches a threshold, the data may be written to the capacity tier (e.g., in full stripes, as described herein) in a second object (e.g., CapObj122) in the capacity tier. SSDs may serve as a read cache and/or write buffer in the performance tier in front of slower/cheaper SSDs (or magnetic disks) in the capacity tier to enhance I/O performance. In some embodiments, both performance and capacity tiers may leverage the same type of storage (e.g., SSDs) for storing the data and performing the read/write operations. Additionally, SSDs may include different types of SSDs that may be used in different tiers in some embodiments. For example, the data in the performance tier may be written on a single-level cell (SLC) type of SSD, while the capacity tier may use a quad-level cell (QLC) type of SSD for storing the data. Each host102may include a storage management module (referred to herein as a VSAN module108) in order to automate storage management workflows (e.g., create objects in MetaObj120and CapObj122of VSAN116, etc.) and provide access to objects (e.g., handle I/O operations to objects in MetaObj120and CapObj122of VSAN116, etc.) based on predefined storage policies specified for objects in object store118. A virtualization management platform144is associated with host cluster101. Virtualization management platform144enables an administrator to manage the configuration and spawning of VMs105on various hosts102. As illustrated inFIG.1, each host102includes a virtualization layer or hypervisor106, a VSAN module108, and hardware110(which includes the storage (e.g., SSDs) of a host102). Through hypervisor106, a host102is able to launch and run multiple VMs105. Hypervisor106, in part, manages hardware110to properly allocate computing resources (e.g., processing power, random access memory (RAM), etc.) for each VM105. 
Each hypervisor106, through its corresponding VSAN module108, provides access to storage resources located in hardware110(e.g., storage) for use as storage for virtual disks (or portions thereof) and other related files that may be accessed by any VM105residing in any of hosts102in host cluster101. VSAN module108may be implemented as a “VSAN” device driver within hypervisor106. In such an embodiment, VSAN module108may provide access to a conceptual “VSAN” through which an administrator can create a number of top-level “device” or namespace objects that are backed by object store118of VSAN116. By accessing application programming interfaces (APIs) exposed by VSAN module108, hypervisor106may determine all the top-level file system objects (or other types of top-level device objects) currently residing in VSAN116. Each VSAN module108(through a cluster level object management or “CLOM” sub-module130) may communicate with other VSAN modules108of other hosts102to create and maintain an in-memory metadata database128(e.g., maintained separately but in synchronized fashion in memory114of each host102) that may contain metadata describing the locations, configurations, policies and relationships among the various objects stored in VSAN116. Specifically, in-memory metadata database128may serve as a directory service that maintains a physical inventory of VSAN116environment, such as the various hosts102, the storage resources in hosts102(e.g., SSD, NVMe drives, magnetic disks, etc.) housed therein, and the characteristics/capabilities thereof, the current state of hosts102and their corresponding storage resources, network paths among hosts102, and the like. In-memory metadata database128may further provide a catalog of metadata for objects stored in MetaObj120and CapObj122of VSAN116(e.g., what virtual disk objects exist, what component objects belong to what virtual disk objects, which hosts102serve as “coordinators” or “owners” that control access to which objects, quality of service requirements for each object, object configurations, the mapping of objects to physical storage locations, etc.). In-memory metadata database128is used by VSAN module108on host102, for example, when a user (e.g., an administrator) first creates a virtual disk for VM105as well as when VM105is running and performing I/O operations (e.g., read or write) on the virtual disk. VSAN module108, by querying its local copy of in-memory metadata database128, may be able to identify a particular file system object (e.g., a virtual machine file system (VMFS) file system object) stored in object store118that may store a descriptor file for the virtual disk. The descriptor file may include a reference to a virtual disk object that is separately stored in object store118of VSAN116and conceptually represents the virtual disk (also referred to herein as composite object). The virtual disk object may store metadata describing a storage organization or configuration for the virtual disk (sometimes referred to herein as a virtual disk “blueprint”) that suits the storage requirements or service level agreements (SLAs) in a corresponding storage profile or policy (e.g., capacity, availability, IOPs, etc.) generated by a user (e.g., an administrator) when creating the virtual disk. 
The metadata accessible by VSAN module108in in-memory metadata database128for each virtual disk object provides a mapping to or otherwise identifies a particular host102in host cluster101that houses the physical storage resources (e.g., slower/cheaper SSDs, magnetics disks, etc.) that actually stores the physical disk of host102. In some embodiments, VSAN module108is configured to delete a snapshot logical map B+ tree from metadata database128. VSAN module108may store a node path cursor148in metadata database128. Node path cursor148includes information about a last processed node in the logical map B+ tree. In some embodiments, node path cursor148is an array of the physical address (e.g., PBA) of a parent node and a pointer to a child node for each traversed level of the logical map B+ tree to the child node being processed. Node path cursor148may be stored as the tuple <PBA of parent node, child node index> for each level. In some embodiments, node path cursor148is an array of the physical address of the root node and a pointer to a child node at each traversed level of the logical map B+ tree to the child node being processed. When a node is deleted, node path cursor148may be set to the physical address of the deleted node and a value indicating the deletion. The value may be a value different than a child node index, such as a value of −1, where a child node index may be 0 or 1. Thus, when a node is deleted, node path cursor148may be stored as the tuple <physical address of deleted node, −1>. In some embodiments, VSAN module108stores node path cursor148in memory114and lazily flushes node path cursor to metadata database128in VSAN116. When a crash or other failure occurs, VSAN module108may access node path cursor148to resume snapshot deletion from the last processed node in the logical map B+ tree, indicated by node path cursor148. Various sub-modules of VSAN module108, including, in some embodiments, CLOM sub-module130, distributed object manager (DOM) sub-module134, zDOM sub-module132, and/or local storage object manager (LSOM) sub-module136, handle different responsibilities. CLOM sub-module130generates virtual disk blueprints during creation of a virtual disk by a user (e.g., an administrator) and ensures that objects created for such virtual disk blueprints are configured to meet storage profile or policy requirements set by the user. In addition to being accessed during object creation (e.g., for virtual disks), CLOM sub-module130may also be accessed (e.g., to dynamically revise or otherwise update a virtual disk blueprint or the mappings of the virtual disk blueprint to actual physical storage in object store118) on a change made by a user to the storage profile or policy relating to an object or when changes to the cluster or workload result in an object being out of compliance with a current storage profile or policy. In one embodiment, if a user creates a storage profile or policy for a virtual disk object, CLOM sub-module130applies a variety of heuristics and/or distributed algorithms to generate a virtual disk blueprint that describes a configuration in host cluster101that meets or otherwise suits a storage policy. The storage policy may define attributes such as a failure tolerance, which defines the number of host and device failures that a VM can tolerate. A redundant array of inexpensive disks (RAID) configuration may be defined to achieve desired redundancy through mirroring and access performance through erasure coding (EC). 
EC is a method of data protection in which each copy of a virtual disk object is partitioned into stripes, expanded and encoded with redundant data pieces, and stored across different hosts102of VSAN116datastore. For example, a virtual disk blueprint may describe a RAID 1 configuration with two mirrored copies of the virtual disk (e.g., mirrors), each of which is further striped in a RAID 0 configuration. Each stripe may contain a plurality of data blocks (e.g., four data blocks in a first stripe). In RAID 5 and RAID 6 configurations, each stripe may also include one or more parity blocks. Accordingly, CLOM sub-module130may be responsible for generating a virtual disk blueprint describing a RAID configuration. CLOM sub-module130may communicate the blueprint to its corresponding DOM sub-module134, for example, through zDOM sub-module132. DOM sub-module134may interact with objects in VSAN116to implement the blueprint by allocating or otherwise mapping component objects of the virtual disk object to physical storage locations within various hosts102of host cluster101. DOM sub-module134may also access in-memory metadata database128to determine the hosts102that store the component objects of a corresponding virtual disk object and the paths by which those hosts102are reachable in order to satisfy the I/O operation. Some or all of metadata database128(e.g., the mapping of the object to physical storage locations, etc.) may be stored with the virtual disk object in object store118. When handling an I/O operation from VM105, due to the hierarchical nature of virtual disk objects in certain embodiments, DOM sub-module134may further communicate across the network (e.g., a local area network (LAN), or a wide area network (WAN)) with a different DOM sub-module134in a second host102(or hosts102) that serves as the coordinator for the particular virtual disk object that is stored in local storage112of the second host102(or hosts102) and which is the portion of the virtual disk that is subject to the I/O operation. If VM105issuing the I/O operation resides on a host102that is also different from the coordinator of the virtual disk object, DOM sub-module134of host102running VM105may also communicate across the network (e.g., LAN or WAN) with the DOM sub-module134of the coordinator. DOM sub-modules134may also similarly communicate amongst one another during object creation (and/or modification). Each DOM sub-module134may create its respective objects, allocate local storage112to such objects, and advertise its objects in order to update in-memory metadata database128with metadata regarding the object. In order to perform such operations, DOM sub-module134may interact with a local storage object manager (LSOM) sub-module136that serves as the component in VSAN module108that may actually drive communication with the local SSDs (and, in some cases, magnetic disks) of its host102. In addition to allocating local storage112for virtual disk objects (as well as storing other metadata, such as policies and configurations for composite objects for which its node serves as coordinator, etc.), LSOM sub-module136may additionally monitor the flow of I/O operations to local storage112of its host102, for example, to report whether a storage resource is congested. 
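For illustration only, a virtual disk blueprint of the kind CLOM sub-module130might generate for the RAID 1 over RAID 0 example above can be pictured as a nested structure. The dictionary shape, field names, and host names in the sketch below are assumptions made for this example, not an actual on-disk or API format.

```python
# Hypothetical representation of a virtual disk "blueprint": a RAID 1 object
# with two mirrors, each mirror striped RAID 0 across components placed on
# different hosts. The dictionary layout is an illustrative assumption only.
blueprint = {
    "type": "RAID_1",
    "children": [
        {"type": "RAID_0", "children": [
            {"type": "component", "host": "host-01", "disk": "ssd-0"},
            {"type": "component", "host": "host-02", "disk": "ssd-1"},
        ]},
        {"type": "RAID_0", "children": [
            {"type": "component", "host": "host-03", "disk": "ssd-2"},
            {"type": "component", "host": "host-04", "disk": "ssd-3"},
        ]},
    ],
}

def hosts_in_blueprint(node):
    """Collect the hosts a DOM sub-module would have to contact for this object."""
    if node["type"] == "component":
        return {node["host"]}
    hosts = set()
    for child in node["children"]:
        hosts |= hosts_in_blueprint(child)
    return hosts

print(sorted(hosts_in_blueprint(blueprint)))  # ['host-01', 'host-02', 'host-03', 'host-04']
```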
zDOM sub-module132may be responsible for caching received data in the performance tier of VSAN116(e.g., as a virtual disk object in MetaObj120) and writing the cached data as full stripes on one or more disks (e.g., as virtual disk objects in CapObj122). To reduce I/O overhead during write operations to the capacity tier, zDOM may require a full stripe (also referred to herein as a full segment) before writing the data to the capacity tier. Data striping is the technique of segmenting logically sequential data, such as the virtual disk. Each stripe may contain a plurality of data blocks; thus, a full stripe write may refer to a write of data blocks that fill a whole stripe. A full stripe write operation may be more efficient compared to a partial stripe write, thereby increasing overall I/O performance. For example, zDOM sub-module132may do this full stripe writing to minimize a write amplification effect. Write amplification refers to the phenomenon that occurs in, for example, SSDs, in which the amount of data written to the memory device is greater than the amount of data that host102requested to be stored. Write amplification may differ in different types of writes. Lower write amplification may increase performance and lifespan of an SSD. In some embodiments, zDOM sub-module132performs other datastore procedures, such as data compression and hash calculation, which may result in substantial improvements, for example, in garbage collection, deduplication, snapshotting, etc. (some of which may be performed locally by LSOM sub-module136ofFIG.1). In some embodiments, zDOM sub-module132stores and accesses an extent map142. Extent map142provides a mapping of logical block addresses (LBAs) to physical block addresses (PBAs), or LBAs to middle block addresses (MBAs) to PBAs. Each physical block having a corresponding PBA may be referenced by one or more LBAs. In certain embodiments, for each LBA, VSAN module108may store, in a logical map of extent map142, at least a corresponding PBA. The logical map may include an LBA to PBA mapping table. For example, the logical map may store tuples of <LBA, PBA>, where the LBA is the key and the PBA is the value. As used herein, a key is an identifier of data and a value is either the data itself or a pointer to a location (e.g., on disk) of the data associated with the identifier. In some embodiments, the logical map further includes a number of corresponding data blocks stored at a physical address that starts from the PBA (e.g., tuples of <LBA, PBA, number of blocks>, where LBA is the key). In some embodiments where the data blocks are compressed, the logical map further includes the size of each data block compressed in sectors and a compression size (e.g., tuples of <LBA, PBA, number of blocks, number of sectors, compression size>, where LBA is the key). In certain other embodiments, for each LBA, VSAN module108may store, in a logical map, at least a corresponding MBA, which further maps to a PBA in a middle map of extent map142. In other words, extent map142may be a two-layer mapping architecture. A first map in the mapping architecture, e.g., the logical map, may include an LBA to MBA mapping table, while a second map, e.g., the middle map, may include an MBA to PBA mapping table. For example, the logical map may store tuples of <LBA, MBA>, where the LBA is the key and the MBA is the value, while the middle map may store tuples of <MBA, PBA>, where the MBA is the key and the PBA is the value. 
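A minimal sketch of the two-layer extent map described above follows. The Python dictionaries and the resolve() helper are illustrative assumptions; an actual implementation would keep these maps in B+ trees rather than in-memory dictionaries, and the single-layer variant would map LBAs directly to PBAs.

```python
# Sketch of extent map 142 as a two-layer mapping (assumed layout, for
# illustration only). In the single-layer variant, logical_map would store
# <LBA, PBA> tuples directly.
logical_map = {          # LBA -> MBA
    0x1000: 0x50,
    0x1001: 0x51,
}
middle_map = {           # MBA -> PBA
    0x50: 0x9A00,
    0x51: 0x9A01,
}

def resolve(lba):
    """Return the PBA referenced by an LBA, or None if the LBA is unmapped."""
    mba = logical_map.get(lba)
    if mba is None:
        return None
    return middle_map.get(mba)

assert resolve(0x1000) == 0x9A00
assert resolve(0x2000) is None
```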
According to the information stored in the logical map, VSAN module108may use the logical map to determine which PBA is referenced by an LBA. Logical maps may also be used in snapshot mapping architecture. Modern storage platforms, including VSAN116, may enable snapshot features for backup, archival, or data protection purposes. Snapshots provide the ability to capture a point-in-time state and data of a VM105not only to allow data to be recovered in the event of failure but also to restore it to known working points. Snapshots may capture VMs'105storage, memory, and other devices, such as virtual network interface cards (NICs), at a given point in time. Snapshots do not require an initial copy, as they are not stored as physical copies of data blocks (at least initially), but rather as pointers to the data blocks that existed when the snapshot was created. Because of this physical relationship, a snapshot may be maintained on the same storage array as the original data. As mentioned, snapshots collected over two or more backup sessions may create a snapshot hierarchy where snapshots are connected in a branch tree structure with one or more branches. Snapshots in the hierarchy have parent-child relationships with one or more other snapshots in the hierarchy. In linear processes, each snapshot has one parent snapshot and one child snapshot, except for the last snapshot, which has no child snapshots. Each parent snapshot may have more than one child snapshot. Additional snapshots in the snapshot hierarchy may be created by reverting to the current parent snapshot or to any parent or child snapshot in the snapshot tree to create more snapshots from that snapshot. Each time a snapshot is created by reverting to any parent or child snapshot in the snapshot tree, a new branch in the branch tree structure is created. FIG.2is a block diagram illustrating an example snapshot hierarchy200, according to an example embodiment of the present disclosure. As shown inFIG.2, seven snapshots may exist in snapshot hierarchy200. A first snapshot202may be a snapshot created first in time. First snapshot202may be referred to as a root snapshot of the snapshot hierarchy200, as first snapshot202does not have any parent snapshots. First snapshot202may further have two child snapshots: second snapshot204and fourth snapshot208. Fourth snapshot208may have been created after reverting back to first snapshot202in snapshot hierarchy200, thereby creating an additional branch from first snapshot202to fourth snapshot208. Second snapshot204and fourth snapshot208may be considered sibling snapshots. Second snapshot204and fourth snapshot208may not only be child snapshots of first snapshot202but also parent snapshots of other snapshots in snapshot hierarchy200. In particular, second snapshot204may be a parent of third snapshot206, and fourth snapshot208may be a parent of both fifth snapshot210and sixth snapshot212. Third snapshot206, fifth snapshot210, and sixth snapshot212may be considered grandchildren snapshots of first snapshot202. Third snapshot206and fifth snapshot210may not have any child snapshots; however, sixth snapshot212may have a child snapshot, seventh snapshot214. Seventh snapshot214may not have any child snapshots in snapshot hierarchy200. WhileFIG.2illustrates only seven snapshots in snapshot hierarchy200, any number of snapshots may be considered as part of a snapshot hierarchy. 
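The parent-child relationships of snapshot hierarchy200can be sketched as a simple parent-reference table; the snapshot names and the dictionary encoding below are hypothetical and serve only to mirror the relationships described forFIG.2.

```python
# Sketch of snapshot hierarchy 200 as parent references (encoding is an
# illustrative assumption, not an actual metadata format).
parent_of = {
    "snap1": None,      # first snapshot 202, the root (no parent)
    "snap2": "snap1",   # second snapshot 204
    "snap3": "snap2",   # third snapshot 206
    "snap4": "snap1",   # fourth snapshot 208, created after reverting to 202
    "snap5": "snap4",   # fifth snapshot 210
    "snap6": "snap4",   # sixth snapshot 212
    "snap7": "snap6",   # seventh snapshot 214
}

def children_of(snapshot):
    return sorted(s for s, p in parent_of.items() if p == snapshot)

print(children_of("snap1"))  # ['snap2', 'snap4'] -> two branches from the root
print(children_of("snap4"))  # ['snap5', 'snap6']
```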
Further, any parent-child relationships between the snapshots in the snapshot hierarchy may exist in addition to, or alternative to, the parent-child relationships illustrated inFIG.2. Each snapshot in the snapshot hierarchy may include its own logical map. In certain embodiments, the logical maps of the snapshots are stored as a B+ tree.FIG.3is a block diagram illustrating a B+ tree300data structure, according to an example embodiment of the present application. For illustrative purposes, B+ tree300may represent the logical map for the root snapshot (e.g., first snapshot202) in snapshot hierarchy200. As illustrated, B+ tree300may include a plurality of nodes connected in a branching tree structure. The top node of a B+ tree may be referred to as a root node, e.g., root node310, which has no parent node. The middle level of B+ tree300may include middle nodes320and322(also referred to as “index” nodes), which may have both a parent node and one or more child nodes. In the illustrated example, B+ tree300has three levels (e.g., level 0, level 1, and level 2), and only a single middle level (e.g., level 1), but other B+ trees may have a greater number of levels with more middle levels and thus greater heights. The bottom level of B+ tree300may include leaf nodes330-336, which do not have any child nodes. In the illustrated example, in total, B+ tree300has seven nodes and three levels. Root node310is in level two of the tree, middle (or index) nodes320and322are in level one of the tree, and leaf nodes330-336are in level zero of the tree. Each node of B+ tree300may store at least one tuple. In a B+ tree, leaf nodes may contain data values (or real data) and middle (or index) nodes may contain only indexing keys. For example, each of leaf nodes330-336may store at least one tuple that includes a key mapped to real data, or mapped to a pointer to real data, for example, stored in a memory or disk. As shown inFIG.3, these tuples may correspond to key-value pairs of <LBA, MBA> or <LBA, PBA> mappings for data blocks associated with each LBA. In some embodiments, each leaf node may also include a pointer to its sibling(s), which is not shown for simplicity of description. On the other hand, a tuple in the middle and/or root nodes of B+ tree300may store an indexing key and one or more pointers to its child node(s), which can be used to locate a given tuple that is stored in a child node. Because B+ tree300contains sorted tuples, a read operation such as a scan or a query to B+ tree300may be completed by traversing the B+ tree relatively quickly to read the desired tuple, or the desired range of tuples, based on the corresponding key or starting key. According to aspects described herein, each node of B+ tree300may be assigned a monotonically increasing SN. For example, a node with a higher SN may be a node which was created later in time than a node with a smaller SN. As shown inFIG.3, root node310may be assigned an SN of S1 as root node310belongs to the root snapshot (e.g., first snapshot202illustrated inFIG.2, created first in time) and was the first node created for the root snapshot. Other nodes of B+ tree300may similarly be assigned an SN, for example, index node320may be assigned S2, index node322may be assigned S3, leaf node330may be assigned S4, and so forth. As described in more detail below, the SNs assigned to each node in the B+ tree snapshot may be used during snapshot deletion to verify nodes that are exclusively owned by the snapshot or that are shared with a parent snapshot. 
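A rough sketch of nodes carrying monotonically increasing SNs, mirroring the shape of B+ tree300, might look as follows; the Node class layout and the key/value contents are assumptions made for illustration.

```python
# Illustrative sketch of B+ tree 300 nodes with monotonically increasing
# sequence numbers (SNs) assigned at creation time. The class layout is an
# assumption; only the SN assignment order mirrors the figure (S1..S7).
import itertools

_sn_counter = itertools.count(1)

class Node:
    def __init__(self, keys, children=None, values=None):
        self.sn = next(_sn_counter)      # S1, S2, ... assigned at creation time
        self.keys = keys                 # indexing keys (e.g., LBAs)
        self.children = children or []   # populated for root/index nodes
        self.values = values or []       # populated for leaf nodes (e.g., MBAs/PBAs)

root = Node(keys=[100])                              # S1 (root node 310)
idx_left = Node(keys=[50])                           # S2 (index node 320)
idx_right = Node(keys=[150])                         # S3 (index node 322)
leaves = [Node(keys=[k], values=[v])                 # S4..S7 (leaf nodes 330-336)
          for k, v in [(10, "pba-a"), (60, "pba-b"), (110, "pba-c"), (160, "pba-d")]]
idx_left.children = leaves[:2]
idx_right.children = leaves[2:]
root.children = [idx_left, idx_right]

print(root.sn, idx_left.sn, idx_right.sn, [leaf.sn for leaf in leaves])  # 1 2 3 [4, 5, 6, 7]
```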
In certain embodiments, the B+ tree logical map for each child snapshot in a snapshot hierarchy may be a COW B+ tree (also referred to as an append-only B+ tree). When a COW approach is taken and a child snapshot is created, instead of copying the entire B+ tree logical map of the parent snapshot, the child snapshot shares with the parent and, in some cases, ancestor snapshots, one or more extents by having a B+ tree index node, exclusively owned by the child, point to shared parent and/or ancestor B+ tree nodes. This COW approach for the creation of a child B+ tree logical map may be referred to as a “lazy copy approach” as the entire B+ tree logical map of the parent snapshot is not copied when creating the child B+ tree logical map. FIG.4is a block diagram illustrating a B+ tree data structure400using a COW approach for the creation of B+ tree logical maps for child snapshots in a snapshot hierarchy, according to an example embodiment of the present application. For illustrative purposes, B+ tree data structure400may represent the B+ tree logical maps for first snapshot202, second snapshot204, and third snapshot206in snapshot hierarchy200. Fourth snapshot208, fifth snapshot210, sixth snapshot212, and seventh snapshot214have been removed from the illustration ofFIG.4for simplicity. However, B+ tree logical maps for fourth snapshot208, fifth snapshot210, sixth snapshot212, and seventh snapshot214may exist in a similar manner as B+ tree logical maps described for first snapshot202, second snapshot204, and third snapshot206inFIG.4. As shown inFIG.4, index node320and leaf node334are shared by root node310of a first B+ tree logical map (e.g., associated with first snapshot202) and root node402of a second B+ tree logical map (e.g., associated with second snapshot204, which is a child snapshot of first snapshot202) generated from the first B+ tree logical map. This way, the two root nodes310and402may share the data of the tree without having to duplicate the entire data of the tree. More specifically, when the B+ tree logical map for second snapshot204was created, the B+ tree logical map for first snapshot202was copied and snapshot data for leaf node336was overwritten, while leaf nodes330,332, and334were unchanged. Accordingly, root node402in the B+ tree logical map for second snapshot204has a pointer to node320in the B+ tree logical map for first snapshot202for the shared nodes320,330, and332, but, instead of root node402having a pointer to index node322, index node412was created with a pointer to shared leaf node334(e.g., shared between first snapshot202and second snapshot204) and a pointer to new leaf node422, containing metadata for the overwritten data block. Similar methods may have been used to create the B+ tree logical map for third snapshot206illustrated inFIG.4. As mentioned, each node of each B+ tree logical map in B+ tree data structure400may be assigned a monotonically increasing SN for purposes of checking the metadata consistency of snapshots in B+ tree data structure400, and more specifically, in snapshot hierarchy200. Further, the B+ tree logical map for each snapshot in B+ tree data structure400may be assigned a min SN, where the min SN is equal to a smallest SN value among all nodes owned by the snapshot. For example, in the example B+ tree data structure400, first snapshot202may own nodes S1-S7; thus, the min SN assigned to the B+ tree logical map of first snapshot202may be equal to S1. 
Similarly, second snapshot204may own nodes S8-S10; thus, the min SN of the B+ tree logical map of second snapshot204may be equal to S8, and third snapshot206may own nodes S11-S15; thus, the min SN of the B+ tree logical map of third snapshot206may be equal to S11. Accordingly, each node in the B+ tree logical maps of child snapshots204and206whose SN is smaller than the min SN assigned to the B+ tree logical map of the snapshot may be a node that is not owned by the snapshot, but instead shared with an ancestor snapshot. For example, when traversing through the B+ tree logical map of second snapshot204, node320may be reached. Because node320is associated with an SN less than the min SN of second snapshot204(e.g., S2<S8), node320may be determined to be a node that is not owned by second snapshot204, but instead owned by first snapshot202and shared with second snapshot204. On the other hand, each node, in the B+ tree logical maps of child snapshots204and206, whose SN is larger than the min SN assigned to the snapshot, may be a node that is owned by the snapshot. For example, when traversing through the B+ tree logical map of second snapshot204, node412may be reached. Because node412is associated with an SN greater than the min SN of second snapshot204(e.g., S9>S8), node412may be determined to be a node that is owned by second snapshot204. Such rules may be true for all nodes belonging to each of the snapshot B+ tree logical maps created for a snapshot hierarchy, such as snapshot hierarchy200illustrated inFIG.2. In an illustrative example, to delete first snapshot202, VSAN module108first processes index nodes320and322before deleting root node310. Before deleting index node320, VSAN module108first processes leaf nodes330and332. Before deleting index node322, VSAN module108first processes leaf nodes334and336. In addition, before deleting any node, VSAN module108determines whether the node is exclusively owned by first snapshot202. As shown inFIG.4, only root node310, index node322, and leaf node336are exclusively owned by first snapshot202. VSAN module108skips the nodes that are not exclusively owned by first snapshot202. Accordingly, index node320(and its child leaf nodes330and332) are skipped. In the top-down traversal of the logical map B+ tree, VSAN module108sets node path cursor148to <PBA of root node310, 1>, where 1 is the pointer index in root node310to index node322. VSAN module108skips leaf node334and sets node path cursor148to the array of <PBA of root node310, 1> <PBA of index node322, 1>, where 1 is the pointer index in index node322to leaf node336. VSAN module108then deletes leaf node336and sets node path cursor148to the array <PBA of root node310, 1> <PBA of index node322, 1> <PBA of leaf node336, −1>, where −1 is a bit value indicating node deletion. Because all its leaf nodes have been processed, VSAN module108can then delete index node322and sets node path cursor148to <PBA of root node310, 1> <PBA of index node322, −1>. Next, because all its index nodes have been processed, VSAN module108can then delete root node310and sets node path cursor148to <PBA of root node310, −1>. At this point, deletion of first snapshot202is complete. FIG.5is an example workflow500for deleting logical map B+ tree nodes corresponding to a snapshot in a snapshot hierarchy, according to an example embodiment of the present disclosure. Workflow500may be performed by VSAN module108, and in some cases, zDOM sub-module132, illustrated inFIG.1. 
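Before walking through workflow500, the exclusive-ownership test and the node path cursor bookkeeping used in the example above can be sketched as follows. The helper names, the commit-threshold handling, and the placeholder PBA strings are illustrative assumptions, not the actual implementation.

```python
# Rough sketch of the min SN ownership test and node path cursor encoding.
# The -1 sentinel follows the description; everything else is hypothetical.
DELETED = -1   # cursor value indicating the node at this level was deleted

def exclusively_owned(node_sn, snapshot_min_sn):
    """A node is owned by the snapshot iff its SN >= the snapshot's min SN."""
    return node_sn >= snapshot_min_sn

# Node path cursor as an array of (PBA of parent, child index) per traversed
# level, e.g., immediately after deleting leaf node 336 in the example above:
cursor = [("pba_root_310", 1), ("pba_index_322", 1), ("pba_leaf_336", DELETED)]

def maybe_persist_cursor(cursor, pending_deletes, commit_threshold=1):
    """Flush the cursor only once enough deletions are batched in a transaction."""
    if pending_deletes >= commit_threshold:
        # write `cursor` to the metadata database in VSAN (storage I/O omitted)
        return True
    return False

print(exclusively_owned(9, 8))             # True  (node 412 owned by snapshot 204)
print(exclusively_owned(2, 8))             # False (node 320 shared with snapshot 202)
print(maybe_persist_cursor(cursor, 1))     # True with the example commit threshold of 1
```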
Operations of workflow500may be described with respect to example snapshot hierarchy200ofFIG.2and B+ tree300illustrated inFIG.3. Workflow500may begin, at operation502, by VSAN module108identifying a snapshot to be deleted. In the illustrated example, VSAN module108may identify first snapshot202to be deleted. VSAN module108may identify first snapshot202by an instruction from an administrator of computing environment100to delete first snapshot202. VSAN module108may identify first snapshot202automatically, such as by expiry of a configured lifetime for first snapshot202(e.g., 30 minutes). At operation504, VSAN module108processes the root node of the identified snapshot. In the illustrated example, VSAN module108processes root node310of first snapshot202. Root node310points to index node320and index node322. As described in more detail below, before deleting root node310at operation512, VSAN module108first processes index node320and index node322. At operation506, VSAN module108identifies a next index node to process. In the illustrated example, VSAN module108proceeds to the first child node, index node320, of root node310. Index node320points to leaf node330and leaf node332. Before deleting index node320, VSAN module108first processes leaf node330and leaf node332. At operation508, VSAN module108verifies whether the identified index node is exclusively owned by the snapshot being deleted. In the illustrated example, VSAN module108verifies whether index node320is exclusively owned by first snapshot202. In some embodiments, VSAN module108verifies index node320is not shared with a parent snapshot by verifying whether the SN of index node320, S2, is equal to or larger than the minSN, S1, of first snapshot202. In some embodiments, VSAN module108verifies index node320is not shared with a parent snapshot by traversing the logical map B+ tree to verify that the LBA of index node320is not found in a parent snapshot. In some embodiments, VSAN module108verifies index node320is not shared with a child snapshot by traversing the logical map B+ tree to verify that the LBA of index node320is not found in a child snapshot. Where VSAN module108finds that a node is shared with another snapshot, VSAN module108identifies the node as not exclusively owned by the snapshot. Accordingly, where VSAN module identifies, at operation508, the index node as not exclusively owned, then, at operation510, VSAN module108skips the index node. In the illustrated example, when VSAN module108finds index node320is shared with a child snapshot, second snapshot204, VSAN module108determines index node320is not exclusively owned by first snapshot202and does not delete index node320. At operation511, VSAN module108determines whether all index nodes of the root node have been processed. In the illustrated example, VSAN module108determines whether all index nodes of root node310have been processed. Where VSAN module108determines, at operation511, that not all of the index nodes of the root node have been processed, VSAN module108returns to operation506to identify a next index node to process. In the illustrated example, after skipping index node320at operation510, VSAN module108determines at operation511that not all index nodes of root node310have been processed and, returning to operation506, VSAN module108identifies index node322as a next node to process. Where VSAN module108finds that the node is not shared with another snapshot, VSAN module108identifies the node as exclusively owned by the snapshot. 
Accordingly, where VSAN module108identifies, at operation508, the index node as exclusively owned, then, at operation518, VSAN module108sets node path cursor148to the current node, such as to the physical address (e.g., a PBA) of the parent node and the pointer of the child index node in the parent node. In the illustrated example, where VSAN module108determines index node322is exclusively owned by first snapshot202, VSAN module108sets node path cursor148to <PBA of root node310, 1>, where 1 is the pointer in root node310to index node322. As discussed above, before deleting an exclusively owned index node, VSAN module108first processes the child nodes of the index node. In some embodiments, the child node of the index node is another index node. The bottom level of the logical map B+ tree consists of leaf nodes. Thus, where the child nodes of the index node are index nodes, the index node may not be deleted until leaf nodes associated with all of the child index nodes of the index node have been processed. Accordingly, at operation520, VSAN module108identifies a next child node pointer in the index node. In the illustrated example, index node322includes pointers to leaf node334and leaf node336. At operation520, VSAN module108identifies leaf node334as a next leaf node to process. At operation522, VSAN module108verifies whether the identified child node is exclusively owned by the snapshot being deleted. VSAN module108may verify the child node is not shared with a parent snapshot by verifying whether the SN of the child node is equal to or larger than the minSN of the snapshot. In some embodiments, VSAN module108verifies the child node is not shared with a parent snapshot by traversing the logical map B+ tree to verify that the LBA of the child node is not found in a parent snapshot. VSAN module108may verify the child node is not shared with a child snapshot by traversing the logical map B+ tree to verify that the LBA of the child node is not found in a child snapshot. Where, at operation522, VSAN module108determines the child node is shared with another snapshot, then at operation523, VSAN module108skips the child node. In the illustrated example, VSAN module108determines leaf node334is shared with second snapshot204and/or third snapshot206. In some embodiments, VSAN module108traverses the logical map B+ tree and finds the LBA associated with leaf node334in second snapshot204and/or third snapshot206. Thus, VSAN module108determines leaf node334is shared and skips (i.e., does not delete) leaf node334. At operation524, VSAN module108determines whether all child nodes pointed to in the index node have been processed. In the illustrated example, VSAN module108determines whether all leaf nodes of index node322have been processed. Where VSAN module108determines, at operation524, that not all of the child nodes of the index node have been processed, VSAN module108returns to operation520to identify a next child node to process. In the illustrated example, after skipping leaf node334at operation523, VSAN module108determines, at operation524, that not all leaf nodes of index node322have been processed. VSAN module108returns to operation520and identifies leaf node336as a next node to process. Where VSAN module108determines, at operation522, that the child node is exclusively owned, then at operation534, VSAN module108sets node path cursor148to the physical address of the index node and to the pointer in the index node to the child node. 
In the illustrated example, VSAN module108determines leaf node336is exclusively owned by first snapshot202and VSAN module108sets node path cursor148to <PBA of index node322, 1>, where 1 is the index value in index node322pointing to leaf node336. Where VSAN module108determines, at operation522, that the child node is exclusively owned and after VSAN module108sets node path cursor148, at operation534, then VSAN module108deletes the child node at operation536. In the illustrated example, VSAN module108deletes leaf node336. After deleting the child node, at operation536, VSAN module108sets node path cursor148to the physical address of the child node and a bit value indicating the deletion at operation538. In the illustrated example, VSAN module108sets node path cursor148to <PBA of leaf node336, −1>. In some embodiments, VSAN module108sets node path cursor148to the array <PBA of root node310, 1> <PBA of index node322, 1> <PBA of leaf node336, −1>. In some embodiments, VSAN module108sets node path cursor148to the array <PBA of root node310> <pointer to index node322> <pointer to leaf node336> <−1>. In some embodiments, node path cursor148is persisted when a transaction is committed. Committing a transaction may refer to actually executing a change to the data in persistent storage. For snapshot deletion, committing a transaction may refer to deleting a page, associated with a node, from storage. In some embodiments, a transaction may not be committed until a threshold number of nodes (e.g., a commit threshold) are to be deleted in a transaction. Node path cursor148may be kept in memory until the commit threshold is reached and a transaction is committed. In this way, the I/O cost to update the node path cursor can be amortized over multiple transactions. Accordingly, after deleting a child node, at operation536, and setting node path cursor148, at operation538, VSAN module108may determine, at operation540, whether a commit threshold has been reached before persisting node path cursor148. Where VSAN module108determines, at operation540, that the commit threshold has been reached, VSAN module108persists node path cursor148at operation542. In the illustrated example, the commit threshold is one (e.g., one page). Accordingly, after leaf node336is deleted, at operation536, and node path cursor148is set, at operation538, then VSAN module108determines, at operation540, that the commit threshold of 1 has been reached and VSAN module108persists node path cursor148at operation542. Where VSAN module108determines, at operation540, that the commit threshold has not been reached or where VSAN module108persists node path cursor148, at operation542, VSAN module108returns to operation524and determines whether all leaf nodes pointed to in the index node have been processed. Where VSAN module108determines all child nodes pointed to in the index node have been processed, VSAN module108deletes the index node at operation526. In the illustrated example, VSAN module108determines all leaf nodes of index node322have been processed (e.g., leaf node334was skipped and leaf node336was deleted) and proceeds to delete index node322at operation526. After deleting the index node, at operation526, VSAN module108sets node path cursor148to the physical address of the index node and a bit value indicating the deletion at operation528. In the illustrated example, VSAN module108sets node path cursor148to <PBA of index node322, −1>. 
In some embodiments, VSAN module108sets node path cursor148to the array <PBA of root node310, 1> <PBA of index node322, −1>. In some embodiments, VSAN module108sets node path cursor148to the array <PBA of root node310> <pointer to index node322> <−1>. At operation530, VSAN module108may determine whether the commit threshold has been reached before persisting node path cursor148. Where VSAN module108determines, at operation530, that the commit threshold has been reached, VSAN module108persists node path cursor148at operation532. In the illustrated example, after index node322is deleted, at operation526, and node path cursor148is set, at operation528, VSAN module108determines, at operation530, the commit threshold of 1 has been reached and persists node path cursor148at operation532. Where VSAN module108determines, at operation530, that the commit threshold has not been reached or after VSAN module108persists node path cursor148, at operation532, VSAN module108returns to operation511and determines whether all index nodes pointed to in the root node have been processed. Where, at operation511, VSAN module108determines that all of the index nodes pointed to in the root node have been processed, VSAN module108deletes the root node at operation512. In the illustrated example, after VSAN module108deletes index node322, VSAN module108determines that all index nodes of root node310have been processed and deletes root node310. After deleting the root node, at operation512, VSAN module108sets node path cursor148, at operation514, to the physical address of the root node and a bit value indicating the deletion. In the illustrated example, VSAN module108sets node path cursor148to <PBA of root node310, −1>. At operation516, VSAN module108persists node path cursor148to storage. As discussed herein, a crash or other failure may occur during snapshot deletion. When such a crash or failure occurs, the logical map B+ tree deletion may be resumed using the persisted node path cursor148. VSAN module108may resume the logical map B+ tree deletion from the node indicated in the node path cursor148. FIG.6is an example workflow600for resuming deletion of logical map B+ tree nodes corresponding to a snapshot in a snapshot hierarchy, according to an example embodiment of the present disclosure. Workflow600may be performed by VSAN module108, and in some cases, zDOM sub-module132, illustrated inFIG.1. Operations of workflow600may be described with respect to example snapshot hierarchy200ofFIG.2and B+ tree300illustrated inFIG.3. Workflow600may be performed during workflow500when a crash or failure occurs during deletion of logical map B+ tree nodes for a snapshot. Workflow600may begin, at operation602, by detecting a failure during snapshot deletion. In the illustrated example, a crash or failure may occur during deletion of first snapshot202. For example, a crash or failure may occur after VSAN module108persists the node path cursor <PBA of leaf node336, −1> at operation542after deleting leaf node336. At operation604, VSAN module108restores a latest persisted node path cursor148from storage. In the illustrated example, VSAN module108may restore the node path cursor <PBA of leaf node336, −1> from VSAN116to memory114. At operation606, VSAN module108identifies a next node to process in the logical map B+ tree based on the node path cursor148. 
In the illustrated example, VSAN module108assumes that nodes before leaf node336have been processed and resumes the logical map B+ tree deletion by determining, at operation524, that all leaf nodes of index node322have been processed and deleting index node322at operation526. Techniques described herein for storing a node path cursor with information of a last processed logical map B+ tree node provide a low overhead solution for resuming logical map B+ tree node deletion and avoiding dangling pointers. The various embodiments described herein may employ various computer-implemented operations involving data stored in computer systems. For example, these operations may require physical manipulation of physical quantities. Usually, though not necessarily, these quantities may take the form of electrical or magnetic signals, where they, or representations of them, are capable of being stored, transferred, combined, compared, or otherwise manipulated. Further, such manipulations are often referred to in terms such as producing, identifying, determining, or comparing. Any operations described herein that form part of one or more embodiments may be useful machine operations. In addition, one or more embodiments also relate to a device or an apparatus for performing these operations. The apparatus may be specially constructed for specific required purposes, or it may be a general purpose computer selectively activated or configured by a computer program stored in the computer. In particular, various general purpose machines may be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations. The various embodiments described herein may be practiced with other computer system configurations including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like. One or more embodiments may be implemented as one or more computer programs or as one or more computer program modules embodied in one or more computer readable media. The term computer readable medium refers to any data storage device that can store data which can thereafter be input to a computer system. Computer readable media may be based on any existing or subsequently developed technology for embodying computer programs in a manner that enables them to be read by a computer. Examples of a computer readable medium include a hard drive, network attached storage (NAS), read-only memory, random-access memory (e.g., a flash memory device), NVMe storage, Persistent Memory storage, a CD (Compact Disc), a CD-ROM, a CD-R, or a CD-RW, a DVD (Digital Versatile Disc), a magnetic tape, and other optical and non-optical data storage devices. The computer readable medium can also be distributed over a network-coupled computer system so that the computer readable code is stored and executed in a distributed fashion. In addition, while described virtualization methods have generally assumed that virtual machines present interfaces consistent with a particular hardware system, the methods described may be used in conjunction with virtualizations that do not correspond directly to any particular hardware system. Virtualization systems in accordance with the various embodiments, implemented as hosted embodiments, non-hosted embodiments, or as embodiments that tend to blur distinctions between the two, are all envisioned. 
Furthermore, various virtualization operations may be wholly or partially implemented in hardware. For example, a hardware implementation may employ a look-up table for modification of storage access requests to secure non-disk data. Many variations, modifications, additions, and improvements are possible, regardless of the degree of virtualization. The virtualization software can therefore include components of a host, console, or guest operating system that perform virtualization functions. Plural instances may be provided for components, operations or structures described herein as a single instance. Finally, boundaries between various components, operations and datastores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of one or more embodiments. In general, structures and functionality presented as separate components in exemplary configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements may fall within the scope of the appended claim(s). In the claims, elements and/or steps do not imply any particular order of operation, unless explicitly stated in the claims.
DETAILED DESCRIPTION In broad terms, embodiments provide a software middleware layer that, transparent to applications, allows those applications to use compute coprocessors such as graphics processing units (GPUs). These embodiments make possible capabilities such as scale-out, resource pooling, high availability, and memory virtualization that do not now exist for coprocessors and many applications that make use of coprocessors. SeeFIG.1, which illustrates one example of a hardware/software stack in which applications100run on a host hardware platform200, typically under the control of a system software layer such as an operating system. In addition to other standard components, the hardware platform will include one or more processors210, volatile (such as RAM215) and usually non-volatile (such as SSD, disk, etc.,220) storage components, as well as components for communicating with other systems over a network and other peripherals as needed, for example, network interface components230. Depending on the configuration, the hardware platform200may also include one or more co-processors250, such as GPUs; these may, however, also be located within other systems, accessible via any standard buses or networks, such that the concept of “hardware platform” may be broadened to include such “peripheral” or even remote coprocessors, such as coprocessors in cloud computing environments. Embodiments may also be used in other forms of distributed systems, such as a software-defined infrastructure (SDI). Although not shown, coprocessors may also have dedicated components of their own, such as memory. AsFIG.1indicates by the “duplicated” or “stacked” privileged layers, the applications may also be running in a distributed processing environment, with more than one server handling processing tasks. System software, such as a host operating system300, is generally included to perform well-known functions, and will typically include drivers310that control communication with respective peripheral devices. The software—including the applications, the system software itself (and the interception layer1000, described below)—that runs on the hardware platform is generally processor-executable code that is embodied in the storage components. In many modern computing environments, the application layer100includes, and sometimes is even totally comprised of, one or more virtual machines (VMs)120, in which case a system-level virtualization component such as a hypervisor400will typically also be included to act as the virtual-to-physical interface. In some systems, the hypervisor is co-resident with the host OS300, with both operating in a privileged, kernel mode; in other systems, the host OS intermediates some or all hypervisor commands to the hardware; in still other systems the hypervisor replaces the host OS altogether; and in systems without virtualized components such as VMs, no hypervisor may be needed and included at all. Embodiments of this invention do not require VMs, although they may operate with VMs as with any other applications that call coprocessors such as GPUs; moreover, system programmers will be able to adapt embodiments to different host OS/hypervisor configurations, including those with no separate host OS at all. 
Applications generally include various core functions110, such as the routines needed to communicate processing requests to the operating system, to access the file system, allocate memory, and access common functional libraries for operations such as data compression, image manipulation, accelerated math functions, etc. In the context of embodiments of this invention, one function of some applications is to properly format and issue calls to application program interfaces (APIs). As is well known, an API comprises routines and protocols that specify how software components should interact, how a software component such as an application may interact with a hardware component, etc. Of particular relevance here is that an API is usually included to form an interface between applications100and coprocessors. By way of example only, graphics processing units (GPUs) are referred to below as the type of coprocessor the applications are to call into via corresponding APIs. AsFIG.1indicates, system software and certain other components generally run at a “privileged” or “kernel” level, meaning that they are allowed to issue, and have executed, instructions that affect such things as storage protection settings, interrupt handling, scheduling, I/O coordination, special processor status- and register settings, etc. Applications, on the other hand, typically operate in a non-privileged user space, that is, in user mode. In many systems, there is an intermediate software layer between applications and the system software. This layer, labeled1000* inFIG.1, includes such components as application programming interface (API) libraries, both open and custom, and various system libraries. In systems that employ graphics processing units (GPUs), either for standard graphics tasks or in a GPGPU (General Purpose computing on GPUs) context, this layer may also include an API model such as the CUDA (Compute Unified Device Architecture) parallel computing platform provided by Nvidia, Inc., along with the associated libCUDA driver library. This invention provides a software layer1000, referred to here as the “interception layer”, which may run at the non-privileged level, and which either mirrors the intermediate layer1000*, or is installed to act as such a layer. As the name implies, the interception layer1000intercepts API calls made by applications and changes them in a way that may introduce new functionality. Depending on the chosen implementation, the layer1000may include both custom APIs and generally available, open APIs1060, as well as one or more libraries1070, which list application features, enumerate available devices/platforms, etc., and which may be provided by vendors, or compiled heuristically, or both. Compute APIs, specifically, deal with the management of coprocessors, execution flow, and data movement, to make full and efficient use of the coprocessors. This includes dispatching data and compiled compute routines, returning status, and synchronizing streams of execution between the coprocessors and other coprocessors, and between the coprocessors and the host system. The interception layer1000is preferably configured so as not to require any changes to the applications running above, or modifications to the system software on which the layer itself runs. In other words, embodiments may run on commodity systems. Although this configuration leads to advantages such as ease of installation and use, portability, universality, and convenience, other configurations are possible. 
For example, the interception layer could be installed at the privileged level, and could even be incorporated into system software, in the OS or hypervisor. The code that comprises the interception layer may be installed in the system and configured to intercept application calls using any known method, including downloading it from a network, reading in the code from a tangible, non-volatile storage medium, etc. This is in fact one advantage of the invention: It may be installed like other user-level applications, including applications that interact with other applications, with no need to modify the system software or include dedicated or modified hardware. SeeFIG.2, which shows an example embodiment in which applications issue calls that are intercepted by the layer1000, which directs the calls according to rules (see below) to one or more coprocessors. Embodiments of this invention may be used to advantage with substantially any known type of co-processor, many examples of which are mentioned above. Merely by way of a common example, embodiments are explained below and illustrated for the case in which the coprocessors are GPUs GPU1, GPU2, . . . , GPUn. If any modifications are needed at all to adapt a particular embodiment for use with another type of coprocessor, these adaptations will be within the skill of system architects. Coprocessors typically have a “master-slave” relationship relative to the normal host CPU(s)210that runs the applications—an application is typically run on the host CPU(s) while certain computational tasks are offloaded to coprocessors for increased performance. Compute APIs, that is, APIs that are used by applications to leverage coprocessors, have explicit functions to offload computation and move data between host memory and coprocessor device memory. The API for each coprocessor, such as a GPU, accessible by the overall system, is known and exposed to the applications that may wish to use them. Some of these APIs will be open, that is, public, whereas others may be customized for certain coprocessors. Even in the case of customized APIs, however, these will be known to all applications that may need to use the corresponding coprocessor. Since the coprocessors available at any given time are known to the system, their APIs are also known. As is known, applications100are programmed to properly format API calls to the coprocessors they wish to use and in many cases, the APIs that each application will call into may also be known in advance. The interception layer1000includes the set of APIs, both custom and open, in respective components1050,1060, so as to be able to intercept the calls from applications and correctly interpret them and pass on calls with the proper formatting to the respective GPU. As mentioned, the set of APIs used by many applications is known in advance. Applications issuing calls via the OS300are identifiable using known methods even to the interception layer, which may therefore include libraries1070of the APIs needed by each application100in the system. As mentioned above, examples of such libraries include system libraries, and those offered by the OS, libraries that provide logging, image manipulation, compression, networking access, interprocess communication, etc. Alternatively, or in addition, the interception layer may build up the libraries1070by registering in them the APIs that the applications100actually call. The interception layer1000includes, for each GPU, a corresponding replay log RPL0, RPL1, . . . 
, RPLn, which may be implemented either as portions of the space of the system memory215or any other memory component, either local to the host hardware or remote. To access coprocessors such as the GPUs, applications issue calls that comprise respective command and data streams. Using known methods, the interception layer intercepts these streams and collects them in replay logs RPL0, RPL1, . . . , RPLn provided on a per-GPU basis. Using known methods, the interception layer also captures—“checkpoints”—the execution state of the respective GPU. The interception layer may store this execution state in the form of state and context data structures that are typically maintained by the respective compute API (for example, CUDA, OpenCL, etc.). As each GPU application runs, the layer1000collects the respective command and data stream in the replay log associated with that GPU. Each replay log RPL0, RPL1, . . . , RPLn is preferably sized so as, at a minimum, to be able to store all of the transactions directed to its corresponding GPU since a most recent synchronization point Ft. Note that, when it comes to coprocessors, such synchronizations points are generally well-defined, since the instruction and data stream directed to a coprocessor such as a GPU is typically “linear”, without conditional branching or time-dependent processing paths, and corresponds to discrete processing tasks having known beginning and end points. As such, if a GPU fails after a synchronization point, it is possible to restart the failed processing segment from the most recent synchronization point as long as all of the instruction and data stream from point Ft onward to the point of failure is available. A window wt, defined in terms of time or number of instructions, between consecutive synchronization points is therefore knowable in advance for each GPU, such that the replay logs may be configured so as never to have a potential “gap”. Synchronization points for GPUs are similar to other forms of “checkpointing” in that the state of the GPU memory is also known at each synchronization point. In embodiments of the invention, the GPU memory is preferably shadowed, using known mechanisms, in host memory215, on one or more other GPUs, on other host platforms, or even on the file system. Shadowing is preferably two-way, such that if the host makes changes to memory in the shadow region, these changes will be communicated to the GPU, and vice versa. This memory synchronization may be done using any known mechanism, such as by using a conventional unified virtual memory driver for GPUs. This means that, at each synchronization point and forward to the next, the entire state of each GPU can be replicated based on the shadowed memory state and the logged instruction stream from that point. It is not necessary for the command and data stream for a given GPU to enter and exit the corresponding replay log to be executed by the target GPU device; rather, as indicated by the active, direct stream130n(for GPUn, with similar paths for other GPUs), the replay log can collect the stream in parallel to the active stream. Now seeFIG.3and assume that one of the coprocessors, for example, GPUn has failed or that the connection with it has failed, such that it cannot process the instructions and data from one or more of the applications100. 
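Before turning to the failure scenario ofFIG.3, the per-GPU replay logging just described might be sketched roughly as follows. The ReplayLog class and its methods are hypothetical illustrations, not an actual interception-layer API; entries are collected in parallel with the active stream and trimmed at each synchronization point, at which the GPU state is also checkpointed.

```python
# Illustrative sketch of a per-GPU replay log (assumed names and structure).
class ReplayLog:
    def __init__(self, gpu_id):
        self.gpu_id = gpu_id
        self.entries = []            # (api_call, args) logged since last sync point
        self.checkpoint_state = {}   # shadowed memory / API context at sync point

    def record(self, api_call, args):
        # Collected in parallel with the active, direct stream to the GPU.
        self.entries.append((api_call, args))

    def synchronize(self, state_snapshot):
        # A new synchronization point: checkpoint the GPU state and keep only
        # the stream from this point forward.
        self.checkpoint_state = state_snapshot
        self.entries.clear()

    def replay_onto(self, dispatch):
        # Drain the logged command/data stream into another GPU via a
        # caller-supplied dispatch callable.
        for api_call, args in self.entries:
            dispatch(api_call, args)

# One replay log per GPU, as described above.
replay_logs = {f"GPU{i}": ReplayLog(f"GPU{i}") for i in range(3)}
```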
There are several types of GPU failure, which include, without limitation, any failure that can be detected at runtime, such as memory, power, execution, thermal, data corruption, or interconnect failures resulting in incorrect operation. The physical and logical state information of the GPU indicating such failures may generally be detected at the application programming interface (API) level, but could be detected by the interception layer by other conventional means including, but not limited to, sensing a signal from the associated driver, by sensing interrupts, polling of the GPU device, etc. Upon failure of a GPU in use, a redirection module1080within the interception layer1000first signals either the corresponding application100directly, or via the operating system, to pause execution of the application and then selects another, currently operational GPU according to a redirection policy. This policy may be chosen in any manner, examples of which include random selection, a policy based on a utility function such as the current computational load of all GPUs in the system (for load balancing, for example), or of the least loaded GPU, based on required GPU speed, to reduce power consumption or heat, or of a degree of GPU dissimilarity (to reduce the likelihood of correlated failures), etc. This information may be programmed into the interception layer (such as in the module1080) by an administrator, or sensed via either the corresponding API, via the system software, or any other known mechanisms. Assume that GPUm is selected to take over from the failed GPUn. The data and instruction stream for failed GPUn since the most recent synchronization point will be stored in RPLn, and other state information for GPUn will also be available as described above. The interception layer thus assigns the replay log RPLn of the failed GPUn to the new target GPUm, and directs the stream of instructions and data in RPLn to GPUm for execution. The replay log RPLn is thus drained into GPUm, at which point the application that was paused may be signaled to continue its execution without interruption. In some cases, an application will specify which GPU it wishes to access simply by virtue of the API call it makes; in other cases, the API call may be to a specific type of GPU, but the OS may nominally be set to make an allocation of the GPU resource. Assuming that more than one GPU is able to handle the request, however, it would not be necessary to wait for a GPU failure to switch the application-GPU assignment; rather, based on some metric or other criterion, such as the GPU load imbalance being greater than some threshold, the interception layer could use the replay log of an overloaded GPU to enable forward computing progress until the application can be directed to call to a different GPU starting at the next synchronization point (if the window wt is wide enough to accommodate the entire stream from one synchronization point to the next), or starting at the point at which the RPL has been drained; the application's API stream can then also be changed to be logged in the replay log of the new GPU. Use of the replay logs to effect a change in the application-GPU assignment may be advantageous even in other situations. In particular, coprocessor redirection could be initiated for reasons other than coprocessor failure or unavailability. For example, assume that one or more additional GPUs are made available at runtime, for example through remote network attachment. 
For each such additional GPUj, the interception layer may create a corresponding replay log for GPUj and redirect calls made to a previously available GPU to one of the new ones. This might also be for the purpose of load balancing. In another embodiment, the interception layer may be used to improve the performance of the GPU-calling application100by splitting the application's command stream across two separate GPUs. Even this embodiment may be extended by using the replay logging of the interception layer to implement GPU RAID capabilities such as mirroring, striping, and error correction in multiple configurations. As other examples, coprocessor redirection could be initiated by the interception layer, or by system software, to improve measured performance, power consumption, GPU uptime, etc. Since the command and data stream from an application is intercepted, it can be manipulated, split and mirrored in virtually any combination that allows for improved RAS (reliability, availability, serviceability), performance, and cost. Logging of the instruction and data stream for a particular GPU may also be prospective: Note that the stream for a coprocessor such as a GPU is substantially “deterministic” in the sense that there are typically no conditional branches or jumps—in a compute API, all data accesses are explicit, so there is no need to guess what the next block will be. This means that all or part of an application's API call stream, through to the next synchronization point, can be pre-fetched, assuming it is in a memory space accessible and known to the interception layer, for example, through address pointers in the API call. Many provisioning systems now assume some shared memory. For example, coprocessors now typically have two memory spaces (host vs coprocessor, coprocessor 1 vs coprocessor 2, etc.) that must be kept coherent, but which may also be accessed by the interception layer. Note that, in a compute API, commands can be dispatched that don't have to execute immediately to make progress. Embodiments of the invention have several features not found in the prior art, and offer corresponding benefits. For example, the interception layer may run in user space, that is, not necessarily in system software such as an OS or hypervisor, or in an application100. Thanks to this, the invention can be portable, installed in a wide range of environments. Moreover, because the interception layer may run in a non-privileged execution mode, security and data confidentiality may be enforced without compromising the system with additional privileged code. Furthermore, unlike other approaches, additional features can be introduced into the interception layer without having to change the underlying operating system, drivers, or virtual machine hypervisors. One main reason that systems include coprocessors such as GPUs is to increase processing speed. Assume that a GPU application100is running slowly. According to prior art methods, one must analyze code to understand how much time is spent on the GPU and the cost for computation and data transfer. To save time, such known systems typically change the application code such that, while computing a current batch, the next batch is being fetched. Embodiments of this invention, in contrast, may implement automatic pipelining, using the replay logs, and do this automatically, for practically any application, with no need to change code. This may therefore provide a performance advantage even for badly written code.
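The failover path described in this section (pause the application, select a replacement GPU according to a redirection policy, and drain the failed GPU's replay log into the new target) might be sketched as follows, reusing the hypothetical ReplayLog from the earlier sketch. The function names, the least-loaded policy, and the dispatch_to callable standing in for the interception layer's path to the target GPU are all illustrative assumptions.

```python
# Hypothetical sketch of failover redirection using per-GPU replay logs.
def pick_replacement(failed_gpu, gpu_load):
    """Example redirection policy: choose the least-loaded surviving GPU."""
    candidates = {g: load for g, load in gpu_load.items() if g != failed_gpu}
    return min(candidates, key=candidates.get)

def handle_gpu_failure(failed_gpu, replay_logs, gpu_load,
                       pause_app, resume_app, dispatch_to):
    pause_app()                                   # signal the application to pause
    target = pick_replacement(failed_gpu, gpu_load)
    # Re-issue the command/data stream logged since the last sync point.
    replay_logs[failed_gpu].replay_onto(dispatch_to(target))
    resume_app()                                  # execution continues without interruption
    return target

# Example usage with stand-in callables:
# handle_gpu_failure("GPU2", replay_logs, {"GPU0": 0.4, "GPU1": 0.1, "GPU2": 0.9},
#                    pause_app=lambda: None, resume_app=lambda: None,
#                    dispatch_to=lambda gpu: (lambda call, args: None))
```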
20,754
11860738
The present disclosure will be described with reference to the accompanying drawings. Generally, the drawing in which an element first appears is typically indicated by the leftmost digit(s) in the corresponding reference number. DETAILED DESCRIPTION OF EMBODIMENTS According to some approaches, an image-level backup may be used to back up a physical or virtual machine (VM). This description will use a virtual machine as a non-limiting example. However, as would be understood by a person of skill in the art, embodiments are not limited to use with virtual machines and may also be used with physical machines, for example. In the case of a VM, this may be accomplished by using a hypervisor snapshot (VM snapshot) of the VM. The VM snapshot may be used to create a complete copy of a VM image for backup purposes. Existing backup solutions, such as Veeam Backup & Replication, enable restoring both the whole VM image and individual files from image level backups without restoring the whole VM image. FIG.1illustrates an example environment100in which embodiments can be practiced or implemented. Example environment100is provided for the purpose of illustration only and is not limiting of embodiments. As shown inFIG.1, example environment100includes a user system102, a host system104, and a backup server106. Host system104is connected to user system102via an interface108and to backup server106via an interface116. User system102can be any computing device, such as a personal computer, smartphone, or tablet, to name a few examples. Host system104can be a server that enables various services and applications. In one embodiment, host system104includes a hypervisor110, which enables a plurality of virtual machines112.1,112.2, . . . ,112.N. In an embodiment, hypervisor110includes a virtual machine monitor configured to launch and run virtual machines. Hypervisor110can be implemented in software, hardware, or firmware. For example, hypervisor110can be a VMware® vSphere hypervisor or a Microsoft® Hyper-V hypervisor. Backup server106is a physical or virtual machine that performs the role of a configuration and control center, performing various types of administrative activities (such as coordinating backup and replication tasks, controlling backup scheduling, and allocating resources). In addition, backup server106can be used to store image-level backups. For example, as shown inFIG.1, backup server106can store image level backups114.1,114.2, . . . ,114.N of virtual machines112.1,112.2, . . . ,112.N respectively. Alternatively, image level backups can be stored on another server, a network attached storage (NAS) device, or a storage area network (SAN). A user of user system102can have access privileges to one or more of virtual machines112.1,112.2, . . . ,112.N. For example, if the user of user system102has access privileges to virtual machine112.1, the user of user system102can log onto virtual machine112.1by providing user credentials (login, password) to virtual machine112.1. At times, the user of user system102may need to restore data from an image level backup of a virtual machine that the user has access to. This is typically done by restoring an image (or a portion thereof) of the virtual machine from a backup file. In some systems, data restoration privileges are either subject to no restrictions, or limited to backup administrators only.
The former approach creates security and privacy concerns since individual data, which may contain personal information, can be accessed/restored from an image level backup of the virtual machine by more than one user. The latter approach can create a large workload for administrators. For example, administrators typically respond to a user restoration request by performing the restoration themselves. This can be time consuming and also inefficient, as it requires that the administrator be available when the user has a need for restoring, and that the administrator be able to locate the required file within the user's folders. Other approaches include creating a restoration team dedicated to handling restoration requests. However, this approach can be expensive and can create a privacy issue since the personnel of the restoration team will be able to access the user's content in the image level backup of the virtual machine. Embodiments, as further described below, include but are not limited to systems, methods, and computer program products for enabling user authorization during file level restore from an image level backup without the need for access control by a backup administrator. Specifically, example embodiments enable an access control mechanism for controlling access to stored virtual machine images by users in the system. In an embodiment, the virtual machine includes a backup application user interface that can be used to send a restoration request to a backup server. The restoration request can include a machine identifier and a user identifier of the user logged onto the virtual machine. The machine identifier can be a DNS name, an IP address assigned to the virtual machine, a hypervisor-level VM identifier, or a similar identifier. In another embodiment, the machine identifier can be automatically detected by matching a computer account (such as an Active Directory computer account) to a VM. The user identifier can be the login information of the user logged onto the virtual machine, a user token, or a similar identifier. The backup application can access and restore data from virtual machine backups, and can accept or deny the restoration request based on one or more criteria, for example whether or not the machine identifier contained in the restoration request can be matched to the machine identifier of a virtual machine present in one of the virtual machine backups stored on the backup server, or whether the user belongs to a certain access control group in the OS of the virtual machine. In an embodiment, the access control group is a local administrators group. FIG.2illustrates an example system200according to an embodiment. Example system200is provided for the purpose of illustration only, and is not limiting of embodiments. As shown inFIG.2, example system200includes a user system102, a host system104, and a backup server106. Host system104includes a hypervisor110, which enables one or more virtual machines, such as virtual machine112.1. Host system104is connected to user system102via an interface108and to backup server106via an interface116. In the example embodiments, backup server106also stores image level backups for one or more virtual machines on locally attached storage. For example, backup server106stores virtual machine image level backup VM_1 Backup114.1of virtual machine112.1. In addition, backup server106hosts a backup application204.
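Purely for illustration, the restoration request and the acceptance criteria described above might be modeled as in the following Python sketch; the field names, the dictionary-based lookups, and the authorize helper are hypothetical and are not taken from any particular backup product or API.

from dataclasses import dataclass

@dataclass
class RestorationRequest:
    # Machine identifier: for example a DNS name, an IP address, or a hypervisor-level VM id.
    machine_id: str
    # User identifier: for example the login of the user logged onto the VM, or a user token.
    user_id: str

def authorize(request, stored_backups, admin_group_members):
    """Accept the request only if the machine identifier matches a stored image level backup
    and the user belongs to the access control group (e.g., a local administrators group)
    recorded for that machine's OS; otherwise deny by returning None."""
    backup = stored_backups.get(request.machine_id)           # match against backed-up machines
    if backup is None:
        return None                                           # deny: no matching backup
    if request.user_id not in admin_group_members.get(request.machine_id, set()):
        return None                                           # deny: not in the access control group
    return backup                                             # accept: expose only this VM's backup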
Backup application204can be used by users to back up virtual machines, such as virtual machine112.1, to backup server106or to restore virtual machine images or portions thereof from virtual machine image level backup VM_1 Backup114.1stored on backup server106. In an embodiment, during backup or replication activity, backup application204saves, to a database, user accounts belonging to a certain access control group in the operating system (OS) of the virtual machines being backed up. In an embodiment, the access control group is a local administrators group. In an embodiment, the user accounts each include one or more user identifiers. User system102can connect to host system104via interface108, and a user of user system102can log onto virtual machine112.1. When logged onto virtual machine112.1, the user of user system102can run a backup application user interface202on virtual machine112.1. Backup application user interface202is a user interface for backup application204. In an embodiment, backup application user interface202is a web-based user interface, which can be accessed using an Internet browser. A user of user system102can use backup application user interface202of virtual machine112.1to send a restoration request206to backup server106. In an embodiment, restoration request206includes a machine identifier of virtual machine112.1(on which backup application user interface202is running). The machine identifier can be a Domain Name System (DNS) name, an Internet Protocol (IP) address assigned to the virtual machine, a hypervisor-level VM identifier, or a similar identifier. In another embodiment, the machine identifier can be automatically detected by matching a computer account (such as an Active Directory computer account) to a VM. Backup application204is configured to receive restoration request206from virtual machine112.1over interface116. Using the machine identifier contained in the restoration request, backup application204identifies whether the machine identifier contained in restoration request206can be matched to the machine identifier of virtual machine112.1present in virtual machine image level backup114.1stored on backup server106. In an embodiment, if the machine identifier contained in restoration request206can be matched to the machine identifier of the virtual machine present in virtual machine image level backup114.1stored on backup server106, the user who is currently logged onto virtual machine112.1can access and restore objects (e.g., files, folders, directories, etc.) from virtual machine image level backup114.1of virtual machine112.1using user interface202, which may be a web interface. Backup application204then accepts or denies restoration request206based at least in part on whether the machine identifier contained in restoration request206can be matched to the machine identifier of virtual machine112.1present in virtual machine image level backup114.1stored on backup server106. In an embodiment, backup application204accepts the restoration request if the machine identifier contained in restoration request206can be matched to the machine identifier of virtual machine112.1present in virtual machine image level backup114.1stored on backup server106, and denies the restoration request otherwise. A response (acceptance or denial)208to restoration request206is then sent to backup application user interface202. If response208is a denial, backup application user interface202displays a request denied message to the user.
Otherwise, backup application user interface202provides an interface for accessing image level backup VM_1 Backup114.1of virtual machine112.1. The interface can include a view for selecting objects for restoration from image level backup VM_1 Backup114.1. FIG.3is an example process300according to an embodiment. Example process300is provided for the purpose of illustration only and is not limiting of embodiments. Example process300can be performed by a backup server, such as backup server106, and more particularly by a backup application, such as backup application204. As shown inFIG.3, example process300begins in step302, which includes saving, during backup or replication activity, to a database, user accounts belonging to a certain access control group in the OS of the virtual machines being backed up. In an embodiment, the access control group is a local administrators group. Subsequently, process300proceeds to step304, which includes receiving a restoration request from a virtual machine. In an embodiment, the restoration request includes a machine identifier. The machine identifier can be a DNS name, an IP address assigned to the virtual machine, a hypervisor-level VM identifier, or a similar identifier. In another embodiment, the VM identifier can be automatically detected by matching a computer account (such as an Active Directory computer account) to a VM. Subsequently, process300proceeds to step306, which includes determining whether the machine identifier contained in restoration request206can be matched to the machine identifier of a virtual machine present in one of the virtual machine backups stored on the backup server. If the answer is no, process300proceeds to step308, which includes denying the restoration request. Otherwise, process300proceeds to step310, which includes accepting the restoration request, and then to step312, which includes providing the user access to content of an image level backup of the virtual machine. FIG.4illustrates another example system400according to an embodiment. Example system400is provided for the purpose of illustration only and is not limiting of embodiments. As shown inFIG.4, example system400includes a user system102, a host system104, and a backup server106. Host system104includes a hypervisor110, which enables one or more virtual machines, such as virtual machine112.1. Host system104is connected to user system102via an interface108and to backup server106via an interface116. As described above with respect to example system200, backup server106stores image level backups of one or more virtual machines. For example, backup server106stores image level backup VM_1 Backup114.1of virtual machine112.1. Backup server106also hosts a backup application204, which can be used by users to back up virtual machines, such as virtual machine112.1, to backup server106or to restore virtual machine images or portions thereof, from image level backup VM_1 Backup114.1, from backup server106. In an embodiment, during backup or replication activity, backup application204saves, to a database, user accounts belonging to a certain access control group in the OS of the virtual machines being backed up. In an embodiment, the access control group is a local administrators group. In an embodiment, a user of user system102can use backup application user interface202of virtual machine112.1to send a restoration request402to backup server106.
In an embodiment, restoration request402includes a machine identifier of virtual machine112.1(on which backup application user interface202is running), and a user identifier of the user logged onto virtual machine112.1. The user identifier can be the login information of the user logged onto the virtual machine, a user token, or a similar identifier. In an embodiment, as described above with reference toFIG.2, backup application204is configured to use the machine identifier of virtual machine112.1contained in restoration request402to match it to the machine identifier of virtual machine112.1present in virtual machine image level backup114.1stored on backup server106. Backup application204then accepts or denies restoration request402based at least in part on whether the machine identifier contained in restoration request402can be matched to the machine identifier of virtual machine112.1present in virtual machine image level backup114.1stored on backup server106. In an embodiment, backup application204denies restoration request402if the machine identifier contained in restoration request402cannot be matched to the machine identifier of virtual machine112.1present in virtual machine image level backup114.1stored on backup server106. In an embodiment, if the machine identifier contained in restoration request402can be matched to the machine identifier of virtual machine112.1present in virtual machine image level backup114.1stored on backup server106, backup application204is then configured to determine whether or not the user identifier, contained in the restoration request, belongs to access control group210in the OS. In an embodiment, access control group210is a local administrators group. Backup application204then accepts or denies restoration request402based at least in part on whether or not the user identifier, contained in the request, belongs to access control group210of virtual machine112.1. In an embodiment, backup application204denies restoration request402if the user identifier does not belong to access control group210. A response208denying the restoration request is then sent to backup application user interface202. If response208includes a denial, response208can indicate to the user via backup application user interface202that the reason for denial is that the user identifier does not belong to access control group210. In an embodiment, access control group210is a local administrators group. Otherwise, if restoration request402is accepted, then backup application user interface202provides an interface for accessing the content of image level backup VM_1 Backup114.1of virtual machine112.1. The interface can include a view for selecting objects for restoration from image level backup VM_1 Backup114.1. In an embodiment, the user identifier can be a user token. Backup application204uses the user token to determine whether or not the user identifier, contained in the restoration request, belongs to access control group210. In an embodiment, the access control group is a local administrators group. For example, in the case of Microsoft Windows being the operating system of virtual machine112.1, the user token contains information regarding the SIDs of the user groups that the user belongs to. FIG.5is another example process500according to an embodiment. Example process500is provided for the purpose of illustration only and is not limiting of embodiments.
Example process500can be performed by a backup server, such as backup server106, and more particularly by a backup application, such as backup application204. As shown inFIG.5, example process500begins in step502, which includes saving, during backup or replication activity, to a database, user accounts belonging to a certain access control group in the OS of the virtual machines being backed up. In an embodiment, the access control group is a local administrators group. Subsequently, example process500proceeds to step504, which includes receiving a restoration request from a virtual machine. In an embodiment, the restoration request includes a machine identifier and a user identifier of a user logged onto the virtual machine. The machine identifier can be a DNS name, an IP address assigned to the virtual machine, a hypervisor-level VM identifier, or a similar identifier. In another embodiment, the VM identifier can be automatically detected by matching a computer account (such as an Active Directory computer account) to a VM. Subsequently, process500proceeds to step506, which includes determining whether the machine identifier contained in the restoration request can be matched to the machine identifier of a virtual machine present in one of the virtual machine image level backups stored on the backup server. If the answer is no, process500proceeds to step508, which includes denying the restoration request. Otherwise, process500proceeds to step510. In step510, process500includes determining whether or not the user identifier, contained in the restoration request, belongs to a certain access control group in the OS of the virtual machine. In an embodiment, the access control group is a local administrators group. If the answer is no, process500proceeds to step508, which includes denying the restoration request. Otherwise, process500proceeds to step512, which includes accepting the restoration request, and then to step514, which includes providing the user access to content of an image level backup of the virtual machine. FIG.6illustrates another example system600according to an embodiment. In this embodiment, an authorization code can be delivered to the user by placing it directly into the virtual machine112file system, in a folder accessible only to a specific user or a computer administrator, such as a home folder. Example system600is provided for the purpose of illustration only and is not limiting of embodiments. As shown inFIG.6, example system600includes a user system102, a host system104, and a backup server106. Host system104includes a hypervisor110, which enables one or more virtual machines, such as virtual machine112.1. Host system104is connected to user system102via an interface108and to backup server106via an interface116. As described above with respect to example system200, backup server106stores image level backups of virtual machines. For example, backup server106stores image level backup VM_1 Backup114.1of virtual machine112.1. Backup server106also hosts a backup application204, which can be used by users to back up virtual machines, such as virtual machine112.1, to backup server106or to restore virtual machine images or portions thereof from image level backup VM_1 Backup114.1stored on backup server106. In an embodiment, a user of user system102can use backup application user interface202of virtual machine112.1to send a restoration request206to backup server106.
In an embodiment, restoration request206includes a machine identifier of virtual machine112.1(on which backup application user interface202is running) and a user identifier of the user logged onto virtual machine112.1. The user identifier can be the login information of the user logged onto the virtual machine, a user token, or a similar identifier. In an embodiment, backup application204is configured to use the machine identifier of virtual machine112.1contained in restoration request206to match it to the machine identifier of virtual machine112.1present in virtual machine image level backup114.1stored on backup server106. Backup application204then accepts or denies restoration request206based at least in part on whether the machine identifier contained in restoration request206can be matched to the machine identifier of virtual machine112.1present in virtual machine image level backup114.1stored on backup server106. In an embodiment, backup application204denies restoration request206if the machine identifier contained in restoration request206cannot be matched to the machine identifier of virtual machine112.1present in virtual machine image level backup114.1stored on backup server106. A response208denying the restoration request is then sent to backup application user interface202. If the machine identifier contained in restoration request206can be matched to the machine identifier of virtual machine112.1present in virtual machine image level backup114.1stored on backup server106, backup application204is configured to write an authentication cookie606to an administrator-only accessible location602of virtual machine112.1and to prompt the user logged onto virtual machine112.1to provide the authentication cookie. In an embodiment, administrator-only accessible location602is a directory of virtual machine112.1that can only be accessed by users with administrative access privileges. A user authorized to access and restore objects (e.g., files, folders, directories, etc.) from image level backups of virtual machine112.1is part of this set of users and can therefore access and retrieve authentication cookie606from administrator-only accessible location602. In an embodiment, backup application204uses an API604provided by hypervisor110to write authentication cookie606to administrator-only accessible location602of virtual machine112.1. For example, in the case of hypervisor110being a VMware® hypervisor, API604can be a VIX API, which provides a library for writing scripts and programs to manipulate virtual machines. In an embodiment, authentication cookie606can be written to administrator-only accessible location602of virtual machine112.1using the CreateTempFileInGuest operation of the VIX API, which copies a file or directory from backup server106to administrator-only accessible location602of virtual machine112.1. Backup application204is then configured to wait for the user logged onto virtual machine112.1to provide the authentication cookie via backup application user interface202. If no authentication cookie is received by backup application204within a predetermined time interval, backup application204denies restoration request206and sends a denial response208to backup application user interface202. Response208can indicate to the user via backup application user interface202that the reason for the denial is the expiration of the time to enter the authentication cookie.
Otherwise, if an authentication cookie608is received by backup application204within the predetermined time interval, then backup application204accepts restoration request206if the received authentication cookie608matches the written authentication cookie606, and denies restoration request206if the received authentication cookie608does not match the written authentication cookie606. A response208accepting or denying restoration request206is then sent to backup application user interface202. If response208includes a denial, response208can indicate to the user via backup application user interface202that the reason for denial is the entry of an incorrect/invalid authentication cookie. Otherwise, if restoration request206is accepted, then backup application user interface202provides an interface for accessing content of image level backup VM_1 Backup114.1of virtual machine112.1. The interface can include a view for selecting objects for restoration from image level backup VM_1 Backup114.1. FIG.7is another example process700according to an embodiment. Example process700is provided for the purpose of illustration only and is not limiting of embodiments. Example process700can be performed by a backup server, such as backup server106, and more particularly by a backup application, such as backup application204. As shown inFIG.7, example process700begins in step702, which includes receiving a restoration request from a virtual machine. In an embodiment, the restoration request includes a machine identifier and a user identifier of a user logged onto the virtual machine. Subsequently, in step704, process700includes determining whether the machine identifier contained in the restoration request can be matched to the machine identifier of a virtual machine present in one of the virtual machine image level backups stored on the backup server. If the answer is no, process700proceeds to step706, which includes denying the restoration request. Otherwise, process700proceeds to step708. In step708, process700includes writing an authentication cookie to an administrator-only accessible location of the virtual machine. In an embodiment, the administrator-only accessible location is a directory of the virtual machine that can only be accessed by users with administrative access privileges. In an embodiment, the authentication cookie is written to the virtual machine using a VIX API provided by a VMware® hypervisor. Subsequently, in step710, process700includes prompting the user logged onto the virtual machine to provide the authentication cookie. A user authorized to access and restore objects (e.g., files, folders, directories, etc.) from image level backups of the virtual machine is part of the set of users that can access the administrator-only accessible location of the virtual machine, and can therefore access and retrieve the authentication cookie from the administrator-only accessible location. Then, process700proceeds to step712, in which the backup application waits to receive an authentication cookie from the virtual machine. If no authentication cookie is received from the virtual machine within a predetermined time interval, process700proceeds to step706, which includes denying the restoration request. Otherwise, process700proceeds to step714. In step714, process700includes determining whether the received authentication cookie matches the written authentication cookie. If the answer is no, process700proceeds to step706.
Otherwise, process700proceeds to step716, which includes accepting the restoration request, and then to step718, which includes providing the user access to content of an image level backup of the virtual machine. FIG.8illustrates another example system800according to an embodiment. Example system800is provided for the purpose of illustration only and is not limiting of embodiments. As shown inFIG.8, example system800includes a user system102, a host system104, and a backup server106. Host system104includes a hypervisor110, which enables one or more virtual machines, such as virtual machines112.1,112.2, . . . ,112.N. Host system104is connected to user system102via an interface108and to backup server106via an interface116. As described above with respect to example system200, backup server106stores image level backups of one or more virtual machines. For example, backup server106can store image level backups114.1,114.2, . . . ,114.N of virtual machines112.1,112.2, . . . ,112.N respectively. Backup server106also hosts a backup application204, which can be used by users to back up virtual machines, such as virtual machines112.1,112.2, . . . ,112.N, to backup server106or to restore virtual machine images or portions thereof, from image level backups114.1,114.2, . . . ,114.N of virtual machines112.1,112.2, . . . ,112.N respectively, from backup server106. In an embodiment, during backup or replication activity, backup application204saves, to a database, user accounts belonging to a certain access control group in the OS of the virtual machines being backed up. In an embodiment, the access control group is a local administrators group. In an embodiment, a user of user system102can use backup application user interface202.1,202.2, . . . ,202.N of virtual machines112.1,112.2, . . . ,112.N to send a restoration request802to backup server106. In an embodiment, restoration request802includes a machine identifier of virtual machine112.1(on which backup application user interface202is running), and a user identifier of the user logged onto virtual machine112.1. The user identifier can be the login information of the user logged onto the virtual machine, a user token, or a similar identifier. In an embodiment, backup application204is configured to use the user identifier contained in the restoration request to match it to one of the user identifiers contained in the image level backups114.1,114.2, . . . ,114.N of virtual machines112.1,112.2, . . . ,112.N. Backup application204then accepts or denies restoration request802based at least in part on whether or not the user identifier contained in the restoration request can be matched to one or more of the user identifiers contained in the image level backups114.1,114.2, . . . ,114.N of virtual machines112.1,112.2, . . . ,112.N. In an embodiment, backup application204denies restoration request802if the user identifier contained in the restoration request cannot be matched to any of the user identifiers contained in the image level backups114.1,114.2, . . . ,114.N of virtual machines112.1,112.2, . . . ,112.N. A response208denying the restoration request is then sent to backup application user interface202. If response208includes a denial, response208can indicate to the user via backup application user interface202that the reason for denial is that the user identifier contained in the restoration request cannot be matched to any of the user identifiers contained in the virtual machine image level backups114.1,114.2, . . . ,114.N of virtual machines112.1,112.2, . . . ,112.N respectively.
Otherwise, if restoration request802is accepted, then backup application user interface202provides an interface for accessing the content of virtual machine image level backups114.1,114.2, . . . ,114.N of virtual machines112.1,112.2, . . . ,112.N. The interface can include a view for selecting objects for restoration from image level backups114.1,114.2, . . . ,114.N of virtual machines112.1,112.2, . . . ,112.N. In an embodiment, backup application204can use the machine identifier contained in the restoration request to limit the visible scope of the content of the image level backups. For example, using the machine identifier contained in the restoration request, backup application204can limit the visible scope of the image level backups to the content of the image level backup corresponding to the VM that the restoration request came from. FIG.9is an example process900according to an embodiment. Example process900is provided for the purpose of illustration only and is not limiting of embodiments. Example process900can be performed by a backup server, such as backup server106, and more particularly by a backup application, such as backup application204. As shown inFIG.9, example process900begins in step902, which includes saving, during backup or replication activity, to a database, user accounts belonging to a certain access control group in the OS of the virtual machines being backed up. In an embodiment, the access control group is a local administrators group. Subsequently, process900proceeds to step904, which includes receiving a restoration request from a virtual machine. In an embodiment, the restoration request includes a machine identifier. The machine identifier can be a DNS name, an IP address assigned to the virtual machine, a hypervisor-level VM identifier, or a similar identifier. In another embodiment, the VM identifier can be automatically detected by matching a computer account (such as an Active Directory computer account) to a VM. In an embodiment, the restoration request includes a user identifier of the user logged onto the virtual machine. The user identifier can be the login information of the user logged onto the virtual machine, a user token, or a similar identifier. Subsequently, process900proceeds to step906, which includes determining whether or not the user identifier contained in the restoration request can be matched to one or more of the user identifiers contained in the image level backups. If the answer is no, process900proceeds to step908, which includes denying the restoration request. Otherwise, process900proceeds to step910, which includes accepting the restoration request, and then to step912, which includes providing the user access to content of an image level backup of the virtual machine. In step914, process900includes using the machine identifier contained in the restoration request to limit the visible scope of the content of the image level backups. Various aspects of the embodiments described herein can be implemented by software, firmware, hardware, or a combination thereof.FIG.10illustrates an example computer system1000in which embodiments, or portions thereof, can be practiced or implemented as computer-readable code. For example, processes300ofFIG.3,500ofFIG.5,700ofFIG.7, and900ofFIG.9can be implemented in system1000. Various embodiments are described in terms of this example computer system1000. After reading this description, it will become apparent to a person skilled in the relevant art how to implement embodiments using other computer systems and/or computer architectures.
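Before the details of example computer system1000are described, the user-identifier check and scope limiting of example process900can be summarized in a minimal Python sketch. It is illustrative only; the function name, the dictionary layout of image_backups, and the returned structure are assumptions made for the sketch and are not part of any product API.

def handle_restore_request(machine_id, user_id, image_backups):
    """Sketch of steps 906-914: accept the request when the user identifier matches a user
    account saved from one of the image level backups, then use the machine identifier to
    limit the visible scope to the backup of the VM the request came from.
    `image_backups` maps machine identifier -> {"users": set of user ids, "content": ...}."""
    matched = [b for b in image_backups.values() if user_id in b["users"]]
    if not matched:
        # Step 908: the user identifier cannot be matched to any backup, so deny.
        return {"accepted": False, "reason": "user identifier not found in any image level backup"}
    # Step 914: limit the visible scope to the requesting VM's own backup when it is available.
    own_backup = image_backups.get(machine_id)
    visible = [own_backup] if own_backup in matched else matched
    return {"accepted": True, "visible_backups": visible}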
Computer system1000includes one or more processors, such as processor1010. Processor1010can be a special-purpose or a general-purpose processor. Processor1010is connected to a communication infrastructure1020(for example, a bus or network). Computer system1000also includes a main memory1030, preferably random access memory (RAM), and may also include a secondary memory1040. Secondary memory1040may include, for example, a hard disk drive1050, a removable storage drive1060, and/or a memory stick. Removable storage drive1060may comprise a floppy disk drive, a magnetic tape drive, an optical disk drive, a flash memory, or the like. Removable storage drive1060reads from and/or writes to a removable storage unit1070in a well-known manner. Removable storage unit1070may comprise a floppy disk, magnetic tape, optical disk, etc. As will be appreciated by persons skilled in the relevant art(s), removable storage unit1070includes a computer-usable storage medium having stored therein computer software and/or data. In alternative implementations, secondary memory1040may include other similar means for allowing computer programs or other instructions to be loaded into computer system1000. Such means may include, for example, a removable storage unit1070and an interface1020. Examples of such means may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM, or PROM) and associated socket, and other removable storage units1070and interfaces1020which allow software and data to be transferred from the removable storage unit1070to computer system1000. Computer system1000may also include a communication and network interface1080. Communication interface1080allows software and data to be transferred between computer system1000and external devices. Communication interface1080may include a modem, a communication port, a PCMCIA slot and card, or the like. Software and data transferred via communication interface1080are in the form of signals which may be electronic, electromagnetic, optical, or other signals capable of being received by communication interface1080. These signals are provided to communication interface1080via a communication path1085. Communication path1085carries signals and may be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, an RF link or other communication channels. The network interface1080allows the computer system1000to communicate over communication networks or mediums such as LANs, WANs the Internet, etc. The network interface1080may interface with remote sites or networks via wired or wireless connections. The computer system1000may also include input/output/display devices1090, such as keyboards, monitors, pointing devices, etc. In this document, the terms “computer readable medium” and “computer usable medium” are used to generally refer to media such as removable storage unit1070, removable storage drive1060, and a hard disk installed in hard disk drive1050. Computer program medium and computer usable medium can also refer to memories, such as main memory1030and secondary memory1040, which can be memory semiconductors (e.g. DRAMs, etc.). These computer program products are means for providing software to computer system1000. Computer programs (also called computer control logic) are stored in main memory1030and/or secondary memory1040. Computer programs may also be received via communication interface1080. 
Such computer programs, when executed, enable computer system1000to implement the embodiments discussed herein. In particular, the computer programs, when executed, enable processor1010to implement the processes discussed above inFIGS.3,5, and9. Accordingly, such computer programs represent controllers of the computer system1000. Where an embodiment is implemented using software, the software may be stored in a computer program product and loaded into computer system1000using removable storage drive1060, interface1010, hard disk drive1050or communication interface1080. Embodiments have been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. CONCLUSION It is to be appreciated that the Detailed Description section, and not the Summary and Abstract sections (if any), is intended to be used to interpret the claims. The Summary and Abstract sections (if any) may set forth one or more but not all exemplary embodiments of the invention as contemplated by the inventor(s), and thus, are not intended to limit the invention or the appended claims in any way. While the invention has been described herein with reference to exemplary embodiments for exemplary fields and applications, it should be understood that the invention is not limited thereto. Other embodiments and modifications thereto are possible, and are within the scope and spirit of the invention. For example, and without limiting the generality of this paragraph, embodiments are not limited to the software, hardware, firmware, and/or entities illustrated in the figures and/or described herein. Further, embodiments (whether or not explicitly described herein) have significant utility to fields and applications beyond the examples described herein. Embodiments have been described herein with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined as long as the specified functions and relationships (or equivalents thereof) are appropriately performed. Also, alternative embodiments may perform functional blocks, steps, operations, methods, etc. using orderings different than those described herein. References herein to “one embodiment,” “an embodiment,” “an example embodiment,” or similar phrases, indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it would be within the knowledge of persons skilled in the relevant art(s) to incorporate such feature, structure, or characteristic into other embodiments whether or not explicitly mentioned or described herein. 
The breadth and scope of the invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
41,513
11860739
DETAILED DESCRIPTION An environment10with a plurality of client computing devices12(1)-12(n), an exemplary plurality of storage controllers14(1)-14(n), and a plurality of storage repositories16(1)-16(n), is illustrated inFIG.1. In this particular example, the environment10inFIG.1includes the client computing devices12(1)-12(n), the storage controllers14(1)-14(n), and the storage repositories16(1)-16(n), coupled via communication networks30(1) and30(2), although the environment10could include other types and numbers of systems, devices, components, and/or other elements coupled in other manners. The client computing devices12(1)-12(n), the storage controllers14(1)-14(n), and the storage repositories16(1)-16(n) may exist at a specific geographic or network location, may be spread across multiple geographic or network locations, or may be partially or completely virtualized in a cloud environment. The example of a method for managing snapshots in a distributed de-duplication system is executed by the storage controllers14(1)-14(n), although the technology illustrated and described herein could be executed by other types and/or numbers of other computing systems and devices or individually by one or more of the storage controllers14(1)-14(n). The environment10may include other types and numbers of other network elements and devices including routers and switches, as is generally known in the art and will not be illustrated or described herein. This technology provides a number of advantages including providing methods, non-transitory computer readable media and devices for improved management of data storage snapshots in a distributed de-duplication system. Referring toFIG.2, in this example each of the storage controllers14(1)-14(n) includes a processor18, a memory20, and a communication interface24, which are coupled together by a bus26, although the storage controllers14(1)-14(n) may include other types and numbers of elements in other configurations. The processor18of each of the storage controllers14(1)-14(n) may execute one or more programmed instructions stored in the memory20for storage management operations, as illustrated and described in the examples herein, although other types and numbers of functions and/or other operation can be performed. The processor18of each of the storage controllers14(1)-14(n) may include one or more central processing units (“CPUs”) or general purpose processors with one or more processing cores, although other types of processor(s) could be used. The memory20of each of the storage controllers14(1)-14(n) stores the programmed instructions and other data for one or more aspects of the present technology as described and illustrated herein, although some or all of the programmed instructions could be stored and executed elsewhere. A variety of different types of memory storage repositories, such as a non-volatile memory, random access memory (RAM), flash memory, or a read only memory (ROM) in the system or hard disk, SSD, CD ROM, DVD ROM, or other computer readable medium which is read from and written to by a magnetic, optical, or other reading and writing system that is coupled to the processor18, can be used for the memory20. 
The communication interface24of each of the storage controllers14(1)-14(n) operatively couples and communicates with the client computing devices12(1)-12(n), and the storage repositories16(1)-16(n), which are all coupled together by the communication networks30(1) and30(2), although other types and numbers of communication networks or systems with other types and numbers of connections and configurations to other devices and elements can be used. By way of example only, the communication networks30(1) and30(2) can use TCP/IP over Ethernet and industry-standard protocols, including NFS, CIFS, S3, CDMI, HTTP, and SNMP, although other types and numbers of communication networks can be used. The communication networks30(1) and30(2) in this example may employ any suitable interface mechanisms and network communication technologies, including, for example, any local area network, any wide area network (e.g., Internet), teletraffic in any suitable form (e.g., voice, modem, and the like), Public Switched Telephone Networks (PSTNs), Ethernet-based Packet Data Networks (PDNs), and any combinations thereof and the like. In this example, the bus26is a PCI bus, although other bus types and links may be used, such as PCI-Express or a HyperTransport bus. Each of the client computing devices12(1)-12(n) includes a central processing unit (CPU) or processor, a memory, and an I/O system, which are coupled together by a bus or other link, although other numbers and types of network devices could be used. The client computing devices12(1)-12(n) communicate with the storage controllers14(1)-14(n) for storage management, although the client computing devices12(1)-12(n) can interact with the storage controllers14(1)-14(n) for other purposes. By way of example, the client computing devices12(1)-12(n) may run application(s) that may provide an interface to make requests to access, modify, delete, edit, read, or write data within the storage repositories16(1)-16(n) via the communication network30(1) and the storage controllers14(1)-14(n). Each of the storage repositories16(1)-16(n) includes a central processing unit (CPU) or processor, and an I/O system, which are coupled together by a bus or other link, although other numbers and types of network devices could be used. Each of the storage repositories16(1)-16(n) assists with storing data, although the storage repositories16(1)-16(n) can assist with other types of operations such as storing of files or structured objects. Various network processing applications, such as CIFS applications, NFS applications, HTTP storage device applications, and/or FTP applications, may be operating on the storage repositories16(1)-16(n) and transmitting data (e.g., files or web pages) in response to requests from the storage controllers14(1)-14(n) and the client computing devices12(1)-12(n). It is to be understood that the storage repositories16(1)-16(n) may include hardware such as hard disk drives, solid state devices (SSD), or magnetic tapes, or software, or may represent a system with multiple external resource servers, which may include internal or external networks. Although the exemplary network environment10includes the client computing devices12(1)-12(n), the storage controllers14(1)-14(n), and the storage repositories16(1)-16(n) described and illustrated herein, other types and numbers of systems, devices, components, and/or other elements in other topologies can be used.
It is to be understood that the systems of the examples described herein are for exemplary purposes, as many variations of the specific hardware and software used to implement the examples are possible, as will be appreciated by those of ordinary skill in the art. In addition, two or more computing systems or devices can be substituted for any one of the systems or devices in any example. Accordingly, principles and advantages of distributed processing, such as redundancy and replication also can be implemented, as desired, to increase the robustness and performance of the devices and systems of the examples. The examples may also be implemented on computer system(s) that extend across any suitable network using any suitable interface mechanisms and traffic technologies, including by way of example only teletraffic in any suitable form (e.g., voice and modem), wireless traffic media, wireless traffic networks, cellular traffic networks, 3G traffic networks, Public Switched Telephone Network (PSTNs), Packet Data Networks (PDNs), the Internet, intranets, and combinations thereof. The examples also may be embodied as a non-transitory computer readable medium having instructions stored thereon (e.g., in the memory20) for one or more aspects of the present technology as described and illustrated by way of the examples herein, as described herein, which when executed by a processor (e.g., processor18), cause the processor to carry out the steps necessary to implement the methods of this technology as described and illustrated with the examples herein. An example of a method for managing storage in a distributed de-duplication system will now be described herein with reference toFIGS.1-8. Referring more specifically toFIG.3, an exemplary method of ingesting objects or files is illustrated. The exemplary method begins at step305where one of the storage controllers14(1)-14(n) receives an object or file to be stored into one of the storage repositories16(1)-16(n) from one of the client computing devices12(1)-12(n), although the storage controllers14(1)-14(n) can receive other types or amounts of data from other devices. Next in step310, the receiving one of the storage controllers14(1)-14(n) caches the received object or file into a cache within memory20, although the one of the storage controllers14(1)-14(n) can store the object or file at other transitory or non-transitory memory storage locations. In step315, the receiving one of the storage controllers14(1)-14(n) determines when the entire cached object or file is compressed or encrypted, or otherwise unlikely to benefit from further compression and sub-object deduplication. Accordingly, when the receiving one of the storage controllers14(1)-14(n) determines that the cached object or file is compressed, encrypted, or otherwise is unlikely to benefit from further compression and sub-object deduplication, then the Yes branch is taken to step325, which will be further illustrated below. In this example, when the cached object or file is either compressed or encrypted it means that there is low probability that further compression and sub-object deduplication will reduce the network and storage consumption associated with the object or the file. 
However, back in step315, when the receiving one of the storage controllers14(1)-14(n) determines that the cached object is neither compressed nor encrypted, and that fragmentation, compression, and sub-object deduplication will likely reduce the network and storage consumption associated with storing the object or the file, then the No branch is taken to step320. In step320, the receiving one of the storage controllers14(1)-14(n) fragments the cached object or file. Prior to, or immediately subsequent to, fragmenting, the receiving one of the plurality of storage controllers14(1)-14(n) compresses the object. In this example, fragmenting of the object or file relates to splitting the object or file into multiple fragments of equal or variable size, although other types or techniques of fragmenting can also be performed by the receiving one of the storage controllers14(1)-14(n) on the cached object or file. Next in step325, the receiving one of the storage controllers14(1)-14(n) computes the plaintext hash value for each of the fragments of the cached object or file using one or more hashing algorithms. Additionally, upon computing the plaintext hash value for each fragment, the receiving one of the storage controllers14(1)-14(n) also obtains a tenant key associated with the requesting one of the client computing devices12(1)-12(n), which was noted at the time of initializing the one of the client computing devices12(1)-12(n), from the memory20, although the tenant key can be obtained from other locations. Next in step330, the receiving one of the storage controllers14(1)-14(n) computes an encrypted fragment key for each of the fragments using the computed fragment plaintext hash and the obtained tenant key, although the receiving one of the storage controllers14(1)-14(n) can compute the encrypted fragment key using other techniques or parameters. Using the computed encrypted fragment key, the receiving one of the storage controllers14(1)-14(n) encrypts each of the fragments with its corresponding computed fragment key, although the fragments can be encrypted using other techniques or parameters. Next in step335, the receiving one of the storage controllers14(1)-14(n) computes a ciphertext hash value for each of the fragments by hashing the contents of the encrypted fragment, although the receiving one of the storage controllers14(1)-14(n) can use other techniques or parameters to compute the ciphertext hash value. Additionally in this example, once the ciphertext hash values are computed, the name of each of the encrypted fragments is replaced by the one of the storage controllers14(1)-14(n) with a corresponding one of the computed ciphertext hash values. Next in step340, the receiving one of the storage controllers14(1)-14(n) determines, for each of the fragments (with the name equal to the computed ciphertext hash value), when there is already an existing fragment with the same name stored in one or more of the storage repositories16(1)-16(n), although the one of the storage controllers14(1)-14(n) can make the determination in step340using other memory locations. In this example, when the receiving one of the storage controllers14(1)-14(n) determines that a fragment with the same name exists, then there is deduplication with the fragment. However, when a fragment with the same name does not exist, then there is no deduplication.
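A compact Python sketch of the fragment hashing, key derivation, encryption, and naming just described, together with the resulting store-or-skip decision, is given below for illustration only. The description does not name particular algorithms, so SHA-256, HMAC-SHA-256 for the fragment key derivation, and AES-GCM (via the third-party cryptography package) with a nonce derived from the plaintext hash are assumptions made here, and the in-memory repository dictionary merely stands in for the storage repositories16(1)-16(n).

import hashlib
import hmac
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # assumes the `cryptography` package

def ingest_fragment(fragment: bytes, tenant_key: bytes, repository: dict):
    """Illustrative sketch: hash the plaintext, derive a per-fragment key from the tenant key,
    encrypt, name the result by its ciphertext hash, and skip the store when a fragment with
    that name already exists (deduplication). Returns the recipe entry for this fragment."""
    plaintext_hash = hashlib.sha256(fragment).digest()                              # step 325
    fragment_key = hmac.new(tenant_key, plaintext_hash, hashlib.sha256).digest()    # step 330
    # A deterministic nonce derived from the plaintext hash keeps identical fragments
    # identical after encryption under the same tenant key, which preserves deduplication.
    nonce = plaintext_hash[:12]
    ciphertext = AESGCM(fragment_key).encrypt(nonce, fragment, None)
    ciphertext_hash = hashlib.sha256(ciphertext).hexdigest()                        # step 335: fragment name
    if ciphertext_hash not in repository:                                           # step 340: dedup check
        repository[ciphertext_hash] = ciphertext                                    # store only when new
    return (plaintext_hash.hex(), ciphertext_hash)                                  # recipe object entry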
Accordingly, when the receiving one of the storage controllers14(1)-14(n) determines that a fragment exists with the same name, then the Yes branch is taken to step345. In step345, the receiving one of the storage controllers14(1)-14(n) does not store the fragment in the storage repositories16(1)-16(n), and the exemplary method ends. By not sending the fragment, the technology disclosed herein avoids storing duplicate data in the storage repositories16(1)-16(n) and also saves the bandwidth that would otherwise be utilized to store the fragment that is already present. Additionally in this example, the receiving one of the storage controllers14(1)-14(n) stores the computed plaintext hash values, and the ciphertext hash values associated with the original object, as a separate recipe object in one or more of the storage repositories16(1)-16(n), although the hash values can be stored at other locations. However, back in step340, when the receiving one of the storage controllers14(1)-14(n) determines that an object or fragment with the same name does not exist, then the No branch is taken to step350. In step350, the receiving one of the storage controllers14(1)-14(n) stores the object or fragment with the name of the computed ciphertext hash value, and the exemplary method ends. Additionally in this example, the receiving one of the storage controllers14(1)-14(n) stores the computed plaintext hash values, and the ciphertext hash values associated with the original object, as a separate recipe object in one or more of the storage repositories16(1)-16(n), although the hash values can be stored at other locations. Optionally, the recipe object can also be encrypted for the purpose of data security. In this example, a namespace which maps the object name to a series of plaintext and ciphertext hashes is obtained from the recipe object retrieved from the storage repositories16(1)-16(n), although the namespace can be stored at other locations. Further in this example, client computing devices12(1)-12(n) that share the same tenant key can transfer objects, for example, by providing access to the recipe object or the contents of the recipe. Accordingly, the technology disclosed herein provides a low-bandwidth way to transfer objects between systems, and to synchronize updates to objects. Referring more specifically toFIG.4, an exemplary method of managing a read or GET request is illustrated. In step405, one of the storage controllers14(1)-14(n) receives a read or GET request for an object or file from one of the client computing devices12(1)-12(n), although the storage controllers14(1)-14(n) can receive other types or amounts of requests. In this example, the read or the GET request includes the name of the object or the file, although the received request can include other types or amounts of information. Next in step410, the receiving one of the storage controllers14(1)-14(n) determines when the requested object is in the cache within the memory20, although the receiving one of the storage controllers14(1)-14(n) can also determine when the object is present at other locations and can also check with the storage repositories16(1)-16(n) to ensure that the cache is not stale. Accordingly, when the receiving one of the storage controllers14(1)-14(n) determines that the requested object is within the cache of the memory20, then the Yes branch is taken to step415.
In step415, the receiving one of the storage controllers14(1)-14(n) obtains the requested object from the cache and the exemplary flow proceeds to step450where the requested object is returned back to the requesting one of the client computing devices12(1)-12(n) and the flow ends. However, back in step410when the receiving one of the storage controllers14(1)-14(n) determines that the requested object or file is not present in the cache, then the No branch is taken to step420. In step420, the receiving one of the storage controllers14(1)-14(n) identifies or performs a lookup of plaintext and ciphertext hashes locally within the memory20or uses the name of the requested object to obtain the plaintext and ciphertext hashes from the recipe object stored in one or more of the storage repositories16(1)-16(n), although the one of the storage controllers14(1)-14(n) can use other techniques or parameters to look up the plaintext and ciphertext hashes. In step425, the receiving one of the storage controllers14(1)-14(n) obtains, for each of the ciphertext hash values, the fragments associated with the requested object or file from one or more of the storage repositories16(1)-16(n), although the receiving one of the storage controllers14(1)-14(n) can obtain the fragments from other locations. In this example, each of the fragments obtained from the one or more of the storage repositories16(1)-16(n) is encrypted using the technique previously described and illustrated with reference toFIG.3. Next in step430, the receiving one of the storage controllers14(1)-14(n) verifies and decrypts each of the obtained encrypted fragments using the plaintext hash value corresponding to the ciphertext hash value and the tenant key associated with the requesting one of the client computing devices12(1)-12(n), although the receiving one of the storage controllers14(1)-14(n) can use other techniques to perform the decryption. In step435, the receiving one of the storage controllers14(1)-14(n) verifies each of the decrypted fragments using the plaintext hash value, and begins to reassemble the decrypted fragments associated with the requested object. In this example, reassembling of the fragments is required because, as illustrated inFIG.3, the objects are split into multiple fragments and stored at the storage repositories16(1)-16(n). Accordingly, reassembling from the fragments is required to generate the complete requested object. Next in step440, the receiving one of the storage controllers14(1)-14(n) determines if the reassembled fragments are compressed. In this example, the fragments stored at the storage repositories16(1)-16(n) can be compressed and stored to utilize the storage memory space efficiently, although the fragments can be compressed for other purposes. Accordingly, when the receiving one of the storage controllers14(1)-14(n) determines that the reassembled fragments are not compressed, then the No branch is taken to step450and the object is returned. However when the receiving one of the storage controllers14(1)-14(n) determines in step440that the fragments are compressed, then the Yes branch is taken to step445. In step445, the receiving one of the storage controllers14(1)-14(n) decompresses the compressed fragments using one or more data decompression algorithms. Next in step450, the receiving one of the storage controllers14(1)-14(n) returns the requested object back to the requesting one of the client computing devices12(1)-12(n) and the exemplary method ends at step450.
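The GET path of steps420through450can be sketched in the same assumed terms as the write-path sketch above (SHA-256, an HMAC-derived fragment key, a stand-in cipher, zlib compression, and a hypothetical repository interface):

import hashlib, hmac, json, zlib

def read_object(repo, object_name, tenant_key):
    recipe = json.loads(repo.get(object_name + ".recipe"))        # step 420
    pieces = []
    for entry in recipe["fragments"]:
        ciphertext = repo.get(entry["ciphertext_hash"])           # step 425
        # the fragment's name is its ciphertext hash, so verify it first
        assert hashlib.sha256(ciphertext).hexdigest() == entry["ciphertext_hash"]
        p_hash = bytes.fromhex(entry["plaintext_hash"])
        key = hmac.new(tenant_key, p_hash, hashlib.sha256).digest()   # step 430
        stream = (key * (len(ciphertext) // len(key) + 1))[:len(ciphertext)]
        compressed = bytes(a ^ b for a, b in zip(ciphertext, stream))
        assert hashlib.sha256(compressed).digest() == p_hash      # step 435: verify
        pieces.append(zlib.decompress(compressed))                # steps 440-445
    return b"".join(pieces)                                       # step 450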
Referring more specifically toFIG.5, an exemplary method for deleting objects is illustrated. The exemplary method begins at step505where one of the storage controllers14(1)-14(n) receives a delete object or file request from one of the client computing devices12(1)-12(n), although the storage controllers14(1)-14(n) can receive other types or amounts of information. In this example, the delete request includes the name of the object or the file, although the received delete request can include other types or amounts of information. While this method is illustrated upon receipt of the delete request, the receiving one of the storage controllers14(1)-14(n) can perform the operation of deleting the objects periodically in other examples. Next in step510, the receiving one of the storage controllers14(1)-14(n) initiates the process to remove all of the stored ciphertext and the plaintext hashes by first storing a "tombstone" object for the recipe object in one or more of the storage repositories16(1)-16(n), which marks the recipe object as becoming deleted at a specific point in time, although the receiving one of the storage controllers14(1)-14(n) can mark or delete the ciphertext and the plaintext hashes from other locations. Next in step515, the receiving one of the storage controllers14(1)-14(n) periodically identifies all of the storage repositories16(1)-16(n) storing the fragments associated with the ciphertext hashes obtained from the recipe objects and any corresponding tombstone objects, if present, although the receiving one of the storage controllers14(1)-14(n) can identify the fragments using other techniques or parameters. While this example illustrates the receiving one of the storage controllers14(1)-14(n) performing this step, one or more storage controllers of the remaining plurality of storage controllers14(1)-14(n) can also perform this step in other examples. Next in step520, the receiving one of the storage controllers14(1)-14(n) determines when the identified fragments are referenced by others of the storage controllers14(1)-14(n). In this example, the number of recipe objects containing a given ciphertext hash is counted, and the number of tombstone objects referring to a given ciphertext hash is subtracted. Accordingly, when the receiving one of the storage controllers14(1)-14(n) determines that the fragments are not being referenced (e.g., have a count of zero), then the No branch is taken to step525. While this example illustrates the receiving one of the storage controllers14(1)-14(n) performing this step, one or more of the remaining storage controllers14(1)-14(n) can also perform this step in other examples. In step525, the receiving one of the storage controllers14(1)-14(n) removes or deletes all the fragments, recipe objects, and tombstone objects no longer in use from the storage repositories16(1)-16(n) and the exemplary method ends at step535. Further in this example, the receiving one of the storage controllers14(1)-14(n) may use a locking or conditional deletion mechanism to prevent race conditions. While this example illustrates the receiving one of the storage controllers14(1)-14(n) performing this step, one or more of the remaining storage controllers14(1)-14(n) can also perform this step in other examples. However back in step520, when the receiving one of the storage controllers14(1)-14(n) determines that the fragments are being referenced, then the Yes branch is taken to step530.
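The reference counting of steps515through525can be illustrated with a short sketch. The in-memory layouts and the repository interface are assumptions for illustration; in practice the counting would be driven from the recipe and tombstone objects read from the storage repositories16(1)-16(n).

import time

def delete_object(repo, object_name):
    # step 510: a tombstone marks the recipe as deleted at a specific point in time
    repo.put(object_name + ".tombstone", str(time.time()).encode())

def collect_unreferenced_fragments(repo, recipes, tombstoned):
    # recipes: {recipe_name: [ciphertext_hash, ...]}; tombstoned: set of recipe names
    counts = {}
    for name, hashes in recipes.items():                 # steps 515-520
        live = 0 if name in tombstoned else 1            # tombstones subtract a reference
        for c_hash in hashes:
            counts[c_hash] = counts.get(c_hash, 0) + live
    for c_hash, count in counts.items():
        if count == 0:                                   # step 525: no longer referenced
            repo.delete(c_hash)
    for name in tombstoned:                              # retire the unused metadata
        repo.delete(name + ".recipe")
        repo.delete(name + ".tombstone")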
While this example illustrates the receiving one of the storage controllers14(1)-14(n) performing this step, one or more of the remaining storage controllers14(1)-14(n) can also perform this step in other examples. In step530, the receiving one of the storage controllers14(1)-14(n) does not delete the fragments, recipe objects, and tombstone objects, and the exemplary method ends at step535. In this example of deleting the object, the receiving one of the storage controllers14(1)-14(n) avoids the race condition by using protocol Request for Comments (RFC) No. 7232, which is hereby incorporated by reference in its entirety. As would be appreciated by a person having ordinary skill in the art, the HTTP specification supports conditional requests (e.g., as described in RFC No. 7232), including If-Unmodified-Since, which prohibits an operation unless the object's creation/modification date is older than the specified date. As long as the date and time corresponding to the start of the operation is used as a conditional for the delete operation, the race condition will not result in data being lost, as the delete operation will not be executed. While this example illustrates the receiving one of the storage controllers14(1)-14(n) performing this step, one or more of the remaining storage controllers14(1)-14(n) can also perform this step in other examples. Additionally, the above illustrated write technique prevents the race condition by using the following technique after step345and before ending the exemplary method. In the example to prevent the race condition, the receiving one of the storage controllers14(1)-14(n) updates the metadata for the object or performs another operation that updates the last modified time, effectively performing an operation to update the modified time on the fragment object. This technique eliminates the race condition window when the plurality of storage repositories16(1)-16(n) have strong consistency with respect to the last modified time and the conditional delete request. Additionally, the race condition can also be solved by trying to update the object first to update the modified time on the object, and then performing the fragment write when a data or object not found response (such as a404error) is sent back to the requesting one of the client computing devices12(1)-12(n). This also has the side effect of being more efficient over the wire. An exemplary illustration of managing snapshots of the fragments stored in the storage repositories16(1)-16(n) will now be described and illustrated with reference toFIGS.6-8. To perform a global snapshot, one of the storage controllers14(1)-14(n) computes a snapshot set that contains a complete set of ciphertext hashes and recipe objects associated with all the fragments in the storage repositories16(1)-16(n), or directly from the other storage controllers14(1)-14(n). To retain a non-global snapshot, the snapshot set contains a subset of ciphertext hashes associated with a given state of the filesystem or collection of objects that match snapshot criteria, such as for a given directory or file type, which is retained by the one of the storage controllers14(1)-14(n), although the received list can be stored at other locations, such as by a separate snapshot management computing device (not illustrated) that interacts with one of the storage controllers14(1)-14(n) using the same interface as the storage repositories16(1)-16(n).
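Returning briefly to the conditional delete described above, the If-Unmodified-Since header of RFC 7232 can be attached to the delete request so that a concurrent writer that refreshes a fragment's last-modified time causes the delete to fail. The URL layout below is hypothetical, and the sketch assumes an object store that honors HTTP conditional requests:

from email.utils import formatdate
import requests

def conditional_delete(base_url, fragment_name, operation_started_at):
    # operation_started_at: POSIX timestamp taken at the start of the delete operation
    headers = {"If-Unmodified-Since": formatdate(operation_started_at, usegmt=True)}
    response = requests.delete(f"{base_url}/{fragment_name}", headers=headers)
    if response.status_code == 412:
        # Precondition Failed: the fragment was touched by a concurrent write,
        # so it is still referenced and must not be removed
        return False
    response.raise_for_status()
    return True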
Further in this example, for retention and compliance, snapshot sets can be created and marked as either not being removable, or requiring a hold to be released before they can be removed. This has applications for compliance and legal holds, as the entire contents of the enterprise-wide storage system, including cloud storage, can be immediately protected from deletion and alteration, without interfering with read/write system operation. Referring more specifically toFIG.6, an exemplary method for managing snapshot specifications is illustrated. In step600in this example, one of the storage controllers14(1)-14(n) defines one or more snapshot parameters for a snapshot, including time, time range, subset of namespace, versioning, access control criteria, for example, although the one of the storage controllers14(1)-14(n) can define other snapshot parameters. Next, in step602, the snapshot parameters are packaged by the one of the storage controllers14(1)-14(n) into a snapshot request, which can be represented as a JavaScript Object Notation (JSON) file, although the one of the storage controllers14(1)-14(n) can package the parameters into other formats in other examples. The snapshot request is then stored in step604to a cloud storage bucket, such as on a cloud one of the storage repositories16(1)-16(n), for example, although the snapshot request can be stored at other locations. In step606, the storage of the snapshot request triggers immediate or deferred computation logic at the one of the storage controllers14(1)-14(n). In step608, the one of the storage controllers14(1)-14(n) verifies and validates the snapshot request using the computation logic along with the permissions of a submitting user (e.g., of one of the client computing devices12(1)-12(n)). Referring more specifically toFIG.7, an exemplary method for snapshot access is illustrated. In step700, one of the storage controllers14(1)-14(n) receives a request to access a snapshot for a given namespace subset at a given time or name from one of the client computing devices12(1)-12(n), although the one of the storage controllers14(1)-14(n) can receive other types of requests including other types of information. Next, in step702, the one of the storage controllers14(1)-14(n) accesses the namespace using data in the received request, although other techniques can be used to access the namespace. In step704, the one of the storage controllers14(1)-14(n) accesses the requested snapshot using a NAS protocol in response to a received request from one of the client computing devices12(1)-12(n), although other types of protocols could also be used to access the snapshot. Additionally, the one of the storage controllers14(1)-14(n) sends out a request in steps706A and706B to access a file and a directory in parallel to an external device, such as one of the storage repositories16(1)-16(n) or a snapshot management device (not illustrated) in a cloud network. Next, in steps708A and708B, the snapshot is resolved to the corresponding snapshot request object by the one of the storage controllers14(1)-14(n) to a snapshot time (e.g., as received directly in step700, or as referenced by a name received in step700) based on a specified name, although the snapshot can be resolved using other techniques.
Further, in steps710A and710B, the directory and file information is translated by the one of the storage controllers14(1)-14(n) to a parent ID and file name and, in steps712A and712B, the one of the storage controllers14(1)-14(n) queries for all the objects specified with the specified parent ID and the filename using the snapshot time to determine which recipe objects were valid at the corresponding snapshot time. Furthermore, for each set of matching recipe objects with the same name, the one of the storage controllers14(1)-14(n) returns, in steps714A and714B, a recipe immediately preceding the snapshot time back to the one of the client computing devices12(1)-12(n). Referring more specifically toFIG.8, an exemplary method for garbage collection of snapshots is illustrated. In step800, a garbage collection process is initiated at one of the storage controllers14(1)-14(n) and, in step802, the one of the storage controllers14(1)-14(n) enumerates all snapshot time ranges and corresponding namespaces from stored snapshot request objects. Next, in step804, the one of the controllers14(1)-14(n) groups and sorts snapshot recipe and tombstone objects by identifier and time, respectively, although other parameters could be used to group and/or sort the snapshots. In step806, the one of the storage controllers14(1)-14(n) determines, for each recipe and tombstone object, whether the interval between the recipe object and a tombstone object does not overlap with any snapshot time, where the namespace of the recipe object also matches that of the snapshot. Next, in step808, if the determination indicates that the interval does not overlap, the one of the storage controllers14(1)-14(n) performs the same reference counting and recipe object and tombstone object deletion process described and illustrated in more detail earlier with reference to step520. Finally, in step810, the one of the storage controller14(1)-14(n) deletes fragments that are not included in any of the recipe objects. Accordingly, with this technology, files are stored as fragment objects (each named with the hash of the fragment), and recipe objects (each containing metadata such as a parent identifier, a parent name, a file name, and an ordered list of the hash of each fragment objects that comprises a given file). Fragment objects are immutable, and are garbage collected when no longer referenced in any recipe objects. When an existing file is deleted, a tombstone object is created, and when an existing file is updated, the corresponding recipe object is either appended with the updated recipe object, or a new recipe object is created. Further, the primary user-visible snapshot functionality that is enabled includes named snapshots, time range snapshots, versioning snapshots, retention, version browse and access, single file store, selective file store, snapshot browse, snapshot restore, snapshot compare, and snapshot clone. In named snapshots, a user manually or programmatically indicates that a given point in time subset of the global namespace is associated with a snapshot name in the snapshot request object. All recipes and fragments that intersect with the specified time are retained until such time that the named snapshot is released. In time range snapshots (also referred to as versioning snapshots) a user manually or programmatically indicates that all changes in a given time range for a subset of the global namespace is associated with a snapshot name in the snapshot request object. 
All recipes and fragments that intersect with the specified time range are retained until such time that the time range snapshot is released. All historical recipes and fragments that match the subset of the namespace are retained until such time that the snapshot request is deleted. In retention, a user manually or programmatically indicates that a subset of the global namespace is to be retained for a given duration of time (retention), or until further notice (hold), is associated with a snapshot name in the snapshot request object. All recipes and fragments that match the subset of the namespace are retained until such time that the retention and/or hold is released. Further, in version browse and access, a user can manually or programmatically access historical versions of any element in the namespace, where present. In a single file restore, a user can manually or programmatically replace the current version of a file with a historical version or version from a named snapshot (or specific time), where present. The selective file restore provides a user to manually or programmatically restore files that match a given set of criteria, such as files encrypted by ransomware. In snapshot browse, a user can manually or programmatically access a subset of the namespace as it existed at a given time or named snapshot using file system protocols. In a snapshot restore, a user can manually or programmatically replace a current namespace with a prior namespace as it existed at a given time or named snapshot. In snapshot compare a user can manually or programmatically compare the differences between a namespace at one point in time or named snapshot, and another point in time or named snapshot. In snapshot clone a user can manually or programmatically clone a snapshot into another part of the namespace, and once cloned, modify it independently from the original namespace. Further in this example, in order to access a point-in-time file from a snapshot, the following translation needs to be performed: named snapshot resolved to time T, if not specified directly; directory and file name A translated to recipe P that existed at time T. In order to access a point-in-time directory listing (either from a snapshot, or at an arbitrary point in time), the following translation needs to be performed: named snapshot resolved to time T, if not specified directly; query performed across all recipe files to identify directory and file names that existed at time T. Using these two techniques, snapshot views of namespace contents can be accessed via a web interface, via network file protocols, via versioned access protocols, or via cloud protocols such as CDMI that directly support snapshot and versioned file/object access. Accordingly, as illustrated and described by way of the examples herein, this technology provides a number of advantages including providing methods, non-transitory computer readable media and devices for improved management of data storage in a distributed de-duplication system. Using the above illustrated techniques, the technology disclosed herein is able to use processor and memory resources minimally for client computing devices to facilitate data storage and storage operations. Additionally, fragments of the objects or files can be stored without requiring modifications to the storage repositories. Furthermore, the technology does not waste computing resources on non-de-duplicable data. 
Additionally, the technology allows hybrid public/private cloud de-duplication and supports multi-tenant de-duplication with full tenant security isolation. The technology also improves efficiency of namespace migration between systems and tenants. Having thus described the basic concept of the technology, it will be rather apparent to those skilled in the art that the foregoing detailed disclosure is intended to be presented by way of example only, and is not limiting. Various alterations, improvements, and modifications will occur to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested hereby, and are within the spirit and scope of the technology. Additionally, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefor, is not intended to limit the claimed processes to any order except as may be specified in the claims. Accordingly, the invention is limited only by the following claims and equivalents thereto.
DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT The present invention may be implemented as computer software on a conventional computer system. Referring now toFIG.1, a conventional computer system150for practicing the present invention is shown. Processor160retrieves and executes software instructions stored in storage162such as memory, which may be Random Access Memory (RAM) and may control other components to perform the present invention. Storage162may be used to store program instructions or data or both. Storage164, such as a computer disk drive or other nonvolatile storage, may provide storage of data or program instructions. In one embodiment, storage164provides longer term storage of instructions and data, with storage162providing storage for data or instructions that may only be required for a shorter time than that of storage164. All storage elements described herein may include conventional memory and/or disk storage and may include a conventional database. All elements of the system described herein may include any or all of at least one input, at least one output and at least one input/output and may include a hardware computer processor. All system elements may include a computer processor system or other logic circuitry, designed or programmed via non transitory computer readable program code devices to operate as described herein and/or memory system. All system claim elements are to be interpreted as structural, the only nonce word to be used in claims is the word “means” and all other words are not to be interpreted as nonce words. Input device166such as a computer keyboard or mouse or both allows user input to the system150. Output168, such as a display or printer, allows the system to provide information such as instructions, data or other information to the user of the system150. Storage input device170such as a conventional floppy disk drive or CD-ROM drive accepts via input172computer program products174such as a conventional floppy disk or CD-ROM or other nonvolatile storage media that may be used to transport computer instructions or data to the system150. Computer program product174has encoded thereon computer readable program code devices176, such as magnetic charges in the case of a floppy disk or optical encodings in the case of a CD-ROM which are encoded as program instructions, data or both to configure the computer system150to operate as described below. In one embodiment, each computer system150is a conventional SUN MICROSYSTEMS T SERIES SERVER running the SOLARIS operating system commercially available from ORACLE CORPORATION of Redwood Shores, California, a PENTIUM-compatible personal computer system such as are available from DELL COMPUTER CORPORATION of Round Rock, Texas running a version of the WINDOWS operating system (such as XP, VISTA, or 7) commercially available from MICROSOFT Corporation of Redmond Washington or a Macintosh computer system running the MACOS or OPENSTEP operating system commercially available from APPLE INCORPORATED of Cupertino, California and the FIREFOX browser commercially available from MOZILLA FOUNDATION of Mountain View, California or INTERNET EXPLORER browser commercially available from MICROSOFT above, although other systems may be used. Each computer system150may be a SAMSUNG GALAXY NEXUS III commercially available from SAMSUNG ELECTRONICS GLOBAL of Seoul, Korea running the ANDROID operating system commercially available from GOOGLE, INC. of Mountain View, California. 
Various computer systems may be employed, with the various computer systems communicating with one another via the Internet, a conventional cellular telephone network, an Ethernet network, or all of these. Referring now toFIG.2, a method of causing backup software to avoid backing up of certain virtual machines on each of several cluster backup configurations is shown according to one embodiment of the present invention. A list of cluster backup configuration identifiers is received210. The list may include the names of each cluster backup configuration, and may include location information describing the locations of the cluster on which the virtual machines reside. In one embodiment, the locations of the cluster backup configuration may already be defined, in which case only the names of the cluster backup configurations need be received. Cluster backup configuration names may be received and updated at any time as indicated by the dashed line in the Figure. A list of names of virtual machines that are not to be backed up is received212. In one embodiment, the machines not to be backed up may be spread across cluster backup configurations from among those whose names are received in step210, though the cluster backup configuration on which the virtual machine resides at the time need not be specified on the list, and may not even be known by the author of the list, and thus, the list may be authored without regard for the particular cluster backup configuration on which a virtual machine resides at any time. In one embodiment, the list is in the format of a spreadsheet, such as a conventional EXCEL spreadsheet, commercially available from MICROSOFT CORPORATION of Redmond, Washington. The list may include, for each virtual machine, the reason the virtual machine is not to be backed up. Step212may be part of a continuously operating process, allowing the names of virtual machines not to be backed up to be modified at any time, as shown by the dashed line in the figure. If a list distribution trigger occurs214, the list may be converted216into an XML format, or another format that can be used by the backup and recovery software used to back up each cluster backup configuration as described above. A list distribution trigger may occur based on an event, such as an event that alters the list, for example, when a virtual machine is added to, or added to or removed from, the list. A list distribution trigger may occur based on time, such as before the backups of the cluster backup configurations are about to occur. A list distribution trigger may be a manual indication received from a system administrator to distribute the list. The converted list is provided218to the backup and recovery software or to every cluster backup configuration whose identifiers were received in step210as described above, without regard for which cluster backup configuration contains data or provides other resources for a given virtual machine, to cause the backup and recovery software backing up the cluster backup configuration not to back up data and other resources of any virtual machine on the list. The converted list may be provided to every cluster backup configuration using the backup and recovery software. 
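As one way to picture steps214through218, the sketch below checks for a distribution trigger, converts the list, and pushes the same converted list to every cluster backup configuration named in step210. The trigger conditions, the function names, and the injected conversion and transport callables are illustrative assumptions; the conversion details appear in the system description that follows.

def list_distribution_trigger(list_changed, scheduled_backup_soon, manual_request):
    # step 214: any of the described events can trigger distribution of the list
    return list_changed or scheduled_backup_soon or manual_request

def distribute_exclusion_list(vm_names, cluster_ids, convert_to_xml, send_to_cluster):
    # step 216: convert the list into the format the backup and recovery software uses
    converted = convert_to_xml(vm_names)
    # step 218: the same converted list goes to every cluster backup configuration,
    # without regard for which cluster currently hosts a given virtual machine
    for cluster_id in cluster_ids:
        send_to_cluster(cluster_id, converted)
    return converted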
For each cluster backup configuration receiving the list, the list may contain any or all of the names of zero, one, or more virtual machines for which the cluster backup configuration is storing data or otherwise providing resources, names of one or more virtual machines for which the cluster backup configuration is not storing data or otherwise providing resources at the time of receipt of the list, but will later store data or provide resources for that virtual machine, and when such data is stored or such resources are provided on that cluster backup configuration, should not be backed up, and the names of one or more virtual machines that will never migrate to that cluster backup configuration. A check may be optionally made220as to whether the registrations of the virtual machines that will not be backed up have been properly implemented on any one or more cluster backup configuration, for example, by querying the backup software to identify the virtual machines not to be backed up on a given cluster backup configuration, and checking against the converted list of virtual machines provided to it for errors. If errors are found, the list may be reapplied to that cluster backup configuration or to all of them. The backup software, such as the conventional SIMPANA software commercially available from COMMVAULT of Oceanport, New Jersey, reads the list for each cluster backup configuration as a filter222, causing the data and other resources for virtual machines with the names received in step212not to be backed up when the backup software backs up the data in data stores and other resources on that cluster backup configuration. Some or all other virtual machines that are stored on that cluster backup configuration will be backed up at such time. The method continues at step214. Step222may operate as a continuous process in one embodiment, repeating at one or more times of the day in the embodiment in which the distribution trigger is not based on the backup time. This allows the backup and recovery software to reuse the converted list one or more times after the first use of it to exclude virtual machines from being backed up. System. Referring now toFIG.3, a system300for controlling backup and recovery software is shown according to one embodiment of the present invention. Communication interface302includes a conventional TCP/IP compatible communication interface, running suitable communication protocols such as TCP/IP, Ethernet, or both. Communication interface302includes input/output301, coupled to a network such as an Ethernet network, the networks of the Internet, or both. Unless otherwise described herein, all communication into, or out of, system300described below, is made via input/output301of communication interface302. Cluster manager310provides a user interface to allow a system administrator to enter cluster backup configuration identifiers as described above, which may include location information of such cluster backup configurations. Cluster manager310stores such information about the cluster backup configurations in system storage304. Names or other identifiers of virtual machines are received from a system administrator by virtual machine manager312, which stores such information about the virtual machines into system storage304. In one embodiment, the virtual machine manager312provides a user interface to the system administrator to allow such information to be received.
In one embodiment, in conjunction with the names of the virtual machines, the reasons that the virtual machines are not to be backed up are also received and stored by virtual machine manager312. In one embodiment, the list of names of virtual machines is received into an EXCEL spreadsheet. The spreadsheet may be configured to allow for the exporting of the names of the virtual machines in an XML format that can be used by the backup and recovery software320, described below. In one embodiment, to allow for such exporting into such format, a template of the XML file may be received, and imported into EXCEL by selecting the developer tab in Excel, and then selecting the import user interface control that appears as part of the XML area of such tab. The import tab requests the user to indicate how EXCEL should map attributes in the XML file into columns, and in the case of a COMMVAULT update subclient filters template file commercially available from COMMVAULT used as the template, the attributes “vmfilter”, “equalsOrNotEquals”, “type”, “VirtualServer” and “OVERWRITE” are mapped to columns A through E, respectively. The resulting spreadsheet may then be saved onto disk and used to receive the list of virtual machine names. To receive the list, the file is opened and the virtual machine server name, used to identify the virtual machine that currently or may reside in a cluster backup configuration, is added to a row of the EXCEL spreadsheet for each virtual machine that is not to be backed up, using the first column A. The second column in each row is set to a value of “1”, and the third column in each row is set to a value of “10”. The fourth and fifth columns are unchanged in every row and keep the values imported from the file. Other columns may be used to keep track of other information about the virtual machines on the list. Such other columns may hold a description of the reason why each virtual machine is not to be backed up, the application or applications running on such virtual machine, and other information about such virtual machines, such as a contact person for that machine and the entity for which such virtual machine is operated, such as the department in the company for which such virtual machine is operated. The use of other information allows a single spreadsheet to be used for both tracking of virtual machines not backed up and for controlling the backup and recovery software, reducing errors from what would otherwise be two lists that could become out of sync. In one embodiment, the conversion of the list to an XML format that can be used by the backup and recovery software, and the distribution of such XML-based list, is initiated via manual processes performed by the system administrator. In another embodiment, such conversion and distribution is automatically initiated by distribution trigger sensor314. Distribution trigger sensor314senses the distribution triggers described above, either by sensing the addition or removal or both of one or more names of virtual machines to or from the list received by virtual machine manager312, or by monitoring a system clock to identify a time that is before the next time that the virtual machines in the cluster backup configurations corresponding to the list received by cluster manager310will be backed up by the backup and recovery software, such time or times having been provided to it by a system administrator. 
In one embodiment, distribution trigger sensor314uses both techniques, first monitoring a system clock to identify whether the time is within a threshold amount of time of the start of the backup and recovery process of the cluster backup configurations corresponding to the list received by cluster manager310, and then at such time, comparing the list of virtual machines, and/or the list of cluster backup configuration identifiers with those copied into a separate area of system storage304at a time that was within a threshold amount of time of the immediately preceding backup of the cluster backup configurations corresponding to the list received by cluster manager310as described below. For example, if the backups occur at 2 AM on the day after each weekday, and the threshold is one hour, distribution trigger sensor314monitors the system clock and approximately at 1 AM on Tuesday, distribution trigger manager314compares the list of cluster backup configurations and/or virtual machines in system storage304at that time, with the list of cluster backup configuration identifiers and/or virtual machine names stored at 1 AM on the prior Saturday, when the last set of changes to the list were discovered as described herein. If there are any differences in such lists, (or if the list in the separate area of system storage304doesn't exist), distribution trigger sensor314signals list converter316. Distribution trigger sensor314additionally stores the current version of the list of cluster backup configuration identifiers and the list of names of virtual machines stored in system storage304in the separate area of system storage304, replacing the prior stored version of such list. Other techniques of detecting differences, such as by setting an indicator when any changes are made, checking the indicator at the threshold amount of time before the scheduled backup, if the indicator is set, converting and distributing the list, and clearing the indicator, may be used. In another embodiment, distribution trigger sensor314is a part of virtual machine manager312and/or cluster manager310, and upon sensing any changes to the list of virtual machine names, the list of cluster backup configuration identifiers, or both, distribution trigger sensor314signals list converter316. In such embodiment, the separate area of system storage304need not be used, as changes cause the conversion and distribution of the list of names of virtual machines as soon as any changes are entered to the lists by the system administrator as described above. In the embodiment in which the conversion and distribution of the list of names of virtual machines is initiated manually, a system administrator may signal list converter316. If such embodiment, the system administrator may signal list converter316when the system administrator manually identifies one or more distribution triggers as described above or for any other reason. When signaled, list converter316converts the list of names of virtual machines received by virtual machine manager312and stored in system storage304into an XML format that can be understood by backup software320. In the embodiment using the EXCEL spreadsheet, EXCEL can perform such conversion. Using the developer tab of excel, in the XML area of such tab, the export user interface control is used to convert the five columns A-E described above into the XML format used by the backup and recovery software. 
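The column mapping just described (columns A through E carrying the "vmfilter", "equalsOrNotEquals", "type", "VirtualServer", and "OVERWRITE" attributes, with columns B and C fixed at "1" and "10") can be sketched as follows. The element structure of the exported XML file is not reproduced because the description does not spell it out; only the attribute-to-column mapping is taken from the text, and the example virtual machine names and template values are hypothetical.

def build_filter_rows(vm_names, virtual_server, overwrite):
    # columns D and E keep whatever values the imported template supplied;
    # column A holds the virtual machine server name, B is "1", and C is "10"
    return [
        {"vmfilter": name, "equalsOrNotEquals": "1", "type": "10",
         "VirtualServer": virtual_server, "OVERWRITE": overwrite}
        for name in vm_names
    ]

rows = build_filter_rows(["vm-app-01", "vm-db-02"],
                         virtual_server="value-from-template-column-D",
                         overwrite="value-from-template-column-E")
# each row corresponds to one spreadsheet row that the EXCEL export turns into
# one entry of the XML file consumed by the backup and recovery software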
A script may be written in the EXCEL program, for example, using VISUAL BASIC to automatically export the file when a change is made or when a change is made and the file is saved into system storage304. List converter316then signals distribution manager318, or the system administrator may signal distribution manager318manually, for example by executing a script implementing distribution manager318. When signaled, distribution manager318provides the XML converted list of virtual machine identifiers in system storage304either to the backup and recovery software or to each cluster backup configuration, optionally via the backup and recovery software320. Backup and recovery software320is configured to back up all virtual machines362-368on cluster backup configurations352-354using the list of cluster backup configurations in system storage304. If virtual machine B364is one of the virtual machines on the XML converted list of virtual machines not to be backed up, backup and recovery software320will not back up virtual machine B364, but will back up virtual machine A362, virtual machine C366and virtual machine D368. In one embodiment, backup and recovery software320uses the converted list of cluster backup configurations stored on each cluster backup configuration or the converted list it received in order to identify the cluster backup configurations that it backs up, backing up all virtual machines on each such cluster backup configuration352,354except for those on the converted list of virtual machines it uses as described herein. In one embodiment, a system administrator may query backup and recovery software320after the list of virtual machines not to be backed up is provided to it, as described above, to verify that the virtual machines on the list have been accepted into the filtering component of backup and recovery software, so as to ensure that such virtual machines on the list are not backed up by backup and recovery software320. In another embodiment, checker322retrieves the list of virtual machines being filtered by backup and recovery software320or being filtered (i.e., not backed up) on a particular cluster backup configuration, such as by querying backup and recovery software320, and compares the list of virtual machines being so filtered with the list of virtual machines stored in system storage304. If any differences are detected, checker322indicates such differences to the system administrator, so that suitable corrective measures may be employed and/or signals distribution manager318to redistribute the converted list of virtual machine identifiers either to all cluster backup configurations or to the one that is queried. Selected Examples of Operation Referring now toFIG.4, examples of operation are shown according to one embodiment of the present invention. An XML converted list of identifiers of virtual machines is produced and distributed410to the cluster backup configurations352,354as described herein at a time when virtual machine A362does not reside on cluster backup configuration A352as shown inFIG.3, for example, because it resides on cluster backup configuration B354. The XML converted list of identifiers of virtual machines distributed to each cluster backup configuration is the same list. Virtual machine A is one of the identifiers of virtual machines not to be backed up.
At a first backup412, virtual machine A362is not backed up412by backup and recovery software320on cluster backup configuration B354, but virtual machines B-D364-368are backed up by backup and recovery software320, as specified by the XML converted list of identifiers of virtual machines. At a later time, a second backup is performed416using the XML converted list of identifiers distributed in step410or an XML-converted list having the exact same virtual machine identifiers as that list, for example, because of a time-based distribution trigger that unnecessarily (because the virtual machine identifiers had not changed) redistributed a copy of the XML converted list. Different virtual machine identifiers are not distributed to the cluster backup configurations352or354between the time the XML converted list was originally distributed and the time the second backup is performed, although the exact same virtual machine identifiers may have been provided as a second XML converted list during such time. However, during such period between the time it was originally distributed and the time the second backup is performed, virtual machine A362moved414from cluster backup configuration B354to cluster backup configuration A352, and optionally virtual machine C has moved414with virtual machine A. During such period, no indication of the move or moves are provided to backup and recovery software320. Backup and recovery software will perform the second backup using the converted list of identifiers distributed in step410, and will not back up416virtual machine A362on cluster backup configuration A352but will back up416the other virtual machines B-D364-368. The backup or non-backup of a virtual machine is irrespective of which cluster backup configuration the virtual machine is on, and whether the virtual machine has moved to a different cluster backup configuration between the time the converted list is originally (with respect to any lists containing the same virtual machine identifiers) distributed and when the backup occurs. In one embodiment, the backup and recovery software320may provide a utility that allows identification to such software of one or more individual virtual machines not to be backed up (or only to be backed up) for each cluster backup configuration, but either requires identification of virtual machines that reside on the cluster backup configuration at the time of the use of such utility to identify virtual machines to be backed up or not backed up, or only applies to an individual cluster backup configuration. In one embodiment, a cluster backup configuration is a cluster. It is noted that in other embodiments, the XML converted list distributed may be a list of virtual machines to be backed up, with virtual machines not on the list being the ones not backed up. Thus, the list of virtual machines not to be backed up is implied as being any virtual machine not on the list distributed. In this embodiment, the list of virtual machines not to be backed up is considered to be (implicitly) distributed with the list distributed. 
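The behavior illustrated above, in which exclusion does not depend on which cluster backup configuration a virtual machine happens to reside on at backup time, can be sketched as a simple filter applied identically on every cluster. The names below are hypothetical.

def plan_backup(cluster_to_vms, excluded_vms):
    # cluster_to_vms: {cluster_id: [vm_name, ...]} as discovered at backup time
    return {cluster: [vm for vm in vms if vm not in excluded_vms]
            for cluster, vms in cluster_to_vms.items()}

excluded = {"vm-a"}
# before the move: vm-a resides on cluster-b
print(plan_backup({"cluster-a": ["vm-c", "vm-d"], "cluster-b": ["vm-a", "vm-b"]}, excluded))
# after the move: vm-a resides on cluster-a; it is still the only machine skipped
print(plan_backup({"cluster-a": ["vm-a", "vm-c"], "cluster-b": ["vm-b", "vm-d"]}, excluded))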
Summary of Certain Embodiments There has been described a method of instructing backup and recovery software, that receives a specification of one of a set of two or more cluster backup configurations on which a virtual machine in two or more virtual machines operates to prevent said backup and recovery software from backing up said virtual machine, to not back up at least one virtual machine in the two or more virtual machines on any of the two or more of cluster backup configurations, including: building, for the plurality of cluster backup configurations, each cluster backup configuration comprising at least one computer system, each computer system in the two or more including at least one of the virtual machines in the two or more, a list of identifiers of each of the at least one virtual machine in the two or more virtual machines that is not to be backed up, the list built so that, for each of the cluster backup configurations in the two or more, the list contains identifiers of: at least one first virtual machine residing on the cluster backup configuration at the time the list is received and that the backup and recovery software is not to back up; at least one second virtual machine not residing on the cluster backup configuration at the time the list is received, but that will move from another cluster backup configuration to said cluster backup configuration, and that the backup and recovery software is not to back up; and at least one third virtual machine not residing on the cluster backup configuration at the time the list is received, and that will not move from another cluster backup configuration to said cluster backup configuration, and that the backup and recovery software is not to back up; and distributing the list to each cluster backup configuration in the two or more. The method contains an optional feature whereby the list comprises a first format; and the list is built from a different list in a second format, different from the first format. The method contains an optional feature whereby the different list comprises information that is not included in the list. The method contains an optional feature whereby the distributing step is responsive to a triggering event. The method contains an optional feature whereby the triggering event comprises a time of day. The method contains an optional feature whereby the triggering event comprises a change of information on or to be on the list. 
Described is a computer implemented system for instructing backup and recovery software, that receives a specification of one of a set of two or more cluster backup configurations on which a virtual machine in two or more virtual machines operates to prevent said backup and recovery software from backing up said virtual machine, to not back up at least one virtual machine in the two or more virtual machines on any of the two or more cluster backup configurations, including: a list converter including a processor system coupled to a memory system for generating and providing at an output a list of identifiers of each of the at least one virtual machine that is not to be backed up, the list converter generating the list so that for each of the cluster backup configurations in the two or more, the list contains identifiers of: at least one first virtual machine residing on the cluster backup configuration at the time the list is received and that the backup and recovery software is not to back up; at least one second virtual machine not residing on the cluster backup configuration at the time the list is received, but that will move from another cluster backup configuration to said cluster backup configuration, and that the backup and recovery software is not to back up; and at least one third virtual machine not residing on the cluster backup configuration at the time the list is received, and that will not move from another cluster backup configuration to said cluster backup configuration, and that the backup and recovery software is not to back up; and a distribution manager including the processor system coupled to the memory system having an input coupled to the list converter output for receiving the list, the distribution manager having an output coupled to each cluster backup configuration in the two or more, each cluster backup configuration including at least one computer system, each computer system comprising at least one of the virtual machines in the two or more, the distribution manager for distributing via the distribution manager output the list to each cluster backup configuration in the two or more. The system contains an optional feature whereby: the list converter comprises an input for receiving a different list in a first format; and the list converter generates the list in a second format, different from the first format, using the different list. The system contains an optional feature whereby the different list comprises information that is not included in the list. The system may additionally include a distribution trigger sensor having an input coupled to receive a trigger event indication, the distribution trigger sensor for providing a signal at an output responsive to the trigger event indication; and contains an optional feature whereby the list converter has an input coupled to the distribution trigger sensor output for receiving the trigger event indication, and the list converter generates the list responsive to the trigger event indication. The system contains an optional feature whereby the trigger event indication comprises a time of day. The system contains an optional feature whereby the triggering event indication comprises a change of information on, or to be on, the list.
Described is a computer program product including a nontransitory computer useable medium having computer readable program code embodied therein for instructing backup and recovery software, that receives a specification of one of a set of two or more cluster backup configurations on which a virtual machine in two or more virtual machines operates to prevent said backup and recovery software from backing up said virtual machine, to not back up at least one virtual machine in the two or more virtual machines on any of the two or more cluster backup configurations, the computer program product including computer readable program code devices configured to cause a computer system to:build for the two or more cluster backup configurations, each cluster backup configuration in the two or more including at least one computer system, each computer system including at least one of the virtual machines in the two or more, a list of identifiers of each of the at least one virtual machine in the two or more virtual machines that is not to be backed up, the list built so that, for each of the cluster backup configurations in the two or more, the list contains identifiers of: at least one first virtual machine residing on the cluster backup configuration at the time the list is received and that the backup and recovery software is not to back up; at least one second virtual machine not residing on the cluster backup configuration at the time the list is received, but that will move from another cluster backup configuration to said cluster backup configuration, and that the backup and recovery software is not to back up; and at least one third virtual machine not residing on the cluster backup configuration at the time the list is received, and that will not move from another cluster backup configuration to said cluster backup configuration, and that the backup and recovery software is not to back up; and distribute the list to each cluster backup configuration in the two or more. The computer program product contains an optional feature whereby the list converter comprises an input for receiving a different list in a first format; and the list converter generates the list in a second format, different from the first format, using the different list. The computer program product contains an optional feature whereby the different list comprises information that is not included in the list. The computer program product contains an optional feature whereby the computer readable program code devices configured to cause the computer system to distribute are responsive to a triggering event. The computer program product contains an optional feature whereby the triggering event comprises a time of day. The computer program product contains an optional feature whereby the triggering event comprises a change of information on or to be on the list. Other times for the components of the list may be at the time the list is built, instead of the time the list is received.
While embodiments are described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that the embodiments are not limited to the embodiments or drawings described. It should be understood that the drawings and detailed description thereto are not intended to limit embodiments to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words "include", "including", and "includes" mean including, but not limited to. DETAILED DESCRIPTION Disclosed is a continuous data protection system that captures all of the changes happening on the data store (e.g., a database) and periodically builds system snapshots (sometimes referred to as copies, herein) by applying logs on the closest system snapshot, in embodiments. The system may be able to apply transaction logs to a previous logical backup to create a new point-in-time logical backup, without losing any customer data, in some instances. For example, system snapshots may be built at a partition level (e.g., for systems that partition data) by applying the change logs to prior snapshots. In some such embodiments, the continuous data protection system generates backups without any additional queries or scanning of the client's production data source by relying on prior snapshots and change log data to create new snapshots, instead. Accumulation of change log data for a table, as well as generation of updates to snapshots for the table, may be performed independently for each partition (e.g., at a different time) of a same table, based on characteristics particular to each partition, for example, in some embodiments. In embodiments, a log apply service of the continuous data protection system is responsible for at least two core functionalities to support backup and restore. During a conversion process, the log apply service may convert partition snapshots (sometimes referred to as backups) from a physical format (e.g., mysql) to a logical format snapshot. The log apply service may also create subsequent point-in-time logical partition snapshots by applying transaction logs to a previous logical partition snapshot, and create a complete user backup, for example. In some embodiments, continuous capture of individual changes to a table provides for a more fine-grained availability of those individual changes at a later time. For example, the accumulation of individual changes to the data (data that constitutes a state of the database table at a time, in embodiments) may be used to more accurately take the table, or partition, back to a particular state at any point-in-time along a continuum. Such features contrast with prior systems that could only take the system back to a select few points-in-time when snapshots of the system were taken. In some such prior systems, the snapshots added additional burden to the client's production system because the snapshots were created from scans of the production database, interrupting or delaying production services.
Additionally, scan-based snapshots take relatively more time to create, and fail to provide as accurate a view of the database as the techniques disclosed herein, at least because, by the time the scan completes (at least for large data sets), data that has already been scanned may have been changed. Additionally, in at least some embodiments, the techniques described herein are applied on a partition-by-partition basis. For example, snapshots and change log data for a particular partition may be kept in an uncoordinated manner, with respect to the other partitions of the table (e.g., according to different schedules). A relatively inactive partition of a table may have a snapshot generated based on a maximum duration of time threshold, while another relatively active partition of that same table may have snapshots generated more often, based on an amount of accumulated changes, as just one example of many. The times at which the snapshots are created for either partition may not have any relationship, and may be based upon the particular characteristics of that partition, in embodiments. The above-noted process may be triggered when a customer enables backups for a given table. In embodiments, the continuous data protection manager112may initiate the first complete backup of the table during the initial backup process. For example, for all partitions of the table, the continuous data protection manager or service may store the snapshots by exporting data from storage nodes, in a storage-level physical format, into durable storage. In embodiments, a log apply process is used whenever the continuous data protection manager or service decides a new logical partition snapshot is required to bound the time taken for creating point-in-time backups. Log apply may also be used during restore to apply logs to a backup. In some systems, log-applying backups is an expensive operation (e.g., when there is a relatively greater amount of time and greater number of changes between backups). By relying on pre-existing incremental partition images to define a backup, the system may significantly reduce the load on the log apply service, saving compute costs. Additionally, by relying upon incremental partition images to define a backup, the system may allow users to create many backups partially sharing the same set of logs and base partition images, which may translate into storage cost savings. In some embodiments, a periodicity at which system snapshots of the partitions are built is decided based on an amount of logs accumulated. For example, the periodicity may be based on a threshold amount of logs accumulated. In another example, the periodicity may be based upon a rate of change of the logs accumulated. For instance, if the system is becoming more active and more changes are being generated, the increase in the rate of change of the number of logs may be used as a trigger to increase the periodicity. In some embodiments, the log apply service applies logs for a single partition on a single host. In some circumstances (e.g., large tables), each partition may be log-applied in parallel to reduce the time to apply the logs for the table, e.g., by respective parallel processes. In embodiments, both the newly-created snapshots as well as the change logs are stored to durable storage. In some such embodiments, the snapshots and the durably-stored change logs may be used to restore the partition. 
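As a rough illustration of such a periodicity decision, the following sketch combines an accumulated-log threshold, an elapsed-time threshold, and a spike in the observed rate of change; the class name, fields, and threshold values are hypothetical assumptions and are not taken from this disclosure.

```java
import java.time.Duration;
import java.time.Instant;

/** Hypothetical per-partition accumulator used to decide when to build a new snapshot. */
public class SnapshotTrigger {
    // Illustrative thresholds; a real system would likely make these configurable per table or partition.
    private static final long MAX_ACCUMULATED_LOG_BYTES = 64L * 1024 * 1024;      // 64 MiB of change log
    private static final Duration MAX_TIME_BETWEEN_SNAPSHOTS = Duration.ofHours(24);
    private static final double CHANGE_RATE_MULTIPLIER = 2.0;                     // trigger early if activity doubles

    private long accumulatedLogBytes;
    private double baselineBytesPerMinute;   // long-running average change rate
    private double recentBytesPerMinute;     // rate observed over the most recent window
    private Instant lastSnapshotTime;

    public SnapshotTrigger(Instant lastSnapshotTime) {
        this.lastSnapshotTime = lastSnapshotTime;
    }

    /** Record newly accumulated change log data for this partition. */
    public void recordChangeLog(long bytes, double observedBytesPerMinute) {
        this.accumulatedLogBytes += bytes;
        this.recentBytesPerMinute = observedBytesPerMinute;
        if (baselineBytesPerMinute == 0.0) {
            baselineBytesPerMinute = observedBytesPerMinute;
        }
    }

    /** Returns true when a new snapshot should be built by log-applying the accumulated changes. */
    public boolean shouldBuildSnapshot(Instant now) {
        boolean tooMuchLog = accumulatedLogBytes >= MAX_ACCUMULATED_LOG_BYTES;
        boolean tooMuchTime = Duration.between(lastSnapshotTime, now)
                                      .compareTo(MAX_TIME_BETWEEN_SNAPSHOTS) >= 0;
        // A sharp increase in the rate of change can also shorten the period between snapshots.
        boolean activitySpike = recentBytesPerMinute > baselineBytesPerMinute * CHANGE_RATE_MULTIPLIER;
        return tooMuchLog || tooMuchTime || activitySpike;
    }

    /** Reset the accumulator after a snapshot has been built and stored. */
    public void onSnapshotBuilt(Instant when) {
        accumulatedLogBytes = 0;
        lastSnapshotTime = when;
    }
}
```

In practice, the thresholds would likely be tuned separately for each partition, consistent with the per-partition scheduling described above.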
In some embodiments, the continuous data protection backup service provides an interface and functionality supporting unified management of the data, while optimizing customer costs and restore times via periodic log application and trimming. Another benefit of some embodiments is a predictable time to recovery, by accurately identifying continuous backups, which, if restored, would cause the system to break SLAs and take appropriate action to bring the system back into compliance. The systems and methods described herein may be employed in various combinations and in various embodiments to implement a network-based service that provides data storage services to storage service clients (e.g., user, subscribers, or client applications that access the data storage service on behalf of users or subscribers). The service may, in some embodiments, support the continuous data protection of tables that are maintained on behalf of clients in a data store, e.g., a non-relational database or other type of database. The service may provide a high level of durability and availability through replication, in some embodiments. For example, in some embodiments, the data storage service may store data in multiple partitions (e.g., partitions that each contain a subset of the data in a table being maintained on behalf of a client), and may store multiple replicas of those partitions on respective storage devices or virtual storage volumes of different storage nodes. In some embodiments, the data storage systems described herein may provide mechanisms for backing up a database table as a synchronous operation while the database continues to receive, accept, and service read and/or write operations that are directed to the table. In some embodiments, in response to a request to back up a table, the system may create a backup of each individual partition independently and (in some cases) in parallel (i.e., substantially concurrently). In embodiments, when a request to back up a table is received, the system may guarantee that all write operations that were directed to the table up to that point are included in the backup. In some embodiments, such a guarantee may not be made. In some embodiments, backup operations may be initiated by data storage service users (e.g., customers, service subscriber, and/or client applications) using a “CreateBackup” application programming interface (API). In some embodiments, the systems described herein may support the scheduling of backups (e.g., every day at a particular time, or according to a published, but not necessarily periodic, schedule). In response to receiving a request to back up a table, these systems may back up each partition of the table as an individual item in a remote storage system (e.g., a key-value durable storage system), and may store metadata about the backup that is subsequently usable when restoring the backup to a new database (e.g., a new database table). In some embodiments, the system may be configured to initiate separate backup operations for each of the partitions of a table automatically (e.g., programmatically and without user intervention) in response to a request to back up the table, and to manage those backup operations on a per-partition basis (again, without user involvement). 
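The following sketch illustrates, under stated assumptions, how a table-level backup request might be fanned out into independent per-partition backup operations whose metadata is retained for later restores; the class, record, and method names are illustrative only and do not correspond to any actual service API (in particular, this is not the CreateBackup API itself).

```java
import java.time.Instant;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Hypothetical sketch of servicing a table-level backup request by backing up each partition independently. */
public class TableBackupCoordinator {

    /** Minimal metadata kept about each per-partition backup so the table can later be restored. */
    public record PartitionBackupMetadata(String tableName, String partitionId,
                                          String remoteObjectKey, Instant backupTime) {}

    private final ExecutorService pool = Executors.newFixedThreadPool(8);

    /** Back up every partition of the table as an individual item in remote storage, in parallel. */
    public List<PartitionBackupMetadata> backUpTable(String tableName, List<String> partitionIds) {
        Instant backupTime = Instant.now(); // writes up to this point are meant to be captured
        List<CompletableFuture<PartitionBackupMetadata>> futures = partitionIds.stream()
                .map(partitionId -> CompletableFuture.supplyAsync(
                        () -> backUpPartition(tableName, partitionId, backupTime), pool))
                .toList();
        return futures.stream().map(CompletableFuture::join).toList();
    }

    private PartitionBackupMetadata backUpPartition(String tableName, String partitionId, Instant when) {
        // Placeholder: a real implementation would copy the partition (or its snapshot plus logs)
        // to a durable key-value store and return the object key it was written under.
        String remoteKey = tableName + "/" + partitionId + "/" + when.toEpochMilli();
        return new PartitionBackupMetadata(tableName, partitionId, remoteKey, when);
    }
}
```

The returned metadata list is what a restore path would later consult to locate each per-partition object in the remote store.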
In various embodiments, the data storage service described herein may provide an application programming interface (API) that includes support for some or all of the following operations on the data in a table maintained by the service on behalf of a storage service client: put (or store) an item, get (or retrieve) one or more items having a specified primary key, delete an item, update the attributes in a single item, query for items using an index, and scan (e.g., list items) over the whole table, optionally filtering the items returned. The amount of work required to satisfy service requests that specify these operations may vary depending on the particular operation specified and/or the amount of data that is accessed and/or transferred between the storage system and the client in order to satisfy the request. In embodiments, the system disclosed herein may implement an application program interface (API) that provides access to configuration setting associated with the partition, the configuration settings including, but not limited to: a maximum backup time that indicates a maximum period of time between snapshots of the partition; a minimum backup time that indicates a minimum period of time between snapshots of the partition; and a duration of time to retain snapshots of the partition. Another API may allow consumers to update the current settings for a table within the database service, for example, to enable or disable the continuous backups and modify the duration of time to retain backups. Yet another API may provide the option to enable continuous backups for a table. The triggered action may initiate the creation of a continuous backup through the workflow described herein, such as by initiation of the archival copy of logs for a table and creation of an initial backup of a table in a logical format. In various embodiments, the systems described herein may store data in replicated partitions on multiple storage nodes (which may be located in multiple data centers) and may implement a single master failover protocol. For example, each partition may be replicated on two or more storage nodes (or storage devices thereof) in a distributed database system, where those replicas make up a replica group. In some embodiments, membership in various replica groups may be adjusted through replicated changes, and membership and other updates in the system may be synchronized by synchronizing over a quorum of replicas in one or more data centers at failover time. As described herein, when a database table is created or restored from backup, various resources may be provisioned for the implementation of that table, including storage resources (e.g., disk capacity), and throughput capacity (which may, e.g., be specified in terms of input/output requests per second, or IOPS, for read operations and/or write operations). If the table is divided into two or more partitions (e.g., if various data items are stored on different ones of the partitions according to their primary key values), the provisioned resources may also be divided among the partitions. For example, if a database table is divided into two partitions, each partition may have access to half of the total amount of storage and/or throughput resources that are provisioned and/or committed for the implementation of the table. In some embodiments of the distributed database systems described herein, each storage node may include multiple storage devices or logical volumes, each of which stores various partition replicas. 
For example, in one embodiment, each storage node of the distributed database system may include five storage devices or logical storage volumes. In some embodiments, one or more mechanisms may be implemented on each of the storage nodes for determining, on a local level (e.g., on a storage node basis) whether and/or how to split a partition or move a partition (or a given replica of a partition), based on the current utilization of provisioned resources and/or other information. For example, one of the storage nodes may be configured to determine that a partition for which a replica is stored on one of its storage devices (e.g., disks) or logical storage volumes should be split into two new partitions, and may divide the data in the partition by hash ranges, by key space ranges, or using other criteria to divide the data between the two new partitions. In another example, a storage node may be configured to determine that one or more partitions (or replicas thereof) should be moved from a given storage device or logical storage volume to another storage device or logical storage volume, e.g., in order to reduce the amount of provisioned storage capacity or throughput capacity on the given storage device or logical storage volume. As noted above, from a user's perspective, a backup operation generally operates to create a backup of a whole table, but internally, the system may back up each partition of the table independently, such that consistency is guaranteed only up to a particular transaction or write operation on a per partition basis (rather than across the whole table). In some embodiments, the system may be configured to maintain metadata about the table (e.g., to keep track of the table schema, and the state of the world from the perspective of the table and of each partition). In some embodiments, this metadata may be stored in the data storage system itself, and a copy of the metadata may also be stored in the remote storage system into which tables are backed up. FIG.1is a block diagram illustrating a system for continuous data protection, according to some embodiments. Provider network100may be a private or closed system, in one embodiment, or may be set up by an entity such as a company or a public-sector organization to provide one or more services (such as various types of cloud-based storage) accessible via the Internet and/or other networks to clients160, in another embodiment. In one embodiment, provider network100may be implemented in a single location or may include numerous data centers hosting various resource pools, such as collections of physical and/or virtualized computer servers, storage devices, networking equipment and the like (e.g., computing system 1200 described below with regard toFIG.12), needed to implement and distribute the infrastructure and storage services offered by the provider network100. 
In one embodiment, provider network100may implement various computing resources or services, such as a database service110or other data processing (e.g., relational or non-relational (NoSQL) database query engines, data warehouse, data flow processing, and/or other large scale data processing techniques), data storage services (e.g., an object storage service, block-based storage service, or data storage service that may store different types of data for centralized access), virtual compute services, and/or any other type of network based services (which may include various other types of storage, processing, analysis, communication, event handling, visualization, and security services not illustrated). In some embodiments, the provider network100may include a continuous data protection manager112configured to handle or manage backups of databases that are stored with or maintained by the database service110. The backups may be maintained by one or more data storage services. The continuous data protection manager112may manage snapshots from the database service (e.g., in a native format illustrated as144) as well as change log data (e.g., in a native format illustrated as154) from the database service110, in some embodiments. In at least some embodiments, a storage node may convert the change log data from a format native to the database service to a non-native format prior to providing the change log data to the continuous data protection manager112. In some embodiments, the data storage services may include a snapshot data store142and a change log archive data store152. The snapshot data store142may be configured to store complete backups of partitions (e.g., partitions124A-N) of the database at a particular point-in-time. For example, a particular snapshot stored at the snapshot data store142may be generated at a particular point-in-time such that the data in the particular snapshot is at the state in which the database existed at that particular point-in-time. In other embodiments, the database service110may be configured to store current or active data for the database such that data of the partition of the database is the most recent version of the data. The change log archive data store152may be configured to store logs152indicating changes, mutations or events that occur with respect to the database or any data corresponding to the database managed by the database service110. In at least some embodiments, archives may be immutable. In some examples, immutable archived data may not be changed or edited, but only read or deleted. In some examples, archived snapshots or change log data may not be changed or edited, but only read or deleted, in durable storage (e.g., storage service290). In various embodiments, the components illustrated inFIG.1may be implemented directly within computer hardware, as instructions directly or indirectly executable by computer hardware (e.g., a microprocessor or computer system), or using a combination of these techniques. For example, the components ofFIG.1may be implemented by a system that includes a number of computing nodes (or simply, nodes), in one embodiment, each of which may be similar to the computer system embodiment illustrated inFIG.12and described below. In one embodiment, the functionality of a given system or service component (e.g., a component of database service110) may be implemented by a particular node or may be distributed across several nodes. 
In some embodiments, a given node may implement the functionality of more than one service system component (e.g., more than one data store component, such as snapshot data store142or change log archive152). Database service110may include various types of database services, in embodiments (e.g., relational and non-relational) for storing, querying, and updating data. Such services may be enterprise-class database systems that are highly scalable and extensible. In one embodiment, queries may be directed to a database in database service110that is distributed across multiple physical resources (e.g., computing nodes or database nodes), and the database system may be scaled up or down on an as needed basis. The database system may work effectively with database schemas of various types and/or organizations, in different embodiments. In one embodiment, clients/subscribers may submit queries in a number of ways, e.g., interactively via a SQL interface to the database system. In other embodiments, external applications and programs may submit queries using Open Database Connectivity (ODBC) and/or Java Database Connectivity (JDBC) driver interfaces to the database system. In one embodiment, database service110may also be any of various types of data processing services to perform different functions (e.g., query or other processing engines to perform functions such as anomaly detection, machine learning, data lookup, or any other type of data processing operation). Various other distributed processing architectures and techniques may be implemented by database service110(e.g., grid computing, sharding, distributed hashing, etc.) in another embodiment. In one embodiment, clients160may encompass any type of client configurable to submit network-based requests to provider network100via network170, including requests for database service110(e.g., to query a database managed by the database service110) or data storage service(s) (e.g., a request to create, read, write, obtain, or modify data in data storage service(s), etc.). For example, in one embodiment a given client160may include a suitable version of a web browser, or may include a plug-in module or other type of code module configured to execute as an extension to or within an execution environment provided by a web browser. Alternatively, in a different embodiment, a client160may encompass an application such as a database application (or user interface thereof), a media application, an office application or any other application that may make use of storage resources in data storage service(s) to store and/or access the data to implement various applications. In one embodiment, such an application may include sufficient protocol support (e.g., for a suitable version of Hypertext Transfer Protocol (HTTP)) for generating and processing network-based services requests without necessarily implementing full browser support for all types of network-based data. That is, client160may be an application configured to interact directly with provider network100, in one embodiment. In one embodiment, client160may be configured to generate network-based services requests according to a Representational State Transfer (REST)-style network-based services architecture, a document- or message-based network-based services architecture, or another suitable network-based services architecture. In one embodiment, a client160may be configured to provide access to provider network100to other applications in a manner that is transparent to those applications. 
For example, client160may be configured to integrate with a database on database service110. In such an embodiment, applications may not need to be modified to make use of the storage system service model. Instead, the details of interfacing to the database service110may be coordinated by client160. The clients160may communicate with the database service110from within the provider network100, in some embodiments. For example, the clients160may be implemented on computing nodes of a computing service offered by the provider network100. The clients160may convey network-based services requests to and receive responses from provider network100via network170, in one embodiment. In one embodiment, network170may encompass any suitable combination of networking hardware and protocols necessary to establish network-based-based communications between clients160and provider network100. For example, network170may encompass the various telecommunications networks and service providers that collectively implement the Internet. In one embodiment, network170may also include private networks such as local area networks (LANs) or wide area networks (WANs) as well as public or private wireless networks. For example, both a given client160and a provider network100may be respectively provisioned within enterprises having their own internal networks. In such an embodiment, network170may include the hardware (e.g., modems, routers, switches, load balancers, proxy servers, etc.) and software (e.g., protocol stacks, accounting software, firewall/security software, etc.) necessary to establish a networking link between given client160and the Internet as well as between the Internet and provider network100. It is noted that in one embodiment, clients160may communicate with provider network100using a private network rather than the public Internet. The clients160may send a request to perform an operation to respective databases managed by the database service110. For example, a given client160may send a PUT (or SET) command and corresponding data to request that the data be stored in the database at the database service110. In another example, a given client160may send a CHANGE (or UPDATE) command and corresponding data to request that some data existing in the database at the database service110be changed based on the sent data. In yet another example, a given client160may send a DELETE (or REMOVE) command and identify data to request that the data existing in the database at the database service110be deleted or removed from the database and the database service110. In other embodiments, a given client160may provide another command to perform some operation to the data stored in the database at the database service110. The database service110may be configured to maintain a backup system for partitions of databases managed by the database service110. In some embodiments, the backup system may perform backups for single partitions of the databases or single-partition databases. In other embodiments, the backup system may perform backups for multiple partitions of the databases. The backup system may include a continuous data protection manager112configured to manage change logs and backups or snapshots of partitions of the databases stored in storage services of the database service110. The continuous data protection manager112may generate a complete backup of a partition of the database (e.g., a snapshot) by applying logs to a closest existing snapshot. 
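A minimal sketch of this log-apply idea follows, assuming a snapshot is modeled as a key-to-item map and each change carries a sequence number and an operation type; all names here are hypothetical and the item model is deliberately simplified.

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Hypothetical log-apply sketch: replay accumulated changes onto the closest existing snapshot. */
public class LogApplyService {

    public record Change(long sequenceNumber, String operationType, String itemKey, String itemValue) {}

    /**
     * A snapshot is modeled as a key-to-item map. Changes are replayed in chronological
     * (sequence-number) order; a DELETE removes the item, anything else upserts it.
     */
    public Map<String, String> applyChanges(Map<String, String> previousSnapshot, List<Change> accumulated) {
        Map<String, String> next = new HashMap<>(previousSnapshot);
        accumulated.stream()
                .sorted(Comparator.comparingLong(Change::sequenceNumber))
                .forEach(change -> {
                    if ("DELETE".equals(change.operationType())) {
                        next.remove(change.itemKey());
                    } else { // PUT or CHANGE
                        next.put(change.itemKey(), change.itemValue());
                    }
                });
        return next;
    }
}
```

Replaying in sequence order keeps the result independent of the order in which the changes happened to be accumulated or fetched from the archive.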
A periodicity at which the continuous data protection manager112generates backups may be based on the amount of logs accumulated for the partition or table. Periodicity may be based on an amount of time, in some embodiments. In some embodiments, the periodicity may be based on an amount of changes or a period of time between backups, whichever happens sooner. A change log (e.g., change log data154) may indicate one or more changes to the database over a period of time or sequence of events. For example, the change log (e.g., change log154) may indicate that data was added to, changed in, or deleted from the database in a period of time. The change log may be stored at a log store (e.g., change log data store604inFIG.6, described below, or change log archive152). The log store may be accessible to the database service110and the continuous data protection manager112. In some embodiments, the database service110may generate or update a log segment in response to an event at the database. For example, the database service110may indicate in the log segment that the event occurred, and some data in the database has changed. The log segment may include metadata indicating a sequence start identifier, a sequence end identifier, a time start identifier, a time end identifier, one or more checksums, a previous cumulative size of the change log, a lineage of the log segment, or any combination thereof. The sequence start identifier may indicate a sequence number for a first event in a sequence of events that is logged in the log segment. The sequence end identifier may indicate a sequence number for a last event in the sequence of events that is logged in the log segment. The time start identifier may indicate a timestamp for the first event in the sequence of events that is logged in the log segment. The time end identifier may indicate a timestamp for the last event in the sequence of events that is logged in the log segment. The one or more checksums may correspond to the data in the partition of the database, the log segment, etc. The one or more checksums may be used by the continuous data protection manager112or the database service110to determine whether application of at least a portion of the log segment or the change log to the database has resulted in the correct data being restored or retrieved. The previous cumulative size of the change log may indicate a size of the change log prior to the respective log segment. The lineage of the log segment may indicate parent or child partitions associated with the log segment. The log segment may be added to the change log in response to satisfying one or more criteria. In some embodiments, the log segment may be added to the change log based on a determination that the log segment satisfies a size threshold. For example, the database service110may add events to the log segment continuously until the log segment reaches a specified size or a size limit. After the log segment reaches the specified size, the log segment may be committed to the change log. In other embodiments, the log segment may be added to the change log based on a determination that a number of operations tracked or identified in the log segment satisfies a threshold number of operations. For example, the database service110may perform multiple CHANGE operations to the database. The multiple CHANGE operations on a data entry may require a higher amount of storage space in the log segment even though the storage space used by the data entry has not significantly increased. 
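A minimal sketch of a log segment carrying the metadata described above, with size-based and operation-count-based commit criteria, might look like the following; the field names and thresholds are illustrative assumptions rather than anything specified by this disclosure.

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

/** Hypothetical change log segment carrying the metadata described above; all names are illustrative. */
public class LogSegment {
    // Illustrative commit criteria; a real service would tune these.
    private static final long MAX_SEGMENT_BYTES = 4L * 1024 * 1024;
    private static final int MAX_SEGMENT_OPERATIONS = 10_000;

    public record ChangeEvent(long sequenceNumber, Instant timestamp, String operationType,
                              String itemKey, byte[] payload) {}

    private final List<ChangeEvent> events = new ArrayList<>();
    private long segmentBytes;
    private final long previousCumulativeLogSize; // size of the change log prior to this segment
    private final String lineage;                 // parent/child partition lineage for this segment

    public LogSegment(long previousCumulativeLogSize, String lineage) {
        this.previousCumulativeLogSize = previousCumulativeLogSize;
        this.lineage = lineage;
    }

    public void append(ChangeEvent event) {
        events.add(event);
        segmentBytes += event.payload().length;
    }

    /** The segment is committed to the change log once it reaches a size or operation-count threshold. */
    public boolean readyToCommit() {
        return segmentBytes >= MAX_SEGMENT_BYTES || events.size() >= MAX_SEGMENT_OPERATIONS;
    }

    /** Sequence and time boundaries recorded as segment metadata (assumes at least one appended event). */
    public long sequenceStart() { return events.get(0).sequenceNumber(); }
    public long sequenceEnd()   { return events.get(events.size() - 1).sequenceNumber(); }
    public Instant timeStart()  { return events.get(0).timestamp(); }
    public Instant timeEnd()    { return events.get(events.size() - 1).timestamp(); }
}
```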
The database service110may track the number of operations and their corresponding types and save them to the log segment. The database service110may receive an indication that an event has occurred with a partition of a given database managed by the database service110. In some embodiments, the event may be based on a request to perform an operation sent from a given client160to the database service110. For example, the event may be based on a PUT command, a CHANGE command, a DELETE command, or any other type of request from the given client160. The event may also indicate a type of the operation, such as PUT, CHANGE, DELETE, etc. The event may also indicate the data used to perform the operation. For example, the indicated data may include new data to be stored with the PUT command, changed data to be updated with the CHANGE command, or data to be deleted with the DELETE command. The clients160may send a request to the database service110to generate a backup of a partition of a database. The database service110may also implement timed, periodic, or continuous backups automatically or after scheduling by the clients160. In some embodiments, a backup of the partition of the database (e.g., a snapshot) may be generated at least in part by the continuous data protection manager112. The continuous data protection manager112may have access to the database stored at the database service110, the snapshot data store142, and the change log archive152. In an example, the continuous data protection manager112may determine that the snapshot type is a log-based snapshot. The continuous data protection manager112may generate data that indicates a point in the change log that, when used in conjunction with a stored snapshot, may be used to represent a snapshot of the partition of the database. The continuous data protection manager112may store the generated data as metadata in the snapshot data store142. In some embodiments, the log-based snapshot may be used to generate a full snapshot based on applying at least a portion of the change log to a stored snapshot. The log-based snapshot may indicate the portion of the change log that may include logged events that have occurred since generation of the stored snapshot through the current time. The continuous data protection manager112may retrieve the stored snapshot from the snapshot data store142, such as by sending a request to the snapshot data store142for the previous snapshot. The continuous data protection manager112may retrieve the change log from the change log archive152, such as by sending a request to the change log archive152for the change log. The continuous data protection manager112may be configured to apply the change log to the stored snapshot. In some embodiments, applying the change log to the stored snapshot includes starting with the stored snapshot, stepping through the change log, and reenacting the events included in the change log, in chronological order, against the stored snapshot. In other embodiments, applying the change log to the stored snapshot includes starting with the stored snapshot, stepping through the change log, and undoing the events included in the change log in reverse chronological order. In at least the embodiment illustrated inFIG.1, the system may instruct components of the system (e.g., forward/reverse converter206inFIG.2, described below) to convert partition data in a native format to a preferred format (e.g., a binary and text interchangeable format, a typed JSON-superset, etc.) 
and then persist the preferred format to backup storage (e.g., snapshot data store142) distinct from the source storage system (e.g., database service110). In embodiments, the conversion from the service or native format is not done directly into the preferred format. In some embodiments, the native format is converted to an intermediate model (e.g., a Java data type) first and then serialized into the preferred format. FIG.2is a data flow diagram illustrating logical relationships between a snapshot manager, a change log manager, and associated data within the context of a continuous data protection manager112, according to some embodiments. The snapshot manager212and a change log manager214depicted inFIG.2illustrate components of the continuous data protection manager112also depicted inFIG.1. These managers, in combination with the other depicted components, may perform one or more of the steps of the processes illustrated inFIGS.3-5, in embodiments. Different components (e.g., illustrated or non-illustrated components) may perform various of the steps, in various embodiments, without departing from the scope of this disclosure.FIG.3is a flow chart illustrating a technique used for continuous data protection, according to some embodiments. An initial copy of a database partition is obtained and saved as a snapshot in a snapshot data store (block302). For example, in at least the illustrated embodiment of FIG.2, continuous data protection manager112receives partition data144(e.g., in a format native to the database), obtaining a copy (e.g., a full partition snapshot). The copy may be from a scan or an export of a replica of the database partition, for example. The snapshot manager212may store the initial copy of the partition data (e.g., from a replica) in a format native to the database to a data store (e.g., snapshot data store142in storage service290). In some embodiments, the continuous data protection manager may direct a process external to the continuous data protection manager (e.g., a database service process) to obtain a copy of the database partition and to store the copy to durable storage (e.g., in either of a native or non-native format). In some embodiments, the snapshot may be converted from a native format to a non-native format. In an example, forward/reverse converter206may convert the initial copy of the partition data stored in snapshot data store142into a snapshot in a non-native format145, for example, forward converting each item of the full partition snapshot to a non-native format, and then store the converted items to durable storage (e.g., to snapshot data store142, or another data store distinct from snapshot data store142) as partition snapshot data in a non-native format. In some such embodiments, the durable storage may be used as part of a transport mechanism to transport snapshots obtained in a native format to the continuous data protection manager112in a non-native format, with the native format snapshots being deleted from durable storage subsequent to conversion. In at least some embodiments, forward/reverse conversion may be performed by a forward/reverse converter process of the database service that is not part of the continuous data protection manager112. 
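The two-step forward conversion just described (native format to an in-memory intermediate model, then to the non-native format) might look roughly like the following; the serializer shown is a stand-in for illustration only and does not use a real Ion or JSON library API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Hypothetical forward-conversion sketch: a native-format item is first read into an
 * intermediate in-memory representation and only then serialized to the non-native
 * (self-describing) format. All names and the text layout below are illustrative assumptions.
 */
public class ForwardConverter {

    /** Intermediate in-memory representation of one item (analogous to a Java data type). */
    public record IntermediateItem(String primaryKey, Map<String, Object> attributes) {}

    /** Step 1: parse the native storage-format record into the intermediate model. */
    public IntermediateItem toIntermediate(byte[] nativeRecord) {
        // Placeholder parsing: a real converter would decode the storage engine's row format here.
        Map<String, Object> attributes = new LinkedHashMap<>();
        attributes.put("raw_length", nativeRecord.length);
        return new IntermediateItem("item-key", attributes);
    }

    /** Step 2: serialize the intermediate model into a self-describing text form. */
    public String toNonNativeText(IntermediateItem item) {
        StringBuilder sb = new StringBuilder("{ key: \"").append(item.primaryKey()).append("\"");
        for (Map.Entry<String, Object> e : item.attributes().entrySet()) {
            sb.append(", ").append(e.getKey()).append(": ").append(e.getValue());
        }
        return sb.append(" }").toString();
    }

    /** Full forward conversion of one native record. */
    public String convert(byte[] nativeRecord) {
        return toNonNativeText(toIntermediate(nativeRecord));
    }
}
```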
In embodiments, the conversion process may include forward transforming the replica from the native format into an intermediate object (e.g., an intermediate Java-based record) to represent the data in-memory (an in-memory representation) before continuing to transform the native-format record to the non-native format. In embodiments, a non-native format may be a format that provides dual-format interoperability (e.g., applications can seamlessly consume the data in either its text or binary forms without loss of data fidelity), which enables users to take advantage of the ease of use of a text format while capitalizing on the efficiency of a binary format. For example, some forms of dual-format interoperability make the text form easy to prototype, test, and debug, while the binary format saves space and parsing effort. A non-native format may include a rich type system, e.g., extending JSON's type system, adding support for types suitable for a wider variety of uses, including precision-sensitive applications and portability across languages and runtimes, in some embodiments. In addition to strings, Booleans, arrays (e.g., lists), objects (e.g., structs), and nulls, the non-native format may add support for arbitrary-precision timestamps, embedded binary values, and symbolic expressions. Some embodiments of a non-native format may expand JSON's number specifications by defining distinct types for arbitrary-size integers, IEEE-754 binary floating-point numbers, and infinite-precision decimals. A non-native format may include Ion, a richly-typed, self-describing, hierarchical data serialization format that provides interchangeable binary and text representations, in embodiments. In some embodiments, a non-native format is a self-describing format, giving its readers and writers the flexibility to exchange data without needing to agree on a schema in advance. For example, in some embodiments, a self-describing format may not require external metadata (i.e., a schema) in order to interpret the structural characteristics of data denoted by the format. For example, the payloads of the self-describing format may be free from build-time binding that inhibits independent innovation and evolution across service boundaries (data may be sparsely encoded and the implicit schema may be changed without explicit renegotiation of the schema among all consumers). Continuing withFIGS.2and3, change log data corresponding to changes to the database partition may be accumulated (block304) to change log storage, for instance. For example, any one of the database service110, the change log manager214, or a change log data store604(FIG.6, described below) may accumulate the change log data, or the change log data may be accumulated to durable storage (e.g., change log archive152of storage service290). The change log data may be accumulated in a format native to the database from which the change log data is obtained, or may be converted into a non-native format prior to accumulation, in various embodiments. In some embodiments, the change log data may be accumulated in a process that continues during one or more of the other steps of the process. For example, the accumulation of the change log data (block304) may continue during other steps such as any or all of steps306-312. If a backup is not triggered (block306, no), the system may continue to accumulate change log data. 
When a backup is triggered (block306, yes) a previous snapshot of the database partition may be accessed (block308) from the snapshot data store142, by the snapshot manager212, for example. In some embodiments, instead of using a previous snapshot of the database partition (e.g., from durable storage), the system may obtain a copy of the database partition (e.g., from a replica of the partition). Changes from the accumulated change log since the previous snapshot was created may be applied (block310) to the previous snapshot to generate a new snapshot of the database partition. For instance, the snapshot manager212may call the log apply service213to apply the accumulated change log. Log-apply may sometimes be referred to herein as materialization. Materialization may include starting with one or more snapshots, applying the corresponding logs, and creating a coherent set of artifacts that represent the requested backup, for example. The results of materialization may be put directly into a database service table or a materialized set of artifacts may be stored to durable storage, in embodiments. The snapshot manager212may store (block312) the new snapshot to the snapshot data store142. The system may return to accumulating change log data (block304) in at least the illustrated embodiment. In some embodiments, accumulation may not be interrupted, and continues as a parallel process. In some embodiments, a first copy or a snapshot of a partition is stored to durable storage (e.g., snapshot datastore142) and change log data (e.g., from the database service) may be continuously archived (e.g., to change log archive152) to durable storage. At some point, (e.g., when the change log archive gets to a certain size threshold, or some time threshold) the change log data is downloaded from the archive and used (e.g., via a log apply function) to generate a new snapshot, based on the first snapshot. This process may iterate, at least as long as changes are being made to the partition or database, for example. In at least the illustrated embodiments, either of snapshot manager212or change log manager214may call an error handler217to trigger any of various error routines such as retrying the conversion/reverse-conversion process, retrying the process using a different partition replica, retrying the process on different host machine, reporting failure, flagging the error and continuing, etc. In at least the illustrated embodiments, a checksum generator (not illustrated) may generate a checksum on the snapshot in the non-native format prior to upload of the snapshot to the snapshot data store142. In embodiments, the uploaded data may also be processed with a streaming checksum. In some embodiments, the checksum is performed after the upload is complete. The checksum may help protect integrity of the overall process while uploading the output file to storage, in embodiments. Some such features are useful when the entire data set cannot fit into memory (e.g., partitions). FIG.4is a flow chart illustrating a technique for continuous data protection according to some embodiments.FIG.6is a data flow diagram illustrating a continuous data protection system with a continuous data protection manager and a change log data store (e.g., a streaming data service data store), according to some embodiments. 
The steps illustrated inFIG.4may be performed by various components of the systems described herein (e.g., components in continuous data protection system or service112), or by components not illustrated herein, in some embodiments without departing from the scope of this disclosure. At least some of the illustrated steps may be performed by one or more of the components depicted inFIGS.1-2&6. The following description will describe various components ofFIG.6that implement the functionality described and illustrated inFIG.4, in embodiments. Partition data (e.g.,144) is obtained from a storage node in a storage format native to the database (block402). The partition data may be read into memory (e.g., by forward/reverse converter206) transforming the data to an in-memory format. A forward conversion process may be applied to convert each item of the in-memory partition data to a non-native format for the snapshot (block404), by the forward/reverse converter206, for example. A checksum may be generated for the snapshot, in embodiments. The partition snapshot may be stored (block406) in a non-native format to a durable snapshot data store142, by the snapshot manager212of the continuous data protection manager112, for example. Native-database-format change log data corresponding to a database partition is accumulated (block408) via a change log data store604. In some embodiments, change log data store604may be implemented in durable storage (e.g., of storage service290). In some embodiments, change log data store604may be implemented externally to the storage service290, by a data stream service that streams the changes, for example. In some embodiments, the database service110may provide the streams of change data. If a change log amount threshold is not reached (block410) the continuous protection manager may determine whether a change log time period threshold has been reached (block412). If the change log time period threshold is not reached, the process may return to the accumulation step at408. But, if either of the change log amount threshold is reached (block410, yes) or the change log time period threshold is reached (block412, yes), the process may continue by obtaining (block415) a previous snapshot from the snapshot data store142and converting the obtained snapshot to the native format (e.g., by forward/reverse converter206of the continuous data protection manager112). In some embodiments, instead of applying native format change log data to a native version of a previous snapshot to generate an updated snapshot, stored native format change log data may be converted to non-native format change log data and then may be log-applied to an existing snapshot in non-native format to generate the updated snapshot. In some embodiments, the storage node may directly provide the change log data in the non-native format (e.g., instead of a conversion being performed by the continuous data protection manager112). At block416, accumulated native-format change log data is applied to the native version of the previous snapshot to generate an updated snapshot (e.g., by the log apply service213of the continuous data protection manager112). At417, the updated snapshot is converted (block417) to a non-native format and the updated snapshot is stored to the snapshot data store142. At block419, each applied change item is converted to a non-native format and archived to the change log archive152(e.g., by the change log manager214and the forward/reverse converter206). 
The process may return to the accumulation step at block408, in some embodiments. FIG.4illustrates logical steps of a process that may be implemented in various different ways. For example, while the accumulation of change log data in block408is illustrated as a step that is performed prior to the applying of the accumulated change log data in block416and prior to the conversion of the updated snapshot in block417, and prior to the conversion of change items in block419, in at least some embodiments, some of the functionality described in the steps402-419may be implemented by one or more processes (e.g., one process for each partition, or the like) and may be performed in parallel with the other steps of the process. For example, an instantiated process that is executing the accumulation of the change log data in block408may continue to accumulate the change log data while another process is converting change items to a non-native format as in block419. FIG.5is a flow chart illustrating a technique for continuous data protection during a partition split, according to some embodiments. The steps illustrated inFIG.5may be performed by various of the components depicted inFIGS.1-2,6,7, and8, in embodiments. At block502, a database split procedure is triggered that splits a partition into a first and a second partition at a point-in-time (e.g., by the database service110). Responsive to the partition split, a corresponding change log is split into a first and a second change log at the point-in-time (block504). For example, the first change log includes changes since the split for the first partition, and the second change log includes changes since the split for the second partition. At block506, a continuous data protection backup is triggered (e.g., by continuous data protection manager112). A previous snapshot corresponding to the database partition prior to the split is split into a first and a second snapshot (block508). At block510, some of the changes from the corresponding change log that are prior to the point-in-time, as well as changes from the first change log, are applied to the first snapshot to create an updated first snapshot. Some of the changes from the corresponding change log that are prior to the point-in-time, as well as changes from the second change log, are applied to the second snapshot to create an updated second snapshot (block512). In some embodiments, the updated first and second snapshots are stored (block514) to durable storage (e.g., at the snapshot data store142). In some embodiments, logs and snapshots may be stored to durable storage in the native format and converted to the non-native format (e.g., on the fly) when needed. For example, native format change logs603may be stored to change log data store604in storage service290and then converted by continuous data protection manager112to the non-native format155. Similarly, native format snapshots or copies of partitions may be stored to durable storage (e.g., snapshot data store142, via continuous data protection manager112or via the database service, directly) and then converted to non-native snapshots. The non-native format changes may be log-applied to a prior snapshot in order to generate another snapshot (e.g., for an on-demand backup or to create a periodic snapshot). The change logs in native format may remain in durable storage, in embodiments. In some embodiments, the first and second change logs are stored to a change log archive152. 
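A minimal sketch of building an updated child snapshot after a split, along the lines of blocks 502-514, is shown below; the key-range predicate, record fields, and operation names are assumptions made for illustration and are not drawn from the figures themselves.

```java
import java.time.Instant;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

/** Hypothetical sketch of building one child partition's updated snapshot after a split. */
public class SplitBackupBuilder {

    public record Change(Instant timestamp, String operationType, String itemKey, String itemValue) {}

    /**
     * Start from the parent snapshot restricted to the child's key range, apply the parent's
     * changes made before the split point, then apply the child's own change log (changes made
     * after the split).
     */
    public Map<String, String> buildChildSnapshot(Map<String, String> parentSnapshot,
                                                  List<Change> parentChangeLog,
                                                  List<Change> childChangeLog,
                                                  Instant splitTime,
                                                  Predicate<String> keyBelongsToChild) {
        Map<String, String> child = new HashMap<>();
        parentSnapshot.forEach((key, value) -> {
            if (keyBelongsToChild.test(key)) child.put(key, value);
        });
        parentChangeLog.stream()
                .filter(c -> !c.timestamp().isAfter(splitTime) && keyBelongsToChild.test(c.itemKey()))
                .forEach(c -> apply(child, c));
        childChangeLog.forEach(c -> apply(child, c));
        return child;
    }

    private void apply(Map<String, String> snapshot, Change c) {
        if ("DELETE".equals(c.operationType())) snapshot.remove(c.itemKey());
        else snapshot.put(c.itemKey(), c.itemValue());
    }
}
```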
In at least some embodiments, a checksum and the non-native format (e.g., logical format) snapshot of the partition may be sent to the snapshot data store (e.g., data store142). In some embodiments, the system may send the snapshot without the checksum. The recipient of the snapshot (e.g., snapshot data store142) may use the checksum to verify whether the snapshot has been received without error and send a message back to the snapshot manager212indicating success or not. For example, the snapshot manager212receives back the checksum confirmation, and invokes an error handler217or reports a successful snapshot. The error handler217may perform any of various handling processes, such as retrying the transmission of the snapshot, encoding the failure, reporting the failure and continuing, etc. The system may respond to a successful checksum by marking the process successful, in embodiments. FIG.7is a block diagram illustrating a service provider network that implements a continuous data protection manager service, according to some embodiments. Depicted is a continuous data protection manager service112similar to those illustrated inFIGS.1,2, and6that may perform some of the steps illustrated in at leastFIGS.3-5. The continuous data protection manager service112is illustrated as part of service provider network100that includes database service110, storage service290, computer service640, clients160, as well as other storage services620and other services610. In at least the illustrated embodiment, the service provider network100is communicatively coupled to client networks760and clients160via network170. Service provider network100may provide one or more services to a plurality of distinct customers, each distinct customer comprising a distinct customer network. One or more of the components inFIG.7may be implemented by one or more of the computing nodes 1200 illustrated inFIG.12. FIG.8is a block diagram illustrating logical relationships between components of a continuous data protection manager, a database service, and a storage service, according to some embodiments. In at least the illustrated embodiment, the system has already obtained initial copies of the partitions and accumulated change log data, as described above. In some embodiments, the illustrated components may perform one or more of the steps depicted inFIGS.9and10. Other components may perform the steps. For example, some of the steps depicted inFIGS.9and10may be performed by components of the continuous data protection manager112depicted inFIG.2. Illustrated are a database service110, change log archive152, and snapshot data store142, in storage service290. Various components of the continuous data protection manager112may direct the data transfers, in some embodiments. Various components of continuous data protection manager112may obtain and/or write the change data855(in native or non-native format), and/or the snapshot data845(in native or non-native format) from/to the storage service290. Backup data865may be stored to snapshot data store142, in embodiments. On-Demand Backup FIG.9is a process diagram illustrating a logical flow of an on-demand backup request, according to some embodiments. In some embodiments, the depicted process illustrates that independent partition copies are coalesced into a coherent backup copy of a table for a particular point-in-time, at the time of the on-demand backup, for example. 
In some embodiments, the target time for the on-demand backup may be a time in the future (or past), with respect to when the request backup was requested. On-demand backups of the full database table may be made for a variety of reasons, not limited to, but including compliance, or to support subsequent restoration, for example. In at least the illustrated embodiment, an on-demand request to backup a database table to durable storage is received (block902), by restoration and backup interface826, from a requesting entity, such as a client device, for example. A target on-demand time associated with the request is stored and the requested backup is scheduled (block904) as an asynchronous process, by the continuous data protection manager112, for example. The process may be asynchronous in that an acknowledgement that the requested backup has been successful may be sent to the requesting entity prior to actually generating the requested backup (block906). This relatively quick response may be an unexpected result, especially for clients requesting backup of a large database table at least because a scanning-type backup of a large database table—a prior way of performing a backup of an entire table—takes a relatively longer amount of time to complete. The system may be able to respond with the acknowledgement of success prior to performing the backup because the system has all the data it needs to create the backup, in embodiments. For example, a combination of existing snapshots and change log data may be used to generate the requested on-demand backup after the confirmation of success has been sent. Some such workflows may be beneficial for a variety of scenarios, such as, but not limited to, creation of compliance backups that are required on some schedule, but are not necessarily used right away, for example. In some embodiments, a backup may be deleted, overwritten or otherwise removed from the system without ever being materialized. For instance, a backup may be requested and deleted, without being exported or used in a restore, and without the backup ever being materialized, in at least some embodiments. A most-recent snapshot for a partition of the table may be obtained for each partition (block908), from the snapshot data store142, by the snapshot manager212(seeFIG.2) of the continuous data protection manager112, for example. Separate sets of changes to the partition are identified from change logs for each partition between the most recent snapshot and the target time of the backup (block910), by the change log manager214(seeFIG.2) of the continuous data protection manager112, for example. The identified change logs corresponding to the partition may be log-applied to the most recent snapshot for each partition, to generate an updated snapshot of the partition in a state at the target time (block912), by log apply service213(seeFIG.2) of the continuous data protection manager112, for example. The generated snapshot for each partition may be stored to durable storage with a time stamp indicating the target time of the on-demand backup (block914), by the snapshot manager212of the continuous data protection manager112, for example. Block916illustrates the system may mark the on-demand backup for the target time complete. In embodiments, the resulting table may have a different partitioning than the prior table. Partitions may be merged or split for example. Some embodiments, herein, describe techniques that operate on distinct partitions of a table at distinct times. 
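A per-partition materialization step for such an on-demand backup might be sketched as follows, assuming archived changes carry timestamps and the most recent durably stored snapshot for the partition is available; the record and method names are hypothetical.

```java
import java.time.Instant;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Hypothetical per-partition materialization of an on-demand table backup at a target time. */
public class OnDemandBackup {

    public record Change(Instant timestamp, String operationType, String itemKey, String itemValue) {}
    public record PartitionSnapshot(Instant takenAt, Map<String, String> items) {}

    /**
     * For one partition: start from its most recent durably stored snapshot, apply only the archived
     * changes that fall between that snapshot and the backup's target time, and return an updated
     * snapshot stamped with the target time.
     */
    public PartitionSnapshot materializePartition(PartitionSnapshot mostRecent,
                                                  List<Change> archivedChanges,
                                                  Instant targetTime) {
        Map<String, String> items = new HashMap<>(mostRecent.items());
        archivedChanges.stream()
                .filter(c -> c.timestamp().isAfter(mostRecent.takenAt())
                          && !c.timestamp().isAfter(targetTime))
                .forEach(c -> {
                    if ("DELETE".equals(c.operationType())) items.remove(c.itemKey());
                    else items.put(c.itemKey(), c.itemValue());
                });
        return new PartitionSnapshot(targetTime, items);
    }
}
```

Running this step for every partition of the table yields the time-aligned set of snapshots that together constitute the requested backup.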
For example, a first partition of a table (e.g., a relatively active table) may accumulate changes to the first partition in a change log more quickly than a second partition accumulates changes to the second partition, and the system may dynamically respond to each of those partitions differently, by performing updates to snapshots or producing backups of the more active partition more often and at different times from updates or backups of the less-active second partition of the same table. In at least the illustrated embodiment, a technique for producing an on-demand backup for an entire table that includes a number of distinct partitions that each have updated snapshots or backups at distinct times, is disclosed. In some embodiments where distinct partitions of a table have durably-stored snapshots that were generated at distinct times from one another, the system may use a log-apply service to apply logs for the respective partitions to the corresponding snapshots to bring the state of the data for each partition to a common point-in-time—an aligned-in-time set of snapshots for the partitions of the table. A request for an on-demand backup may inherently indicate that a target time for the requested backup of the table is when the request is made or received. In some embodiments, an on-demand backup may be requested for another time, the on-demand backup request may indicate a time in the future for performing the backup, for example. FIG.11illustrates, among other things, timing characteristics associated with change log streams of a system that implement the techniques described herein, in embodiments. For example, the illustrated example of change log streams for table A depict five change log streams (SA1-SA5) with snapshots1105-1114taken at various times t. Also illustrated is a target time of a requested point-in-time restoration of table A, and change log data1108CLD-1111CLD for the periods of time between the last snapshot for the respective change stream and the target time.FIG.11illustrates that the system may use the change log data particular to each change log stream to bring the snapshot for that partition to a state at a time common for each of the partitions—a time aligned set of snapshots that together constitute a full backup of a table at the requested time (in some embodiments) and that can be used to restore the table to a requested point-in-time (in some embodiments). As described herein, the aligned set of partitions (the partitions of a table) may be coherent (e.g., the aligned set of backups may represent a state of the database table at a target time (or at least relatively close points-in-time)), but causal consistency is not necessarily guaranteed. In embodiments, the system may perform an on-demand backup in a similar fashion for a single partition of a table with multiple partitions, or as depicted inFIG.11for Table A, for the table as a whole, using a partition-by-partition process to accomplish the task. Point-In-Time Restore FIG.10is a process diagram illustrating a logical flow of a point-in-time restoration, according to some embodiments. A point-in-time restoration may be performed for a portion of a database table (e.g., a partition) or for the entire table, in some embodiments. As illustrated inFIG.10, a request to restore a database table to a state of the database table at a target point-in-time (e.g., a point-in-time specified in the request, or otherwise) is received (block1002). 
For example, a requesting entity, such as a client, a client device, a process executing on a service provider or client machine, or the like, may request the point-in-time restore. Requesting clients may wish to restore a table or partition in various ways. For example, a client may wish to restore over the existing table, or restore to a newly-created table while retaining the existing table. In either case, a table in a database service is identified for the restoration (block 1004), as directed by backup manager 824, for example. In some embodiments, restoration and backup interface 826 may provide a customer-facing interface for a customer to experience a fully managed restoration or backup service. For instance, restoration and backup interface 826 may provide an interface (e.g., a graphical or programmatic interface) that client devices or processes of a customer can access to manage restorations and backups, such as viewing the number of available backups as well as characteristics thereof, requesting on-demand backups or restorations, and the like. In at least some embodiments, the restoration and backup interface 826 may provide a customer-facing interface that gives the customer the ability to configure table properties at the time of a restoration. For example, when a restoration is performed using the client-facing interface to a database service (e.g., 110), the restoration interface (e.g., 826) may provide access to the features provided via a client-facing interface of the database service. Managed-database service features, such as configuration of table properties, partition management, secondary indexing services, and the like, may be passed through from the database service to the client via the restoration and backup interface 826. The disclosed technique may be performed by one or more processes. In one example, one process may be called to create the new table and another process may be called to perform the rest of the restoration. In some instances, a distinct process or node may perform steps 1006-1012 for a respective partition; the node hosting the partition, for example, may perform steps 1006-1012 for that partition. At block 1006, a separate archived partition is identified for each partition, based on the target point-in-time. For example, restoration manager 822 may identify an archived snapshot of a partition of the database table that is closest in time to the target point-in-time (either prior to, or subsequent to, that time). FIG. 11 illustrates a particular example for a point-in-time restoration. Change log stream SA3, in particular, illustrates that snapshot 1114 has been selected instead of snapshot 1109 because snapshot 1112 is closer in time to the requested time, a concept also applicable to on-demand backups that are scheduled, instead of performed at the requested time, in some embodiments. It is contemplated that archived partitions may be selected, in some embodiments, based on the amount of changes between the requested time and the archived partition. It may be preferable to select partitions associated with fewer changes between the partition and the requested time, even when the amount of time between the selected partition and the requested time is greater than that of another available partition, in embodiments. For each partition, a segment of changes in a change log corresponding to the partition is identified for a period of time between the identified snapshot and the target point-in-time (block 1008), by the change log manager 214 in FIG. 2, for example.
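For illustration, blocks 1006 and 1008 can be sketched as two small functions. The tuple-based data shapes and the function names below are assumptions made for this example; they do not reflect the actual interfaces of the restoration manager 822 or the change log manager 214.

    def select_archived_snapshot(snapshots, change_timestamps, target_time,
                                 prefer_fewer_changes=False):
        """Block 1006 in miniature: pick the archived snapshot to start from.

        snapshots is a list of (snapshot_time, snapshot_data) tuples for one
        partition; change_timestamps is a list of change-log timestamps for that
        partition. By default the snapshot closest in time to the target (before or
        after it) is chosen; optionally the snapshot with the fewest intervening
        changes is chosen instead.
        """
        def changes_between(t_a, t_b):
            lo, hi = sorted((t_a, t_b))
            return sum(1 for ts in change_timestamps if lo < ts <= hi)

        if prefer_fewer_changes:
            return min(snapshots, key=lambda s: changes_between(s[0], target_time))
        return min(snapshots, key=lambda s: abs(s[0] - target_time))


    def identify_change_segment(change_log, snapshot_time, target_time):
        """Block 1008 in miniature: the changes between the chosen snapshot and the target.

        change_log is a list of (timestamp, key, value) tuples; the returned segment
        covers the period between the snapshot and the target point-in-time, in
        either direction.
        """
        lo, hi = sorted((snapshot_time, target_time))
        return [entry for entry in change_log if lo < entry[0] <= hi]

The prefer_fewer_changes option corresponds to selecting the archived partition with fewer intervening changes, even when another snapshot is closer in time, as contemplated above.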
Block 1010 illustrates that, for each partition, the identified segment is log-applied to the corresponding archived partition to generate an updated snapshot of the partition at the target point-in-time. Some combination of the change log manager 214, snapshot manager 212, log apply service 213, and the restore manager may perform such functions, in some embodiments, for example. Going back to the illustrated example in FIG. 11, the change log data 1109CLD (represented by the shaded area) may be applied to the snapshot 1109 to walk the state of the partition backward in time to the state of the partition at the requested point-in-time. Items (e.g., restoration data 875) from the updated snapshot of the identified portion of the database (e.g., for a partition) are sent to the database service to create entries in the newly-identified database table (block 1012), in the illustrated embodiment, by restoration manager 822, for example. The items may be sent as puts via a client-facing interface of the database service to a newly-created table, or to overwrite an existing table, in a few non-limiting examples. Successful completion of the point-in-time restore of the table may be acknowledged to the requesting entity (block 1014), via restoration and backup interface 826, for example. FIG. 10 is a logical flow of a process that may be implemented in various ways. For example, various different processes or nodes may each perform steps 1006-1012 for a respective partition, in parallel or sequentially, without departing from the scope of this disclosure. Restoration of an on-demand backup may be performed even if the on-demand backup has not been created or materialized—that is, even if the coherent artifacts have not been generated. For example, a process similar to that described above for point-in-time restore may be performed to create the table requested for on-demand restoration. If an on-demand backup has already been materialized and stored to durable storage, the log-apply process may be skipped and the items may be sent directly to the database service (e.g., via the client-facing interface or otherwise). For example, the items from separate partitions in durable storage that represent the backup may be streamed to the database service. In some embodiments, such a transfer may include a conversion (e.g., via forward/reverse converter 206) from a non-native format to a native format of the database, as described above.
Client-Facing Interface of Database Service
Clients may generally access production database services of a service provider via a client-facing interface (e.g., a “front-door” of the database service, such as request routers of the database service). The service provider's internal processes may also access the database service via the client-facing interface, or may use a proprietary or “backend” interface to manage, restore, obtain partition copies, or otherwise access the database service, in embodiments. By using the client-facing interface of the database service to enter the restore data into the table, the system may take advantage of the existing features of the database service that optimize table performance. The client-facing interface of the database service, or the restoration and backup interface 826, for example, may make the data that is being restored available to clients before the entire restoration has completed, in embodiments. In a particular example, clients may be given access to the restored data while the restoration data is simultaneously being put into the database.
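The log-apply-then-put flow of blocks 1010-1012 can be summarized in a short sketch. The sketch below is illustrative only; in particular, db_client and its put_item method are hypothetical stand-ins for the client-facing interface of the database service and are not part of any interface described above.

    def restore_partition(db_client, table_name, archived_items, change_segment=None):
        """Blocks 1010-1012 in miniature: optionally log-apply, then put the items.

        When the backup has already been materialized, change_segment may be None
        and the log-apply step is skipped, so the archived items are streamed
        directly as puts through the client-facing interface.
        """
        items = dict(archived_items)
        if change_segment is not None:
            for _, key, value in sorted(change_segment, key=lambda entry: entry[0]):
                if value is None:
                    items.pop(key, None)              # a delete recorded in the change log
                else:
                    items[key] = value                # apply the change in time order
        for key, value in items.items():
            db_client.put_item(table_name, key, value)  # entries created via puts
        return len(items)

Because the items are written through the same interface that clients use, items put early in the loop could already be visible to clients while later items are still being restored, which is consistent with the behavior described above.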
Making restored data available while the restoration is still in progress may have the advantage of giving the client access to at least some of the data in, on average, half the total restoration time, for example. In some embodiments, the items from the updated snapshots (e.g., 875) may be sent by the restore manager 822 to create entries in the newly-identified database in database service 110 via either the client-facing interface or the backend interface of the database service. In some embodiments, by using the client-facing interface, the restoration process may leverage the existing functionality of the database service (e.g., the benefits of a fully-managed database service, managed table configurations such as placement optimization at the time of restore, managed partitioning such as fixing IOPS dilution via repartitioning, local and global secondary indexing services, or the like). Use of the client-facing alternative may decouple the restore process from the proprietary or technical features particular to the internal storage nodes of the database service. Using the client-facing interface to the database service provides clients with access to existing features of the database service, such as automatically creating secondary index(es) as the restored partitions are loaded, changing the partitioning, streaming all the data changes out as items are added in, etc. For example, some database services provide managed features, such as providing streams of data indicating changes to the database table, to which a customer may register a listener. In one example of many, such streams may be used to provide full-text search capability for a database that does not itself support full-text search. Clients may use such data for a variety of reasons. Another benefit of using the client-facing interface of such a database service is that it leverages such already-existing features of the database service without having to replicate them, as would be required in a back-end interface restoration scenario. In at least some embodiments, the restoration and backup interface 826 may provide for configuration of such features, such as turning the above-noted streaming feature off or on. Use of the client-facing interface also has the benefit of leveraging the existing global secondary indexing (GSI) management of the existing database service. Benefits of using a backend interface include faster data transfers to the storage nodes and greater control over the data. For example, restoring via a backend interface could provide clients with greater flexibility in controlling whether a new GSI is created or an LSI is deleted, instead of relying on the defaults of, or the non-existence of such features in, the client-facing interface. The methods described herein may in various embodiments be implemented by any combination of hardware and software. For example, in one embodiment, the methods may be implemented by a computer system that includes a processor executing program instructions stored on a non-transitory, computer-readable storage medium coupled to the processor. The program instructions may be configured to implement the functionality described herein (e.g., the functionality of various servers and other components that implement the scalable, distributed data storage systems and/or remote key-value durable backup storage systems described herein, or another type of remote storage system suitable for backing up tables and/or partitions thereof).
FIG.12is a block diagram illustrating a computing node configured to implement at least a portion of a continuous data protection system, according to various embodiments described herein. For example, computing node 1200 may represent a computing node that implements one or more of the techniques or components described herein for providing continuous data protection, according to various embodiments. In various embodiments, computing node 1200 may be configured to implement any or all of the components of a system that implements a scalable, distributed data storage system and a corresponding continuous data protection system, or multiple computing nodes similar to or different from computing node 1200may collectively provide this functionality. For example, in various embodiments, one or more computing nodes 1200 may implement database service110, snapshot manager212, change log manager214, snapshot data store142, change log archive152, or clients160. Additionally, any number of storage node instances may each host one or more replicas of various data partitions and/or metadata associated therewith. For example, any given storage node instance may host a replica acting as master replica for its replica group and/or a replica acting as a slave replica in its replica group. Similarly, one or more computing nodes 1200 may implement a key-value durable backup storage system (or an interface or other component thereof), in different embodiments. Computing node 1200 may be any of various types of devices, including, but not limited to, a personal computer system, desktop computer, laptop or notebook computer, mainframe computer system, handheld computer, workstation, network computer, a consumer device, application server, storage device, telephone, mobile telephone, or in general any type of computing device. In some embodiments that include multiple computing nodes 1200, all of the computing nodes 1200 may include the same or similar hardware components, software components, and functionality, while in other embodiments the computing nodes 1200 comprising a computing system configured to implement the functionality described herein may include a wide variety of hardware components, software components, and functionality. In some embodiments, multiple computing nodes 1200 that collectively implement a data storage service or a remote storage system may be components of a larger shared resource system or grid computing system. It is noted that different elements of the system described herein may be implemented by different computing nodes 1200. For example, a computer system that supports the functionality described herein for performing continuous data protection may be implemented on the same computing nodes 1200 on which clients (through which a customer or subscriber may access the system) execute, or on one or more other computing nodes 1200, in different embodiments. In another example, different subsystems (e.g., a Web service interface, an admission control subsystem, a service request subsystem; and/or one or more Web servers or other components) may be implemented on or across multiple ones of the computing nodes, and each of the computing nodes may be similar to computing node 1200. In the illustrated embodiment, computing node 1200 includes one or more processors 1210 (any of which may include multiple cores, which may be single or multi-threaded) coupled to a system memory 1220 via an input/output (I/O) interface 1230. 
Computing node 1200 further includes a network interface 1240 coupled to I/O interface 1230. In various embodiments, computing node 1200 may be a uniprocessor system including one processor 1210, or a multiprocessor system including several processors 1210 (e.g., two, four, eight, or another suitable number). Processors 1210 may be any suitable processors capable of executing instructions. For example, in various embodiments, processors 1210 may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors 1210 may commonly, but not necessarily, implement the same ISA. Similarly, in a distributed computing system such as one that collectively implements a scalable database service or a remote storage service in which tables are backed up, each of the computing nodes may implement the same ISA, or individual computing nodes and/or replica groups of nodes may implement different ISAs. The computing node 1200 also includes one or more network communication devices (e.g., network interface 1240) for communicating with other systems and/or components over a communications network (e.g., Internet, LAN, etc.). For example, a client application executing on computing node 1200 may use network interface 1240 to communicate with a server application executing on a single server or on a cluster of servers that implement a distributed system. In another example, an instance of a server application executing on computing node 1200 may use network interface 1240 to communicate with other instances of the server application that may be implemented on other computer systems. In the illustrated embodiment, computing node 1200 also includes one or more persistent storage devices 1260 and/or one or more I/O devices 1280. In various embodiments, persistent storage devices 1260 may correspond to disk drives, tape drives, solid state memory, other mass storage devices, or any other persistent storage device. Computing node 1200 (or a distributed application or operating system operating thereon) may store instructions and/or data in persistent storage devices 1260, as desired, and may retrieve the stored instructions and/or data as needed. Computing node 1200 includes one or more system memories 1220 that are configured to store instructions and/or data (shown as program instructions 1225 and data store 1245, respectively) that are accessible by processor(s) 1210. In various embodiments, system memories 1220 may be implemented using any suitable memory technology (e.g., one or more of cache, static random-access memory (SRAM), DRAM, RDRAM, EDO RAM, DDR 10 RAM, synchronous dynamic RAM (SDRAM), Rambus RAM, EEPROM, non-volatile/Flash-type memory, or any other type of memory). System memory 1220 may contain program instructions 1225 that are executable by processor(s) 1210 to implement the methods and techniques described herein. In the illustrated embodiment, program instructions and data implementing desired functions, methods or techniques (such as functionality for backing up tables, and/or restoring tables from backup using any or all of the mechanisms described herein), are shown stored within system memory 1220 as program instructions 1225.
For example, program instruction 1225 may include program instructions that when executed on processor(s) 1210 implement any or all of a continuous data protection system112, storage service290, various data stores and archives, and/or any other components, modules, or sub-modules of a system that provides the data storage system and services described herein. Program instructions 1225 may also include program instructions configured to implement additional functionality of a system that implements a data storage service not described herein. In some embodiments, program instructions 1225 may include program instructions configured to implement functionality of a key-value durable backup storage system or another type of remote storage system suitable for backing up tables and/or partitions thereof. In some embodiments, program instructions 1225 may implement multiple separate clients, server nodes, and/or other components. It is noted that in some embodiments, program instructions 1225 may include instructions and data implementing desired functions that are not directly executable by processor(s) 1210 but are represented or encoded in an abstract form that is translatable to instructions that are directly executable by processor(s) 1210. For example, program instructions 1225 may include instructions specified in an ISA that may be emulated by processor 1210, or by other program instructions 1225 executable on processor(s) 1210. Alternatively, program instructions 1225 may include instructions, procedures or statements implemented in an abstract programming language that may be compiled or interpreted in the course of execution. As non-limiting examples, program instructions 1225 may be encoded in platform native binary, any interpreted language such as Java™ byte-code, or may include code specified in a procedural or object-oriented programming language such as C or C++, a scripting language such as perl, a markup language such as HTML or XML, or any other suitable language or in any combination of languages. In some embodiments, program instructions 1225 may include instructions executable to implement an operating system (not shown), which may be any of various operating systems, such as UNIX, LINUX, Solaris™, MacOS™, Windows™, etc. Any or all of program instructions 1225 may be provided as a computer program product, or software, that may include a non-transitory computer-readable storage medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to various embodiments. A non-transitory computer-readable storage medium may include any mechanism for storing information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). Generally speaking, a non-transitory computer-accessible medium may include computer-readable storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM coupled to computing node 1200 via I/O interface 1230. A non-transitory computer-readable storage medium may also include any volatile or non-volatile media such as RAM (e.g. SDRAM, DDR SDRAM, RDRAM, SRAM, etc.), ROM, etc., that may be included in some embodiments of computing node 1200 as system memory 1220 or another type of memory. In other embodiments, program instructions may be communicated using optical, acoustical or other form of propagated signal (e.g., carrier waves, infrared signals, digital signals, etc.) 
conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface 1240. In other embodiments, program instructions and/or data as described herein for implementing a data storage service that employs the techniques described above may be received, sent or stored upon different types of computer-readable media or on similar media separate from system memory 1220 or computing node 1200. Program instructions and data stored on a computer-readable storage medium may be transmitted to a computing node 1200 for execution by a processor 1210 by transmission media or signals such as electrical, electromagnetic, or digital signals, which may be conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface 1240. In some embodiments, system memory 1220 may include data in data store 1245 and/or program instructions 1225 and/or special purpose instructions 1226, which may be configured as described herein. In some embodiments, data store 1245 may store the snapshots, or the change stream items, for example. In some embodiments, special purpose program instructions 1226 may include instructions that implement the continuous data protection manager112, the storage service290, or the database service110, for example. In some embodiments, system memory 1220 may include data store 1245, which may be configured as described herein. For example, the information described herein as being stored by the scalable, distributed data storage system (e.g., table data, metadata for tables, partitions and backups, transaction information, configuration information for tables and/or partitions, or other information used in performing the methods described herein may be stored in data store 1245 or in another portion of system memory 1220 on one or more nodes, in persistent storage 1260, and/or in remote storage 1270, in various embodiments. In some embodiments, and at various times, system memory 1220 (e.g., data store 1245 within system memory 1220), persistent storage 1260, and/or remote storage 1270 may store copies of table data (e.g., partition data) backup copies of table and/or partition data, metadata associated with tables, partitions, backups, transactions and/or their states, database configuration information, and/or any other information usable in implementing the methods and techniques described herein. In some embodiments, remote storage 1270 may be a key-value durable storage system in which tables (and/or partitions thereof) are backed up and from which they are restored, as described herein. Data store 1245 may in various embodiments include collections of data maintained by a data storage service on behalf of its clients/users, and/or metadata used by a computing system that implements such a service, as described herein (including, but not limited to, tables managed and maintained on behalf of clients/users of the service, metadata tables, business rules, partition maps, routing tables, indexes, namespaces and/or partitions thereof, service level agreement parameter values, subscriber preferences and/or account information, performance data, resource capacity data, resource usage data, provisioned resource utilization data, reserved resource data, resource reservation IDs, resource reservation timeout period values, parameter values for various partition management policies, limits, or constraints, and/or information about candidate partition management operations). 
In one embodiment, I/O interface 1230 may be configured to coordinate I/O traffic between processor(s) 1210, system memory 1220 and any peripheral devices in the system, including through network interface 1240 or other peripheral interfaces. In some embodiments, I/O interface 1230 may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory 1220) into a format suitable for use by another component (e.g., processor 1210). In some embodiments, I/O interface 1230 may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface 1230 may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments, some or all of the functionality of I/O interface 1230, such as an interface to system memory 1220, may be incorporated directly into processor 1210. Network interface 1240 may be configured to allow data to be exchanged between computing node 1200 and other devices attached to a network, such as other computer systems 1290 (which may implement one or more server nodes and/or clients of a scalable, distributed data storage system and/or a remote key-value durable storage system), for example. In addition, network interface 1240 may be configured to allow communication between computing node 1200 and various I/O devices 1250 and/or remote storage 1270), or between other nodes in a system providing shared computing services. In general, network interface 1240 may be configured to allow data to be exchanged between computing node 1200 and any of a variety of communication devices, external storage devices, input/output devices and/or other computing devices, in different embodiments. Input/output devices 1250 may, in some embodiments, include one or more display terminals, keyboards, keypads, touchpads, scanning devices, voice or optical recognition devices, or any other devices suitable for entering or retrieving data by one or more computing nodes 1200. Multiple input/output devices 1250 may be present in computing node 1200 or may be distributed on various nodes of a distributed system that includes computing node 1200. In some embodiments, similar input/output devices may be separate from computing node 1200 and may interact with one or more nodes of a distributed system that includes computing node 1200 through a wired or wireless connection, such as over network interface 1240. Network interface 1240 may commonly support one or more wireless networking protocols (e.g., Wi-Fi/IEEE 802.11, or another wireless networking standard). However, in various embodiments, network interface 1240 may support communication via any suitable wired or wireless general data networks, such as other types of Ethernet networks, for example. Additionally, network interface 1240 may support communication via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks, via storage area networks such as Fibre Channel SANs, or via any other suitable type of network and/or protocol. 
In various embodiments, computing node 1200 may include more, fewer, or different components than those illustrated inFIG.12(e.g., displays, video cards, audio cards, peripheral devices, other network interfaces such as an ATM interface, an Ethernet interface, a Frame Relay interface, etc.) Storage service clients (e.g., users, subscribers and/or client applications) may interact with a data storage service such as that described herein in various ways in different embodiments, such as to submit requests for service (including, but not limited to, requests to create and/or partition tables, requests to store, retrieve and/or update items in tables, or requests to split, move, or otherwise repartition a table), and to receive results. For example, some subscribers to the service may have physical access to computing node 1200, and if so, may interact with various input/output devices 1250 to provide and/or receive information. Alternatively, other clients/users may use client computing systems to access the system, such as remotely via network interface 1240 (e.g., via the Internet and/or the World Wide Web). In addition, some or all of the computing nodes of a system providing the service may provide various feedback or other general types of information to clients/users (e.g., in response to user requests) via one or more input/output devices 1250. It is noted that any of the distributed system embodiments described herein, or any of their components, may be implemented as one or more web services. For example, a front-end module or administrative console of a Web services platform may present data storage services and/or database services to clients as web services. In some embodiments, a web service may be implemented by a software and/or hardware system designed to support interoperable machine-to-machine interaction over a network. A web service may have an interface described in a machine-processable format, such as the Web Services Description Language (WSDL). Other systems may interact with the web service in a manner prescribed by the description of the web service's interface. For example, the web service may define various operations that other systems may invoke, and may define a particular application programming interface (API) to which other systems may be expected to conform when requesting the various operations. In various embodiments, a web service may be requested or invoked through the use of a message that includes parameters and/or data associated with the web services request. Such a message may be formatted according to a particular markup language such as Extensible Markup Language (XML), and/or may be encapsulated using a protocol such as Simple Object Access Protocol (SOAP). To perform a web services request, a web services client may assemble a message including the request and convey the message to an addressable endpoint (e.g., a Uniform Resource Locator (URL)) corresponding to the web service, using an Internet-based application layer transfer protocol such as Hypertext Transfer Protocol (HTTP). In some embodiments, web services may be implemented using Representational State Transfer (“RESTful”) techniques rather than message-based techniques. For example, a web service implemented according to a RESTful technique may be invoked through parameters included within an HTTP method such as PUT, GET, or DELETE, rather than encapsulated within a SOAP message. 
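As a simple illustration of the RESTful style mentioned above, the following sketch invokes a hypothetical web service endpoint with an HTTP PUT using only the Python standard library; the URL and payload are placeholders and do not correspond to any service described herein.

    import json
    import urllib.request


    def put_item_restful(endpoint_url, item):
        """Invoke a hypothetical RESTful web service with an HTTP PUT.

        The request parameters are carried in the HTTP method, URL, and JSON body
        rather than being encapsulated within a SOAP message.
        """
        body = json.dumps(item).encode("utf-8")
        request = urllib.request.Request(
            endpoint_url,
            data=body,
            method="PUT",
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return response.status, response.read().decode("utf-8")


    # Example (placeholder URL):
    # status, text = put_item_restful("https://example.com/tables/demo/items/42", {"id": 42})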
Those skilled in the art will appreciate that computing node 1200 is merely illustrative and is not intended to limit the scope of embodiments. In particular, the computing system and devices may include any combination of hardware or software that can perform the indicated functions, including computers, network devices, internet appliances, PDAs, wireless phones, pagers, etc. Computing node 1200 may also be connected to other devices that are not illustrated, in some embodiments. In addition, the functionality provided by the illustrated components may in some embodiments be combined in fewer components or distributed in additional components. Similarly, in some embodiments the functionality of some of the illustrated components may not be provided and/or other additional functionality may be available. Those skilled in the art will also appreciate that, while various items are illustrated as being stored in memory or on storage while being used, these items or portions of them may be transferred between memory and other storage devices for purposes of memory management and data integrity. Alternatively, in other embodiments some or all of the software components may execute in memory on another device and communicate with the illustrated computing system via inter-computer communication. Some or all of the system components or data structures may also be stored (e.g., as instructions or structured data) on a computer-readable storage medium or a portable article to be read by an appropriate drive, various examples of which are described above. In some embodiments, instructions stored on a computer-readable storage medium separate from computing node 1200 may be transmitted to computing node 1200 via transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link. Various embodiments may further include receiving, sending or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-readable storage medium. Accordingly, different embodiments may be practiced with other computer system configurations. Note that while several examples described herein are directed to the application of various techniques in systems that include a non-relational database, in other embodiments these techniques may be applied in systems in which the distributed data store is implemented using a different storage paradigm. The various methods as illustrated in the figures and described herein represent example embodiments of methods. The methods may be implemented manually, in software, in hardware, or in a combination thereof. The order of any method may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. Those skilled in the art will appreciate that in some embodiments the functionality provided by the methods discussed above may be provided in alternative ways, such as being split among more software modules or routines or consolidated into fewer modules or routines. Similarly, in some embodiments illustrated methods may provide more or less functionality than is described, such as when other illustrated methods instead lack or include such functionality respectively, or when the amount of functionality that is provided is altered. 
In addition, while various operations may be illustrated as being performed in a particular manner (e.g., in serial or in parallel) and/or in a particular order, those skilled in the art will appreciate that in other embodiments the operations may be performed in other orders and in other manners. Those skilled in the art will also appreciate that the data structures discussed above may be structured in different manners, such as by having a single data structure split into multiple data structures or by having multiple data structures consolidated into a single data structure. Similarly, in some embodiments illustrated data structures may store more or less information than is described, such as when other illustrated data structures instead lack or include such information respectively, or when the amount or types of information that is stored is altered. The various methods as depicted in the figures and described herein represent illustrative embodiments of methods. The methods may be implemented in software, in hardware, or in a combination thereof in various embodiments. Similarly, the order of any method may be changed, and various elements may be added, reordered, combined, omitted, modified, etc., in various embodiments. From the foregoing it will be appreciated that, although specific embodiments have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the appended claims and the elements recited therein. In addition, while certain aspects are presented below in certain claim forms, the inventors contemplate the various aspects in any available claim form. For example, while only some aspects may currently be recited as being embodied in a computer readable storage medium, other aspects may likewise be so embodied.
DETAILED DESCRIPTION
The description that follows includes systems, methods, techniques, instruction sequences, and computing machine program products that embody illustrative embodiments of the present disclosure. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of example embodiments. It will be evident, however, to one skilled in the art that the present inventive subject matter may be practiced without these specific details. It will be appreciated that some of the examples disclosed herein are described in the context of virtual machines that are backed up by using base snapshots and incremental snapshots, for example. This should not necessarily be regarded as limiting of the disclosures. The systems and methods described herein apply not only to virtual machines of all types that run a file system (for example), but also to NAS devices, physical machines (for example, Linux servers), and databases. In existing data management systems, when migrating data across different storage platforms, users may need to export data manually from one storage platform and import the data to another storage platform. Oftentimes, the source storage platform reads and processes data in a proprietary file format that is not compatible with the target storage platform. This is often largely due to the innate structure of the file itself. In order to correctly and efficiently export or extract data from the source storage platform to the target storage platform, an extra step of conversion of the source file format is required. The overall manual process involved in data migration may include exporting data from a source storage platform, converting the source file format of the data to a file format that is compatible with the target storage platform, and importing the data in the converted file format to the target storage platform. This process is not only time-consuming and prone to errors and delays, but also inefficient, as it may require coordination with different internal and external teams. This data inertia can create challenges for customers, especially where they feel locked into a particular platform. In addition, in order to avoid dealing with data file format conversion to work around compatibility issues between storage platforms, users are limited to a specific set of intercompatible storage services, oftentimes operated by the same storage service provider. As a result, their ability to choose from other storage service providers is constrained. Various embodiments described herein relate to a cross-platform data migration system. The system integrates the processes of data export or extraction from a source storage platform, data file format conversion, and data import to a target storage platform into an automatic and seamless data backup process. During the process of file format conversion, the system converts a proprietary file format on the fly into a platform-neutral file format that can be consumed by a database in another proprietary storage platform. This way, users may choose any storage platform provided by any storage service provider without being locked in to a specific service provider due to file format compatibility issues. In some embodiments, the cross-platform data migration system may be a software-level component of a storage appliance in a data center in a networked computing environment, such as the networked computing environment 100 in FIG. 1.
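The integrated export, convert, and import flow described above can be illustrated with a brief sketch. The adapter classes and method names below (for example, export_snapshot, to_neutral, and import_snapshot) are hypothetical placeholders chosen for this illustration and do not represent an actual product interface; JSON stands in for the platform-neutral format.

    import json


    class SourcePlatformAdapter:
        """Hypothetical adapter for a source storage platform with a proprietary format."""

        def __init__(self, snapshots):
            self.snapshots = snapshots                      # snapshot_id -> proprietary dict

        def export_snapshot(self, snapshot_id):
            return self.snapshots[snapshot_id]              # export/extract step

        def to_neutral(self, proprietary_blob):
            # On-the-fly conversion; plain JSON models the platform-neutral format here.
            return json.dumps({"schema": proprietary_blob.get("schema", {}),
                               "rows": proprietary_blob.get("rows", [])})


    class TargetPlatformAdapter:
        """Hypothetical adapter for a target storage platform."""

        def __init__(self):
            self.imported = []

        def import_snapshot(self, neutral_blob):
            self.imported.append(json.loads(neutral_blob))  # import step
            return len(self.imported)


    def migrate_backup(source, target, snapshot_id):
        """Export from the source, convert on the fly, and import to the target in one step."""
        return target.import_snapshot(source.to_neutral(source.export_snapshot(snapshot_id)))

In this sketch, a user never handles the proprietary file format directly; the conversion happens as part of the backup flow, which is the behavior the integrated system described above is intended to provide.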
In some embodiments, the cross-platform data migration system may be integrated into a data management system for managing virtual machines and data backups in the storage appliance. In some embodiments, the cross-platform data migration system identifies a first file format associated with backup data of a virtual machine at a particular point in time. The virtual machine may reside in the host system (e.g., an on-premises data center), or in a cloud-based storage platform supported by a third-party cloud service provider. The backup data may include one or more snapshots (e.g., base snapshots or incremental snapshots) that are generated based on a frequency provided under a Service-Level Agreement (“SLA agreement”). The SLA agreement ensures service level consistency for one or more clusters of nodes. A snapshot may be captured at any pre-determined time interval, such as every 1 minute, every 1 hour, or every 30 days. The pre-determined time interval depends on the SLA consistency requirement outlined in the SLA agreement. In some embodiments, the cross-platform data migration system identifies a file format (e.g., the first file format) associated with the backup data. A file format is a particular way to encode information for storage in a computer system. File formats may be either proprietary or open to the public. When a file format is proprietary, the backup data generated in such a file format may not be directly consumed by another storage platform of a different service provider. Therefore, a conversion of the proprietary file format associated with the backup data is required before exporting the data onto another storage platform. In some embodiments, the file format associated with the backup data is converted into a platform-neutral file format. In some embodiments, the file format associated with the backup data is converted into a file format that is compatible with a specific target storage platform to which the backup data is exported. In some embodiments, conversion of a file format of backup data may include conversion of the logic aspect of the backup data; in some examples, this is the logic profile of a snapshot. The logic profile of a snapshot may be a subset of the backup data associated with the snapshot. The logic aspect of the backup data may include the associated database schema and the associated database control logic. The database control logic may include database control logic data items associated with data changes in a database, such as database stored procedures and database logic views. Stored procedures are packaged sequences of program instructions that perform specific tasks as individual units. Stored procedures are subroutines available to applications that access a database. Logic views are logical projections of the source data based on a selection of criteria. Each logic view is a logical entity of the source data that can be operated on independently as if it were a physical data table. In some embodiments, the conversion of the file format includes the conversion of each logic view. By integrating the database control logic into the backup data, applications may, once the data is restored, immediately run on the restored database without having to reconfigure the control logic to invoke pre-configured database functions.
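As a concrete, simplified illustration of converting the logic profile of a snapshot, the following sketch serializes a database schema together with its control logic (stored procedures and logic views) into a neutral representation. The data model and function name are assumptions made for this example, and JSON is used only as a stand-in for whatever platform-neutral encoding an embodiment might choose.

    import json
    from dataclasses import dataclass, field


    @dataclass
    class LogicProfile:
        """Illustrative model of the logic aspect of a snapshot's backup data."""
        schema: dict = field(default_factory=dict)              # table name -> column definitions
        stored_procedures: dict = field(default_factory=dict)   # procedure name -> source text
        logic_views: dict = field(default_factory=dict)         # view name -> defining query


    def logic_profile_to_neutral(profile):
        """Serialize the schema and database control logic in a platform-neutral form."""
        return json.dumps({
            "schema": profile.schema,
            "control_logic": {
                "stored_procedures": profile.stored_procedures,
                "logic_views": profile.logic_views,
            },
        }, indent=2)


    # Example: one stored procedure and one logic view carried along with the snapshot data,
    # so that a restored database can invoke them without reconfiguration.
    profile = LogicProfile(
        schema={"orders": {"id": "int", "total": "decimal", "closed": "int"}},
        stored_procedures={"close_order": "UPDATE orders SET closed = 1 WHERE id = :id"},
        logic_views={"open_orders": "SELECT * FROM orders WHERE closed = 0"},
    )
    neutral_blob = logic_profile_to_neutral(profile)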
In some embodiments, the cross-platform data migration system converts the first file format of the backup data in real-time into a platform-neutral file format before storing the converted backup data in the on-premises local storage, such as storage appliance 102 or storage appliance 300 of FIG. 1, as described further below. The venue where the backup data is stored may be based on user requests or based on a pre-configured automatic backup procedure. In some embodiments, the cross-platform data migration system converts the first file format of the backup data in real-time into a platform-neutral file format before storing the converted backup data in a storage platform based on a user request. For example, a user may request the backup data, such as snapshots, to be stored in a storage platform different from the on-premises data center (e.g., the host system). Since the source file format of the backup data is converted into a platform-neutral file format before being stored, a user may select any storage platform without regard to file format compatibility issues between the source and target platforms. In some embodiments, when the selection of potential target storage platforms is known to the host system, or the on-premises data center, the cross-platform data migration system ensures that the converted platform-neutral stored procedures and logic views are compatible with all potential target storage platforms. This way, when the backup data is restored in the target storage platform in the event of unplanned downtime of the data center 104 (e.g., the host system), the restored database may immediately support operations of applications that require database functions that have been pre-configured in the source storage platform. Reference will now be made in detail to embodiments of the present disclosure, examples of which are illustrated in the appended drawings. The present disclosure may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. FIG. 1 depicts one embodiment of a networked computing environment 100 in which the disclosed technology may be practiced. As depicted, the networked computing environment 100 includes a data center 104, a storage appliance 102, and a computing device 106 in communication with each other via one or more networks 128. The networked computing environment 100 may also include a plurality of computing devices interconnected through one or more networks 128. The one or more networks 128 may allow computing devices and/or storage devices to connect to and communicate with other computing devices and/or other storage devices. In some cases, the networked computing environment 100 may include other computing devices and/or other storage devices not shown. The other computing devices may include, for example, a mobile computing device, a non-mobile computing device, a server, a workstation, a laptop computer, a tablet computer, a desktop computer, or an information processing system. The other storage devices may include, for example, a storage area network storage device, a network-attached storage device, a hard disk drive, a solid-state drive, or a data storage system. The data center 104 may include one or more servers, such as server 200, in communication with one or more storage devices, such as storage device 108. The one or more servers may also be in communication with one or more storage appliances, such as storage appliance 102.
The server 200, storage device 108, and storage appliance 300 may be in communication with each other via a networking fabric connecting servers and data storage units within the data center 104 to each other. The storage appliance 300 may include a data management system for backing up virtual machines and files within a virtualized infrastructure. In some embodiments, the storage appliance 300 may include a cross-platform data migration system for managing file format conversion for backup data generated by the data management system. In some embodiments, the cross-platform data migration system is integrated into the data management system to provide functions including backing up virtual machines and files, and converting the file format of the backup data into a platform-neutral format before storing the data either in the on-premises database (e.g., the storage device 108), or in a database of a third-party storage platform (not shown). The server 200 may be used to create and manage one or more virtual machines associated with a virtualized infrastructure. The one or more virtual machines may run various applications, such as a database application or a web server. The storage device 108 may include one or more hardware storage devices for storing data, such as a hard disk drive (HDD), a magnetic tape drive, a solid-state drive (SSD), a storage area network (SAN) storage device, or a Network-Attached Storage (NAS) device. In some cases, a data center, such as data center 104, may include thousands of servers and/or data storage devices in communication with each other. The one or more data storage devices 108 may comprise a tiered data storage infrastructure (or a portion of a tiered data storage infrastructure). The tiered data storage infrastructure may allow for the movement of data across different tiers of a data storage infrastructure between higher-cost, higher-performance storage devices (e.g., solid-state drives and hard disk drives) and relatively lower-cost, lower-performance storage devices (e.g., magnetic tape drives). The one or more networks 128 may include a secure network such as an enterprise private network, an unsecure network such as a wireless open network, a local area network (LAN), a wide area network (WAN), and the Internet. The one or more networks 128 may include a cellular network, a mobile network, a wireless network, or a wired network. Each network of the one or more networks 128 may include hubs, bridges, routers, switches, and wired transmission media such as a direct-wired connection. The one or more networks 128 may include an extranet or other private network for securely sharing information or providing controlled access to applications or files. A server, such as server 200, may allow a client to download information or files (e.g., executable, text, application, audio, image, or video files) from the server 200 or to perform a search query related to particular information stored on the server 200. In some cases, a server may act as an application server or a file server. In general, server 200 may refer to a hardware device that acts as the host in a client-server relationship or a software process that shares a resource with or performs work for one or more clients. One embodiment of server 200 includes a network interface 110, processor 112, memory 114, disk 116, and virtualization manager 118, all in communication with each other. Network interface 110 allows server 200 to connect to one or more networks 128.
Network interface110may include a wireless network interface and/or a wired network interface. Processor112allows server200to execute computer-readable instructions stored in memory114in order to perform processes described herein. Processor112may include one or more processing units, such as one or more CPUs and/or one or more GPUs. Memory114may comprise one or more types of memory (e.g., RAM, SRAM, DRAM, ROM, EEPROM, Flash, etc.). Disk116may include a hard disk drive and/or a solid-state drive. Memory114and disk116may comprise hardware storage devices. The virtualization manager118may manage a virtualized infrastructure and perform management operations associated with the virtualized infrastructure. The virtualization manager118may manage the provisioning of virtual machines running within the virtualized infrastructure and provide an interface to computing devices interacting with the virtualized infrastructure. In one example, the virtualization manager118may set a virtual machine having a virtual disk into a frozen state in response to a snapshot request made via an application programming interface (API) by a storage appliance, such as storage appliance300. Setting the virtual machine into a frozen state may allow a point-in-time snapshot of the virtual machine to be stored or transferred. In one example, updates made to a virtual machine that has been set into a frozen state may be written to a separate file (e.g., an update file) while the virtual disk may be set into a read-only state to prevent modifications to the virtual disk file while the virtual machine is in the frozen state. The virtualization manager118may then transfer backup data associated with the virtual machine to a storage appliance (e.g., a storage appliance102or storage appliance300ofFIG.1, described further below) in response to a request made by a user via the storage appliance. For example, the backup data may include an image of the virtual machine (e.g., base snapshot) or a portion of the image of the virtual disk file (e.g., incremental snapshot) associated with the state of the virtual disk at the point in time when it is frozen. In some embodiments, the cross-platform data migration system may convert the file format associated with the backup data into a platform-neutral file format before storing the backup data in the platform-neutral file format in the storage appliance. In some embodiments, after the data associated with the point in time snapshot of the virtual machine has been transferred to the storage appliance300, the virtual machine may be released from the frozen state (i.e., unfrozen) and the updates made to the virtual machine and stored in the separate file may be merged into the virtual disk file. The virtualization manager118may perform various virtual machine-related tasks, such as cloning virtual machines, creating new virtual machines, monitoring the state of virtual machines, moving virtual machines between physical hosts for load balancing purposes, and facilitating backups of virtual machines. One embodiment of a storage appliance300(or storage appliance102) includes a network interface120, processor122, memory124, and disk126all in communication with each other. Network interface120allows storage appliance300to connect to one or more networks128. Network interface120may include a wireless network interface and/or a wired network interface. 
Processor 122 allows storage appliance 300 to execute computer-readable instructions stored in memory 124 in order to perform processes described herein. Processor 122 may include one or more processing units, such as one or more CPUs and/or one or more GPUs. Memory 124 may comprise one or more types of memory (e.g., RAM, SRAM, DRAM, ROM, EEPROM, NOR Flash, NAND Flash, etc.). Disk 126 may include a hard disk drive and/or a solid-state drive. Memory 124 and disk 126 may comprise hardware storage devices. In some embodiments, the storage appliance 300 may include four machines. Each of the four machines may include a multi-core CPU, 64 GB of RAM, a 400 GB SSD, three 4 TB HDDs, and a network interface controller. In this case, the four machines may be in communication with one or more networks 128 via the four network interface controllers. The four machines may comprise four nodes of a server cluster. The server cluster may comprise a set of physical machines that are connected together via a network. The server cluster may be used for storing data associated with a plurality of virtual machines, such as backup data associated with different point-in-time versions of the virtual machines. The networked computing environment 100 may provide a cloud computing environment for one or more computing devices. Cloud computing may refer to Internet-based computing, wherein shared resources, software, and/or information may be provided to one or more computing devices on-demand via the Internet. The networked computing environment 100 may comprise a cloud computing environment providing Software-as-a-Service (SaaS) or Infrastructure-as-a-Service (IaaS) services. SaaS may refer to a software distribution model in which applications are hosted by a service provider and made available to end-users over the Internet. In some embodiments, the networked computing environment 100 may include a virtualized infrastructure that provides software, data processing, and/or data storage services to end-users accessing the services via the networked computing environment 100. In one example, networked computing environment 100 may provide cloud-based work productivity or business-related applications to a computing device, such as computing device 106. The storage appliance 102 may comprise a cloud-based data management system for backing up virtual machines and/or files within a virtualized infrastructure, such as virtual machines running on server 200 and/or files stored on server 200. In some cases, networked computing environment 100 may provide remote access to secure applications and files stored within data center 104 from a remote computing device, such as computing device 106. The data center 104 may use an access control application to manage remote access to protected resources, such as protected applications, databases, or files located within the data center 104. To facilitate remote access to secure applications and files, a secure network connection may be established using a virtual private network (VPN). A VPN connection may allow a remote computing device, such as computing device 106, to securely access data from a private network (e.g., from a company file server or mail server) using an unsecure public network or the Internet. The VPN connection may require client-side software (e.g., running on the remote computing device) to establish and maintain the VPN connection. The VPN client software may provide data encryption and encapsulation prior to the transmission of secure private network traffic through the Internet.
In some embodiments, the storage appliance300may manage the extraction and storage of virtual machine snapshots associated with different point in time versions of one or more virtual machines running within the data center104. A snapshot of a virtual machine may correspond with a state of the virtual machine at a particular point in time. In response to a restore command from the storage device108, the storage appliance300may restore a point-in-time version of a virtual machine (e.g., base snapshot) or restore point-in-time versions of one or more files located on the virtual machine (e.g., incremental snapshot) and transmit the restored data to the server200. In response to a mount command from the server200, the storage appliance300may allow a point-in-time version of a virtual machine to be mounted and allow the server200to read and/or modify data associated with the point-in-time version of the virtual machine. To improve storage density, the storage appliance300may deduplicate and compress data associated with different versions of a virtual machine and/or deduplicate and compress data associated with different virtual machines. To improve system performance, the storage appliance300may first store virtual machine snapshots received from a virtualized environment in a cache, such as a flash-based cache. The cache may also store popular data or frequently accessed data (e.g., based on a history of virtual machine restorations, incremental files associated with commonly restored virtual machine versions) and current day incremental files or incremental files corresponding with snapshots captured within the past 24 hours. In some embodiments, the cross-platform data migration system in the software layer of the storage appliance300may instruct a third-party storage platform to restore a virtual machine on the third-party platform using the stored backup data that are associated with all of the databases used by the virtual machine in the cluster at the time the backup data was captured. With the database control logic data items encoded in the backup data in the platform-neutral format, a client or a user may immediately run applications on the restored virtual machine on the third-party storage platform to perform functions that had been pre-programmed in the backed-up databases. An incremental file may comprise a forward incremental file or a reverse incremental file. A forward incremental file may include a set of data representing changes that have occurred since an earlier point-in-time snapshot of a virtual machine. To generate a snapshot of the virtual machine corresponding with a forward incremental file, the forward incremental file may be combined with an earlier point-in-time snapshot of the virtual machine (e.g., the forward incremental file may be combined with the last full image of the virtual machine that was captured before the forward incremental file was captured and any other forward incremental files that were captured subsequent to the last full image and prior to the forward incremental file). A reverse incremental file may include a set of data representing changes from a later point-in-time snapshot of a virtual machine. 
To generate a snapshot of the virtual machine corresponding with a reverse incremental file, the reverse incremental file may be combined with a later point-in-time snapshot of the virtual machine (e.g., the reverse incremental file may be combined with the most recent snapshot of the virtual machine and any other reverse incremental files that were captured prior to the most recent snapshot and subsequent to the reverse incremental file). The storage appliance300may provide a user interface (e.g., a web-based interface or a graphical user interface) that displays virtual machine backup information such as identifications of the virtual machines protected and the historical versions or time machine views for each of the virtual machines protected. A time machine view of a virtual machine may include snapshots of the virtual machine over a plurality of points in time. Each snapshot may comprise the state of the virtual machine at a particular point in time. Each snapshot may correspond with a different version of the virtual machine (e.g., Version 1 of a virtual machine may correspond with the state of the virtual machine at a first point in time and Version 2 of the virtual machine may correspond with the state of the virtual machine at a second point in time subsequent to the first point in time). The user interface may enable an end-user of the storage appliance300(e.g., a system administrator or a virtualization administrator) to select a particular version of a virtual machine to be restored or mounted. When a particular version of a virtual machine has been mounted, the particular version may be accessed by a client (e.g., a virtual machine, a physical machine, or a computing device) as if the particular version was local to the client. A mounted version of a virtual machine may correspond with a mount point directory (e.g., /snapshots/VM5/Version23). In one example, the storage appliance300may run an NFS server and make the particular version (or a copy of the particular version) of the virtual machine accessible for reading and/or writing. The end-user of the storage appliance300may then select the particular version to be mounted and run an application (e.g., a data analytics application) using the mounted version of the virtual machine. In another example, the particular version may be mounted as an iSCSI target. FIG.2depicts one embodiment of server200ofFIG.1. The server200may comprise one server out of a plurality of servers that are networked together within a data center (e.g., data center104). In one example, the plurality of servers may be positioned within one or more server racks within the data center. As depicted, the server200includes hardware-level components and software-level components. The hardware-level components include one or more processors202, one or more memory204, and one or more disks206. The software-level components include a hypervisor208, a virtualized infrastructure manager222, and one or more virtual machines, such as virtual machine220. The hypervisor208may comprise a native hypervisor or a hosted hypervisor. The hypervisor208may provide a virtual operating platform for running one or more virtual machines, such as virtual machine220. Virtual machine220includes a plurality of virtual hardware devices including a virtual processor210, a virtual memory212, and a virtual disk214. The virtual disk214may comprise a file stored within the one or more disks206. 
In one example, a virtual machine220may include a plurality of virtual disks214, with each virtual disk of the plurality of virtual disks214associated with a different file stored on the one or more disks206. Virtual machine220may include a guest operating system216that runs one or more applications, such as application218. The virtualized infrastructure manager222, which may correspond with the virtualization manager118inFIG.1, may run on a virtual machine or natively on the server200. The virtual machine may, for example, be or include the virtual machine220or a virtual machine separate from the server200. Other arrangements are possible. The virtualized infrastructure manager222may provide a centralized platform for managing a virtualized infrastructure that includes a plurality of virtual machines. The virtualized infrastructure manager222may manage the provisioning of virtual machines running within the virtualized infrastructure and provide an interface to computing devices interacting with the virtualized infrastructure. The virtualized infrastructure manager222may perform various virtualized infrastructure related tasks, such as cloning virtual machines, creating new virtual machines, monitoring the state of virtual machines, and facilitating backups of virtual machines. In some embodiments, the server200may use the virtualized infrastructure manager222to facilitate backups for a plurality of virtual machines (e.g., eight different virtual machines) running on the server200. Each virtual machine running on the server200may run its own guest operating system and its own set of applications. Each virtual machine running on the server200may store its own set of files using one or more virtual disks associated with the virtual machine (e.g., each virtual machine may include two virtual disks that are used for storing data associated with the virtual machine). In some embodiments, a data management application running on a storage appliance, such as storage appliance102inFIG.1or storage appliance300inFIG.1, may request a snapshot of a virtual machine running on server200. The snapshot of the virtual machine may be stored as one or more files, with each file associated with a virtual disk of the virtual machine. A snapshot of a virtual machine may correspond with a state of the virtual machine at a particular point in time. The particular point in time may be associated with a time stamp. In one example, a first snapshot of a virtual machine may correspond with a first state of the virtual machine (including the state of applications and files stored on the virtual machine) at a first point in time and a second snapshot of the virtual machine may correspond with a second state of the virtual machine at a second point in time subsequent to the first point in time. In response to a request for a snapshot of a virtual machine at a particular point in time, the virtualized infrastructure manager222may set the virtual machine into a frozen state or store a copy of the virtual machine at the particular point in time. The virtualized infrastructure manager222may then transfer data associated with the virtual machine (e.g., an image of the virtual machine or a portion of the image of the virtual machine) to the storage appliance300or storage appliance102. 
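The following is a minimal Python sketch of the freeze, transfer, and release sequence described above. All class and method names (capture_snapshot, read_full_image, read_changed_blocks, and so on) are hypothetical illustrations, not an actual interface of the virtualized infrastructure manager or the storage appliance.

```python
# Hypothetical sketch of the snapshot-request sequence described above:
# freeze the virtual machine, transfer either a full or an incremental
# image to the storage appliance, then release the frozen state.

from dataclasses import dataclass
from datetime import datetime


@dataclass
class Snapshot:
    vm_id: str
    taken_at: datetime
    is_full: bool
    data: bytes


def capture_snapshot(manager, appliance, vm_id: str, last_snapshot_time=None) -> Snapshot:
    """Freeze a VM, copy its full or changed blocks to the appliance, then unfreeze."""
    manager.freeze(vm_id)  # updates now go to a separate update file
    try:
        if last_snapshot_time is None:
            blocks = manager.read_full_image(vm_id)  # first snapshot: full image
            is_full = True
        else:
            blocks = manager.read_changed_blocks(vm_id, since=last_snapshot_time)
            is_full = False
        snapshot = Snapshot(vm_id, datetime.utcnow(), is_full, blocks)
        appliance.store(snapshot)  # e.g., to the storage appliance
        return snapshot
    finally:
        manager.unfreeze(vm_id)  # update file may then be merged into the virtual disk
```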
The data (e.g., backup data) associated with the virtual machine may include a set of files including a virtual disk file storing contents of a virtual disk of the virtual machine at the particular point in time and a virtual machine configuration file (e.g., database schema and database control logic data items) storing configuration settings for the virtual machine at the particular point in time. The contents of the virtual disk file may include the operating system used by the virtual machine, local applications stored on the virtual disk, and user files (e.g., images and word processing documents). In some cases, the virtualized infrastructure manager222may transfer a full image of the virtual machine to the storage appliance102or storage appliance300ofFIG.1or a plurality of data blocks corresponding with the full image (e.g., to enable a full image-level backup of the virtual machine to be stored on the storage appliance). In other cases, the virtualized infrastructure manager222may transfer a portion of an image of the virtual machine associated with data that has changed since an earlier point in time prior to the particular point in time or since a last snapshot of the virtual machine was taken. In one example, the virtualized infrastructure manager222may transfer only data associated with virtual blocks stored on a virtual disk of the virtual machine that have changed since the last snapshot of the virtual machine was taken. In some embodiments, the data management application may specify a first point in time and a second point in time and the virtualized infrastructure manager222may output one or more virtual data blocks associated with the virtual machine that have been modified between the first point in time and the second point in time. In some embodiments, the server200or the hypervisor208may communicate with a storage appliance, such as storage appliance102inFIG.1or storage appliance300inFIG.1, using a distributed file system protocol such as Network File System (NFS) Version 3 or Server Message Block (SMB) protocol. The distributed file system protocol may allow the server200or the hypervisor208to access, read, write, or modify files stored on the storage appliance as if the files were locally stored on the server200. The distributed file system protocol may allow the server200or the hypervisor208to mount a directory or a portion of a file system located within the storage appliance. FIG.3depicts one embodiment of storage appliance300inFIG.1. The storage appliance may include a plurality of physical machines and virtual machines that may act in concert as a single computing system. Each physical machine of the plurality of physical machines may comprise a node in a cluster. In one example, the storage appliance may be positioned within a server rack within a data center. As depicted, the storage appliance300includes hardware-level components and software-level components. The hardware-level components include one or more physical machines, such as physical machine334and physical machine324. The physical machine334includes a network interface316, processor318, memory320, and disk322all in communication with each other. Processor318allows physical machine334to execute computer readable instructions stored in memory320to perform processes described herein. Disk322may include a hard disk drive and/or a solid-state drive. The physical machine324includes a network interface326, processor328, memory330, and disk332all in communication with each other. 
Processor328allows physical machine324to execute computer readable instructions stored in memory330to perform processes described herein. Disk332may include a hard disk drive and/or a solid-state drive. In some cases, disk332may include a flash-based SSD or a hybrid HDD/SSD drive. In some embodiments, the storage appliance300may include a plurality of physical machines arranged in a cluster (e.g., eight machines in a cluster). Each of the plurality of physical machines may include a plurality of multi-core CPUs, 108 GB of RAM, a 500 GB SSD, four 4 TB HDDs, and a network interface controller. In some embodiments, the plurality of physical machines may be used to implement a cluster-based network file server. The cluster-based network file server may neither require nor use a front-end load balancer. One issue with using a front-end load balancer to host the IP address for the cluster-based network file server and to forward requests to the nodes of the cluster-based network file server is that the front-end load balancer comprises a single point of failure for the cluster-based network file server. In some cases, the file system protocol used by a server, such as server200inFIG.1, or a hypervisor, such as hypervisor208inFIG.2, to communicate with the storage appliance300may not provide a failover mechanism (e.g., NFS Version 3). In the case that no failover mechanism is provided on the client side, the hypervisor may not be able to connect to a new node within a cluster in the event that the node connected to the hypervisor fails. In some embodiments, each node in a cluster may be connected to each other via a network and may be associated with one or more IP addresses (e.g., two different IP addresses may be assigned to each node). In one example, each node in the cluster may be assigned a permanent IP address and a floating IP address and may be accessed using either the permanent IP address or the floating IP address. In this case, a hypervisor, such as hypervisor208inFIG.2, may be configured with a first floating IP address associated with a first node in the cluster. The hypervisor may connect to the cluster using the first floating IP address. In one example, the hypervisor may communicate with the cluster using the NFS Version 3 protocol. Each node in the cluster may run a Virtual Router Redundancy Protocol (VRRP) daemon. A daemon may comprise a background process. Each VRRP daemon may include a list of all floating IP addresses available within the cluster. In the event that the first node associated with the first floating IP address fails, one of the VRRP daemons may automatically assume or pick up the first floating IP address if no other VRRP daemon has already assumed the first floating IP address. Therefore, if the first node in the cluster fails or otherwise goes down, then one of the remaining VRRP daemons running on the other nodes in the cluster may assume the first floating IP address that is used by the hypervisor for communicating with the cluster. In order to determine which of the other nodes in the cluster will assume the first floating IP address, a VRRP priority may be established. In one example, given a number (N) of nodes in a cluster from node(0) to node(N−1), for a floating IP address (i), the VRRP priority of node(j) may be (j-i) modulo N. In another example, given a number (N) of nodes in a cluster from node(0) to node(N−1), for a floating IP address (i), the VRRP priority of node(j) may be (i-j) modulo N. 
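For illustration, the following Python snippet computes the VRRP priority described above for each node in a small cluster; the (j-i) modulo N variant is shown, and the (i-j) modulo N variant is analogous. The function name and the example cluster size are chosen only for the sketch.

```python
# Illustrative computation of the VRRP priority described above for a
# cluster of N nodes, node(0) .. node(N-1), and a floating IP address (i).

def vrrp_priority(node_index: int, floating_ip_index: int, cluster_size: int) -> int:
    """Priority of node(j) for floating IP (i): (j - i) modulo N."""
    return (node_index - floating_ip_index) % cluster_size


# Example: priorities of each node in a 4-node cluster for floating IP address 1.
priorities = {j: vrrp_priority(j, 1, 4) for j in range(4)}
# priorities == {0: 3, 1: 0, 2: 1, 3: 2}
# If node 1 (which holds floating IP 1) fails, node 0 has the highest
# priority (3) among the remaining nodes and takes over the address.
```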
In these cases, node(j) will assume floating IP address (i) only if its VRRP priority is higher than that of any other node in the cluster that is alive and announcing itself on the network. Thus, if a node fails, then there may be a clear priority ordering for determining which other node in the cluster will take over the failed node's floating IP address. In some cases, a cluster may include a plurality of nodes and each node of the plurality of nodes may be assigned a different floating IP address. In this case, a first hypervisor may be configured with a first floating IP address associated with a first node in the cluster, a second hypervisor may be configured with a second floating IP address associated with a second node in the cluster, and a third hypervisor may be configured with a third floating IP address associated with a third node in the cluster. As depicted inFIG.3, the software-level components of the storage appliance300may include data management system302, cross-platform data migration system314, a virtualization interface304, a distributed job scheduler308, a distributed metadata store310, a distributed file system312, and one or more virtual machine search indexes, such as virtual machine search index306. In some embodiments, the cross-platform data migration system314may be a software-level component of a storage appliance300in a networked computing environment100. In some embodiments, the cross-platform data migration system may be integrated into a data management system for managing the complete workflow of data migration as explained further inFIG.3. In some embodiments, the software-level components of the storage appliance300may be run using a dedicated hardware-based appliance. In another embodiment, the software-level components of the storage appliance300may be run from the cloud (e.g., the software-level components may be installed on a cloud service provider). In some cases, the data storage across a plurality of nodes in a cluster (e.g., the data storage available from the one or more physical machines (e.g., physical machine334and physical machine324)) may be aggregated and made available over a single file system namespace (e.g., /snapshots/). A directory for each virtual machine protected using the storage appliance300may be created (e.g., the directory for Virtual Machine A may be /snapshots/VM_A). Snapshots and other data associated with a virtual machine may reside within the directory for the virtual machine. In one example, snapshots of a virtual machine may be stored in subdirectories of the directory (e.g., a first snapshot of Virtual Machine A may reside in /snapshots/VM_A/s1/ and a second snapshot of Virtual Machine A may reside in /snapshots/VM_A/s2/). The distributed file system312may present itself as a single file system, in which as new physical machines or nodes are added to the storage appliance300, the cluster may automatically discover the additional nodes and automatically increase the available capacity of the file system for storing files and other data. Each file stored in the distributed file system312may be partitioned into one or more chunks or shards. Each of the one or more chunks may be stored within the distributed file system312as a separate file. The files stored within the distributed file system312may be replicated or mirrored over a plurality of physical machines, thereby creating a load-balanced and fault tolerant distributed file system. 
In one example, storage appliance300may include ten physical machines arranged as a failover cluster and a first file corresponding with a snapshot of a virtual machine (e.g., /snapshots/VM_A/s1/s1.full) may be replicated and stored on three of the ten machines. The distributed metadata store310may include a distributed database management system that provides high availability without a single point of failure. In some embodiments, the distributed metadata store310may comprise a database, such as a distributed document-oriented database. The distributed metadata store310may be used as a distributed key value storage system. In one example, the distributed metadata store310may comprise a distributed NoSQL key value store database. In some cases, the distributed metadata store310may include a partitioned row store, in which rows are organized into tables or other collections of related data held within a structured format within the key value store database. A table (or a set of tables) may be used to store metadata information associated with one or more files stored within the distributed file system312. The metadata information may include the name of a file, a size of the file, file permissions associated with the file, when the file was last modified, and file mapping information associated with an identification of the location of the file stored within a cluster of physical machines. In some embodiments, a new file corresponding with a snapshot of a virtual machine may be stored within the distributed file system312and metadata associated with the new file may be stored within the distributed metadata store310. The distributed metadata store310may also be used to store a backup schedule for the virtual machine and a list of snapshots for the virtual machine that are stored using the storage appliance300. In some cases, the distributed metadata store310may be used to manage one or more versions of a virtual machine. Each version of the virtual machine may correspond with a full image snapshot of the virtual machine stored within the distributed file system312or an incremental snapshot of the virtual machine (e.g., a forward incremental or reverse incremental) stored within the distributed file system312. In some embodiments, the one or more versions of the virtual machine may correspond with a plurality of files. The plurality of files may include a single full image snapshot of the virtual machine and one or more incremental aspects derived from the single full image snapshot. The single full image snapshot of the virtual machine may be stored using a first storage device of a first type (e.g., a HDD) and the one or more incremental aspects derived from the single full image snapshot may be stored using a second storage device of a second type (e.g., an SSD). In this case, only a single full image needs to be stored and each version of the virtual machine may be generated from the single full image or the single full image combined with a subset of the one or more incremental aspects. Furthermore, each version of the virtual machine may be generated by performing a sequential read from the first storage device (e.g., reading a single file from a HDD) to acquire the full image and, in parallel, performing one or more reads from the second storage device (e.g., performing fast random reads from an SSD) to acquire the one or more incremental aspects. 
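As a rough illustration of how a point-in-time version may be generated from the single full image and a chain of forward incremental aspects, consider the following Python sketch. The block-map representation of an image is an assumption made only for the example.

```python
# Hypothetical sketch of materializing a point-in-time version of a virtual
# machine from a single full image snapshot plus a chain of forward
# incremental files. Each image or incremental is modeled as a mapping of
# block offsets to block contents.

from typing import Dict, List

Image = Dict[int, bytes]          # block offset -> block contents
Incremental = Dict[int, bytes]    # block offset -> changed block contents


def materialize_version(full_image: Image, forward_incrementals: List[Incremental]) -> Image:
    """Apply forward incrementals, oldest first, on top of the full image."""
    version = dict(full_image)                # sequential read, e.g., from an HDD
    for incremental in forward_incrementals:  # fast random reads, e.g., from an SSD
        version.update(incremental)           # later changes overwrite earlier blocks
    return version


# Example: a base image plus two forward incrementals.
base = {0: b"AAAA", 1: b"BBBB"}
v3 = materialize_version(base, [{1: b"bbbb"}, {2: b"CCCC"}])
# v3 == {0: b"AAAA", 1: b"bbbb", 2: b"CCCC"}
```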
The distributed job scheduler308may be used for scheduling backup jobs that acquire and store virtual machine snapshots for one or more virtual machines over time. The distributed job scheduler308may follow a backup schedule to back up an entire image of a virtual machine at a particular point in time or one or more virtual disks associated with the virtual machine at the particular point in time. In one example, the backup schedule may specify that the virtual machine be backed up at a snapshot capture frequency, such as every two hours or every 24 hours. Each backup job may be associated with one or more tasks to be performed in a sequence. Each of the one or more tasks associated with a job may be run on a particular node within a cluster. In some cases, the distributed job scheduler308may schedule a specific job to be run on a particular node based on data stored on the particular node. For example, the distributed job scheduler308may schedule a virtual machine snapshot job to be run on a node in a cluster that is used to store snapshots of the virtual machine in order to reduce network congestion. The distributed job scheduler308may comprise a distributed fault tolerant job scheduler, in which jobs affected by node failures are recovered and rescheduled to be run on available nodes. In some embodiments, the distributed job scheduler308may be fully decentralized and implemented without the existence of a master node. The distributed job scheduler308may run job scheduling processes on each node in a cluster or on a plurality of nodes in the cluster. In one example, the distributed job scheduler308may run a first set of job scheduling processes on a first node in the cluster, a second set of job scheduling processes on a second node in the cluster, and a third set of job scheduling processes on a third node in the cluster. The first set of job scheduling processes, the second set of job scheduling processes, and the third set of job scheduling processes may store information regarding jobs, schedules, and the states of jobs using a metadata store, such as distributed metadata store310. In the event that the first node running the first set of job scheduling processes fails (e.g., due to a network failure or a physical machine failure), the states of the jobs managed by the first set of job scheduling processes may fail to be updated within a threshold period of time (e.g., a job may fail to be completed within 30 seconds or within minutes from being started). In response to detecting jobs that have failed to be updated within the threshold period of time, the distributed job scheduler308may undo and restart the failed jobs on available nodes within the cluster. The job scheduling processes running on at least a plurality of nodes in a cluster (e.g., on each available node in the cluster) may manage the scheduling and execution of a plurality of jobs. The job scheduling processes may include run processes for running jobs, cleanup processes for cleaning up failed tasks, and rollback processes for rolling-back or undoing any actions or tasks performed by failed jobs. In some embodiments, the job scheduling processes may detect that a particular task for a particular job has failed and in response may perform a cleanup process to clean up or remove the effects of the particular task and then perform a rollback process that processes one or more completed tasks for the particular job in reverse order to undo the effects of the one or more completed tasks. 
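The following Python sketch illustrates one possible shape of this cleanup-and-rollback behavior, in which completed tasks are undone in reverse order after a failure. It is an assumption made for illustration rather than the scheduler's actual interface.

```python
# Illustrative sketch of running a series of tasks where, if any task fails,
# the completed tasks are rolled back in reverse order, as described above.

from typing import Callable, List, Tuple

Task = Tuple[Callable[[], None], Callable[[], None]]  # (run, undo) pair


def run_job(tasks: List[Task]) -> bool:
    """Run tasks in order; on failure, undo completed tasks most recent first."""
    completed: List[Task] = []
    for run, undo in tasks:
        try:
            run()
            completed.append((run, undo))
        except Exception:
            # Roll back every completed task in reverse order so the system
            # returns to a state as if none of the tasks had been performed.
            for _, completed_undo in reversed(completed):
                completed_undo()
            return False
    return True
```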
Once the particular job with the failed task has been undone, the job scheduling processes may restart the particular job on an available node in the cluster. The distributed job scheduler308may manage a job in which a series of tasks associated with the job are to be performed atomically (i.e., partial execution of the series of tasks is not permitted). If the series of tasks cannot be completely executed or there is any failure that occurs to one of the series of tasks during execution (e.g., a hard disk associated with a physical machine fails or a network connection to the physical machine fails), then the state of a data management system may be returned to a state as if none of the series of tasks was ever performed. The series of tasks may correspond with an ordering of tasks for the series of tasks and the distributed job scheduler308may ensure that each task of the series of tasks is executed based on the ordering of tasks. Tasks that do not have dependencies with each other may be executed in parallel. In some cases, the distributed job scheduler308may schedule each task of a series of tasks to be performed on a specific node in a cluster. In other cases, the distributed job scheduler308may schedule a first task of the series of tasks to be performed on a first node in a cluster and a second task of the series of tasks to be performed on a second node in the cluster. In these cases, the first task may have to operate on a first set of data (e.g., a first file stored in a file system) stored on the first node and the second task may have to operate on a second set of data (e.g., metadata related to the first file that is stored in a database) stored on the second node. In some embodiments, one or more tasks associated with a job may have an affinity to a specific node in a cluster. In one example, if the one or more tasks require access to a database that has been replicated on three nodes in a cluster, then the one or more tasks may be executed on one of the three nodes. In another example, if the one or more tasks require access to multiple chunks of data associated with a virtual disk that has been replicated over four nodes in a cluster, then the one or more tasks may be executed on one of the four nodes. Thus, the distributed job scheduler308may assign one or more tasks associated with a job to be executed on a particular node in a cluster based on the location of data required to be accessed by the one or more tasks. In some embodiments, the distributed job scheduler308may manage a first job associated with capturing and storing a snapshot of a virtual machine periodically (e.g., every 30 minutes). The first job may include one or more tasks, such as communicating with a virtualized infrastructure manager, such as the virtualized infrastructure manager222inFIG.2, to create a frozen copy of the virtual machine and to transfer one or more chunks (or one or more files) associated with the frozen copy to a storage appliance, such as storage appliance300inFIG.1. The one or more tasks may also include generating metadata for the one or more chunks, storing the metadata using the distributed metadata store310, storing the one or more chunks within the distributed file system312, and communicating with the virtualized infrastructure manager222that the frozen copy of the virtual machine may be unfrozen or released from a frozen state. 
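For illustration only, the first job described above could be expressed as an ordered series of (run, undo) task pairs of the kind sketched earlier; every name below (create_frozen_copy, read_chunks, and so on) is hypothetical and does not correspond to an actual interface.

```python
# Hypothetical composition of the periodic snapshot job described above as an
# ordered series of tasks, each paired with an undo action: freeze a copy of
# the VM, transfer its chunks, generate and store metadata, store the chunks,
# and finally release the frozen copy.

def build_snapshot_job(manager, metadata_store, file_system, vm_id: str):
    state = {}

    def freeze(): state["copy"] = manager.create_frozen_copy(vm_id)
    def unfreeze(): manager.release_frozen_copy(vm_id)

    def transfer(): state["chunks"] = state["copy"].read_chunks()
    def drop_chunks(): state.pop("chunks", None)

    def store_metadata(): metadata_store.put(vm_id, [c.metadata() for c in state["chunks"]])
    def delete_metadata(): metadata_store.delete(vm_id)

    def store_chunks(): file_system.write(vm_id, state["chunks"])
    def delete_chunks(): file_system.delete(vm_id)

    # (run, undo) pairs in execution order; releasing the frozen copy runs last.
    return [
        (freeze, unfreeze),
        (transfer, drop_chunks),
        (store_metadata, delete_metadata),
        (store_chunks, delete_chunks),
        (unfreeze, lambda: None),
    ]
```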
The metadata for a first chunk of the one or more chunks may include information specifying a version of the virtual machine associated with the frozen copy, a time associated with the version (e.g., the snapshot of the virtual machine was taken at 5:30 p.m. on Jun. 29, 2018), and a file path to where the first chunk is stored within the distributed file system312(e.g., the first chunk is located at /snapshots/VM_B/s1/s1.chunk1). The one or more tasks may also include deduplication, compression (e.g., using a lossless data compression algorithm such as LZ4 or LZ77), decompression, encryption (e.g., using a symmetric key algorithm such as Triple DES or AES-256), and decryption related tasks. The virtualization interface304may provide an interface for communicating with a virtualized infrastructure manager managing a virtualization infrastructure, such as virtualized infrastructure manager222inFIG.2, and requesting data associated with virtual machine snapshots from the virtualization infrastructure. The virtualization interface304may communicate with the virtualized infrastructure manager using an Application Programming Interface (API) for accessing the virtualized infrastructure manager (e.g., to communicate a request for a snapshot of a virtual machine). In this case, storage appliance300may request and receive data from a virtualized infrastructure without requiring agent software to be installed or running on virtual machines within the virtualized infrastructure. The virtualization interface304may request data associated with virtual blocks stored on a virtual disk of the virtual machine that have changed since a last snapshot of the virtual machine was taken or since a specified prior point in time. Therefore, in some cases, if a snapshot of a virtual machine is the first snapshot taken of the virtual machine, then a full image of the virtual machine may be transferred to the storage appliance. However, if the snapshot of the virtual machine is not the first snapshot taken of the virtual machine, then only the data blocks of the virtual machine that have changed since a prior snapshot was taken may be transferred to the storage appliance. The virtual machine search index306may include a list of files that have been stored using a virtual machine and a version history for each of the files in the list. Each version of a file may be mapped to the earliest point-in-time snapshot of the virtual machine that includes the version of the file or to a snapshot of the virtual machine that includes the version of the file (e.g., the latest point-in-time snapshot of the virtual machine that includes the version of the file). In one example, the virtual machine search index306may be used to identify a version of the virtual machine that includes a particular version of a file (e.g., a particular version of a database, a spreadsheet, or a word processing document). In some cases, each of the virtual machines that are backed up or protected using storage appliance300may have a corresponding virtual machine search index. In some embodiments, as each snapshot of a virtual machine is ingested, each virtual disk associated with the virtual machine is parsed in order to identify a file system type associated with the virtual disk and to extract metadata (e.g., file system metadata) for each file stored on the virtual disk. The metadata may include information for locating and retrieving each file from the virtual disk. 
The metadata may also include a name of a file, the size of the file, the last time at which the file was modified, and a content checksum for the file. Each file that has been added, deleted, or modified since a previous snapshot was captured may be determined using the metadata (e.g., by comparing the time at which a file was last modified with a time associated with the previous snapshot). Thus, for every file that has existed within any of the snapshots of the virtual machine, a virtual machine search index may be used to identify when the file was first created (e.g., corresponding with a first version of the file) and at what times the file was modified (e.g., corresponding with subsequent versions of the file). Each version of the file may be mapped to a particular version of the virtual machine that stores that version of the file. In some cases, if a virtual machine includes a plurality of virtual disks, then a virtual machine search index may be generated for each virtual disk of the plurality of virtual disks. For example, a first virtual machine search index may catalog and map files located on a first virtual disk of the plurality of virtual disks and a second virtual machine search index may catalog and map files located on a second virtual disk of the plurality of virtual disks. In this case, a global file catalog or a global virtual machine search index for the virtual machine may include the first virtual machine search index and the second virtual machine search index. A global file catalog may be stored for each virtual machine backed up by a storage appliance within a file system, such as distributed file system312inFIG.3. The data management system302may comprise an application running on the storage appliance300that manages and stores one or more snapshots of a virtual machine. In one example, the data management system302may comprise a highest-level layer in an integrated software stack running on the storage appliance. The integrated software stack may include the data management system302, the virtualization interface304, the distributed job scheduler308, the distributed metadata store310, and the distributed file system312. In some cases, the integrated software stack may run on other computing devices, such as a server or computing device106inFIG.1. The data management system302may use the virtualization interface304, the distributed job scheduler308, the distributed metadata store310, and the distributed file system312to manage and store one or more snapshots of a virtual machine. Each snapshot of the virtual machine may correspond with a point-in-time version of the virtual machine. The data management system302may generate and manage a list of versions for the virtual machine. Each version of the virtual machine may map to or reference one or more chunks and/or one or more files stored within the distributed file system312. Combined together, the one or more chunks and/or the one or more files stored within the distributed file system312may comprise a full image of the version of the virtual machine. FIG.4shows an example cluster400of a distributed decentralized database, according to some example embodiments. As illustrated, the example cluster400includes five nodes, nodes 1-5. In some example embodiments, each of the five nodes runs from different machines, such as physical machine334inFIG.3or virtual machine220inFIG.2. 
The nodes in the example cluster400can include instances of peer nodes of a distributed database (e.g., cluster-based database, distributed decentralized database management system, a NoSQL database, Apache Cassandra, DataStax, MongoDB, CouchDB), according to some example embodiments. The distributed database system is distributed in that data is sharded or distributed across the example cluster400in shards or chunks and decentralized in that there is no central storage device and no single point of failure. The system operates under an assumption that multiple nodes may go down, up, or become non-responsive. In some example embodiments, data written to one of the nodes is replicated to one or more other nodes per a replication protocol of the example cluster400. For example, data written to node 1 can be replicated to nodes 2 and 3. If node 1 prematurely terminates, node 2 and/or 3 can be used to provide the replicated data. In some example embodiments, each node of example cluster400frequently exchanges state information about itself and other nodes across the example cluster400using gossip protocol. Gossip protocol is a peer-to-peer communication protocol in which each node randomly shares (e.g., communicates, requests, transmits) location and state information about the other nodes in a given cluster. Writing: For a given node, a sequentially written commit log captures the write activity to ensure data durability. The data is then written to an in-memory structure (e.g., a memtable, write-back cache). Each time the in-memory structure is full, the data is written to disk in a Sorted String Table data file. In some example embodiments, writes are automatically partitioned and replicated throughout the example cluster400. Reading: Any node of example cluster400can receive a read request (e.g., query) from an external client. If the node that receives the read request manages the data requested, the node provides the requested data. If the node does not manage the data, the node determines which node manages the requested data. The node that received the read request then acts as a proxy between the requesting entity and the node that manages the data (e.g., the node that manages the data sends the data to the proxy node, which then provides the data to an external entity that generated the request). The distributed decentralized database system is decentralized in that there is no single point of failure due to the nodes being symmetrical and seamlessly replaceable. For example, whereas conventional distributed data implementations have nodes with different functions (e.g., master/slave nodes, asymmetrical database nodes, federated databases), the nodes of example cluster400are configured to function the same way (e.g., as symmetrical peer database nodes that communicate via gossip protocol, such as Cassandra nodes) with no single point of failure. If one of the nodes in example cluster400terminates prematurely (“goes down”), another node can rapidly take the place of the terminated node without disrupting service. The example cluster400can be a container for a keyspace, which is a container for data in the distributed decentralized database system (e.g., whereas a database is a container for containers in conventional relational databases, the Cassandra keyspace is a container for a Cassandra database system). FIG.5depicts a flowchart illustrating example data migration operations in a method, according to some embodiments. 
The operations of process500may be performed by any number of different systems, such as the cross-platform data migration system314or the data management system302as described herein, or any portion thereof, such as a processor included in any of the systems. At operation502, the cross-platform data migration system314may request backup data of a virtual machine running on server200. Specifically, system314may manage the extraction and storage of virtual machine backup data (e.g., snapshots) associated with different point in time versions of one or more virtual machines running within the data center104. A snapshot of a virtual machine may correspond with a state of the virtual machine at a particular point in time, and may be stored as one or more files, with each file associated with a virtual disk of the virtual machine. In some embodiments, the backup data may include physical data (e.g., data content) of the virtual disk and the virtual machine configuration file. The virtual machine configuration file stores configuration settings for the virtual machine at a particular point in time. The contents of the virtual disk file may include the operating system used by the virtual machine, local applications stored on the virtual disk, and user files, such as images and word processing documents. The configuration file, on the other hand, may include a logic profile of the backup data, including the associated database schema and the associated database control logic data items, such as database stored procedures and database logic views. The logic profile of the backup data is a subset of the backup data associated with the snapshot. At operation504, system314identifies the first file format associated with the backup data. A file format is a particular way to encode information for storage in a computer system. In some embodiments, a conversion of the first file format may include conversion of data content of the backup data associated with the virtual machine at a particular point in time, and may also include conversion of the logic profile of the backup data, including the associated database schema and the associated database control logic data items. At operation506, system314converts the first file format associated with the backup data into a platform-neutral file format that is compatible with any storage platform that is known to the system. The conversion includes conversion of data content of the backup data, and conversion of the logic profile of the backup data. The conversion of the logic profile of the backup data at least includes conversion of database schema, database stored procedures, and database logic views. At operation508, system314may store the converted backup data in the platform-neutral file format in a storage platform. In some embodiments, the storage platform is the on-premises local storage, such as storage device108or storage appliance300. The venue where the backup data is stored may be determined based on user requests or based on a pre-configured backup procedure. In some embodiments, a client or a user may request the backup data, such as snapshots, to be stored in a storage platform different from the on-premises local storage. Since the backup data source file format is converted into a platform-neutral file format on the fly before being stored, a user may choose to store the converted backup data in any storage platform without concern for file format compatibility issues between the on-premises and target platforms. 
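A minimal Python sketch of operations502through508, assuming hypothetical source, converter, and target objects, may look as follows; the names are illustrative only and do not correspond to an actual interface.

```python
# Illustrative sketch of operations 502-508 described above: request the
# backup data, identify its source file format, convert the data content and
# the logic profile into a platform-neutral format, then store the result.

from dataclasses import dataclass


@dataclass
class BackupData:
    file_format: str
    disk_content: bytes
    logic_profile: dict  # database schema, stored procedures, logic views


def migrate_snapshot(source, converter, target, vm_id: str, point_in_time) -> BackupData:
    backup = source.request_backup(vm_id, point_in_time)           # operation 502
    source_format = backup.file_format                              # operation 504
    neutral = BackupData(                                            # operation 506
        file_format="platform-neutral",
        disk_content=converter.convert_content(backup.disk_content, source_format),
        logic_profile=converter.convert_logic(backup.logic_profile, source_format),
    )
    target.store(neutral)                                            # operation 508
    return neutral
```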
In some embodiments, when the selection of potential target storage platforms is known to the host system, system314ensures the platform-neutral file format is compatible with all potential target storage platforms for selection. This way, when the backup data is restored in the target storage platform in the event of unplanned downtime (e.g., data failure, data corruption, ransomware attack, etc.) of the on-premises data center104, the restored virtual machines may immediately support operations of applications that require database functions that have been pre-configured in the on-premises data center. FIG.6AandFIG.6Bdepict flowcharts illustrating example file format conversion operations in a method, according to some embodiments. The operations of processes600may be performed by any number of different systems, such as the cross-platform data migration system314or the data management system302as described herein, or any portion thereof, such as a processor included in any of the systems. In some embodiments, conversion of a file format of backup data may include the conversion of the logic aspect of the backup data. In some examples, the logic aspect of the backup data is the logic profile of a snapshot, including a subset of the backup data at a particular point in time. The logic aspect of the backup data may include the associated database schema and the associated database control logic. The database control logic may include database control logic data items associated with data changes in a database, such as database stored procedures and database logic views. Stored procedures are packaged sequences of program instructions that perform specific tasks as individual units. Stored procedures are subroutines available to applications that access a database. Logic views are logical projections of the source data based on a selection of criteria. Each logic view is a logical entity of the source data that can be operated independently as if it was a physical data table. In some embodiments, the conversion of the file format includes the conversion of each logic view. By integrating the database control logic into the backup data, once restored, applications may immediately run on the restored database without having to reconfigure the control logic to invoke pre-configured database functions. As illustrated inFIG.6A, the method begins with operation602. The system314identifies a database schema (e.g., the first database schema) associated with the backup data in the first file format. The first database schema associated with the backup data may be included in the logic profile of the snapshot, in conjunction with the associated database control logic data items, such as database stored procedures and database logic views. A database schema is a set of integrity constraints imposed on a particular database to define the structure of data. Conversion of the database schema associated with the backup data is required when the backup data is to be restored in another platform, so that the restored system may be able to handle Unicode strings, for example. At operation604, the system314converts the first database schema by generating a platform-neutral database schema based on the first database schema. At operation606, the system314associates (or encodes) the generated platform-neutral database schema with the backup data in the platform-neutral file format before storing the converted backup data to a target storage platform. 
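The following Python sketch illustrates operations602through606. The mapping of vendor-specific column types to platform-neutral types is an assumption made for the example; the actual conversion rules are implementation specific.

```python
# Illustrative sketch of operations 602-606: identify the first database
# schema from the logic profile, generate a platform-neutral schema, and
# associate the converted schema with the backup data.

# Hypothetical mapping of vendor-specific column types to neutral types.
NEUTRAL_TYPES = {
    "NVARCHAR2": "unicode_string",
    "VARCHAR": "string",
    "NUMBER": "decimal",
    "DATETIME2": "timestamp",
}


def convert_schema(first_schema: dict) -> dict:
    """Operation 604: generate a platform-neutral schema from the first schema."""
    return {
        table: {column: NEUTRAL_TYPES.get(col_type, "string")
                for column, col_type in columns.items()}
        for table, columns in first_schema.items()
    }


def encode_platform_neutral_schema(backup_data: dict) -> dict:
    """Operations 602 and 606: read the first schema from the logic profile and
    associate the converted, platform-neutral schema with the backup data."""
    first_schema = backup_data["logic_profile"]["schema"]                  # operation 602
    backup_data["logic_profile"]["schema"] = convert_schema(first_schema)  # operations 604, 606
    return backup_data
```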
In some embodiments, when the selection of potential target storage platforms for storing backup data is known to the host system, system314may ensure the converted platform-neutral stored procedures and logic views are compatible with all potential target storage platforms. This way, when the backup data is restored in the target storage platform in the event of unplanned downtime of the data center104(e.g., the host system), the restored database may immediately support operations of applications that require database functions that have been pre-configured in the source storage platform. As illustrated inFIG.6B, the method begins with operation608. The system314identifies a first database control logic data item associated with a database of the virtual machine at the first point in time. At operation610, system314generates a platform-neutral database control logic data item based on the first database control logic data item. At operation612, system314associates the platform-neutral database control logic data item with the backup data in the platform-neutral file format. In some embodiments, the first database control logic data item may be a database stored procedure or a database logic view associated with the virtual machine at the first point in time. The platform-neutral database control logic data item may be a converted platform-neutral database stored procedure or a converted platform-neutral database logic view associated with the virtual machine at the first point in time. In some embodiments, the conversion of the first file format may include conversion of the data content of physical data of the virtual disk associated with the snapshot in the backup data. In some embodiments, the first storage platform is the host system, such as the data center104. In other embodiments, the first storage platform is a cloud-based storage platform hosting a cloud-based database. In some embodiments, system314receives a client request or a user request from a computing device. The client request or the user request includes instructions for exporting the backup data in the platform-neutral format onto a cloud-based storage platform. Based on the user request, system314exports the backup data in the platform-neutral file format to the cloud-based storage platform. In some embodiments, the cloud-based storage platform corresponds to a cloud service provider (e.g., a third-party cloud service provider) different from a storage service provider of the host system, such as the data center104. In some embodiments, the backup data includes a snapshot of the virtual machine captured at the first point in time based on a pre-determined service-level requirement, such as a service-level agreement (SLA). FIG.7is a block diagram700illustrating an architecture of software702, which can be installed on any one or more of the devices described above.FIG.7is merely a non-limiting example of a software architecture, and it will be appreciated that many other architectures can be implemented to facilitate the functionality described herein. In various embodiments, the software702is implemented by hardware such as a machine800ofFIG.8that includes processor(s)746, memory748, and I/O components750. In this example architecture, the software702can be conceptualized as a stack of layers where each layer may provide a particular functionality. For example, the software702includes layers such as an operating system704, libraries706, frameworks708, and applications710. 
Operationally, the applications710invoke API calls712(application programming interface) through the software stack and receive messages714in response to the API calls712, consistent with some embodiments. In various implementations, the operating system704manages hardware resources and provides common services. The operating system704includes, for example, a kernel716, services718, and drivers720. The kernel716acts as an abstraction layer between the hardware and the other software layers, consistent with some embodiments. For example, the kernel716provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functionality. The services718can provide other common services for the other software layers. The drivers720are responsible for controlling or interfacing with the underlying hardware, according to some embodiments. For instance, the drivers720can include display drivers, camera drivers, BLUETOOTH® or BLUETOOTH® Low Energy drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), WI-FI® drivers, audio drivers, power management drivers, and so forth. In some embodiments, the libraries706provide a low-level common infrastructure utilized by the applications710. The libraries706can include system libraries722(e.g., C standard library) that can provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like. In addition, the libraries706can include API libraries724such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render two-dimensional (2D) and three-dimensional (3D) graphic content on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries706can also include a wide variety of other libraries726to provide many other APIs to the applications710. The frameworks708provide a high-level common infrastructure that can be utilized by the applications710, according to some embodiments. For example, the frameworks708provide various graphical user interface (GUI) functions, high-level resource management, high-level location services, and so forth. The frameworks708can provide a broad spectrum of other APIs that can be utilized by the applications710, some of which may be specific to a particular operating system or platform. In an example embodiment, the applications710include built-in applications728and a broad assortment of other applications, such as a third-party application744. The built-in applications728may include a home application, a contacts application, a browser application, a book reader application, a location application, a media application, a messaging application, and a game application. According to some embodiments, the applications710are programs that execute functions defined in the programs. 
Various programming languages can be employed to create one or more of the applications710, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). In a specific example, the third-party application744(e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party application744can invoke the API calls712provided by the operating system704to facilitate functionality described herein. FIG.8illustrates a diagrammatic representation of a machine800in the form of a computer system within which a set of instructions may be executed for causing the machine to perform any one or more of the methodologies discussed herein, according to an example embodiment. Specifically,FIG.8shows a diagrammatic representation of the machine800in the example form of a computer system, within which instructions806(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine800to perform any one or more of the methodologies discussed herein may be executed. Additionally, or alternatively, the instructions806may implement the operations of the method shown inFIG.5, or as elsewhere described herein. The instructions806transform the general, non-programmed machine800into a particular machine800programmed to carry out the described and illustrated functions in the manner described. In alternative embodiments, the machine800operates as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine800may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine800may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a PDA, an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions806, sequentially or otherwise, that specify actions to be taken by the machine800. Further, while only a single machine800is illustrated, the term “machine” shall also be taken to include a collection of machines800that individually or jointly execute the instructions806to perform any one or more of the methodologies discussed herein. The machine800may include processor(s)746, memory748, and I/O components750, which may be configured to communicate with each other such as via a bus802. In an example embodiment, the processor(s)746(e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an ASIC, a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor804and a processor808that may execute the instructions806. 
The term “processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. AlthoughFIG.8shows multiple processor(s)746, the machine800may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiples cores, or any combination thereof. The memory748may include a main memory810, a static memory812, and a storage unit814, each accessible to the processor(s)746such as via the bus802. The main memory810, the static memory812, and storage unit814store the instructions806embodying any one or more of the methodologies or functions described herein. The instructions806may also reside, completely or partially, within the main memory810, within the static memory812, within the storage unit814, within at least one of the processor(s)746(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine800. The I/O components750may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components750that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components750may include many other components that are not shown in FIG.8. The I/O components750are grouped according to functionality merely for simplifying the following discussion and the grouping is in no way limiting. In various example embodiments, the I/O components750may include output components818and input components820. The output components818may include visual components (e.g., a display such as a plasma display, panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The input components820may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. In further example embodiments, the I/O components750may include biometric components822, motion components824, environmental components826, or position components828, among a wide array of other components. 
For example, the biometric components822may include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. The motion components824may include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. The environmental components826may include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components828may include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication may be implemented using a wide variety of technologies. The I/O components750may include communication components830operable to couple the machine800to a network836or devices832via a coupling838and a coupling834, respectively. For example, the communication components830may include a network interface component or another suitable device to interface with the network836. In further examples, the communication components830may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices832may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB). Moreover, the communication components830may detect identifiers or include components operable to detect identifiers. For example, the communication components830may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components830, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth. 
The various memories (i.e., memory748, main memory810, and/or static memory812) and/or storage unit814may store one or more sets of instructions and data structures (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions806), when executed by processor(s)746, cause various operations to implement the disclosed embodiments. As used herein, the terms “machine-storage medium,” “device-storage medium,” “computer-storage medium” mean the same thing and may be used interchangeably in this disclosure. The terms refer to a single or multiple storage devices and/or media (e.g., a centralized or distributed database, and/or associated caches and servers) that store executable instructions and/or data. The terms shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors. Specific examples of machine-storage media, computer-storage media and/or device-storage media include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), FPGA, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium” discussed below. In various example embodiments, one or more portions of the network836may be an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, the Internet, a portion of the Internet, a portion of the PSTN, a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, the network836or a portion of the network836may include a wireless or cellular network, and the coupling838may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. In this example, the coupling838may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long range protocols, or other data transfer technology. The instructions806may be transmitted or received over the network836using a transmission medium via a network interface device (e.g., a network interface component included in the communication components830) and utilizing any one of a number of well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). 
Similarly, the instructions806may be transmitted or received using a transmission medium via the coupling834(e.g., a peer-to-peer coupling) to the devices832. The terms “non-transitory computer-readable storage medium,” “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure. The terms “transmission medium” and “signal medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying the instructions806for execution by the machine800, and includes digital or analog communications signals or other intangible media to facilitate communication of such software. Hence, the terms “transmission medium” and “signal medium” shall be taken to include any form of modulated data signal, carrier wave, and so forth. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a matter as to encode information in the signal. The terms “machine-readable medium,” “computer-readable medium” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure. The terms are defined to include both machine-storage media and transmission media. Thus, the terms include both storage devices/media and carrier waves/modulated data signals. Although examples have been described with reference to specific example embodiments or methods, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader scope of the embodiments. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. The accompanying drawings that form a part hereof, show by way of illustration, and not of limitation, specific embodiments in which the subject matter may be practiced. The embodiments illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other embodiments may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. This detailed description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled. Such embodiments of the inventive subject matter may be referred to herein, individually and/or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is in fact disclosed. Thus, although specific embodiments have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description.
DETAILED DESCRIPTION Described herein are systems and methods for maintaining operational compatibility when a database instance is restored. Operational compatibility refers to the ability to continue the operation of a database, and its client applications, even when the database and the environment in which it operates have undergone configuration changes. For example, an application that depends on a particular database setting being enabled may not function properly if, after being restored, the database no longer has that particular setting enabled. Database management systems may use failover techniques to improve reliability and availability. Typically, such systems have a standby database node configured as a mirror or replica of a primary database node. The primary database node may, for example, transmit a record of each transaction it has performed, or is about to perform, to the standby node. The standby node may then process the transaction in the same manner as the primary node. In the event that the primary node fails, or is taken offline for maintenance, the secondary node thereby has an up-to-date copy of the data and can take over the role of the primary node. However, there are drawbacks to this approach: operating the standby node consumes power and computing capacity, and there may be licensing costs associated with the operation of the standby node. These technical and monetary costs may be incurred even though the standby node may be used only infrequently. Another issue is that the operating environment in which the standby database executes may not be fully compatible with the operating environment of the primary database, jeopardizing operational compatibility. A distributed system may provide a point-in-time restoration capability without instantiating a standby node. Technical and monetary costs associated with the standby node are therefore avoided. The example distributed system may, for example, begin metering capacity utilized by a restored instance when such an instance is restored. Prior to that point, the technical and monetary costs associated with providing the restoration capability may be less than would be incurred by operating a standby database. The operational compatibility safeguards described herein may further enhance the capability to restore the database and continue operations. Provision of the restoration capability may comprise replication between operating environments. The replication involves data that represents the transactions processed by the primary database instance, as well as configuration data. The transaction data can include snapshots, baselines, or image files of a collection of data maintained by the primary database instance, as well as data pertaining to individual transactions. For example, for a given object, a complete set of transaction data might include a snapshot of the object and a record of any changes, additions, or deletions to the object that occurred subsequent to the snapshot. A transaction may refer to an operation on the data, such as an insert, update, or delete operation. A transaction may also refer, in some cases, to sets or collections of such operations. The replicated configuration data can include aspects of the configuration of the operating environment on which the primary database instance is dependent. These aspects include, but are not limited to, configuration settings for the database itself, such as schema of the database, user-defined functions, credentials, and so forth. 
The replicated configuration data may also include configuration settings that are relevant to clients of the primary database instance. For example, client applications may rely on the database being localized to a particular jurisdiction, or may depend on the database conforming to a particular version of a schema. The replicated configuration data can also include aspects of the operating environment such as network settings, user accounts, virtual machine configurations, and so on. Replicating the transaction data and configuration data between operating environments facilitates the provision of a point-in-time restoration capability. An operating environment refers to a combination of components and devices, connected by a network, which collectively perform a computing function. Operating environments may be isolated from each other in a manner that reduces the risk that a failure in one operating environment would be repeated in another. For example, a power failure in one operating environment is not likely to affect another operating environment in a distant geographic location. This example should not, however, be viewed as limiting the scope of the present disclosure to only those embodiments in which operating environments are located at different geographic locations. Replication of the transaction and configuration data may be accomplished via the use of a distributed storage service, which may also be referred to herein as a storage service. A distributed storage service comprises computing devices and components that, collectively, provide scalable storage infrastructure. A distributed storage service may further provide replication features suitable to replicate transaction and configuration data between locations accessible to a source operating environment, in which a primary database instance executes, and a target operating environment, in which the primary database instance may, upon request, be restored. In an example, operational compatibility safeguards comprise systems and procedures for linking configuration information associated with accounts and operating environments associated with those accounts. For example, a distributed system may receive a request to enable a point-in-time restoration capability for a database instance. In response, the distributed system may monitor, by a control plane, a configuration change to the operating environment in which the database instance executes, which may be referred to as the source operating environment. The distributed system may store a record of the monitored change, and replicate the record to a target operating environment. The distributed system may then, based on the record, apply the configuration change to the target operating environment. The distributed system may apply further configuration changes so that the target operating environment is made to have configuration settings that correspond to those of the source operating environment. The database instance can then be restored to the target operating environment. By applying configuration changes up to a designated point in time, the database instance and its operating environment can be restored to their state as of the designated point in time. In the preceding and following description, various techniques are described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of possible ways of implementing the techniques. 
However, it will also be apparent that the techniques described below may be practiced in different configurations without the specific details. Furthermore, well-known features may be omitted or simplified to avoid obscuring the techniques being described. As one skilled in the art will appreciate in light of this disclosure, certain embodiments may be capable of achieving certain advantages, including some or all of the following. In some embodiments, the computing capacity consumed to provide a point-in-time restore capability is reduced. In some embodiments, the technical and monetary costs of maintaining an operational standby database are reduced or eliminated. In some embodiments, greater flexibility regarding the location of a restored database is provided. Moreover, as described herein, some or all of these advantages may be achieved in combination with improved capability for maintaining operational compatibility in the event that a backup database instance is instantiated. FIG.1illustrates a distributed system100supporting point-in-time restoration with operational compatibility safeguards, in accordance with an embodiment. In the example distributed system100, a database instance110is operative in a first operating environment102a, and can be restored by the distributed system100in a second operating environment102b. The first operating environment102ais associated with a first user account120a, and the second operating environment102bis associated with a second user account120b. A user account may refer to a security principal used to authenticate access to a computing function of the distributed system100. A user account may be associated with a set of credentials, such as a user name and password combination. A user account, which may also be referred to as an account, may be associated with the operation of a database instance. Examples of such association include, but are not necessarily limited to, ownership of a collection of data maintained by the database instance, association with a security principal under which one or more threads of the database instance execute, association with administrative rights for the database, and so on. User accounts may also be associated with operating environments. Examples of such association include administrative and access rights. In an embodiment, a user account may be used to perform a variety of operations related to the configuration and functioning of an operating environment, such as the depicted operating environment102ain which the database instance110operates. The operations that may be performed on behalf of the first user account120ainclude creating the database instance110, accessing the database instance110, configuring network and security settings of the operating environment102a, and so forth. These operations may be performed by and logged by the control plane106b. Records describing configuration changes can be stored as configuration data112aon the storage service116a, or on a locally or remotely managed storage device. Records of configuration changes made to the operating environment102amay be considered to be associated with and/or owned by the corresponding user account120a, and any linked user accounts, such as the depicted second user account120b. Configuration continuity involves the maintenance and replication of this data, such that the configuration data is available for use when restoring a database instance or the operating environment in which the database instance is to execute. 
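As an illustration of the record-keeping described above, the following Python sketch shows configuration-change records being captured once the point-in-time restore capability is enabled, tagged with the owning account and any linked accounts, and queued for replication. All of the names (ConfigChangeRecord, SourceControlPlane, the account identifiers) are assumptions made for the example rather than part of the disclosed system.

```python
import time
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ConfigChangeRecord:
    timestamp: float
    setting: str
    value: object
    owner_account: str
    linked_accounts: frozenset = frozenset()

    def accessible_to(self, account_id):
        # A record is owned by the account that made the change and by any linked accounts.
        return account_id == self.owner_account or account_id in self.linked_accounts

@dataclass
class SourceControlPlane:
    owner_account: str
    linked_accounts: frozenset
    restore_enabled: bool = False
    change_log: list = field(default_factory=list)         # configuration data kept locally
    replication_queue: list = field(default_factory=list)  # records to copy to the target environment

    def enable_point_in_time_restore(self):
        self.restore_enabled = True

    def apply_change(self, setting, value):
        # The change would also be applied to the source environment here.
        if self.restore_enabled:
            record = ConfigChangeRecord(time.time(), setting, value,
                                        self.owner_account, self.linked_accounts)
            self.change_log.append(record)
            self.replication_queue.append(record)

# The linked second account can use the replicated record when restoring.
cp = SourceControlPlane("account-120a", frozenset({"account-120b"}))
cp.enable_point_in_time_restore()
cp.apply_change("network.vpn_gateway", "10.0.0.1")
assert cp.replication_queue[0].accessible_to("account-120b")
```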
In the example ofFIG.1, a second user account120bis associated with the operating environment102bin which the creation of a backup database instance may be requested. In this example, the operating environment is instantiated prior to the backup database instance118. The second user account120bcan be used, prior to creating the backup database instance118, to perform various operations in the second operating environment102b. In cases and embodiments, configuration changes made to the first operating environment102acan be applied to the second operating environment102bwithin a threshold period of time. In other cases and embodiments, the records of the changes, as stored in the configuration data112b, can be applied to the second operating environment102bat a later time, such as on-demand or when the backup database instance118is instantiated. In some embodiments, however, the second operating environment102bis not created until needed for instantiating the backup database instance118. In these circumstances, the second user account120bis still created, but is not yet explicitly linked to the second operating environment102b. The configuration changes can be applied to the second operating environment102bwhen the operating environment102bis instantiated, or at a later time, such when the backup database instance118is instantiated. An operating environment refers to a combination of components and devices, connected via a network, which collectively perform a computing function. An operating environment may be said to be instantiated when a combination of such components and devices has been configured to perform the function. Note that in various cases and embodiments, a given combination of components and devices may be configured to host a number of operating environments. As depicted inFIG.1, operating environments102a,102bmay be isolated from each other. For example, the components and devices in operating environment102amay be in geographic proximity with each other, for example at the same data center. The components and devices in operating environment102bmay be in geographic proximity with each other, e.g., at the same data center, but geographically remote from the other operating environment102a. The components within an operating environment102amay be connected by a communications network that is relatively high-speed compared to the network122that connects the two depicted operating environments102a,102b. Further aspects of an operating environment are depicted inFIG.7. The operating environments102a,bmay, in some cases and embodiments, correspond to operating regions of the distributed system100. For example, the components and devices of a first operating environment102amay be located in a first geographic region, and the components and devices of a second operating environment102bmay be located in a second geographic region. The operating environments may be connected by a network122, which may include any of various communications networks, including but not limited to the Internet, wide-area networks, mesh networks, fiber optic networks, wireless networks, satellite-based networks, powerline networks, and so on, individually and in various combinations. In the example distributed system100, control planes106a,bperform operations to coordinate the activities and operation of components within their respective operating environments102a,b. 
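Whether a replicated configuration record is applied to the second operating environment promptly or held until that environment (or the backup instance) is instantiated, as discussed above, might be decided along the following lines. This is a sketch only; the function name, the standby flag, and the record format are assumptions.

```python
def apply_or_defer(record, environment_instantiated, standby_mode, pending, apply_fn):
    """
    Apply a replicated configuration record to the second operating environment promptly
    when that environment exists and is kept in a standby mode (so changes land within a
    threshold period of time); otherwise hold the record until the environment or the
    backup database instance is instantiated.
    """
    if environment_instantiated and standby_mode:
        apply_fn(record)
        return "applied"
    pending.append(record)        # applied later, on demand or at instantiation
    return "deferred"

pending = []
record = {"setting": "network.route", "value": "10.0.0.0/16"}
print(apply_or_defer(record, environment_instantiated=False, standby_mode=False,
                     pending=pending, apply_fn=print))   # -> deferred
```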
In an embodiment, each of the control planes106a,bcomprises a module installed on an application server, such as the control plane and application server depicted byFIG.6. A module, as used herein, refers to processor-executable instructions stored in a non-transitory memory of a computing device. The instructions, when executed by at least one processor of the computing device, cause the computing device to perform at least the described operations of the module. Examples of operations performed by a control plane106a,binclude configuring networks within an operating environment102a,b, allocating and configuring hardware devices, including computing devices, allocating and configuring virtual machines, installing software on the virtual machines, and so forth. Further operations of the control plane106a,bcan include, in various embodiments, enforcing compliance with access policies related to ensuring operational continuity in the event that a user account is compromised. For example, the control planes106a,bcan ensure that configuration changes related to operational continuity, including restoration capabilities, are not terminated without mutual authorization from the primary and secondary accounts. A control plane106ain the first operating environment102acontrols and monitors execution of a database instance110. The database instance110is in the same operating environment102aas the control plane106a. The controlling and monitoring may comprise monitoring and recording information about the configuration of the database instance110. This can include information about the storage devices used by the database, database account information, and so on. It can also include preventing configuration changes to the database, where those changes might interfere with the replication of transaction data, or otherwise interfere with the ability to perform a restoration of the database instance. These types of changes can be prevented by the control plane106a, except when the control plane is able to obtain authorization for the change from both of the first and second user accounts120a,b. The control plane106amay also control and monitor the operating environment102afor changes to aspects of the configuration of the operating environment. The control plane106amay monitor and record information concerning aspects of the operating environment102aon which the database instance110depends. This may include information on which replication depends. Examples of configuration information which might be monitored and recorded include, but are not limited to, routing tables, domain name service entries, virtual private network settings, encryption keys, and so on. The control plane106amay also log changes to the operating environment102a, including changes to any processes, modules, or subsystems which may be hosted in the operating environment102a, including the database instance110. The control plane106amay also, in some embodiments, send data indicative of configuration changes to other operating environments, such as the depicted operating environment102b. In the example illustrated byFIG.1, the control plane106areceives a request to enable a point-in-time restoration capability for the database instance110. A point-in-time restoration capability refers to an ability to restore a database instance such that the data managed by the restored database instance is up-to-date as of an indicated time. 
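The mutual-authorization safeguard mentioned above, in which the control plane refuses restore-affecting changes unless both linked accounts approve, could be expressed as a simple guard such as the following. The account identifiers and setting names are hypothetical.

```python
def can_apply_change(setting, authorizing_accounts, protected_settings,
                     required_accounts=frozenset({"account-120a", "account-120b"})):
    """Allow changes to restore-critical settings only with approval from both linked accounts."""
    if setting not in protected_settings:
        return True          # ordinary settings are not subject to the safeguard
    return required_accounts.issubset(authorizing_accounts)

protected = {"replication.enabled", "transaction_log.export"}
assert not can_apply_change("replication.enabled", {"account-120a"}, protected)
assert can_apply_change("replication.enabled", {"account-120a", "account-120b"}, protected)
assert can_apply_change("display.theme", set(), protected)
```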
For example, if the database instance110were to crash, a client might request that the database instance110be restored with data that was current as of the time of the crash. Having a point-in-time restoration capability does not necessarily require that all data from the initial database instance110be available. However, the distributed system100may ensure that a point-in-time restoration will likely be able to restore all relevant data within a threshold range of a requested time, so that the restored database instance can effectively act as a replica or replacement of the original database instance. In the example distributed system100, point-in-time capability is provided without a live backup instance. The backup database instance118may therefore remain uninstantiated until a point-in-time restoration is requested. This approach reduces the costs of providing the point-in-time restoration capability, in both technical and monetary aspects. For example, prior to a restoration, no processors need to be allocated for executing a backup database instance, or processing transactions. Database licensing costs may also be reduced, since in distributed system100the backup database is not instantiated unless a restoration is requested. Instantiation refers to the process of creating the database instance. An instantiated database instance is therefore a database instance that has been created or made operational. The control plane106aresponds to the request to enable point-in-time restore capabilities by causing configuration data112aand transaction data114ato be written to a storage location. In the example ofFIG.1, the control plane106acauses the configuration data112aand transaction data114ato be written to the storage service116a. The configuration data112acomprises data pertaining to or describing the operating environment102aand the database instance110, particularly data relating to configuration of the operating environment102a, include the configuration of the database instance110. For example, the configuration data112acan include details about the configuration of the operating environment102aon which the database instance110depends. Examples of configuration data include, but are not limited to, virtual machine images, database snapshots, user credentials, account credentials, digital certificates, network settings, virtual private network (“VPN”) settings, operating system patches, registry entries, and so on. Further examples include storage device settings, storage service properties, database schemas, and so on. In general, the configuration data112aincludes any information that is to be restored in the event that a backup database instance118were to be instantiated. This may include whatever settings might be desired or needed to ensure that the backup database instance118can operate effectively as a replacement for the primary database instance110. The transaction data114acomprises records of transactions performed by the database instance110. The transaction data114amay include a transaction log. A transaction log is a record of the operations performed by a database. Typically, a transaction log is ordered by the time at which the operations were performed. A transaction log can be replayed to a database in order to repeat operations that were recorded in the transaction log but never committed to the data managed by a database instance, such as the depicted database instance110. The transaction data114acan also include a snapshot of the database. 
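A data-level sketch of the relationship just described: a baseline snapshot plus the transactions logged after it can be replayed to rebuild the state of the collection as of a requested time. The record format (timestamp, operation, key, value) is an assumption for illustration.

```python
def restore_state(snapshot, transaction_log, point_in_time):
    """Rebuild the data state as of point_in_time from a snapshot and a time-ordered log."""
    state = dict(snapshot)
    for timestamp, op, key, value in transaction_log:
        if timestamp > point_in_time:
            break                          # transactions after the requested time are not replayed
        if op == "delete":
            state.pop(key, None)
        else:                              # "insert" or "update"
            state[key] = value
    return state

# Replaying only the first two logged operations restores the state as of t=15.
log = [(10, "insert", "a", 1), (12, "update", "a", 2), (20, "delete", "a", None)]
assert restore_state({}, log, point_in_time=15) == {"a": 2}
```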
The control plane106acauses at least a subset of data from the transaction log of the database instance110to be written to the storage service116a. This transaction data114acan include records of transactions subsequent to a snapshot of the database instance110. Although not explicitly illustrated inFIG.1, the control plane106amay also cause a snapshot of the database instance110to be written to the storage service. Here, the snapshot refers to a baseline version of the collection of data that is managed by the database instance110. Together, the transaction data114aand the snapshot can be used, as described herein, to reconstruct the data state of the database instance110at a requested point in time. In some instances, the database instance110writes data to a storage device that is not replicated. In such instances, the control plane106areads the transaction data114afrom this storage device and sends it to the storage service116a. This process of replication is ongoing once started in response to the request to enable a point-in-time restore capability. In an embodiment, the storage service116ais independent of the operation of the database instance110, and possibly independent of the operating environment102a. For example, the storage service116amight, instead of being part of the operating environment102a, be an Internet-based storage service that is accessible to the control plane106ain the operating environment102a. In an embodiment, the storage service116acomprises scalable infrastructure for data storage. For example, a storage service116acan comprise a control plane that is similar though not necessarily identical to the control plane106a, a plurality of application servers such as the application server depicted inFIG.6, and a plurality of storage devices. The storage service116ais accessible to other components and devices within the operating environment102a, and is also connected via the network122to a comparable storage service116bin another operating environment102b. The data stored by the storage service, e.g., the configuration data112aand transaction data114a, is therefore accessible to both the control plane106ain the first operating environment102aand the control plane106bin the second operating environment102b. Data maintained in the storage service116aof the first operating environment102ais replicated to the storage service116bin the second operating environment102b. In some embodiments, the replication is performed by the storage services116a,b, using replication modules operative as components of the storage services116a,bwithin each respective operating environment102. The replication modules, which may be operative on an application server such as those depicted inFIG.6, communicate with each other via the network122. The replication module in the first operating environment102areads data from storage devices of the storage service116aand transmit the data to the replication module in the other operating environment102b. There, the replication module in the second operating environment102bstores the data on storage on devices of the storage service116bin the second operating environment102b. Although not explicitly depicted inFIG.1, some embodiments may utilize replication modules similar to those just described, but which operate independently of the respective storage services116in the operating environments102a,b. 
In such embodiments, a replication module in the first operating environment102atransmits data from the storage service116ain the first operating environment to the replication module in the second operating environment102b. The replication module in the second operating environment102bthen causes the received data to be stored by the storage service116bin the second operating environment. Although not explicitly depicted inFIG.1, in some cases a plurality of storage devices may be used in place of the respective storage services116in each of the operating environments102. In such cases, a control plane may coordinate utilization of the storage devices, and a replication module in the first operating environment102atransmits data to a replication module in the other operating environment102b. The replicated data can comprise the configuration data112band transaction data114b. A snapshot of the database can also be replicated to the second operating environment102b. Regardless of whether the replication is done by the storage services116a,bor done independently, these components may be replicated independently of the operation of the database instance110. It may, however, be advisable to tune the speed of replication in accordance with target times for restoration. Lag in the replication process may delay instantiation and restoration of a backup database instance, or limit how up-to-date the point-in-time restoration may be. A control plane106bin the second operating environment may receive a request to restore the database instance110. In general terms, the restoration process involves the various steps and/or operations needed to instantiate a backup database instance118, and to restore the data maintained by the primary database instance110. As described above, the backup database instance118is not instantiated until such a request is received. As such, it may be the case that the only technical costs associated with the provision of the point-in-time restore capability are those costs associated with storage and data replication. Here, costs refer primarily to the consumption of computing or other technical resources. The control plane106bmay receive the request to restore the database instance110from a control plane106ain the other operating environment102a, or from a message triggered by a user interface or application programming interface (“API”). Although not explicitly depicted inFIG.1, a user interface or API may be provided to allow for the submission of a request to enable point-in-time restore capabilities, and to allow for the submission of a request to initiate a restoration process. The user interface may communicate these instructions to control plane106aor control plane106b. Note that the capabilities and functioning of the user interface, particularly with respect to requesting that a database instance be restored, should be available to a client even in cases where the first operating environment102ais unavailable. Thus, in at least some embodiments, the control plane106bcan receive a request to restore a database instance even when the first operating environment102ais unavailable. 
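The replication path described above, in which a module in the first operating environment reads objects written to storage service 116a and a peer stores them in storage service 116b, can be sketched as a single pass over in-memory stand-ins. An actual module would repeat this pass continuously, and any lag in this loop bounds how up-to-date a later restoration can be.

```python
def replicate_once(source_store, target_store, already_replicated):
    """One pass of a replication module: copy objects not yet present in the target."""
    for key in set(source_store) - already_replicated:
        target_store[key] = source_store[key]   # in practice, transmitted over network 122
        already_replicated.add(key)
    return already_replicated

# In-memory dictionaries stand in for the storage services in each operating environment.
source = {"txn-0001": b"insert ...", "config-0001": b"routing table ..."}
target, seen = {}, set()
replicate_once(source, target, seen)            # an ongoing process would call this repeatedly
assert target == source
```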
In response to the request to restore the database instance110, the control plane106bin the second operating environment performs operations to configure the operating environment102bto support the backup database instance118, installing and activating the backup database instance118on a virtual machine, configuring the backup database instance118, obtaining database snapshots (if available), and replaying the transaction log data114bto restore the data state of the primary database instance110to the backup database instance118. Note that although the term restore is used with respect to the process for instantiating the database instance110, there may be cases where the database instance110remains available and/or operative after the backup database instance is instantiated. Thus, the term restoration does not necessarily imply that the primary database instance110has failed, been interrupted, or has ceased to exist. Instead, the term restore refers to creating a copy of the primary instance, with data that is current up to an indicated point in time, regardless of the current state of the primary database instance110. As used herein, a database instance refers to a module for performing the functions of a database and a collection of data on which the module performs those functions. In various embodiments, the collection of data is owned either by a single entity, or by a single tenant of a multi-tenant database system. A multi-tenant database system is one in which data owned by a number of entities is managed by a single database, or by a plurality of databases which are collectively managed and owned by a plurality of entities. In the case of a multi-tenant database, a database instance generally refers to the module and the collection of data owned by a particular tenant, or to a set of tenants for whom a point-in-time restore capability is being enabled. FIG.2illustrates aspects of configuration synchronization, in accordance with an embodiment. In the example ofFIG.2, configuration changes214are routed through a control plane206and applied to an operating environment202. In embodiments, the control plane206generates a configuration log216, an image218, and other forms of configuration data212. The configuration log216refers to records or other data indicative of changes made to the operating environment202. For example, in an embodiment the configuration log216comprises a file containing ordered records, each of which describes a change made to the operating environment202. In embodiments, the control plane206records entries in the change log in response to requests to change the configuration of the operating environment202. For example, the control plane206may receive a request to change the operating environment202, record the entry, and complete the request by changing the operating environment202in accordance with the request. The log may further contain an indication of whether or not the change was successful. In some embodiments, changes which were not successful in operating environment202are not subsequently applied to other operating environments. This is to maintain consistency between the existing operating environments and any new operating environments, such as the depicted uninstantiated operating environment204. The image218refers to binary data that is representative of state information. For example, with respect to virtual machines, the image218can correspond to state data for the virtual machine. 
In various embodiments, image data for a virtual machine can be used to capture the state of the virtual machine, and to subsequently resume execution of the virtual machine. These functions may be performed by making use of the features of a hypervisor. In embodiments, the image218comprises data representing a virtual machine state. For example, the control plane206may store an image of a virtual machine on which a database instance operates. For example, with respect toFIG.1, the database instance110may operate on a virtual machine, such as those depicted inFIG.7. The control plane206may cause the image to be generated in various ways, such as using programmatic interfaces provided by a hypervisor of the virtual machine. In embodiments, the control plane206causes the image218to be generated in response to a change to the configuration of the virtual machine, such as the installation of an application. For example, the control plane206may generate the image after database software is installed on the virtual machine, or after client applications are installed and configured. In embodiments, the control plane206causes the image218to be generated on a periodic basis, in order to capture configuration changes that the control plane206might not be aware of, or is unable to accurately or efficiently record in a log of configuration changes. For example, certain changes might be made without involvement of the control plane206. In embodiments, the control plane206causes the image218to be generated upon request. Images may be recorded in response to configuration changes that are not routed through the control plane206, configuration changes that are unable to be accurately recorded in a log of configuration changes, or for configuration changes that are efficiently applied via imaging. An administrator of the operating environment202might request that the control plane generate the image after making such a change. In the example ofFIG.2, a second control plane208is not yet instantiated during the operation of the first control plane206. During this period, configuration changes are applied to the first control plane206. At some later point, such as when a database instance is to be restored, the second control plane208may be instantiated. In embodiments, this process involves the initialization of a control plane208within the new operating environment204. The control plane208then directs the further configuration of the new operating environment. The control plane208may, for example, cause the installation of operating systems, hypervisors, virtual machine images, executable programs, and so forth. In embodiments, the configuration data212is replicated and made available to the control plane208. After initializing the new operating environment204to a baseline state, the control plane208applies the configuration data212to the new operating environment204. The application of the configuration data may, in embodiments, proceed in accordance with the following procedure. The control plane may first apply the most recently captured images for virtual machines. Configuration changes which occurred prior to the generation of these images may be discarded, in some cases, when those changes were applied to the configuration of the virtual machine and are therefore already reflected in the image. Next, any snapshots, baselines, or other data may be stored on the virtual machines. Then, the configuration changes may be applied in the order in which they are found in the log. 
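The ordering just described, in which the most recently captured image is applied first, followed by snapshots or baselines, followed by the remaining configuration-log entries in logged order, can be sketched as a small planning function. The data shapes are assumptions for illustration, and the returned steps would be executed by a control plane.

```python
def plan_configuration_restore(images, baselines, config_log):
    """
    Order replicated configuration data for application to a new operating environment.
    images: list of (timestamp, image_id); config_log: list of (timestamp, entry).
    Returns a list of (action, payload) steps.
    """
    steps = []
    image_time = float("-inf")
    if images:
        image_time, image_id = max(images)              # most recently captured image
        steps.append(("restore_image", image_id))
    steps.extend(("store_baseline", b) for b in baselines)
    for ts, entry in sorted(config_log):
        if ts > image_time:                             # earlier entries are already in the image
            steps.append(("apply_change", entry))
    return steps

# A change logged before the image was captured is discarded; later changes are kept in order.
print(plan_configuration_restore([(5, "img-1")], ["db-snapshot"],
                                 [(3, "old setting"), (7, "new setting")]))
```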
Note that although these operations have been described as occurring in a particular order, this order should not be construed as limiting the scope of the present disclosure to only those embodiments that perform the operations in the provided order. Except where logically required, the provided operations may be altered, reordered, omitted, or performed in parallel. FIG.3is a flow diagram illustrating aspects of a distributed system performing configuration synchronization. AlthoughFIG.3is depicted as a sequence of steps, the depicted sequence should not be construed as limiting the scope of the present disclosure to only those embodiments conforming to the depicted order. For example, unless otherwise indicated or clear from context (e.g., when the output of one step is used as input into another), at least some of the depicted steps may be reordered or performed in parallel. The example process300may be implemented by a distributed system. In an embodiment, a control plane performs one or more of the depicted operations. Examples of a control plane that may implement the depicted operations are provided at least inFIGS.1and7. At302, the distributed system enables a point-in-time restoration capability. Aspects of enabling a point-in-time restoration are provided byFIG.5. At304, the distributed system obtains information indicative of classes of configuration changes. This information can comprise data or code usable to classify a configuration change. For example, a configuration change can be classified as being related to the operation of the database instance, related to the operation of a client application, and so forth. An aspect of the classification can include whether or not a configuration change should be replicated to the target environment in the event that a database instance is to be restored to the target environment. Another aspect of the classification can include how the configuration should be persisted. In some cases and embodiments, the distributed system obtains information indicating classes of configuration changes that may be applied to a target environment in different ways. The information may further comprise information indicating how and when a corresponding category of configuration change should be applied. In an embodiment, the distributed system can be provided with a metadata file that maps from configuration settings applicable to a source operating environment to configuration settings applicable to a target operating environment. For example, the metadata file might comprise information that indicates how physical memory and processing capacity might be allocated to virtual machines in the source and target operating environments. In this way, the system can adapt the configuration settings to the capabilities of virtual or physical devices, such as virtual machines or application servers, in the target environment. Given that each operating environment might have different amounts of memory or capacity available, these factors might need to be adjusted when restoring a database instance to the target operating environment. The metadata can describe permissible ranges for these adaptations. In some cases and embodiments, the distributed system can be provided with a script or executable code which describes procedures for adapting configuration changes to the target operating environment. 
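One way such a metadata file might be used is sketched below: a requested allocation from the source environment is clamped to the target device's capacity and to the permissible range declared for that setting. The metadata structure and names are assumptions, not a format defined by the disclosure.

```python
def adapt_setting(name, source_value, target_capacity, metadata):
    """Adapt a numeric source-environment setting to the target environment."""
    rule = metadata.get(name)
    if rule is None:
        return source_value                           # no adaptation required for this setting
    # Never ask for more than the target device offers, and stay in the permissible range.
    value = min(source_value, target_capacity.get(name, source_value))
    return max(rule["min"], min(rule["max"], value))

# A 96 GB allocation in the source is adapted to a 64 GB host in the target environment.
metadata = {"vm.memory_gb": {"min": 8, "max": 64}}
print(adapt_setting("vm.memory_gb", 96, {"vm.memory_gb": 64}, metadata))   # 64
```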
In an embodiment, a control plane in an operating environment, such as the first operating environment102adepicted inFIG.1, obtains the information indicative of the classification. The control plane may, for example, have installed on it executable code and metadata comprising the information indicative of the classifications. At306, the distributed system obtains information indicative of a particular change to the configuration of the operating environment. This refers to a change to the operating environment that has been requested, or to a change that has been performed. At308, the distributed system selects a mode of persisting the configuration change. Persisting refers to storing information describing or representing the configuration change. Selecting the mode of persisting refers to determining a format or procedure for storing or representing the configuration change. In an example, the control plane stores information indicative of the configuration change in a log of configuration changes. The log format may be suited for configuration changes that involve altering settings, adding registry entries, executing simple commands, and so forth. These examples should not, however, be construed as limiting. In an example, the control plane stores information indicative of the configuration change in an image. The control plane may, for example, cause a hypervisor of a virtual machine to generate and store an image of the virtual machine. The image format may be suited to capture configuration changes that are relatively complex, such as the myriad configuration changes that may result from running an installation program. The image format may also be suited to capture the state of a virtual machine in a baseline state. At310, the distributed system persists the configuration change in accordance with the selected mode. In an embodiment, the control plane causes the configuration data, such as configuration logs and image data, to be written to a storage device or a storage service. In some instances, the data can then be replicated or otherwise made available to other operating environments. Persisting the configuration change can comprise storing information indicative of the order in which the configuration change should be applied. For example, the distributed system might store information indicating that a virtual machine image should be applied prior to the changes represented in a configuration log. The entries in a configuration log might be stored in the order in which the changes should be performed, or might contain some other information indicative of the order in which the changes should be performed. In some instances, the distributed system may store information indicating the relative order of applying different units of configuration data, such as information specifying the relative order of applying configuration log files and image files. In an embodiment, the order of application is determined based at least in part on the classification of the configuration change. As noted, the classification may be based on code or metadata. The order of application may, in some cases and embodiments, involve parallel application of the changes. Whether or not parallelism is used may be determined, in some embodiments, based on the classification, and on any dependencies. At312, the distributed system applies the configuration change to a second operating environment, in accordance with the selected mode. 
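The selection between the two persistence modes described above might look like the following sketch, in which setting-style changes are appended to the configuration log while complex changes are captured as a virtual machine image. Here, take_vm_image stands in for a hypervisor interface and the classification labels are assumptions.

```python
import time

def persist_change(change, classification, config_log, image_store, take_vm_image):
    """Persist a configuration change in the mode selected for its class."""
    record = {"timestamp": time.time(), "class": classification, "change": change}
    if classification in ("setting", "registry", "command"):
        config_log.append(record)              # log entries preserve the order of application
    else:
        record["image_id"] = take_vm_image()   # e.g., after an installation program has run
        image_store.append(record)
    return record

config_log, image_store = [], []
persist_change({"setting": "db.timezone", "value": "UTC"}, "setting",
               config_log, image_store, take_vm_image=lambda: "img-0001")
persist_change({"installed": "database-engine"}, "installation",
               config_log, image_store, take_vm_image=lambda: "img-0002")
print(len(config_log), len(image_store))       # 1 1
```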
In cases and embodiments, the configuration changes are applied by a control plane in the target operating environment. For configuration changes represented by entries in a configuration log, the control plane performs a configuration action corresponding to each entry. For example, if entry comprises a command to change a registry setting, the control plane causes that command to be executed on the computing device or virtual machine whose registry is to be affected. Various management interfaces may be used to execute the commands. As noted above, the configuration changes may be adjusted in order to better suit the target operating environment. In an embodiment, the distributed system applies metadata to identify configuration changes to adapt to the target operating environment. As explained in more detail above, this may involve applying information that maps between aspects of the configuration of the source environment to aspects of configuration of the target environment, and functions, procedures, or transforms for adapting the configuration settings to the target environment. FIG.4is a flow diagram illustrating an example process for synchronizing configuration between operating environments, in accordance with an embodiment. AlthoughFIG.4is depicted as a sequence of steps, the depicted sequence should not be construed as limiting the scope of the present disclosure to only those embodiments conforming to the depicted order. For example, unless otherwise indicated or clear from context (e.g., when the output of one step is used as input into another), the at least some of the depicted steps may be reordered or performed in parallel. The example process400may be implemented by a distributed system. In an embodiment, a control plane performs one or more of the depicted operations. Examples of a control plane that may implement the depicted operations are provided at least inFIGS.1and7. At402, the distributed system monitors changes to the configuration of a first operating environment. The control plane of an operating environment, in embodiments, may receive or otherwise obtain requests to perform configuration changes to the operating environment. The control plane can monitor configuration changes by examining these requests. Configuration changes for the first operating environment can also be routed through the control plane, even though they might be handled elsewhere. For example, a request to perform a command to add a table to a database instance might be sent to the control plane. The control plane can then forward the request to the database instance. At404, the distributed system determines that a first database instance is dependent on a change to the configuration of the first operating environment. If performed, this operation pertains to determining whether or not a record of a configuration change should be stored, or to determining whether or not the configuration change should be applied to an operating environment in which a database instance is to be restored. In an embodiment, the determination to store a record of the configuration change is based at least partly on metadata that comprises information indicative of configuration settings on which the first databases depends. At406, the distributed system stores a record of the change to the configuration. In general, the distributed system stores information sufficient to allow the configuration change to be subsequently reapplied in a new operating environment. 
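Performing the configuration action that corresponds to a single log entry, as described above for the target operating environment, can be sketched as a dispatch on the kind of change the entry records. The handlers here are placeholders for the management interfaces that would execute the commands; the entry format is an assumption.

```python
def apply_log_entry(entry, handlers):
    """Perform the configuration action corresponding to one configuration-log entry."""
    kind = entry["kind"]
    if kind not in handlers:
        raise ValueError(f"no handler for configuration change of kind {kind!r}")
    return handlers[kind](entry)

# Placeholder handlers; real ones would call device or virtual machine management interfaces.
handlers = {
    "registry": lambda e: f"set {e['key']} = {e['value']}",
    "dns":      lambda e: f"add record {e['name']} -> {e['address']}",
}
print(apply_log_entry({"kind": "registry", "key": "app/mode", "value": "replica"}, handlers))
```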
The stored record may further comprise an indication of the time at which the configuration change was made, or the order in which it was made. A timestamp or other value may be used. In some cases and embodiments, storing a record of the monitored change comprises generating an image of a virtual machine. The image can, for example, be stored in response to a change to the operating environment, in order to capture and subsequently reapply the configuration settings reflected in the image. In some cases and embodiments, storing a record comprises storing a snapshot of a database instance. The snapshot of a database may be viewed as comprising transaction data, but may also be viewed as comprising configuration data. For example, the schema of a database might be treated as configuration data. Doing so has the technical effect of improving compatibility, since the snapshot can be used to recreate whatever schema existed as of the indicated point-in-time of the restoration. Note that subsequent transactions on the database might also alter the schema, but if so the schema is still current as of the indicated point-in-time, since transactions up to that point, but not afterwards, can be replayed. At408, the distributed system obtains a request to restore the database instance. For example, a control plane of the distributed system may receive a request from an administrative application to perform the restoration. Alternatively, an automated process may determine that a restoration is warranted and send a restoration request to a control plane. A request to perform a point-in-time restoration may include a time value, or other indicator, to indicate the point-in-time to which the database should be restored. With respect to configuration settings, in various embodiments the restoration process includes steps or operations to apply configuration changes made to the source environment, up to the indicated point in time. Changes made to the configuration after the indicated point-in-time may be skipped. This has the technical effect of improving compatibility between the database and its operating environment. At410, the distributed system provides the record of the configuration change to a second operating environment. In some embodiments, this comprises replicating the configuration data to a geographic region proximate to the second operating environment. In other embodiments, this comprises assigning access rights to a user account associated with the second operating environment. At412, the distributed system configures the second operating environment in accordance with the record of the configuration change. In some cases, the distributed system configures the second operating environment also in accordance with a requested point-in-time for the restoration, so that configuration changes up to the indicated point-in-time are applied to the second operating environment. In an embodiment, configuring the second operating environment in accordance with the record of the monitored change comprises adapting the configuration change to conform to the second operating environment. In one example, configuration settings applicable to the original operating environment are not applicable to the new operating environment, and the adaptation corresponds to transforming the configuration change to a null operation.
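A minimal sketch of the point-in-time handling and null-operation adaptation described above, with hypothetical record fields (setting, value, timestamp) and a hypothetical adapt helper:

```python
# Minimal sketch (hypothetical names): applying recorded configuration changes
# up to a requested point in time, adapting inapplicable ones to null operations.
def adapt(change, target_env_settings):
    # Settings that do not exist in the target environment become null operations.
    if change["setting"] not in target_env_settings:
        return None                      # null operation: nothing to apply
    return change

def restore_configuration(change_log, restore_time, target_env_settings, apply_fn):
    for change in sorted(change_log, key=lambda c: c["timestamp"]):
        if change["timestamp"] > restore_time:
            break                        # skip changes made after the point in time
        adapted = adapt(change, target_env_settings)
        if adapted is not None:
            apply_fn(adapted["setting"], adapted["value"])

if __name__ == "__main__":
    log = [
        {"setting": "timezone", "value": "UTC", "timestamp": 100.0},
        {"setting": "legacy.flag", "value": True, "timestamp": 150.0},  # not in target
        {"setting": "timezone", "value": "EST", "timestamp": 300.0},    # after restore time
    ]
    restore_configuration(log, restore_time=200.0,
                          target_env_settings={"timezone"},
                          apply_fn=lambda s, v: print(f"apply {s}={v}"))
```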
In another example, configuration settings appropriate to a device found in the original environment are mapped to settings that are more appropriate, but still compatible, with a corresponding device found in the new operating environment. In an embodiment, a control plane of the distributed system configures the second operating environment by sending instructions to restore an image of a virtual machine to an application server in the second operating environment, and to cause the restored virtual machine to execute. In an embodiment, a control plane of the distributed system configures the second operating environment by sending instructions to perform a configuration command to application servers in the second operating environment. These commands may, for example, correspond to commands or other configuration changes reflected in a log of configuration changes. In some cases and embodiments, configuration changes are applied to the second operating environment after a database instance has been restored to it. For example, there might be various configuration settings, such as those related to time zone or database name, which are adjusted subsequent to the restoration. These changes may be automated via a control plane in the second operating environment. In some instances, the second operating environment may operate in a standby mode, even if no standby database instance has been instantiated within it. In such cases, configuration changes can be replicated to the second operating environment on an ongoing basis. For example, the control planes in the respective operating environments might cause changes to configuration made in the first operating environment to be applied within a threshold amount of time, to the second operating environment. At414, the distributed system restores the database instance to the second operating environment. An example of restoring the database instance is illustrated byFIG.7. FIG.5is a flow diagram illustrating an example process for enabling a point-in-time database restoration capability, with delayed instantiating of the backup instance, in accordance with an embodiment. AlthoughFIG.5is depicted as a sequence of steps, the depicted sequence should not be construed as limiting the scope of the present disclosure to only those embodiments conforming to the depicted order. For example, unless otherwise indicated or clear from context (e.g., when the output of one step is used as input into another), the at least some of the depicted steps may be reordered or performed in parallel. The example process500may be performed by a distributed system, such as the data distributed system100depicted inFIG.1. In some embodiments, the depicted steps are performed or initiated by a control plane of the distributed system100, such as the control plane106adepicted inFIG.1. At502, the control plane receives or otherwise obtains a request to enable a point-in-time restoration capability for a database instance, where the restoration capability is protected by operational continuity safeguards. In embodiments, the restoration capability is provided with no backup instance being created, unless and until such an instance is requested. A backup instance refers to a node or instance of the database, such as a read replica of a database that processes the same transactions as the database instance, or a subset thereof. Instantiating the backup instance refers to executing the instance. 
As a consequence of not immediately instantiating the backup instance, the technical and monetary costs associated with executing such an instance is not incurred until and if such an instance is needed. The operational continuity safeguards are associated with two accounts. A first account, which may be referred to as a primary account, is associated with the operation of the database instance for which the restoration capability is being enabled. A second account, which may be referred to as a secondary account, is used to act as a joint owner of account, configuration, and/or transaction data produced by the database instance. The secondary account is not necessarily limited to this role, however. At504, the control plane obtains configuration data for the operating environment and database instance. The operating environment refers to the operating environment in which the database instance is executing, or alternatively the operating environment in which the primary database instance will execute, in cases where restoration capability is requested when the database instance is initially configured. The configuration data is stored with information indicating that it is jointly owned by both the primary and secondary accounts, and as such is protected from deletion except where the distributed system obtains authorization from both of the primary and secondary accounts. In an embodiment, the control plane obtains configuration data by recording snapshots of the database instance and of the computing device and/or virtual machine on which the database instance executes. In an embodiment, the control plane obtains configuration data by monitoring changes made to the operating environment. For example, configuration changes to the operating environment may be routed through the control plane. The control plane can then store records of the configuration changes. In some instances, the control plane may initiate or facilitate configuration changes, and can record them. In an embodiment, the control plane maintains metadata indicative of configuration aspects on which the database is dependent. For example, the control plane may store metadata indicative of configuration changes that have been made by or routed through the control plane. The metadata may further indicate which of these changes are pertinent to the operation of the database, and on which the database may therefore be considered dependent. The metadata might also provide means of identifying relevant configuration changes to an operating system or database configuration. In an embodiment, the distributed system marks as jointly owned the aspects of configuration that are indicated by the metadata as being related to the operation of the database. In this manner, aspects of the configuration which may be relevant to the database instance are preserved for subsequent application to an operating environment in which a restored database can operate. At506, the control plane initiates maintenance of transaction log data on replicated storage. In this step, initiating maintenance refers to an ongoing basis to copy transaction data to a replicated storage location, such as a storage service accessible to the operating environment in which the original database instance is executing. 
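As a rough illustration of initiating maintenance of transaction log data on replicated storage, the following sketch polls a directory for new log files, copies them to a replicated location, and records joint ownership alongside the copies. The paths, owner labels, and polling behavior are assumptions for illustration, not the disclosed mechanism.

```python
# Minimal sketch (hypothetical names): ongoing copying of new transaction log
# files to a replicated storage location, marked as jointly owned.
import shutil
import time
from pathlib import Path

def ship_transaction_logs(log_dir: Path, replicated_dir: Path,
                          owners=("primary-account", "secondary-account"),
                          poll_seconds=5.0, iterations=3):
    """Polls a directory for new transaction log files and copies them."""
    shipped = set()
    for _ in range(iterations):               # a real process would loop indefinitely
        for log_file in sorted(log_dir.glob("*.log")):
            if log_file.name in shipped:
                continue
            shutil.copy2(log_file, replicated_dir / log_file.name)
            # Record joint ownership alongside the copied data.
            (replicated_dir / (log_file.name + ".owners")).write_text(",".join(owners))
            shipped.add(log_file.name)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    src, dst = Path("txn-logs"), Path("replicated")
    src.mkdir(exist_ok=True)
    dst.mkdir(exist_ok=True)
    (src / "000001.log").write_text("BEGIN;INSERT ...;COMMIT;")
    ship_transaction_logs(src, dst, poll_seconds=0.1, iterations=2)
    print(sorted(p.name for p in dst.iterdir()))
```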
The distributed system stores the transaction data with information indicating that it is jointly owned by both the primary and secondary accounts, and as such is protected from deletion except where the distributed system obtains authorization from both of the primary and secondary accounts. In an embodiment, the control plane initiates maintenance of the transaction data by launching a thread or executable process which copies transaction data to a replicated storage location, such as a storage service with replication features. In an embodiment, the control plane monitors the transaction log for new entries, and copies the new transaction data to replicated storage. The data can be marked as jointly owned by the primary and secondary accounts. In another embodiment, the control plane monitors a directory or other storage location for new transaction log files, and copies the new files to replicated storage. Alternatively, the thread or process may periodically copy a transaction log, or a portion thereof, to replicated storage. The transaction data, however stored, may be protected by the distributed system from deletion except where authorization can be obtained from both of the primary and secondary accounts. The replicated storage refers to a storage service that has replication features, including the ability to generate and store a copy of the files or other data stored on the service. For example, a storage service with replication features may automatically store copies of data in at least two geographic regions. The transaction data copied to the replicated storage is therefore replicated in accordance with the replication features of the service. Alternatively, the replicated storage refers to a storage system or storage device that is replicated to another operating environment by the operation of control planes in the respective environments. At508, the control plane stores the configuration data for the operating environment and database instance on the replicated storage. This information may then be replicated to another location for use in a restoration of the primary database instance. This information may be replicated so that any information that indicates it should not be deleted without mutual authorization is preserved when it is replicated. Alternatively, it may be replicated to a location in which it is protected from deletion without mutual authorization. At510, the control plane configures replication to the target operating environment. When a client requests that a restoration capability be enabled, they may also specify one or more target operating environments. A target environment refers to an environment in which the database might be restored. For example, if the primary database instance operates in an operating environment geographically located on the West Coast, a target environment for restoring the database might be specified as the East Coast. This step may be optional, in the sense that the storage service may have replication features that do not require such configuration. Also note that the replication may not always be to the target operating environment, but rather to a location that is accessible to the target operating environment. Embodiments may configure the features of replication in accordance with requested attributes of the restoration.
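The mutual-authorization protection can be pictured with a small sketch in which deletion requires authorization from every owning account; the JointlyOwnedStore class and the account names are hypothetical.

```python
# Minimal sketch (hypothetical names): data protected from deletion unless both
# the primary and secondary accounts authorize the operation.
class JointlyOwnedStore:
    def __init__(self, owners):
        self.owners = frozenset(owners)
        self.objects = {}

    def put(self, key, value):
        self.objects[key] = value

    def delete(self, key, authorizations):
        # Deletion requires authorization from every owner account.
        if not self.owners.issubset(set(authorizations)):
            raise PermissionError("deletion requires authorization from both accounts")
        del self.objects[key]

if __name__ == "__main__":
    store = JointlyOwnedStore({"primary-account", "secondary-account"})
    store.put("config/0001", {"timezone": "UTC"})
    try:
        store.delete("config/0001", authorizations={"primary-account"})
    except PermissionError as e:
        print("blocked:", e)
    store.delete("config/0001", authorizations={"primary-account", "secondary-account"})
    print("remaining:", store.objects)
```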
An example of such an attribute is latency of replication, since the ability to restore a database instance may depend on how long the data takes to replicate to the operating environment in which the database is to be restored. In some cases, replicating the data to the target environment helps to minimize the time needed to restore the database, since the transaction log and configuration data will have been transferred to the target operating environment prior to the restoration request. Another aspect of replication involves ownership of the replicated data. Ownership refers to the account, or accounts, that can read or modify the replicated data. In an embodiment, the replication is configured to associate the replicated data with the same account used in conjunction with the primary database instance in the original operating environment. In another aspect, the replicated data is associated with another account that is used only in a target environment. In another aspect, the replicated data cannot be modified or deleted (but may be accessed) without approval from both accounts. In the absence of approval, the system prevents the replicated data from being modified or deleted. FIG.6is a flow diagram illustrating an example process for performing a point-in-time database restoration, in accordance with an embodiment. AlthoughFIG.6is depicted as a sequence of steps, the depicted sequence should not be construed as limiting the scope of the present disclosure to only those embodiments conforming to the depicted order. For example, unless otherwise indicated or clear from context (e.g., when the output of one step is used as input into another), the at least some of the depicted steps may be reordered or performed in parallel. The example process600may be performed by a distributed system, such as the distributed system100depicted inFIG.1. In some embodiments, the depicted steps are performed or initiated by a control plane of the distributed system100, such as the control plane106adepicted inFIG.1. At602, the distributed system receives a request to instantiate a backup database instance. As noted, the instantiation of the backup database instance occurs when the restoration is requested, rather than when the client requests that the restoration capability be enabled. A control plane in an active operating environment may receive the request. If not in the target operating environment, the control plane that received the request may then forward it to the control plane in the target operating environment. In an embodiment, metering the capacity utilized for executing the restored instance is initiated in response to receiving the request to instantiate the backup instance, e.g., once the backup instance has become operable. Prior to the request, no such costs are associated with the backup instance. There may, however, be costs associated with monitoring, storing, and replicating transaction log and configuration data. The metering may be initiated by the control plane after receiving the request and completing the instantiation of the restored instance. The control plane may, for example, begin collecting data pertaining to how much data is read from or written to the restored database instance, and store records associating those activities with a corresponding account. In an embodiment, the distributed system configures the operating environment and database by accessing configuration and replication data based on the credentials of the secondary account. 
For example, the control plane may verify that the request to perform the restoration was obtained from interactions with a user interface or invocations of application programming interface in which the credentials of the secondary account were provided. In embodiments, the control plane may verify that files, records, or other data accessed during the restoration process is associated with the secondary account. In some embodiments, security features of the operating system may be leveraged to indicate and confirm that the secondary account is an owner of the data accessed during the restoration, and to prevent accounts other than the primary and secondary accounts from accessing the data. At604, the distributed system obtains the replicated configuration data for the operating environment and database. The information is accessed based on authorization associated with the second account. In an embodiment, a control plane operating in a second operating environment obtains configuration data for a first operating environment. In an embodiment, the control plane identifies data relevant to the database instance (for example, based on account information, a database instance identifier, an operating environment identifier, and so forth) and retrieves the corresponding configuration information from a storage device or service accessible in the second operating environment. The information may be included in the received request to restore the database instance. Once identified based on this information, the data can be retrieved from a storage location accessible to the second operating environment. At606, the distributed system obtains the replicated transaction log data. The data is accessed based on authorization associated with the second account. In an embodiment, the control plane in the target operating environment retrieves the transaction data from a storage service. On the storage service, the transaction data is identified based on a naming pattern or identification system that incorporates information such as account number, database instance identifier, operating environment identifier, and so forth. The information that identifies the transaction data that may be included in the request to restore the database instance. The data may be stored on the storage service with information indicating that it can be accessed based on the secondary account. At608, the distributed system configures the operating environment and database according the obtained configuration data. In an embodiment, this is done via a control plane in the second operating environment. For example, the configuration information may be structured as a collection of configuration logs and snapshots. The configuration logs may describe a series of changes made to the first operating environment. The logs may include information indicating when the configuration changes were made. The snapshot information can include database snapshots, virtual machine snapshots, and so forth, and may be associated with information indicating when the snapshots were taken. Using the time information, the control plane may apply relevant configuration changes up to the desired time for the point-in-time restoration. Similarly, snapshots current as of the indicated restoration time may also be used. In this manner, the configuration of the first operating environment can be recreated in the second operating environment, to the extent necessary to allow the restored database to run in the second environment. 
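A minimal sketch, with a hypothetical naming convention and in-memory object listing, of locating replicated configuration and transaction data for a database instance and selecting the items that fall at or before the requested point in time:

```python
# Minimal sketch (hypothetical names): locating replicated configuration and
# transaction data by a naming convention, then selecting items up to the
# requested point in time.
def object_prefix(account_id, db_instance_id, environment_id):
    # e.g., "acct-42/db-7/env-east/..."
    return f"{account_id}/{db_instance_id}/{environment_id}"

def select_for_restore(objects, prefix, restore_time):
    """objects: mapping of key -> {"kind": ..., "timestamp": ...}."""
    relevant = {k: v for k, v in objects.items() if k.startswith(prefix)}
    config = [v for v in relevant.values()
              if v["kind"] == "config" and v["timestamp"] <= restore_time]
    txn = [v for v in relevant.values()
           if v["kind"] == "txn" and v["timestamp"] <= restore_time]
    # Apply configuration in order, then replay transactions in order.
    config_sorted = sorted(config, key=lambda v: v["timestamp"])
    txn_sorted = sorted(txn, key=lambda v: v["timestamp"])
    return config_sorted, txn_sorted

if __name__ == "__main__":
    replicated = {
        "acct-42/db-7/env-east/config/0001": {"kind": "config", "timestamp": 100.0},
        "acct-42/db-7/env-east/txn/000001":  {"kind": "txn", "timestamp": 150.0},
        "acct-42/db-7/env-east/txn/000002":  {"kind": "txn", "timestamp": 400.0},
    }
    cfg, txn = select_for_restore(
        replicated, object_prefix("acct-42", "db-7", "env-east"), restore_time=200.0)
    print(len(cfg), "config items,", len(txn), "transaction items")
```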
At610, the distributed system executes the new database instance in the target operating environment. This may be done via inter-process communication between the control plane in the target environment and the operating system of the computing device or virtual machine on which the database instance is to be executed on. At612, the distributed system replays the transaction from the transaction log. Replaying the transaction log refers to causing the new database instance to process the transactions represented by entries in the transaction log. In an embodiment, the control plane initiates the replaying by sending a command to the new database instance. The command indicates that the database instance should replay entries in a transaction log, and provides a location where the database instance can access the log. FIG.7illustrates aspects of an example system700for implementing aspects in accordance with an embodiment. As will be appreciated, although a web-based system is used for purposes of explanation, different systems may be used, as appropriate, to implement various embodiments. In an embodiment, the system includes an electronic client device702, which includes any appropriate device operable to send and/or receive requests, messages, or information over an appropriate network704and convey information back to a user of the device. Examples of such client devices include personal computers, cellular or other mobile phones, handheld messaging devices, laptop computers, tablet computers, set-top boxes, personal data assistants, embedded computer systems, electronic book readers, and the like. In an embodiment, the network includes any appropriate network, including an intranet, the Internet, a cellular network, a local area network, a satellite network or any other such network and/or combination thereof and components used for such a system depend at least in part upon the type of network and/or system selected. Many protocols and components for communicating via such a network are well known and will not be discussed herein in detail. In an embodiment, communication over the network is enabled by wired and/or wireless connections and combinations thereof. In an embodiment, the network includes the Internet and/or other publicly-addressable communications network, as the system includes a web server706for receiving requests and serving content in response thereto, although for other networks an alternative device serving a similar purpose could be used as would be apparent to one of ordinary skill in the art. In an embodiment, the illustrative system includes at least one application server(s)708, a control plane709, and a data store710. It should be understood that there can be several application servers, control planes, layers or other elements, processes or components, which may be chained or otherwise configured, which can interact to perform tasks such as obtaining data from an appropriate data store. Servers, in an embodiment, are implemented as hardware devices, virtual computer systems, programming modules being executed on a computer system, and/or other devices configured with hardware and/or software to receive and respond to communications (e.g., web service application programming interface (API) requests) over a network. 
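A minimal sketch of the replay operation described above at612, using an in-memory SQLite database as a stand-in for the restored database instance; the log format of (timestamp, SQL statement) pairs is a hypothetical simplification.

```python
# Minimal sketch (hypothetical names): replaying transaction log entries against
# a newly executed database instance, stopping at the requested point in time.
import sqlite3

def replay_transactions(connection, log_entries, restore_time):
    """log_entries: iterable of (timestamp, sql_statement) tuples, in order."""
    cursor = connection.cursor()
    for timestamp, statement in log_entries:
        if timestamp > restore_time:
            break                       # transactions after the point in time are skipped
        cursor.execute(statement)
    connection.commit()

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")    # stand-in for the restored database instance
    db.execute("CREATE TABLE orders (id INTEGER, item TEXT)")
    log = [
        (100.0, "INSERT INTO orders VALUES (1, 'widget')"),
        (150.0, "INSERT INTO orders VALUES (2, 'gadget')"),
        (400.0, "INSERT INTO orders VALUES (3, 'late')"),   # after the restore time
    ]
    replay_transactions(db, log, restore_time=200.0)
    print(db.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # prints 2
```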
As used herein, unless otherwise stated or clear from context, the term “data store” refers to any device or combination of devices capable of storing, accessing and retrieving data, which may include any combination and number of data servers, databases, data storage devices and data storage media, in any standard, distributed, virtual or clustered system. Data stores, in an embodiment, communicate with block-level and/or object level interfaces. The application server can include any appropriate hardware, software and firmware for integrating with the data store as needed to execute aspects of one or more applications for the client device, handling some or all of the data access and business logic for an application. In an embodiment, the application server provides access control services in cooperation with the data store and generates content including, but not limited to, text, graphics, audio, video and/or other content that is provided to a user associated with the client device by the web server in the form of HyperText Markup Language (“HTML”), Extensible Markup Language (“XML”), JavaScript, Cascading Style Sheets (“CSS”), JavaScript Object Notation (JSON), and/or another appropriate client-side or other structured language. Content transferred to a client device, in an embodiment, is processed by the client device to provide the content in one or more forms including, but not limited to, forms that are perceptible to the user audibly, visually and/or through other senses. The handling of all requests and responses, as well as the delivery of content between the client device702and the application server(s)708, in an embodiment, is handled by the web server using PHP: Hypertext Preprocessor (“PHP”), Python, Ruby, Perl, Java, HTML, XML, JSON, and/or another appropriate server-side structured language in this example. In an embodiment, operations described herein as being performed by a single device are performed collectively by multiple devices that form a distributed and/or virtual system. In an embodiment, the control plane709performs operations to coordinate the activities and operation of components within the system700. The control plane may comprise a computing device with at least one processor, one or more non-transitory memories, and instructions that, in response to being executed, perform operations of a control plane as described herein. The control plane709may further comprise one or more network interfaces for communicating with the web server706, application server(s)708, and data store710. The control plane709, in various embodiments, is configured to have access to information not accessible to the web server706and/or application server(s)708. This information may include aspects of user information716, such as credentials, certificates, account and billing information, system configuration data, and so forth. The control plane709, in various embodiments, manages the allocation and configuration of the application server(s)708and virtual machines711. The virtual machines711operate on the application server(s)708. In various embodiments, when a computing resource, such as a database instance, is requested within an operating environment, the control plane709identifies an application server(s)708with sufficient available capacity and assigns it to execute a virtual machine. The control plane709then configures the virtual machine, including performing steps to ensure that software (such as software for a database instance) is installed. 
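One way to picture the capacity-based placement performed by the control plane709is the following sketch; the server fleet, capacity fields, and placement policy are hypothetical illustrations rather than the disclosed implementation.

```python
# Minimal sketch (hypothetical names): a control plane selecting an application
# server with sufficient available capacity and assigning a virtual machine to it.
def place_virtual_machine(servers, required_cpu, required_memory_gb):
    """servers: list of dicts with 'name', 'free_cpu', 'free_memory_gb'."""
    candidates = [s for s in servers
                  if s["free_cpu"] >= required_cpu
                  and s["free_memory_gb"] >= required_memory_gb]
    if not candidates:
        raise RuntimeError("no application server with sufficient capacity")
    # Prefer the server with the most free memory (one of many possible policies).
    chosen = max(candidates, key=lambda s: s["free_memory_gb"])
    chosen["free_cpu"] -= required_cpu
    chosen["free_memory_gb"] -= required_memory_gb
    return chosen["name"]

if __name__ == "__main__":
    fleet = [
        {"name": "app-server-1", "free_cpu": 2, "free_memory_gb": 8},
        {"name": "app-server-2", "free_cpu": 8, "free_memory_gb": 32},
    ]
    print(place_virtual_machine(fleet, required_cpu=4, required_memory_gb=16))
```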
In various embodiments, the control plane709can perform operations to configure communications networks. For example, the control plane709can configure communications between the web server706and application server(s)708. The control plane709may also configure routers, gateways, and other devices in order to provide and secure communications between the web server706, application server(s)708, and client device702. The data store710, in an embodiment, includes several separate data tables, databases, data documents, dynamic data storage schemes and/or other data storage mechanisms and media for storing data relating to a particular aspect of the present disclosure. In an embodiment, the data store illustrated includes mechanisms for storing data712and user information716, which are used to serve content. The data store also is shown to include a mechanism for storing operations data714, which is used, in an embodiment, for reporting, computing resource management, analysis or other such purposes. In an embodiment, other aspects such as page image information and access rights information (e.g., access control policies or other encodings of permissions) are stored in the data store in any of the above listed mechanisms as appropriate or in additional mechanisms in the data store710. The data store710, in an embodiment, is operable, through logic associated therewith, to receive instructions from the application server(s)708and obtain, update or otherwise process data in response thereto and the application server(s)708provides static, dynamic, or a combination of static and dynamic data in response to the received instructions. In an embodiment, dynamic data, such as data used in web logs (blogs), shopping applications, news services, and other such applications are generated by server-side structured languages as described herein or are provided by a content management system (“CMS”) operating on, or under the control of, the application server. In an embodiment, a user, through a device operated by the user, submits a search request for a certain type of item. In this example, the data store accesses the user information to verify the identity of the user, accesses the catalog detail information to obtain information about items of that type, and returns the information to the user, such as in a results listing on a web page that the user views via a browser on the user device702. Continuing with example, information for a particular item of interest is viewed in a dedicated page or window of the browser. It should be noted, however, that embodiments of the present disclosure are not necessarily limited to the context of web pages, but are more generally applicable to processing requests in general, where the requests are not necessarily requests for content. Example requests include requests to manage and/or interact with computing resources hosted by the system700and/or another system, such as for launching, terminating, deleting, modifying, reading, and/or otherwise accessing such computing resources. In an embodiment, each server typically includes an operating system that provides executable program instructions for the general administration and operation of that server and includes a computer-readable storage medium (e.g., a hard disk, random access memory, read only memory, etc.) storing instructions that, if executed (i.e., as a result of being executed) by a processor of the server, cause or otherwise allow the server to perform its intended functions. 
The system700, in an embodiment, is a distributed and/or virtual computing system utilizing several computer systems and components that are interconnected via communication links (e.g., transmission control protocol (TCP) connections and/or transport layer security (TLS) or other cryptographically protected communication sessions), using one or more computer networks or direct connections. However, it will be appreciated by those of ordinary skill in the art that such a system could operate in a system having fewer or a greater number of components than are illustrated inFIG.7. Thus, the depiction of the system600inFIG.6should be taken as being illustrative in nature and not limiting to the scope of the disclosure. The various embodiments further can be implemented in a wide variety of operating environments, which in some cases can include one or more user computers, computing devices or processing devices which can be used to operate any of a number of applications. In an embodiment, user or client devices include any of a number of computers, such as desktop, laptop or tablet computers running a standard operating system, as well as cellular (mobile), wireless and handheld devices running mobile software and capable of supporting a number of networking and messaging protocols and such a system also includes a number of workstations running any of a variety of commercially-available operating systems and other known applications for purposes such as development and database management. In an embodiment, these devices also include other electronic devices, such as dummy terminals, thin-clients, gaming systems and other devices capable of communicating via a network, and virtual devices such as virtual machines, hypervisors, software containers utilizing operating-system level virtualization and other virtual devices or non-virtual devices supporting virtualization capable of communicating via a network. These non-virtual devices operate on physical computing devices, such as the depicted application servers. In an embodiment, a system utilizes at least one network that would be familiar to those skilled in the art for supporting communications using any of a variety of commercially-available protocols, such as Transmission Control Protocol/Internet Protocol (“TCP/IP”), User Datagram Protocol (“UDP”), protocols operating in various layers of the Open System Interconnection (“OSI”) model, File Transfer Protocol (“FTP”), Universal Plug and Play (“UpnP”), Network File System (“NFS”), Common Internet File System (“CIFS”) and other protocols. The network, in an embodiment, is a local area network, a wide-area network, a virtual private network, the Internet, an intranet, an extranet, a public switched telephone network, an infrared network, a wireless network, a satellite network, and any combination thereof. In an embodiment, a connection-oriented protocol is used to communicate between network endpoints such that the connection-oriented protocol (sometimes called a connection-based protocol) is capable of transmitting data in an ordered stream. In an embodiment, a connection-oriented protocol can be reliable or unreliable. For example, the TCP protocol is a reliable connection-oriented protocol. Asynchronous Transfer Mode (“ATM”) and Frame Relay are unreliable connection-oriented protocols. Connection-oriented protocols are in contrast to packet-oriented protocols such as UDP that transmit packets without a guaranteed ordering. 
In an embodiment, the system utilizes a web server that runs one or more of a variety of server or mid-tier applications, including Hypertext Transfer Protocol ("HTTP") servers, FTP servers, Common Gateway Interface ("CGI") servers, data servers, Java servers, Apache servers, and business application servers. In an embodiment, the one or more servers are also capable of executing programs or scripts in response to requests from user devices, such as by executing one or more web applications that are implemented as one or more scripts or programs written in any programming language, such as Java®, C, C# or C++, or any scripting language, such as Ruby, PHP, Perl, Python or TCL, as well as combinations thereof. In an embodiment, the one or more servers also include database servers, including without limitation those commercially available from Oracle®, Microsoft®, Sybase®, and IBM® as well as open-source servers such as MySQL, Postgres, SQLite, MongoDB, and any other server capable of storing, retrieving, and accessing structured or unstructured data. In an embodiment, a database server includes table-based servers, document-based servers, unstructured servers, relational servers, non-relational servers, or combinations of these and/or other database servers. In an embodiment, the system includes a variety of data stores and other memory and storage media as discussed above which can reside in a variety of locations, such as on a storage medium local to (and/or resident in) one or more of the computers or remote from any or all of the computers across the network. In an embodiment, the information resides in a storage-area network ("SAN") familiar to those skilled in the art and, similarly, any necessary files for performing the functions attributed to the computers, servers or other network devices are stored locally and/or remotely, as appropriate. In an embodiment where a system includes computerized devices, each such device can include hardware elements that are electrically coupled via a bus, the elements including, for example, at least one central processing unit ("CPU" or "processor"), at least one input device (e.g., a mouse, keyboard, controller, touch screen, or keypad), at least one output device (e.g., a display device, printer, or speaker), at least one storage device such as disk drives, optical storage devices, and solid-state storage devices such as random access memory ("RAM") or read-only memory ("ROM"), as well as removable media devices, memory cards, flash cards, etc., and various combinations thereof. In an embodiment, such a device also includes a computer-readable storage media reader, a communications device (e.g., a modem, a network card (wireless or wired), an infrared communication device, etc.), and working memory as described above where the computer-readable storage media reader is connected with, or configured to receive, a computer-readable storage medium, representing remote, local, fixed, and/or removable storage devices as well as storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information. In an embodiment, the system and various devices also typically include a number of software applications, modules, services, or other elements located within at least one working memory device, including an operating system and application programs, such as a client application or web browser.
In an embodiment, customized hardware is used and/or particular elements are implemented in hardware, software (including portable software, such as applets), or both. In an embodiment, connections to other computing devices such as network input/output devices are employed. In an embodiment, storage media and computer readable media for containing code, or portions of code, include any appropriate media known or used in the art, including storage media and communication media, such as, but not limited to, volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information such as computer readable instructions, data structures, program modules or other data, including RAM, ROM, Electrically Erasable Programmable Read-Only Memory (“EEPROM”), flash memory or other memory technology, Compact Disc Read-Only Memory (“CD-ROM”), digital versatile disk (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices or any other medium which can be used to store the desired information and which can be accessed by the system device. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the invention as set forth in the claims. Other variations are within the spirit of the present disclosure. Thus, while the disclosed techniques are susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the invention to the specific form or forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention, as defined in the appended claims. The use of the terms “a” and “an” and “the” and similar referents in the context of describing the disclosed embodiments (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. Similarly, use of the term “or” is to be construed to mean “and/or” unless contradicted explicitly or by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. The term “connected,” when unmodified and referring to physical connections, is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein are merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein and each separate value is incorporated into the specification as if it were individually recited herein. 
The use of the term “set” (e.g., “a set of items”) or “subset” unless otherwise noted or contradicted by context, is to be construed as a nonempty collection comprising one or more members. Further, unless otherwise noted or contradicted by context, the term “subset” of a corresponding set does not necessarily denote a proper subset of the corresponding set, but the subset and the corresponding set may be equal. The use of the phrase “based on,” unless otherwise explicitly stated or clear from context, means “based at least in part on” and is not limited to “based solely on.” Conjunctive language, such as phrases of the form “at least one of A, B, and C,” or “at least one of A, B and C,” (i.e., the same phrase with or without the Oxford comma) unless specifically stated otherwise or otherwise clearly contradicted by context, is otherwise understood with the context as used in general to present that an item, term, etc., may be either A or B or C, any nonempty subset of the set of A and B and C, or any set not contradicted by context or otherwise excluded that contains at least one A, at least one B, or at least one C. For instance, in the illustrative example of a set having three members, the conjunctive phrases “at least one of A, B, and C” and “at least one of A, B and C” refer to any of the following sets: {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, {A, B, C}, and, if not contradicted explicitly or by context, any set having {A}, {B}, and/or {C} as a subset (e.g., sets with multiple “A”). Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of A, at least one of B and at least one of C each to be present. Similarly, phrases such as “at least one of A, B, or C” and “at least one of A, B or C” refer to the same as “at least one of A, B, and C” and “at least one of A, B and C” refer to any of the following sets: {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, {A, B, C}, unless differing meaning is explicitly stated or clear from context. In addition, unless otherwise noted or contradicted by context, the term “plurality” indicates a state of being plural (e.g., “a plurality of items” indicates multiple items). The number of items in a plurality is at least two, but can be more when so indicated either explicitly or by context. Operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. In an embodiment, a process such as those processes described herein (or variations and/or combinations thereof) is performed under the control of one or more computer systems configured with executable instructions and is implemented as code (e.g., executable instructions, one or more computer programs or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof. In an embodiment, the code is stored on a computer-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors. In an embodiment, a computer-readable storage medium is a non-transitory computer-readable storage medium that excludes transitory signals (e.g., a propagating transient electric or electromagnetic transmission) but includes non-transitory data storage circuitry (e.g., buffers, cache, and queues) within transceivers of transitory signals. 
In an embodiment, code (e.g., executable code or source code) is stored on a set of one or more non-transitory computer-readable storage media having stored thereon executable instructions that, when executed (i.e., as a result of being executed) by one or more processors of a computer system, cause the computer system to perform operations described herein. The set of non-transitory computer-readable storage media, in an embodiment, comprises multiple non-transitory computer-readable storage media and one or more of the individual non-transitory storage media of the multiple non-transitory computer-readable storage media lack all of the code while the multiple non-transitory computer-readable storage media collectively store all of the code. In an embodiment, the executable instructions are executed such that different instructions are executed by different processors; for example, a non-transitory computer-readable storage medium stores instructions and a main CPU executes some of the instructions while a graphics processing unit executes other instructions. In an embodiment, different components of a computer system have separate processors and different processors execute different subsets of the instructions. Accordingly, in an embodiment, computer systems are configured to implement one or more services that singly or collectively perform operations of processes described herein and such computer systems are configured with applicable hardware and/or software that enable the performance of the operations. Further, a computer system that implements an embodiment of the present disclosure is, in one embodiment, a single device and, in another embodiment, is a distributed computer system comprising multiple devices that operate differently such that the distributed computer system performs the operations described herein and such that a single device does not perform all operations. The use of any and all examples, or exemplary language (e.g., "such as") provided herein, is intended merely to better illuminate embodiments of the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention. Embodiments of this disclosure are described herein, including the best mode known to the inventors for carrying out the invention. Variations of those embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. The inventors expect skilled artisans to employ such variations as appropriate and the inventors intend for embodiments of the present disclosure to be practiced otherwise than as specifically described herein. Accordingly, the scope of the present disclosure includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the scope of the present disclosure unless otherwise indicated herein or otherwise clearly contradicted by context. All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein.
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. DETAILED DESCRIPTION The present disclosure describes methods, computer-readable media, and apparatuses for replacing a first data source with a replacement data source as an active data source for communication network monitoring in response to verifying an invalid data pattern of the first data source. In particular, examples of the present disclosure provide communication network data integrity validation for developing and operating data analytics and data-powered services for network management and configuration. For instance, examples of the present disclosure provide a machine learning-based data source fault detection and mitigation system for data applications and services. In one example, the present disclosure includes a data integrity system, between data sources and a data processing layer for closed-loop operation. In addition, in one example, the data integrity system, e.g., a data source management system, may provide a three-stage machine learning-based approach that identifies and learns new patterns of faults. The data source management system may include modularized components and is extensible to support multiple applications and different operation modes, e.g., proactive or reactive. In one example, an end-to-end network data processing workflow may include multiple layers. For instance, raw network operational data may be first generated in a network elements layer. Network elements are logical network entities comprising one or more physical devices, e.g., eNodeBs, mobility management entities (MMEs), etc. Data collection elements in a next layer may be responsible for collecting raw data from network elements and may comprise, for example, OSS-RC (operations support system-radio and core), network element vendor-specific management applications, or third-party management applications. After collection, the data may be transferred to be stored as data sources, which may be referred to as a data sources layer. The data stored in the data sources may be in the form of files or databases stored on physical or virtual servers. The data sources can include end-to-end Transport Control Protocol (TCP) data, radio access network (RAN) data, Internet Protocol (IP) transport data, and so forth. Next, the data stored in data sources may be retrieved and processed in the data processing layer, and eventually consumed by upper-layer services and applications. In accordance with the present disclosure, a data integrity layer, e.g., a data integrity system, operates between the data sources layer and the data processing layer. For instance, the present disclosure recognizes that invalid data may be caused by problematic raw data, errors introduced during data pre-processing and manipulation performed in data sources, e.g., aggregation across time, data storing, network issues, data loss due to physical emergency such as power outage, and so on. In the absence of the present disclosure, invalid data is often still processed and consumed by services and applications, potentially leading to unusual or abnormal data or network behaviors observed and/or reported by network personnel or end-users. In particular, data integrity is not checked and validated before data is further processed and used. 
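A minimal sketch of a data integrity layer that validates records from a data source before they reach the data processing layer; the field names and validity rules are hypothetical examples only.

```python
# Minimal sketch (hypothetical names): a data integrity layer that validates
# records retrieved from a data source before they reach the processing layer.
def validate_record(record, required_fields=("cell_id", "timestamp", "throughput")):
    """Returns a list of detected problems; an empty list means the record is valid."""
    problems = []
    for field in required_fields:
        if record.get(field) is None:
            problems.append(f"missing field: {field}")
    if record.get("throughput", 0) < 0:
        problems.append("negative throughput")
    return problems

def integrity_layer(records, process_fn, report_fn):
    for record in records:
        problems = validate_record(record)
        if problems:
            report_fn(record, problems)   # alert/report instead of processing
        else:
            process_fn(record)            # only valid data reaches the processing layer

if __name__ == "__main__":
    data = [
        {"cell_id": "A1", "timestamp": 1, "throughput": 10.5},
        {"cell_id": "A2", "timestamp": 2, "throughput": -3.0},  # invalid
    ]
    integrity_layer(data,
                    process_fn=lambda r: print("processed", r["cell_id"]),
                    report_fn=lambda r, p: print("reported", r["cell_id"], p))
```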
The absence of such integrity validation may result in poor user experiences, inaccurate results, potentially missed faults, belated alerts or reporting, and protracted issue resolution. Examples of the present disclosure provide several advantages, including a closed-loop data validity check for fault detection, fault mitigation, and reporting that is accurate and timely. Examples of the present disclosure are also scalable by employing a separate data integrity layer and modularized components, which in turn may support more applications and services in the upper layer. In addition, the present disclosure provides an improvement over detection or check mechanisms placed within the services and applications layer. For instance, results computed from invalid data could in some cases show abnormalities, which may be detected by an end-user or a detection mechanism. However, there is a risk of missing potential faults hidden in calculated results and there may be delays in alerting and reporting of issues, which may be avoided by the present examples. Moreover, examples of the present disclosure are also extensible by using an expandable data pattern table that is able to keep track of recognized correct and faulty data patterns. For instance, this data pattern table may be consumed or expanded by an external data analytics module or a software defined network (SDN) controller in an SDN environment. To illustrate, closed-loop network monitoring and optimization may be deployed for existing Universal Mobile Telecommunications System (UMTS) and Long Term Evolution (LTE) networks, e.g., in the form of SON (self-optimizing networks), which may operate concurrently with an SDN/network function virtualization (NFV)-based network architecture. In addition, user equipment (UE)/app-level control capabilities are increasingly being made available for 4G and 5G networks. Closed-loop control capabilities in the RAN framework, where real-time data is ingested and used to improve user experience and optimize network performance and capacity, are thus improved by the data source management system of the present disclosure. For instance, in one example, a quality of experience (QoE)-based video traffic steering application improves video user experience by intelligently steering traffic among multiple cells via closed-loop UE-level performance monitoring and control. By ensuring valid underlying data is consumed by the control system, e.g., an SON orchestrator and/or an SDN controller or the like, the accuracy and hence the QoE of the user-consumed videos may be improved via examples of the present disclosure. Similarly, in the area of SON, the present disclosure may provide cell-level parameter optimization that controls load balancing among neighbor cells such that certain QoE requirements are met for a set of users or overall network utilization is more balanced. In such closed-loop control examples, accuracy is improved via automatic detection and mitigation of data faults so that incorrect and/or unreliable data are not used for control decisions. The present disclosure also provides a federated, reliable, and highly secure system for sharing data across entities to solve complex analytical problems. For example, once any fault or invalid data is detected before being consumed by applications and services, the alerting or reporting is triggered by the present data source management system. For instance, corresponding entities with data access privileges may be notified of detailed information and reports.
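As a rough sketch of the expandable data pattern table and of replacing an active data source with a replacement data source when an invalid pattern is verified, with hypothetical class names, pattern identifiers, and data source names:

```python
# Minimal sketch (hypothetical names): an expandable data pattern table and
# replacement of the active data source when an invalid pattern is verified.
class DataPatternTable:
    def __init__(self):
        self.patterns = {}                       # pattern id -> "valid" / "invalid"

    def record(self, pattern_id, label):
        self.patterns[pattern_id] = label        # table can be expanded over time

    def is_invalid(self, pattern_id):
        return self.patterns.get(pattern_id) == "invalid"

class DataSourceManager:
    def __init__(self, active_source, replacement_source, pattern_table):
        self.active = active_source
        self.replacement = replacement_source
        self.table = pattern_table

    def ingest(self, pattern_id, notify_fn):
        if self.table.is_invalid(pattern_id):
            notify_fn(f"invalid pattern {pattern_id} on source {self.active}")
            # Replace the faulty source with the reliable alternative.
            self.active, self.replacement = self.replacement, self.active
        return self.active

if __name__ == "__main__":
    table = DataPatternTable()
    table.record("all-zero-counters", "invalid")
    mgr = DataSourceManager("ran-source-1", "ran-source-2", table)
    print(mgr.ingest("all-zero-counters", notify_fn=print))   # switches to ran-source-2
```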
With such an automatic and accurate approach, irrelevant entities or entities without privileges in the middle of customers and data source owners are excluded, enhancing the security in the environment. A further benefit is a shortening of trace-back cycles compared to observing issues from upper layers. Moreover, in accordance with the present disclosure, any data faults are transparent to customers insofar as faulty data sources are replaced by reliable alternative data sources with regard to one or more performance indicators of interest. These and other aspects of the present disclosure are discussed in greater detail below in connection with the examples ofFIGS.1-9. To aid in understanding the present disclosure,FIG.1illustrates a block diagram depicting one example of a communications network or system100for performing or enabling the steps, functions, operations, and/or features described herein. The system100may include any number of interconnected networks which may use the same or different communication technologies. As illustrated inFIG.1, system100may include a network105, e.g., a core telecommunication network. In one example, the network105may comprise a backbone network, or transport network, such as an Internet Protocol (IP)/multi-protocol label switching (MPLS) network, where label switched paths (LSPs) can be assigned for routing Transmission Control Protocol (TCP)/IP packets, User Datagram Protocol (UDP)/IP packets, and other types of protocol data units (PDUs) (broadly “traffic”). However, it will be appreciated that the present disclosure is equally applicable to other types of data units and network protocols. For instance, the network105may alternatively or additional comprise components of a cellular core network, such as a Public Land Mobile Network (PLMN), a General Packet Radio Service (GPRS) core network, and/or an evolved packet core (EPC) network, a 5G core network, an Internet Protocol Multimedia Subsystem (IMS) network, a Voice over Internet Protocol (VoIP) network, and so forth. In one example, the network105uses a network function virtualization infrastructure (NFVI), e.g., servers in a data center or data centers that are available as host devices to host virtual machines (VMs) and/or containers comprising virtual network functions (VNFs). In other words, at least a portion of the network105may incorporate software-defined network (SDN) components. In this regard, it should be noted that as referred to herein, “traffic” may comprise all or a portion of a transmission, e.g., a sequence or flow, comprising one or more packets, segments, datagrams, frames, cells, PDUs, service data units, bursts, and so forth. The particular terminology or types of data units involved may vary depending upon the underlying network technology. Thus, the term “traffic” is intended to refer to any quantity of data to be sent from a source to a destination through one or more networks. It should also be noted that the term “packet” may also be used to refer to any of a segment, a datagram, a frame, a cell, a PDU, a service data unit, a burst, and so forth, such as an IP packet. In one example, the network105may be in communication with networks160and networks170. 
Networks160and170may each comprise a wireless network (e.g., an Institute of Electrical and Electronics Engineers (IEEE) 802.11/Wi-Fi network and the like), a cellular access network (e.g., a Universal Terrestrial Radio Access Network (UTRAN) or an evolved UTRAN (eUTRAN), and the like), a circuit switched network (e.g., a public switched telephone network (PSTN)), a cable network, a digital subscriber line (DSL) network, a metropolitan area network (MAN), an Internet service provider (ISP) network, a peer network, and the like. In one example, the networks160and170may include different types of networks. In another example, the networks160and170may be the same type of network. The networks160and170may be controlled or operated by a same entity as that of network105or may be controlled or operated by one or more different entities. In one example, the networks160and170may comprise separate domains, e.g., separate routing domains as compared to the network105. In one example, networks160and/or networks170may represent the Internet in general. In one particular example, networks160and170may comprise 5G radio access networks. For example, as illustrated inFIG.1, the system100may represent a “non-stand alone” (NSA) mode architecture where 5G radio access network components, such as a “new radio” (NR), “gNodeB” (or “gNB”), and so forth are supported by a 4G/LTE core network (e.g., where network105represents an Evolved Packet Core (EPC) network). However, in another example, system100may instead comprise a 5G “standalone” (SA) mode point-to-point or service-based architecture where EPC components and functions of network105are replaced by a 5G core network, which may include an access and mobility management function (AMF), a user plane function (UPF), a session management function (SMF), a policy control function (PCF), a unified data management function (UDM), an authentication server function (AUSF), an application function (AF), a network repository function (NRF), and so on. In addition, these various components may comprise VNFs, as described herein. In one example, network105may transport traffic to and from user devices141-143. For instance, the traffic may relate to communications such as voice telephone calls, video and other multimedia, text messaging, emails, and so forth among the user devices141-143, or between the user devices141-143and other devices that may be accessible via networks160and170. User devices141-143may comprise, for example, cellular telephones, smart phones, personal computers, other wireless and wired computing devices, private branch exchanges, customer edge (CE) routers, media terminal adapters, cable boxes, home gateways and/or routers, and so forth. In accordance with the present disclosure, user devices141-143may communicate with or may communicate via network105in various ways. For example, user device141may comprise a cellular telephone which may connect to network105via network170, e.g., a cellular access network. For instance, such an example network170may include one or more cell sites, e.g., comprising, a base transceiver station (BTS), a NodeB, an evolved NodeB (eNodeB), or the like (broadly a “base station”), a remote radio head (RRH) and baseband unit, a base station controller (BSC) or radio network controller (RNC), and so forth. In addition, in such an example, components183and184in network105may comprise a serving gateway (SGW), a mobility management entity (MME), or the like. 
In one example, user device142may comprise a customer edge (CE) router which may provide access to network105for additional user devices (not shown) which may be connected to the CE router. For instance, in such an example, component185may comprise a provider edge (PE) router. As mentioned above, various components of network105may comprise virtual network functions (VNFs) which may physically comprise hardware executing computer-readable/computer-executable instructions, code, and/or programs to perform various functions. As illustrated inFIG.1, units123and124may reside on a network function virtualization infrastructure (NFVI)113, which is configurable to perform a broad variety of network functions and services. For example, NFVI113may comprise shared hardware, e.g., one or more host devices comprising line cards, central processing units (CPUs), or processors, memories to hold computer-readable/computer-executable instructions, code, and/or programs, and so forth. For instance, in one example, unit123may be configured to be a firewall, a media server, a Simple Network Management Protocol (SNMP) trap, etc., and unit124may be configured to be a PE router, e.g., a virtual provider edge (VPE) router, which may provide connectivity to network105for user devices142and143. As noted above, these various virtual network functions may be container-based VNFs and/or VM-based VNFs. In one example, NFVI113may represent a single computing device. Accordingly, units123and124may physically reside on the same host device. In another example, NFVI113may represent multiple host devices such that units123and124may reside on different host devices. In one example, unit123and/or unit124may have functions that are distributed over a plurality of host devices. For instance, unit123and/or unit124may be instantiated and arranged (e.g., configured/programmed via computer-readable/computer-executable instructions, code, and/or programs) to provide for load balancing between two processors and several line cards that may reside on separate host devices. In one example, network105may also include an additional NFVI111. For instance, unit121may be hosted on NFVI111, which may comprise host devices having the same or similar physical components as NFVI113. In addition, NFVI111may reside in a same location or in different locations from NFVI113. As illustrated inFIG.1, unit121may be configured to perform functions of an internal component of network105. For instance, due to the connections available to NFVI111, unit121may not function as a PE router, a SGW, a MME, a firewall, etc. Instead, unit121may be configured to provide functions of components that do not utilize direct connections to components external to network105, such as a call control element (CCE), a media server, a domain name service (DNS) server, a packet data network gateway (PGW), a gateway mobile switching center (GMSC), a short message service center (SMSC), etc. As further illustrated inFIG.1, network105includes management components199, which may include data source management system190(including several components as described in greater detail below), one or more data processing modules192, one or more data storage and archiving systems198, and a self-optimizing network (SON)/software defined network (SDN) controller155.
In one example, each of the management components199or the management components199collectively may comprise a computing system or server, such as computing system900depicted inFIG.9, or a processing system comprising multiple computing systems and/or servers, and may be configured to provide one or more operations or functions for replacing a first data source with a replacement data source as an active data source for communication network monitoring in response to verifying an invalid data pattern of the first data source. In addition, it should be noted that as used herein, the terms “configure,” and “reconfigure” may refer to programming or loading a processing system with computer-readable/computer-executable instructions, code, and/or programs, e.g., in a distributed or non-distributed memory, which when executed by a processor, or processors, of the processing system within a same device or within distributed devices, may cause the processing system to perform various functions. Such terms may also encompass providing variables, data values, tables, objects, or other data structures or the like which may cause a processing system executing computer-readable instructions, code, and/or programs to function differently depending upon the values of the variables or other data structures that are provided. As referred to herein, a “processing system” may comprise a computing device including one or more processors, or cores (e.g., a computing system as illustrated inFIG.9and discussed below) or multiple computing devices collectively configured to perform various steps, functions, and/or operations in accordance with the present disclosure. In one example, NFVI111and unit121, and NFVI113and units123and124may be controlled and managed by the SON/SDN controller155. For instance, in one example, SON/SDN controller155is responsible for such functions as provisioning and releasing instantiations of VNFs to perform the functions of routers, switches, and other devices, provisioning routing tables and other operating parameters for the VNFs, and so forth. In one example, SON/SDN controller155may maintain communications with VNFs and/or host devices/NFVI via a number of control links151which may comprise secure tunnels for signaling communications over an underlying IP infrastructure of network105. In other words, the control links151may comprise virtual links multiplexed with transmission traffic and other data traversing network105and carried over a shared set of physical links. For ease of illustration, control links associated with some of the components of network105are omitted fromFIG.1. In one example, the SON/SDN controller155may also comprise a virtual machine operating on NFVI/host device(s), or may comprise a dedicated device. For instance, SON/SDN controller155may be collocated with one or more VNFs, or may be deployed in a different host device or at a different physical location. The functions of SON/SDN controller155may include the selection of NFVI from among various NFVI available in network105(e.g., NFVI111or113) to host various devices, such as routers, gateways, switches, etc., and the instantiation of such devices.
For example, with respect to units123and124, SON/SDN controller155may download computer-executable/computer-readable instructions, code, and/or programs (broadly “configuration code”) for units123and124respectively, which when executed by a processor of the NFVI113, may cause the NFVI113to perform as a PE router, a gateway, a route reflector, a SGW, a MME, a firewall, a media server, a DNS server, a PGW, a GMSC, a SMSC, a CCE, and so forth. In one example, SDN controller155may download the configuration code to the NFVI113. In another example, SON/SDN controller155may instruct the NFVI113to load the configuration code previously stored on NFVI113and/or to retrieve the configuration code from another device in network105that may store the configuration code for one or more VNFs. The functions of SON/SDN controller155may also include releasing or decommissioning unit123and/or unit124when no longer required, the transferring of the functions of units123and/or124to different NFVI, e.g., when NFVI113is taken offline, and so on. In addition, in one example, SON/SDN controller155may represent a processing system comprising a plurality of controllers, e.g., a multi-layer SDN controller, one or more federated layer 0/physical layer SDN controllers, and so forth. For instance, a multi-layer SDN controller may be responsible for instantiating, tearing down, configuring, reconfiguring, and/or managing layer 2 and/or layer 3 VNFs (e.g., a network switch, a layer 3 switch and/or a router, etc.), whereas one or more layer 0 SDN controllers may be responsible for activating and deactivating optical networking components, for configuring and reconfiguring the optical networking components (e.g., to provide circuits/wavelength connections between various nodes or to be placed in idle mode), for receiving management and configuration information from such devices, for instructing optical devices at various nodes to engage in testing operations in accordance with the present disclosure, and so forth. In one example, the layer 0 SDN controller(s) may in turn be controlled by the multi-layer SDN controller. For instance, each layer 0 SDN controller may be assigned to nodes/optical components within a portion of the network105. In addition, these various components may be co-located or distributed among a plurality of different dedicated computing devices or shared computing devices (e.g., NFVI) as described herein. In one example, SON/SDN controller155may function as a self-optimizing network (SON) orchestrator that is responsible for activating and deactivating, allocating and deallocating, and otherwise managing a variety of network components. For instance, SON/SDN controller155may set and adjust configuration parameters for various routers, switches, firewalls, gateways, and so forth. In one example, one or more of networks160or networks170may comprise cellular access networks, and SON/SDN controller155may activate and deactivate antennas/remote radio heads, may steer antennas/remote radio heads, may allocate or deallocate (or activate or deactivate) baseband units in a baseband unit (BBU) pool, may add (or remove) one or more network slices, may set and adjust various configuration parameters for carriers in operation at the various cell sites, and may perform other operations for adjusting configurations of cellular access network components in accordance with the present disclosure.
As illustrated inFIG.1, network105may also include internal nodes131-135, which may comprise various components, such as routers, switches, route reflectors, etc., cellular core network, IMS network, and/or VoIP network components, and so forth. In one example, these internal nodes131-135may also comprise VNFs hosted by and operating on additional NFVIs. For instance, as illustrated inFIG.1, internal nodes131and135may comprise VNFs residing on additional NFVI (not shown) that are controlled by SON/SDN controller155via additional control links. However, at least a portion of the internal nodes131-135may comprise dedicated devices or components, e.g., non-SDN reconfigurable devices. Similarly, network105may also include components181and182, e.g., PE routers interfacing with networks160, and component185, e.g., a PE router which may interface with user device142. For instance, in one example, network105may be configured such that user device142(e.g., a CE router) is dual-homed. In other words, user device142may access network105via either or both of unit124and component185. As mentioned above, components183and184may comprise a serving gateway (SGW), a mobility management entity (MME), or the like. However, in another example, components183and184may also comprise PE routers interfacing with network(s)170, e.g., for non-cellular network-based communications. In one example, components181-185may also comprise VNFs hosted by and operating on additional NFVI. However, in another example, at least a portion of the components181-185may comprise dedicated devices or components. In one example, various components of network105and/or the system100may be configured to collect, enhance, and forward network operational data, e.g., to one or more of data storage and archiving systems198. In one example, the network operational data may include raw packets that may be copied from various routers, gateways, firewalls, or other components in the network105(e.g., nodes131-135, units121,123,124, etc.). In one example, the network operational data may comprise traffic flow data that includes information derived from the raw packets of various flows, such as packet header data (e.g., 5-tuple information, such as source IP address, destination IP address, source port, destination port, and transport layer protocol), packet size, packet arrival time, and so forth. In one example, the traffic flow data may be aggregated over a plurality of packets of a flow, or multiple flows. With respect to packet-level sampling, various first-level sampling components of system100, such as routers, gateways, firewalls, etc., may be configured to sample various packets at a particular packet sampling rate (or different packet sampling rates), and may forward either the sampled packets, or information regarding the sampled packets to one or more collectors (e.g., “data collection elements”). For instance, each of components181-184may be first-level (e.g., “first layer”) sampling components that may forward packets or information regarding packets to units121and123, comprising collectors. For instance, components181and182may forward to unit123, while components183and184may forward to unit121. In one example, the sampled packets or information regarding sampled packets may be organized by flow. For instance, sampled packets and/or information regarding sampled packets may be gathered and forwarded to collectors every one minute, every five minutes, etc.
In one example, one or more of components181-184may forward information regarding all packets handled by the respective component, while the collector(s) (e.g., units121and123) may then perform packet-level sampling by sampling from among the information regarding all of the packets. In one example, units121and123may forward sampled packets, or information regarding the sampled packets to data storage and archiving system(s)198. In one example, units121and123may perform flow-level sampling before forwarding information regarding the sampled packets to data storage and archiving system198. In any case, the particular configuration(s) of the first level sampling components181-184and the collector(s) (e.g., units121and123) may be under the command of SON/SDN controller155. The data storage and archiving systems198may comprise data repositories of various types of data of the system100, such as end-to-end Transmission Control Protocol (TCP) data, radio access network (RAN) data, Internet Protocol (IP) traffic data (e.g., packet and/or flow-level data such as discussed above), SDN status data (e.g., NFVI statuses and performance data, VM and/or VNF statuses and performance data), user equipment (UE) status data, usage data, etc., and so forth. For instance, one of the data storage and archiving systems198may obtain information regarding sampled packets for various flows. In one example, the collector(s) (e.g., units121and123) may have already performed flow-level sampling. In another example, one of the data storage and archiving systems198may initially obtain information regarding packets for all flows that are observed within the relevant portion of system100. The one of data storage and archiving systems198may organize the information regarding the sampled packets into a flow record. For instance, information regarding sampled packets may be organized by flow at the units121and123and forwarded to data storage and archiving systems198as one-minute records, 5-minute records, etc. The one of the data storage and archiving systems198may then aggregate these records over an entire flow into a flow record. In one example, a flow may be considered ended when there are no new packets observed for a particular flow for a particular duration of time, e.g., no new packets for the flow (e.g., identified by a 5-tuple, or the like) within a five minute interval, a ten minute interval, etc. In one example, the first-level sampling components, the collector(s), and the one of the data storage and archiving systems198may comprise a data distribution and/or stream processing platform, such as instances of Apache Kafka, Apache Pulsar, or the like. In one example, data storage and archiving systems198may represent one or more distributed file systems, e.g., a Hadoop® Distributed File System (HDFS™), or the like. Although the foregoing is described primarily with respect to Internet Protocol (IP) traffic data (e.g., packet and/or flow-level data such as discussed above), it should be understood that various network elements, such as routers, gateways, firewalls, NFVI, etc., VMs and/or VNFs, RAN components, such as base station equipment, cellular core network components, and so forth, may generate network operational data with regard to such components' own performances and statuses (e.g., processor utilization, memory utilization, temperature, throughput, packet loss ratio, packet delay, moving averages or weighted moving averages of any of the preceding examples, and so on).
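To further illustrate the flow-level organization described above, the following is a simplified, non-limiting Python sketch of how sampled packet information keyed by 5-tuple might be aggregated into flow records, with a flow considered ended once no new packets are observed within an idle interval. The field names, the aggregate_flows function, and the five-minute default timeout are illustrative assumptions and not elements required by the present disclosure.

def aggregate_flows(sampled_packets, idle_timeout=300):
    """sampled_packets: iterable of dicts with keys src_ip, dst_ip, src_port,
    dst_port, proto, size, arrival_time (assumed sorted by arrival_time).
    Returns a list of per-flow records keyed by 5-tuple."""
    open_flows = {}
    finished = []
    for pkt in sampled_packets:
        key = (pkt["src_ip"], pkt["dst_ip"], pkt["src_port"],
               pkt["dst_port"], pkt["proto"])
        flow = open_flows.get(key)
        # Close any flow that has been idle longer than the timeout.
        if flow and pkt["arrival_time"] - flow["last_seen"] > idle_timeout:
            finished.append(flow)
            flow = None
        if flow is None:
            flow = {"five_tuple": key, "packets": 0, "bytes": 0,
                    "first_seen": pkt["arrival_time"],
                    "last_seen": pkt["arrival_time"]}
            open_flows[key] = flow
        flow["packets"] += 1
        flow["bytes"] += pkt["size"]
        flow["last_seen"] = pkt["arrival_time"]
    finished.extend(open_flows.values())
    return finished

In practice, this kind of aggregation could be performed incrementally over one-minute or five-minute records rather than over a fully collected packet list, as noted above.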
Similar to the foregoing example, these components may also forward such network operational data in raw, sampled, or aggregate form to collectors (e.g., units121and123, also referred to herein as “data collection elements”), which may further forward such data in raw, sampled, or aggregate form to one or more of data storage and archiving systems198(e.g., “data sources” in a “data sources layer” as referred to herein). In one example, the collectors may comprise vendor specific monitoring equipment, e.g., to specifically collect network operational data from network elements manufactured by such vendor(s) an OSS-RC, open-source or proprietary third party management applications, network operator provided data collection system(s), and so on. In accordance with the present disclosure, data source management system190may operate between data storage and archiving system(s)198(e.g., “data sources” or a “data source layer”) and one or more data consumers, e.g., a “data processing layer” comprising data processing module(s)192and/or data consuming applications or services (such as SON/SDN controller155). For instance, a data consumer may comprise one of the data processing modules192for generating a performance indicator regarding a percentage of video traffic in encrypted traffic in a portion of network105. To illustrate, the module may apply a machine learning algorithm (MLA) that analyzes a portion of traffic data and determines whether the portion contains video data (e.g., a binary classifier) and/or determines a category of the encrypted traffic within the portion from among a plurality of possible categories (e.g., video streaming, audio streaming, gaming, email, Voice over IP (VoIP), video call, web browsing, etc.). Thus, the module may obtain relevant traffic data from one or more of the data storage and archiving systems198. In the absence of the present data source management system190, the one of the data processing modules192may obtain the data directly from the one or more of the data storage and archiving systems198. However, in accordance with the present example, data requests, such as from the one of the data processing modules192for determining a percentage of video traffic, may be handled by data source management system190which may verify the integrity of data from one or more data sources, e.g., the one or more of the data storage and archiving systems198. In this regard, it should be noted that data storage and archiving systems198may include multiple data sources with the same data, or alternate data that is also useable with respect to the same performance indicator of interest. As such, data source management system190may maintain, for each performance indicator, a list of available data sources, the statuses of such data sources (e.g., “safe,” “warning,” or “faulty”) and the designations of such data sources (e.g., in one example, “primary,” “secondary,” “tertiary,” “standby” or “waiting list,” and “faulty”). Accordingly, data source management system190may provide the requesting data consumers, such as the one of the data processing modules192for determining a percentage of video traffic, with identifications of “safe” data sources, identifications of data sources with a “warning” status, and so on. The data source management system190is illustrated and described at a high-level in connection with the example ofFIG.1. For instance, modules or components of the data source management system190, and so forth are omitted from illustration inFIG.1. 
Thus, these and other aspects of data source management system190are described in greater detail below in connection with the examples ofFIGS.2-8. It should also be noted that while various data within the data storage and archiving systems198may be considered as “performance indicators,” the data stored in these “data sources” may be referred to herein as “network operational data,” whereas the outputs of data processing module(s)192may be referred to herein as “performance indicators” or “key performance indicators,” which may be calculated from the network operational data, e.g., obtained from data sources that have been verified and provided to the data processing module(s)192via the data source management system190. It should also be noted that in one example, various applications/services, such as SON/SDN controller155, network monitoring and alerting systems, user devices of network personnel, and so forth, may also obtain results of the data processing layer. As just one example, SON/SDN controller155may obtain the results of data processing by one of data processing modules192that is tasked with determining a percentage of video traffic in a portion of network105. SON/SDN controller155may similarly obtain the results of data processing by various other data processing modules192of a data processing layer, such as metrics of utilization level of RAN equipment at one or more cell sites, a demand prediction for a content distribution network (CDN) node, etc. In response, SON/SDN controller155may configure/reconfigure aspects of the system100based on the performance indicators from the data processing layer, such as re-routing at least a portion of the traffic in a selected portion of the system100, load-balancing at least a portion of the traffic in the selected portion of the system100, offloading at least a portion of the traffic in the selected portion of the system100, applying a denial-of-service mitigation measure in the selected portion of the system100, and so forth. For instance, if a percentage of video traffic in a portion of network105exceeds a threshold, SON/SDN controller155may offload a portion of the video traffic or other traffic by instantiating one or more new VMs/VNFs (e.g., a new CDN edge node), or the like. For example, if the percentage of video traffic exceeds a threshold, the quality of experience (QoE) of users of other types of traffic may degrade. Alternatively, or in addition, the QoE experienced by end users of various video streams may also suffer if left unaddressed. In each example, the adjusting may include allocating at least one additional resource of the system100based upon the performance indicator(s) and/or removing at least one existing resource of the communication network based upon the performance indicator(s), such as adding or removing a VM/VNF at NFVI111. In one example, the processing system may reconfigure at least one allocated resource of the communication network differently based upon the at least one aggregate statistic that is determined, i.e., without having to allocate a new resource of the communication network. An additional resource that may be added or an existing resource that may be removed (e.g., deactivated and/or deallocated) or reconfigured may be a hardware component of the network, e.g., a baseband unit, a remote radio head, NFVI, such as NFVI111and113, etc., or may be provided by hardware, e.g., bandwidth on a link, line card, router, switch, or other processing nodes, a CDN storage resource, a VM and/or a VNF, etc.
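As one simplified illustration of such closed-loop adjustment, the following Python sketch shows how a performance indicator from the data processing layer (here, an assumed fraction of traffic classified as video) might trigger an offload action. The threshold value and the controller's instantiate_vnf() interface are hypothetical placeholders introduced for clarity and are not an actual controller API.

VIDEO_TRAFFIC_THRESHOLD = 0.6  # assumed example threshold

def act_on_video_share(video_share, controller):
    """video_share: fraction of traffic classified as video in a network portion.
    controller: object assumed to expose an instantiate_vnf() method."""
    if video_share > VIDEO_TRAFFIC_THRESHOLD:
        # Offload a portion of the traffic, e.g., by adding a CDN edge node VNF.
        controller.instantiate_vnf(vnf_type="cdn_edge_node")
        return "offload_triggered"
    return "no_action"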
It should be noted that the system100has been simplified. In other words, the system100may be implemented in a different form than that illustrated inFIG.1. For example, the system100may be expanded to include additional networks, such as a network operations center (NOC) network, and additional network elements (not shown) such as border elements, routers, switches, policy servers, security devices, gateways, a content distribution network (CDN) and the like, without altering the scope of the present disclosure. In addition, system100may be altered to omit various elements, substitute elements for devices that perform the same or similar functions and/or combine elements that are illustrated as separate devices. In still another example, SON/SDN controller155, all or some of the components of data source management system190, and/or other network elements may comprise functions that are spread across several devices that operate collectively as a SDN controller, a data distribution platform, a stream processing system, a data storage system, etc. Thus, these and other modifications of the system100are all contemplated within the scope of the present disclosure. FIG.2illustrates an example architecture200for network data collection and consumption in a telecommunication network comprising multiple layers210-260. For instance, raw network operational data may be first generated in a network elements layer210. Network elements are physical network entities or logical network entities operating on one or more physical devices, e.g., eNodeBs, mobility management entities (MMEs), etc. Data collection elements in a next layer220may be responsible for collecting raw data from network elements and may comprise, for example, OSS-RC, network element vendor-specific management applications, or third-party management applications. After collection, the data may be transferred to be stored as data sources, which may be referred to as a data sources layer230. The data stored in the data sources may be in the form of files or databases stored on physical or virtual servers. The data sources can include end-to-end Transport Control Protocol (TCP) data, radio access network (RAN) data, Internet Protocol (IP) transport data, and so forth. In accordance with the present disclosure, a data integrity layer240, e.g., a data source management system, operates between the data sources layer230and a data processing layer250. For instance, the data stored in data sources may be retrieved and processed in the data processing layer250(e.g., as directed by the data source management system of the data integrity layer240), and eventually consumed by upper-layer services and applications in layer260. Further details of an example data source management system of the data integrity layer240are illustrated inFIG.3and described in greater detail below. FIG.3illustrates an example data source management system340within an example architecture300for network data collection and consumption in a telecommunication network. For instance, as shown inFIG.3, the data source management system340sits between a data sources layer330, having data sources (DSs)331-333, and a data processing layer350, having performance indicator (PI) modules351-353. The data sources layer330and data processing layer350may be the same as or similar to the counterparts ofFIG.2. 
In the present example, the data source management system340may comprise four primary modules: a data source selector (DSS)341, a data source mediator (DSM)342, a data source fault detector (DSFD)343, and a machine learning engine (MLE)344. Collectively, the data source management system340determines and detects invalid data from the data sources layer330automatically and manages invalid data for the data processing layer350. A redundancy and mediation table (red-med table345) may maintain data source (DS) metric information that may be used by data source selector (DSS)341, data source mediator (DSM)342, and data source fault detector (DSFD)343. The information in the red-med table345may include, for each specified performance indicator (PI), e.g., “key performance indicators” (KPIs): all alternative data sources, and/or ordered data sources (e.g., primary, secondary, tertiary, waiting list, and faulty data sources). For each specified performance indicator, each data source is flagged to indicate a level of correctness: [safe] 100% valid, [faulty] contains faults, and [warning] validity not guaranteed, may contain faults. A data source with the level of [safe] may be permitted to be consumed by applications, but the level [faulty] triggers fault alerting and reporting. An example schema in the red-med table345is shown in table400ofFIG.4. For instance, each performance indicator (PI) (e.g., KPI_1to KPI_3) may have an entry/row in the table400. The columns indicate, for each performance indicator, a primary data source (P-DS), a secondary data source (S-DS), a tertiary data source (T-DS) (if any), and a data source waiting list containing zero, one, or multiple data sources that are available with regard to the performance indicator, but are not currently in active use. Lastly, a faulty data source column contains any data sources available with regard to a performance indicator that have known [faulty] statuses. In addition, it should be noted that the other four columns may also contain, for each data source, a status of the data source (e.g., [safe], [warning], or [faulty]). In one example, data source selector (DSS)341is responsible for obtaining requests from performance indicator modules351-353in the data processing layer350. The requests may be passed to the data source mediator (DSM)342that works with the data source fault detector (DSFD)343to select the best data source(s) for each request. For instance, the DSS341may pull data from one of data sources331-333for data source mediator (DSM)342to construct an input dataset for checking the validity of the respective data source. In one example, data source selector (DSS)341may also obtain and provide data from one or more selected data sources to a requesting PI module. However, in another example, the requesting PI module from the data processing layer350may specify whether to ask data source selector (DSS)341to pull and return data from the data sources, or to be directed to the appropriate data source(s) by the data source selector (DSS)341after data source verification by the data source mediator (DSM)342in conjunction with the data source fault detector (DSFD)343. To illustrate, data source mediator (DSM)342may be responsible for initializing the red-med table345(e.g., based on data processing efficiency, user preference, etc.) and may work with data source fault detector (DSFD)343to determine data source ordering in the red-med table345for each performance indicator of interest.
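For purposes of illustration only, the red-med table schema summarized above may be sketched as a simple in-memory data structure such as the following Python example; the class name RedMedEntry, the status strings, and the example identifiers are assumptions introduced here for clarity and are not part of the table400itself.

from dataclasses import dataclass, field
from typing import Optional, List, Dict

SAFE, WARNING, FAULTY = "safe", "warning", "faulty"

@dataclass
class RedMedEntry:
    primary: Optional[str] = None       # P-DS identifier
    secondary: Optional[str] = None     # S-DS identifier
    tertiary: Optional[str] = None      # T-DS identifier (if any)
    waiting_list: List[str] = field(default_factory=list)
    faulty: List[str] = field(default_factory=list)
    status: Dict[str, str] = field(default_factory=dict)  # per-DS flag

# One entry per performance indicator of interest.
red_med_table: Dict[str, RedMedEntry] = {
    "KPI_1": RedMedEntry(primary="DS_331", secondary="DS_332",
                         waiting_list=["DS_333"],
                         status={"DS_331": SAFE, "DS_332": SAFE, "DS_333": SAFE}),
}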
The data source mediator (DSM)342may also notify data source selector (DSS)341of the selected data source(s) (e.g., primary data source, secondary data source, etc.) with flag(s) for the data source(s) for each request from the modules of data processing layer350. In one example, the data source mediator (DSM)342may also pass input datasets from the data source selector (DSS)341to the data source fault detector (DSFD)343for checking validity. Depending on the number of available data sources for a given data request, the actions taken by data source mediator (DSM)342will utilize the determinations from data source fault detector (DSFD)343of the statuses of different data sources. For instance, example DSM action tables510,520, and600for different cases are shown inFIGS.5and6. To illustrate, for a single data source that is available with respect to a performance indicator (PI) (e.g., in reference to table510ofFIG.5), the data source fault detector (DSFD)343may return a status of [safe], [warning], or [faulty] depending upon the results of the machine learning fault detection applied by data source fault detector (DSFD)343via machine learning engine344. For the case of [warning], the data source may be used at the discretion of the requesting PI module, with the understanding that the result(s) may not be accurate. In an example where there are two data sources (e.g., in reference to table520ofFIG.5), a [faulty] or [warning] determination with regard to a primary data source may result in a secondary data source becoming the primary data source and the primary data source being moved to a “secondary” designation with a flag of [warning], or to a “faulty” designation. Other changes to the statuses and/or designations of the data sources depending upon the outcomes from the machine learning fault detection applied by data source fault detector (DSFD)343via machine learning engine344are shown in the table520. Similarly, with more than two available data sources (e.g., with reference to table600ofFIG.6), the designations of different data sources may be changed depending upon the outcomes from the machine learning fault detection applied by data source fault detector (DSFD)343via machine learning engine344. In general, primary, secondary, or tertiary data sources that are determined to be [faulty] or [warning] may be replaced by another data source that is non-faulty. For instance, when a primary or secondary data source is determined to have a [warning] status and there are no available data sources that have a [safe] status, the data source may remain at a current designation with a flag of [warning] returned to data source selector (DSS)341to provide to the requesting module. Other changes to the statuses and/or designations of the data sources depending upon the outcomes from the machine learning fault detection applied by data source fault detector (DSFD)343via machine learning engine (MLE)344are shown in the table600. Lastly, data source fault detector (DSFD)343is responsible for checking validity of a specific data source requested by data source mediator (DSM)342by running a validity check via machine learning engine (MLE)344and replying with one of the three states: [safe], [warning], or [faulty]. An example flowchart700for a data source validity check via machine learning engine (MLE)344and data pattern table346is illustrated inFIG.7.
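The following is a minimal, non-limiting Python sketch of one possible mediation rule for the two-data-source case summarized above (in reference to table520), in which a non-[safe] primary data source may be demoted and the secondary data source promoted. The dictionary keys and the simplified promotion rules are assumptions and do not reproduce the full action tables ofFIGS.5and6.

def mediate_two_sources(entry, primary_status, secondary_status):
    """entry: dict with 'primary', 'secondary', and 'faulty' (list of DS ids).
    primary_status/secondary_status: one of 'safe', 'warning', 'faulty'."""
    if primary_status == "safe":
        return entry  # no change needed
    old_primary = entry["primary"]
    if secondary_status == "safe":
        # Promote the secondary; demote or retire the old primary.
        entry["primary"] = entry["secondary"]
        if primary_status == "faulty":
            entry["secondary"] = None
            entry["faulty"].append(old_primary)
        else:
            # [warning]: keep the old primary as a flagged secondary.
            entry["secondary"] = old_primary
    # If neither source is [safe], the current primary may be kept and a
    # [warning] flag returned to the requesting module (at its discretion).
    return entry

example = {"primary": "DS_331", "secondary": "DS_332", "faulty": []}
mediate_two_sources(example, primary_status="faulty", secondary_status="safe")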
In one embodiment, the objective of machine learning engine (MLE)344is to classify the data set contained in a request received from data source fault detector (DSFD)343into one of three possible categories: (1) correct data pattern (CDP), (2) faulty data pattern (FDP), or (3) unknown data pattern (UDP). Before machine learning engine (MLE)344starts to classify the data from a request, there may be preprocessing steps to create and update the machine learning (ML)/artificial intelligence (AI) models used. For instance, a training data set may be labeled based on a set of criteria for validity. This set of validity criteria may be determined based on domain knowledge of each KPI from each data source of interest. This set of validity criteria is regularly updated as the data source management system340ofFIG.3processes more data and more data patterns are discovered. To illustrate, in one example, a training data set may be labeled into two groups: “correct data pattern” (CDP) and “faulty data pattern” (FDP). In one example, machine learning engine (MLE)344creates a classification ML/AI model based on the training data set (e.g., a binary classifier). It should be noted that different ML/AI models may be selected for each KPI and/or for each data source, e.g., where the selection may depend on the characteristics of the KPI and/or the DS. Supervised learning models, e.g., based on logistic regression classification, support vector machine (SVM) classification, and so forth, may be adopted for some KPIs, and deep-learning models, e.g., long short-term memory (LSTM)-based models, or the like, may be adopted for other KPIs. In one example, the machine learning engine (MLE)344may select a particular model by experimenting with multiple models and selecting a model with a highest accuracy. However, in another example, alternative or additional selection criteria (such as cost of licensing and/or use of a particular model, speed of processing, etc.) may be applied. In one example, machine learning engine (MLE)344stores the selected ML/AI models (e.g., binary classifiers) in data pattern table346. In one example, if no ML/AI model is found to meet the selection criteria for a KPI and/or for a DS, then a data source validity check is not supported for the DS (or for the DS with respect to at least one KPI; e.g., in one example, it is possible that other model(s) may be available for a validity check of the DS with respect to other KPI(s)). Next, for the portion of the training data set of each DS (or each DS with respect to a specific KPI) that is labeled as faulty data patterns (FDPs), the machine learning engine (MLE)344may create a clustering model, e.g., a K-means clustering model or the like, that meets specified requirements (e.g., accuracy, statistical robustness, etc.), and save the clustering model and/or the attributes of the model in data pattern table346. In addition, from the portion of the training data set labeled as FDPs, machine learning engine (MLE)344may select a smaller set of samples which are to be used for clustering in Stage 2 (described below).
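To illustrate the preprocessing described above, the following simplified Python sketch (using scikit-learn) trains a binary classifier on data patterns labeled as CDP or FDP and fits a K-means clustering model on the FDP portion. Feature extraction, per-KPI model selection, and persistence into the data pattern table are omitted, and the function and parameter names are assumptions rather than a reference implementation.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

def build_models(features, labels, n_fault_clusters=3):
    """features: (n_samples, n_features) numeric array of data patterns.
    labels: 1 = correct data pattern (CDP), 0 = faulty data pattern (FDP)."""
    X, y = np.asarray(features, dtype=float), np.asarray(labels)
    # Stage 1 model: a binary classifier (logistic regression as one choice).
    classifier = LogisticRegression(max_iter=1000).fit(X, y)
    # Stage 2 model: cluster the faulty portion of the training data.
    fdp_samples = X[y == 0]
    clusterer = KMeans(n_clusters=n_fault_clusters, n_init=10).fit(fdp_samples)
    # In a full system these would be persisted in the data pattern table.
    return {"classifier": classifier,
            "fdp_clusterer": clusterer,
            "fdp_samples": fdp_samples}

Other model families (SVM, LSTM-based models, etc.) could be substituted per KPI or per data source, as noted above.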
For instance, machine learning engine (MLE)344may select these samples so as to fully represent the faulty data types, e.g., when the clustering model created in the previous step is applied to the selected sample data set, the attributes of the clusters (e.g., number of clusters, relative positions within a feature space (such as cluster centroid location), etc.), should remain the same as those of the clusters that are constructed based on the full FDP portion of the training data set. The selected FDP samples may also be saved into the data pattern table346. After machine learning engine (MLE)344saves the foregoing into data pattern table346, machine learning engine (MLE)344is ready to check the input data from data source fault detector (DSFD)343. For instance, in the example ofFIG.7, the flowchart700illustrates a three-stage workflow. The flowchart700begins at operation701with data source mediator (DSM)342requesting data source fault detector (DSFD)343to check a data source. At operation702, data source fault detector (DSFD)343requests machine learning engine (MLE)344to check the data source. Next the flowchart700proceeds to operation703where Stage 1, binary classification for determining whether an input data pattern is consistent with known/labeled “correct data patterns” (CDPs) is applied. For instance, machine learning engine (MLE)344may receive the request from data source fault detector (DSFD)343, retrieve the trained ML/AI model (e.g., a binary classifier) for the specified DS (and/or for the DS with respect to a particular KPI) in the request, and apply the model to the input data in the request for binary classification. If the input data is classified as CDP (e.g., “consistent” with known CDPs) as an output of the binary classifier, machine learning engine (MLE)344may return a status of [safe] to data source fault detector (DSFD)343at operation705. At operation720, data source fault detector (DSFD)343may forward the status determination to the data source mediator (DSM)342. At operation703, data source mediator (DSM)342may take actions depending upon the results (e.g., in accordance with one of the tables510,520, or600ofFIGS.5and6). On the other hand, if the input data is not classified as “correct data pattern” (CDP) at operation704(for instance, the input data may be classified as “not consistent”), machine learning engine (MLE)344may proceed to Stage 2: clustering for identifying/confirming FDP Data, e.g., operation706. To illustrate, machine learning engine (MLE)344may retrieve the stored FDP samples (the smaller, representative set) and the associated clustering model and attributes from data pattern table (346). Next, machine learning engine (MLE)344may combine the representative FDP samples with the input data from Stage 1 and apply the ML/AI clustering model, e.g., K-means clustering model, to (1) the original FDP sample set, and (2) the combined data set of the FDP sample set and the input data. Machine learning engine (MLE)344may then compare key attributes of the resultant clusters from the two data sets, e.g., a number of clusters (or one or more attributes comprising: a number of clusters, the positions of the clusters in a feature space, etc.). If the attributes of the resultant clusters are considered the same, e.g., the numbers of clusters are the same, the positions of the centroids are the same or close to the same (such as within a threshold distance), the input data may be determined to be a “faulty data pattern” (FDP) and classified as such. 
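A simplified, non-limiting Python sketch of the Stage 2 comparison might proceed as follows, clustering the representative FDP samples with and without the new input data and comparing centroid positions. The fixed cluster count, the nearest-centroid matching, and the distance tolerance are assumptions standing in for the stored clustering model and attributes described above.

import numpy as np
from sklearn.cluster import KMeans

def stage2_check(fdp_samples, input_data, n_clusters=3, centroid_tol=0.5):
    """fdp_samples: representative faulty data patterns, (m, n_features).
    input_data: the new data pattern (feature vector) to be checked."""
    fdp = np.asarray(fdp_samples, dtype=float)
    base = KMeans(n_clusters=n_clusters, n_init=10).fit(fdp)
    combined = np.vstack([fdp, np.atleast_2d(input_data)])
    merged = KMeans(n_clusters=n_clusters, n_init=10).fit(combined)
    # Match each original centroid to its nearest centroid in the combined run.
    shifts = [np.min(np.linalg.norm(merged.cluster_centers_ - c, axis=1))
              for c in base.cluster_centers_]
    if max(shifts) <= centroid_tol:
        return "faulty"   # the input blends into known faulty data patterns
    return "warning"      # unknown or unrecognized data pattern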
In other words, machine learning engine (MLE)344may return a status of [faulty] to data source fault detector (DSFD)343at operation708. Conversely, if attributes of the resultant clusters are considered different, e.g., the numbers of clusters are different, the positions of the centroids are not the same or are not within a threshold distance, etc., the input data may be considered as an unknown or unrecognized data pattern. In other words, machine learning engine (MLE)344may return a status of [warning] to data source fault detector (DSFD)343at operation709. In addition, for an unknown or unrecognized data pattern, machine learning engine (MLE)344may enter Stage 3 at operation710. For instance, operation710may comprise an optional step for examining and labeling an unknown data pattern by an expert, e.g., network personnel. For instance, network personnel may be notified of the unknown data pattern, thereby allowing the network personnel to examine the unknown or unrecognized data pattern and to label the data as “correct data pattern” (CDP) or “faulty data pattern” (FDP). At operation711, machine learning engine (MLE)344may add the labeled data into the training data and update the ML/AI models (e.g., for either or both of Stage 1 and Stage 2 ML/AI models). Returning to the example ofFIG.3, data source selection may operate in three modes: 1) reactive mode (open-loop applications) in which data source selector (DSS)341works with data source mediator (DSM)342and data source fault detector (DSFD)343at runtime, 2) proactive mode (closed-loop applications) in which data source selector (DSS)341retrieves results directly from red-med table345for a given request, and in which data source mediator (DSM)342and data source fault detector (DSFD)343work in the background and update red-med table345periodically, and 3) hybrid mode, which may comprise a mix of the above two modes depending on the data request criteria, e.g., a latency requirement. For instance, for hybrid mode, if a request is not time sensitive, a runtime check may be applied. Otherwise, a current entry in red-med table345may be relied upon. To illustrate, the workflow of the proactive mode may comprise data source selector (DSS)341receiving a data request from a PI module in data processing layer350and querying for a primary data source (P-DS) from red-med table345for the performance indicator of the request. Optionally, data source selector (DSS)341may query for data from the selected primary data source and/or any secondary data source, tertiary data source, etc. In one example, a requesting PI module may indicate whether only a single data source is being requested or whether multiple data sources (e.g., secondary data source, etc.) should also be returned. The data source selector (DSS)341may forward at least the P-DS status ([safe], [warning], or [faulty]) and any other data sources (if requested) to the requesting performance indicator module; DSS341may also forward the data from the data source(s), if requested. In the background, to keep red-med table345up to date, a background update procedure may comprise data source mediator (DSM)342querying for the primary data source (P-DS) from red-med table345for each performance indicator of interest. Data source mediator (DSM)342may request data source selector (DSS)341to provide input data that may then be passed to data source fault detector (DSFD)343along with a request to check the validity of the primary data source.
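As one simplified illustration of the hybrid mode described above, the following Python sketch selects between the proactive path (answering from the current red-med table entry) and the reactive path (a runtime validity check) based on a latency criterion in the request. The latency threshold, the dictionary layout, and the runtime_check callable are hypothetical placeholders introduced for clarity.

def select_data_source(request, red_med_table, runtime_check,
                       latency_budget_s=1.0):
    """request: dict with 'kpi' and optionally 'max_latency_s'.
    runtime_check(ds_id) -> 'safe', 'warning', or 'faulty'."""
    entry = red_med_table[request["kpi"]]
    primary = entry["primary"]
    if request.get("max_latency_s", float("inf")) < latency_budget_s:
        # Time-sensitive request: rely on the periodically refreshed table.
        return primary, entry["status"][primary]
    # Otherwise run a fresh validity check on the primary data source.
    return primary, runtime_check(primary)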
For instance, data source fault detector (DSFD)343may run a validity check via machine learning engine344. Data source fault detector (DSFD)343may return the resulting status determination of the primary data source to data source mediator (DSM)342. Data source mediator (DSM)342may update the red-med table345according to the returned results (e.g., per one of the action tables510,520, or600). The background update procedure may be performed periodically based on the reporting granularity of data source and/or specific application requirements. For example, if a data source generates data every hour, the update procedure for the data source can be performed hourly; if a data processing module requests a performance indicator every day, the update procedure for the data source (and/or for the data source with respect to the particular performance indicator) can be performed daily. In addition, the machine learning engine (MLE)344may update the ML/AI models stored in data pattern table346on the same or a different schedule. Additionally, when a set of unknown data pattern(s) is labeled, e.g., by network personnel, the ML/AI models may also be updated to reflect the new, known data patterns. In the reactive mode, data source selector (DSS)341may receive a data request from a data processing module (one of performance indicator modules351-353) in data processing layer350. Data source selector (DSS)341may request results from data source mediator (DSM)342for the performance indicator in the request. Data source mediator (DSM)342may retrieve the identification of the primary data source (P-DS), from red-med table345for the performance indicator in the request. Next, data source mediator (DSM)342may request data source selector (DSS)341to obtain and provide an input data set from the primary data source (P-DS) to be used for a validity check by data source fault detector (DSFD)343. Thus, data source mediator (DSM)342may pass the input data set(s) to data source fault detector (DSFD)343with a request to check the primary data source. Data source fault detector (DSFD)343may then apply the input data set to the machine learning engine344and return the resulting status of the primary data source to the data source mediator (DSM)342. If the primary data source is [safe], data source mediator (DSM)342may return the identification of the primary data source and an indication of the status to data source selector (DSS)341. However, if the primary data source is [faulty] or [warning], data source mediator (DSM)342may update red-med table345(e.g., per one of the action tables510,520, or600). Optionally, data source selector (DSS)341may obtain data from the identified primary data source. Data source selector (DSS)341forwards the identification of the primary data source and an indicator of the status to the requesting data processing module from data processing layer350. In one example, data source selector (DSS)341also forwards the data from the data source, if requested by the requesting data processing module and obtained by DSS341. It should again be noted that the hybrid mode is a mixed use of both the proactive mode and reactive mode. In one example, data source selector (DSS)341can decide to retrieve an identification of a primary data source from the red-med table345or data source mediator (DSM)342, depending on request criteria, e.g., a latency requirement, etc. 
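The background update procedure described above might be sketched, in simplified and non-limiting form, as a periodic re-check of each primary data source on a period matched to its reporting granularity; the check_validity callable, the default one-hour period, and the dictionary layout below are illustrative assumptions, and a real deployment would likely drive this from a scheduler rather than ad hoc calls.

import time

def background_update_once(red_med_table, check_validity, granularity_s,
                           last_checked):
    """Re-check each primary data source whose reporting period has elapsed.
    granularity_s: dict mapping data source id -> reporting period in seconds.
    last_checked: dict mapping data source id -> timestamp of last check."""
    now = time.time()
    for kpi, entry in red_med_table.items():
        ds = entry["primary"]
        if now - last_checked.get(ds, 0.0) >= granularity_s.get(ds, 3600):
            # check_validity returns 'safe', 'warning', or 'faulty'.
            entry["status"][ds] = check_validity(ds)
            last_checked[ds] = now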
FIG.8illustrates a flowchart of an example method800for replacing a first data source with a replacement data source as an active data source for communication network monitoring in response to verifying an invalid data pattern of the first data source, in accordance with the present disclosure. In one example, the method800is performed by a processing system (such as data source management system190ofFIG.1; similarly, data source management system340ofFIG.3) or by one or more components thereof (e.g., a processor, or processors, performing operations stored in and loaded from a memory), or by a data source management system in conjunction with other components such as routers, switches, firewalls, or other first-level sampling components, collector(s), an SON orchestrator and/or SDN controller, data sources, such as data storage and archiving systems198, performance indicator modules353/data processing modules192, etc. In one example, the steps, functions, or operations of method800may be performed by a computing device or system900, and/or processor902as described in connection withFIG.9below. For instance, the computing device or system900may represent any one or more components of the system100, the architecture300, etc. that is/are configured to perform the steps, functions and/or operations of the method800. Similarly, in one example, the steps, functions, or operations of method800may be performed by a processing system comprising one or more computing devices collectively configured to perform various steps, functions, and/or operations of the method800. For instance, multiple instances of the computing device or processing system900may collectively function as a processing system. For illustrative purposes, the method800is described in greater detail below in connection with an example performed by a processing system. The method800begins in step805and may proceed to one of the optional steps810,820, or825, or to step830. At optional step810, the processing system may apply a clustering model, e.g., a clustering algorithm, to a (second) input data set comprising a plurality of sample invalid data patterns to obtain a (second) plurality of clusters. In one example, the plurality of sample invalid data patterns comprises a selection of invalid data patterns from among a larger set of invalid data patterns for a first data source from among a plurality of data sources associated with a performance indicator (e.g., associated with at least one component or aspect of a communication network). It should also be noted that although the terms, “first,” “second,” “third,” etc., are used herein, the use of these terms is intended as labels only. Thus, the use of a term such as “third” in one example does not necessarily imply that the example must in every case include a “first” and/or a “second” of a similar item. In other words, the use of the terms “first,” “second,” “third,” and “fourth,” does not imply a particular number of those items corresponding to those numerical values. In addition, the use of the term “third,” for example, does not imply a specific sequence or temporal relationship with respect to a “first” and/or a “second” of a particular type of item, unless otherwise indicated. At optional step815, the processing system may verify that the selection of invalid data patterns is representative of the larger set of invalid data patterns. In one example, the verifying comprises determining that the (second) plurality of clusters is the same as a (third) plurality of clusters.
For instance, the (third) plurality of clusters may be obtained by applying the clustering model to the larger set of invalid data patterns. At optional step820, the processing system may obtain a request for data relating to the performance indicator from a requesting computing system. For instance, the requesting computing system may comprise one of the performance indicator modules/data processing modules, e.g., of a data processing layer, or may comprise a consuming application or service, such as SON/SDN controller, a network monitoring and alerting system, one or more user devices of network personnel, etc. At optional step825, the processing system may obtain a request to verify the first data source. In one example, the request to verify the first data source is associated with a request for data relating to the performance indicator (e.g., that may be obtained at optional step820). For instance, the processing system may operate in the “reactive” or “hybrid” modes described above. For instance, the processing system may comprise a data source fault detector (DSFD), and the request to verify the first data source may be received from a data source mediator (DSM) and/or a data source selector (DSS), such as illustrated inFIG.3. However, it should be noted that in another example, the processing system performing the method800may comprise all of the example components of a data source management system as described herein. At step830, the processing system applies a binary classifier to detect whether a first data pattern of the first data source is consistent with prior data patterns of the first data source that are labeled as correct data patterns from one or more time periods prior to the first data pattern. For instance, in one example, the binary classifier is trained based upon the prior data patterns, e.g., to generate an output comprising a determination of whether an input data pattern either is or is not consistent with the prior data patterns. In one example, the first data source may comprise one of a plurality of data sources associated with the performance indicator of the communication network. In addition, in one example, the first data source may comprise an active data source. For instance, the plurality of data sources associated with the performance indicator may comprise a primary data source, that is an active data source, and at least a secondary data source. In one example, the secondary data source may also be an active data source. In one example, the first data source comprises the primary data source, or one of the at least the secondary data source. In one example, the plurality of data sources associated with the performance indicator further comprises at least one standby/non-active data source (e.g., a “wait list” data source). At step835, the processing system determines, via the binary classifier, that the first data pattern is not consistent with the prior data patterns of the first data source that are labeled as correct data patterns. In other words, the output of the binary classifier may be a classification of the first data pattern as “not consistent” (e.g., anomalous). In one example, steps830and835may comprise the same or similar operations as described in connection with Stage 1 of the flowchart700ofFIG.7. 
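For illustration only, the application of the binary classifier at steps830-835might resemble the following minimal Python sketch, in which a fitted classifier returns whether a new data pattern is consistent with the labeled correct data patterns; the feature-vector representation and the 1/0 label convention are assumptions introduced here.

import numpy as np

def is_consistent(classifier, data_pattern):
    """classifier: fitted binary classifier (e.g., from scikit-learn).
    data_pattern: numeric feature vector representing the new data pattern."""
    label = classifier.predict(np.atleast_2d(data_pattern))[0]
    # Assumed convention: 1 = consistent with correct data patterns (CDP).
    return bool(label)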
At step840, the processing system applies a clustering model, e.g., a clustering algorithm, to a first input data set comprising a combination of: (1) the first data pattern and (2) a plurality of invalid data patterns of the first data source, in order to obtain a first plurality of clusters. In one example, step840may be commenced in response to determining that the first data pattern is “not consistent” at step835. At step845, the processing system verifies that the first data pattern is an invalid data pattern for the first data source when the first plurality of clusters is the same as a second plurality of clusters. For instance, the second plurality of clusters may be generated by applying the clustering model to a second input data set comprising the plurality of sample invalid data patterns. For example, the second plurality of clusters may be generated per optional step810above. In one example, steps840and845may comprise the same or similar operations as described in connection with Stage 2 of the flowchart700ofFIG.7. At step850, the processing system replaces the first data source with a replacement data source as an active data source from among the plurality of data sources in response to verifying that the first data pattern is an invalid data pattern for the first data source (e.g., when such a replacement data source is available). For instance, step850may comprise an operation, or operations, in accordance with action table520ofFIG.5or action table600ofFIG.6. At optional step855, the processing system may direct the requesting computing system to access the replacement data source to obtain the data relating to the performance indicator. For instance, in an example in which steps830-850are performed in response to a request obtained at optional step820from a requesting computing system, the processing system may identify the replacement data source as a primary data source (or other active data source, such as a secondary data source that is also “active”) from which the requesting computing system may access the data associated with the performance indicator. At optional step860, the processing system may obtain the data relating to the performance indicator from the replacement data source. For instance, in one example, the request from the requesting computing system may specify that the processing system should obtain and provide the data relating to the performance indicator, e.g., rather than directing the requesting computing system to the appropriate data source(s). However, in another example, the request to provide the data may be implied in the request obtained at optional step820. At optional step865, the processing system may provide the data relating to the performance indicator to the requesting computing system. In particular, optional step865may be performed following and in conjunction with optional step860. Following step850, or one of the optional steps855or865, the method800proceeds to step895where the method800ends. It should be noted that the example method800may be expanded to include additional steps, or may be modified to replace steps with different steps, to combine steps, to omit steps, to perform steps in a different order, and so forth.
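As a hedged sketch of the cluster-equality verification of steps 840 and 845 (and of optional step 810), the following Python snippet interprets “the first plurality of clusters is the same as the second plurality of clusters” as the cluster centroids matching within a tolerance once the first data pattern is added to the sample invalid data patterns; the use of scikit-learn's KMeans, the number of clusters, and the tolerance are illustrative assumptions only.

```python
import numpy as np
from sklearn.cluster import KMeans

def same_clusters(centroids_a: np.ndarray, centroids_b: np.ndarray, tol: float = 0.5) -> bool:
    """Greedy matching of two centroid sets; 'same' means every centroid in one set
    has a counterpart in the other set within the given tolerance."""
    if len(centroids_a) != len(centroids_b):
        return False
    unmatched = list(range(len(centroids_b)))
    for centroid in centroids_a:
        dists = [np.linalg.norm(centroid - centroids_b[j]) for j in unmatched]
        best = int(np.argmin(dists))
        if dists[best] > tol:
            return False
        unmatched.pop(best)
    return True

def verify_invalid_pattern(first_pattern: np.ndarray,
                           sample_invalid_patterns: np.ndarray,
                           n_clusters: int = 3,
                           tol: float = 0.5) -> bool:
    """Steps 840-845: the first data pattern is verified as an invalid data pattern when
    adding it to the sample invalid patterns leaves the clustering essentially unchanged."""
    # Second plurality of clusters (per optional step 810): sample invalid patterns alone.
    second = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(sample_invalid_patterns)
    # First plurality of clusters (step 840): first data pattern + sample invalid patterns.
    first_input = np.vstack([sample_invalid_patterns, first_pattern.reshape(1, -1)])
    first = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(first_input)
    return same_clusters(first.cluster_centers_, second.cluster_centers_, tol)
```

In practice, the number of clusters and the tolerance would be tuned to the sample invalid data patterns collected for the particular data source.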
For instance, in one example of such a modification, the processing system may repeat one or more steps of the method800, such as steps830-850for additional data sources with respect to the same or a different performance indicator, step830with respect to different data sources (e.g., where these data sources are determined to have “consistent” or “correct” data patterns and subsequent steps of the method800are not reached), and so forth. For instance, the method800may be applied to verify the data integrity of all available data sources with respect to the performance indicator, active data sources with respect to the performance indicator, and/or at least those that are not already determined to be faulty. In one example, the method800may further include training the binary classifier. Alternatively, or in addition, the method800may further include retraining the binary classifier, retraining the clustering model and/or applying the clustering model to an updated training data set to determine an updated set of cluster information, and so forth. In one example, the method800may include determining unknown data patterns for one or more data sources, obtaining labels for the unknown data patterns, e.g., from network personnel, and retraining the model(s) based upon the newly labeled data pattern(s). In one example, the method800may be modified to include presenting a warning regarding the validity of the first data source following step845(e.g., at step850, or as an alternative to step850when a replacement data source is not available). For instance, the first data source may be allowed to continue as an active data source, but with the warning being provided along with the data of the first data source to any entities obtaining such data. In one example, the method800may include further confirming that the first data pattern of the first data source is not a valid data pattern by comparing the first data pattern to one or more additional data patterns from secondary data sources regarding the performance indicator. For instance, when there is congruence between the first data pattern and the one or more additional data patterns, this may be indicative that the first data pattern is not invalid, but that there is a network event that is causing the first data pattern to appear anomalous. In other words, such additional operation(s) may distinguish between a problem with the first data source (such as an operating system update by a device vendor causing a change in the data format that was not accounted for, e.g., a change from 5 minute records to 2 minute records, or the like) and correct data that is anomalous due to a network event, such as a major power outage, a widespread Domain Name System (DNS) failure, etc. In still another example, the machine learning-based verification of step830and subsequent steps may be initiated in response to first comparing data patterns from the first data source and one or more secondary data sources and determining that the data patterns do not match. In one example, the processing system may comprise one or more performance indicator modules/data processing modules, e.g., of a data processing layer. In such an example, the method800may further include calculating the performance indicator based upon the data from the replacement data source. In one example, the processing system may further comprise an SON/SDN controller.
In such an example, the method800may further include adjusting at least one aspect of the network based upon the performance indicator, such as re-routing at least a portion of the traffic in a selected portion of the network, load-balancing at least a portion of the traffic in the selected portion of the network, offloading at least a portion of the traffic in the selected portion of the network, applying a denial-of-service mitigation measure in the selected portion of the network, and so forth. In each example, the adjusting may include allocating at least one additional resource of the network based upon the performance indicator and/or removing at least one existing resource of the communication network based upon the performance indicator. In one example, the processing system may reconfigure at least one allocated resource of the communication network differently based upon the performance indicator that is determined, i.e., without having to allocate a new resource of the communication network. An additional resource that may be added or an existing resource that may be removed (e.g., deactivated and/or deallocated) may be a hardware component of the network, or may be provided by hardware, e.g., bandwidth on a link, line card, router, switch, or other processing node, a CDN storage resource, a VM and/or a VNF, etc. Thus, these and other modifications are all contemplated within the scope of the present disclosure. In addition, although not expressly specified above, one or more steps of the method800may include a storing, displaying and/or outputting step as required for a particular application. In other words, any data, records, fields, and/or intermediate results discussed in the method(s) can be stored, displayed and/or outputted to another device as required for a particular application. Furthermore, operations, steps, or blocks inFIG.8that recite a determining operation or involve a decision do not necessarily require that both branches of the determining operation be practiced. In other words, one of the branches of the determining operation can be deemed as an optional step. However, the use of the term “optional step” is intended to only reflect different variations of a particular illustrative embodiment and is not intended to indicate that steps not labelled as optional steps are to be deemed essential steps. Furthermore, operations, steps or blocks of the above described method(s) can be combined, separated, and/or performed in a different order from that described above, without departing from the example embodiments of the present disclosure. FIG.9depicts a high-level block diagram of a computing system900(e.g., a computing device, or processing system) specifically programmed to perform the functions described herein. For example, any one or more components or devices illustrated inFIGS.1-3or discussed in connection with the examples ofFIGS.4-8may be implemented as the computing system900.
As depicted inFIG.9, the computing system900comprises a hardware processor element902(e.g., comprising one or more hardware processors, which may include one or more microprocessor(s), one or more central processing units (CPUs), and/or the like, where hardware processor element may also represent one example of a “processing system” as referred to herein), a memory904, (e.g., random access memory (RAM), read only memory (ROM), a disk drive, an optical drive, a magnetic drive, and/or a Universal Serial Bus (USB) drive), a module905for replacing a first data source with a replacement data source as an active data source for communication network monitoring in response to verifying an invalid data pattern of the first data source, and various input/output devices906, e.g., a camera, a video camera, storage devices, including but not limited to, a tape drive, a floppy drive, a hard disk drive or a compact disk drive, a receiver, a transmitter, a speaker, a display, a speech synthesizer, an output port, and a user input device (such as a keyboard, a keypad, a mouse, and the like). Although only one hardware processor element902is shown, it should be noted that the computing device may employ a plurality of hardware processor elements. Furthermore, although only one computing device is shown inFIG.9, if the method(s) as discussed above is implemented in a distributed or parallel manner for a particular illustrative example, i.e., the steps of the above method(s) or the entire method(s) are implemented across multiple or parallel computing devices, e.g., a processing system, then the computing device ofFIG.9is intended to represent each of those multiple computing devices. Furthermore, one or more hardware processors can be utilized in supporting a virtualized or shared computing environment. The virtualized computing environment may support one or more virtual machines representing computers, servers, or other computing devices. In such virtualized virtual machines, hardware components such as hardware processors and computer-readable storage devices may be virtualized or logically represented. The hardware processor element902can also be configured or programmed to cause other devices to perform one or more operations as discussed above. In other words, the hardware processor element902may serve the function of a central controller directing other devices to perform the one or more operations as discussed above. It should be noted that the present disclosure can be implemented in software and/or in a combination of software and hardware, e.g., using application specific integrated circuits (ASIC), a programmable logic array (PLA), including a field-programmable gate array (FPGA), or a state machine deployed on a hardware device, a computing device, or any other hardware equivalents, e.g., computer readable instructions pertaining to the method(s) discussed above can be used to configure a hardware processor to perform the steps, functions and/or operations of the above disclosed method(s). 
In one example, instructions and data for the present module or process905for replacing a first data source with a replacement data source as an active data source for communication network monitoring in response to verifying an invalid data pattern of the first data source (e.g., a software program comprising computer-executable instructions) can be loaded into memory904and executed by hardware processor element902to implement the steps, functions or operations as discussed above in connection with the example method(s). Furthermore, when a hardware processor executes instructions to perform “operations,” this could include the hardware processor performing the operations directly and/or facilitating, directing, or cooperating with another hardware device or component (e.g., a co-processor and the like) to perform the operations. The processor executing the computer readable or software instructions relating to the above described method(s) can be perceived as a programmed processor or a specialized processor. As such, the present module905for replacing a first data source with a replacement data source as an active data source for communication network monitoring in response to verifying an invalid data pattern of the first data source (including associated data structures) of the present disclosure can be stored on a tangible or physical (broadly non-transitory) computer-readable storage device or medium, e.g., volatile memory, non-volatile memory, ROM memory, RAM memory, magnetic or optical drive, device or diskette and the like. Furthermore, a “tangible” computer-readable storage device or medium comprises a physical device, a hardware device, or a device that is discernible by the touch. More specifically, the computer-readable storage device may comprise any physical devices that provide the ability to store information such as data and/or instructions to be accessed by a processor or a computing device such as a computer or an application server. While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described example embodiments, but should be defined only in accordance with the following claims and their equivalents.
83,446
11860745
DETAILED DESCRIPTION Illustrative embodiments will be described herein with reference to exemplary information processing systems and associated computers, servers, storage devices and other processing devices. It is to be appreciated, however, that embodiments are not restricted to use with the particular illustrative system and device configurations shown. Accordingly, the term “information processing system” as used herein is intended to be broadly construed, so as to encompass, for example, processing systems comprising edge computing, cloud computing and storage systems, as well as other types of processing systems comprising various combinations of physical and virtual processing resources. As used herein, a “component” is to be broadly construed, and can refer to various parts and/or hardware components such as, but not necessarily limited to, central processing units (CPUs), graphics processing units (GPUs) or other processors, storage devices (e.g., hard disk drives, random access memory (RAM), or other memories), controllers (e.g., network interface controllers (NICs)), ports, port connectors, host bus adaptors (HBAs), batteries, motherboards, cards, switches, sensors, buses (e.g., serial buses) or other elements of a computing environment that may fail or malfunction. As used herein, “redundant,” “redundant component” and/or “redundancy” is to be broadly construed, and can refer to a duplicate component of an edge device or system which can be used in parallel with other duplicate components and/or in place of one or more duplicate components in the event of failure or malfunctioning of the one or more of the duplicate components. The number of redundant components in an edge device may vary. Also, in addition to exact duplicate components, the term duplicate component can also include a near duplicate component or a substantially duplicate component in that there may be some difference in the components, but the intended function performed by the component is the same. FIG.1Adepicts an edge location100comprising an edge device101configured for managing component failure in an illustrative embodiment. The edge device101can comprise, for example, a desktop, laptop or tablet computer, server, storage device or other type of processing device capable of processing workloads or other operations. Such devices are examples of what are more generally referred to herein as “processing devices.” Some of these processing devices are also generally referred to herein as “computers.” In some embodiments, an edge device101may also comprise virtualized computing resources, such as virtual machines (VMs), containers, etc. The edge device101in some embodiments comprises a computer associated with a particular company, organization or other enterprise. Although one edge device101is shown, the embodiments are not necessarily limited thereto, and an edge location100may comprise more than one edge device101. Workloads and other operations comprise, for example, applications running as single components or several components working together, input-output (IO) operations (e.g., data read and/or write operations), data transmission operations, or other operations, with an edge device101providing computational resources to allow workloads or other operations to complete tasks. The size of a workload or other operation may be dependent on the amount of data and applications included in a given workload or operation. 
The terms “client,” “customer,” “administrator” or “user” herein are intended to be broadly construed so as to encompass numerous arrangements of human, hardware, software or firmware entities, as well as combinations of such entities. In some embodiments, users may refer to customers, clients and/or administrators of computing environments for which component failure management is being performed. Compute and/or storage services (e.g., at least a portion of the available services and functionalities provided by the edge devices101) may be provided for users under a Platform-as-a-Service (PaaS) model, an Infrastructure-as-a-Service (IaaS) model, a Function-as-a-Service (FaaS) model, a Containers-as-a-Service (CaaS) model and/or a Storage-as-a-Service (STaaS) model, including cloud-based PaaS, IaaS, FaaS, CaaS and STaaS environments, although it is to be appreciated that numerous other cloud infrastructure arrangements could be used. Also, illustrative embodiments can be implemented outside of the cloud infrastructure context, as in the case of a stand-alone computing and storage system implemented within a given enterprise. Although not explicitly shown inFIG.1A, one or more input-output devices such as keyboards, displays or other types of input-output devices may be used to support one or more user interfaces to an edge device101, as well as to support communication between multiple edge devices101, connected devices and/or other related systems and devices not explicitly shown. A network or networks referenced herein may be implemented using multiple networks of different types. For example, a network may comprise a portion of a global computer network such as the Internet, although other types of networks can be part of the network including a wide area network (WAN), a local area network (LAN), a satellite network, a telephone or cable network, a cellular network, a wireless network such as a WiFi or WiMAX network, a storage area network (SAN), or various portions or combinations of these and other types of networks. The network in some embodiments therefore comprises combinations of multiple different types of networks each comprising processing devices configured to communicate using Internet Protocol (IP) or other related communication protocols. As a more particular example, some embodiments may utilize one or more high-speed local networks in which associated processing devices communicate with one another utilizing Peripheral Component Interconnect express (PCIe) cards of those devices, and networking protocols such as InfiniBand, Gigabit Ethernet or Fibre Channel. Numerous alternative networking arrangements are possible in a given embodiment, as will be appreciated by those skilled in the art. Referring toFIG.1A, which depicts normal operation of the edge device101, the edge device101comprises a plurality of redundant components1,2and3102-1,102-2and102-3(collectively, “redundant components102”) each comprising a static random access memory (SRAM)103-1,103-2and103-3, respectively, (collectively, “SRAMs103”). Although three redundant components102and corresponding SRAMs103are shown, the embodiments are not limited thereto, and an edge device101may comprise more or less than three redundant components102and corresponding SRAMs103. Each of the redundant components102is connected to a counter104and another SRAM105. 
As explained in more detail herein below, the counter104is configured to count a number of occurrences of context switching by respective ones of the redundant components102, and the SRAMs103and105are encoded with a testing operation. Although illustrative embodiments comprise SRAMs103and105, the embodiments are not limited thereto, and may utilize other types of memory. During normal operation of the edge device101, redundant components1,2and3102-1,102-2and102-3respectively receive Inputs1,2and3and produce Outputs1,2and3. The Inputs1,2and3comprise for example, the workloads or other operations as described herein above, and the Outputs1,2and3comprise the results of processing the workloads or other operations. For example, according to one or more embodiments, if the redundant components102comprise CPUs, the Inputs1,2and3can be respective workloads that are processed in parallel by the CPUs, and the Outputs1,2and3can be respective results of the processing by the CPUs. If the redundant components102comprise storage devices (e.g., RAMs, hard-disk drives (HDDs), solid-state drives (SSDs), etc.) the Inputs1,2and3can be respective instances of data to be stored by the storage devices, and the Outputs1,2and3can be respective instances of stored data by the storage devices. In another example, if the redundant components102are NICs, the Inputs1,2and3can be respective data packets to be routed to a network by the NICs, and the Outputs1,2and3can be respective routed data packets. Referring toFIG.1B, an algorithm encoded as a test workload stored in an SRAM105(e.g., on-chip SRAM) or a register, and/or on SRAMs103or respective registers of the redundant components102is periodically executed on the redundant components102. In an illustrative embodiment, a testing operation is performed when the redundant components102are performing context switching. In more detail, a periodic testing operation to perform a health check of the redundant components102is executed at configurable and dynamic periods that are adjusted based, for example, on workloads running on the redundant components102and other factors. For example, the edge device101will schedule a testing operation when most redundant components102are performing context switching (e.g., while waiting for long IO operations and/or switching to another task, a testing operation will be executed on the redundant components102). In one or more embodiments, in the case of dynamic random access memory (DRAM) redundant components102, the edge device101will schedule a testing operation during DRAM refresh intervals for a majority of the redundant components102. In illustrative embodiments, a testing operation cycle serves as a refresh cycle, whereby no or very little additional latency is introduced in the use of RAM by the edge device101. In other embodiments, in the case of SSDs or other storage devices, the edge device101will schedule a testing operation during periods of low storage device IO for a majority of the redundant components102. Context switching, refresh intervals and/or periods of low IO may occur during relatively brief periods on the order of milliseconds (e.g., 5 ms). Depending on when context switching, refresh intervals and/or periods of low IO occur, time periods for executing the testing operation are dynamically adjusted. 
In some embodiments, a device level counter104used across all of the redundant components102tracks a number of context switching, refresh interval and/or low IO occurrences corresponding to the redundant components102. The counter104can be configured to signal an SRAM105to commence a testing operation when the number of occurrences of context switching, refresh intervals, and/or low IOs corresponding to the respective ones of the redundant components102reaches or exceeds a predetermined threshold number of context switching, refresh interval and/or low IO occurrences. For example, after each or one or more of the redundant components102reaches or exceeds the threshold number of context switching, refresh interval and/or low IO occurrences, the testing operation is performed during the next instance of context switching, a refresh interval and/or low IO by a majority of the redundant components102. According to illustrative embodiments, the SRAM105transmits an input (Input4) to the respective SRAMs103instructing the SRAMs103to execute the test workloads stored in the SRAMs103so that the testing operation can be run on respective ones of the redundant components102. Alternatively, the SRAMs103receive input directly from the counter104to commence the testing operation on the respective ones of the redundant components102. In another alternative, in the absence of SRAMs103, the SRAM105stores the test workload and transmits to the redundant components102, the test workload to be executed on the respective ones of the redundant components102. As can be seen inFIG.1B, the testing operation comprises invoking a voting pattern in connection with test workload outputs of respective ones of the redundant components102. For example, a common test operation is executed redundantly across each of the redundant components102and the respective outputs of the redundant components102from the testing operation are compared (Compare=). Based on the comparing, a determination is made whether one or more of the respective outputs (e.g., “1” inFIG.1B) differs from a plurality of same outputs (e.g., “0” inFIG.1B) comprising a majority of the respective outputs. An output analysis layer106identifies the redundant component102corresponding to the different output (e.g., “1” (redundant component3102-3)) as a faulty component (e.g., having an operational issue causing it to fail or malfunction) since its output is not consistent with the majority of outputs. For example, in the case of redundant components102that are CPUs, a common test workload is executed redundantly across all of the CPUs, and their individual outputs of processing the test workload are compared to determine any inconsistencies. In the case of redundant components102that are storage devices, common test data is stored across all of the storage devices, and is compared to determine any inconsistencies. For example, if the stored data of one of the redundant storage devices is not consistent with that of a majority of the redundant storage devices, the output analysis layer106concludes that the redundant storage device corresponding to the inconsistency is faulty. In some embodiments, redundant storage devices are written to using test patterns, read back and compared to the test pattern to ensure no errors have occurred. If there are inconsistencies with the test pattern, the output analysis layer106concludes that the redundant storage devices corresponding to the inconsistencies are faulty. 
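To make the voting pattern concrete, a minimal Python sketch of the comparison performed by the output analysis layer 106 is given below; the representation of outputs as hashable values and the treatment of a missing output as a failed test are assumptions made for illustration only.

```python
from collections import Counter

def identify_faulty_components(test_outputs: dict) -> list:
    """test_outputs maps a redundant-component identifier to the output it produced for
    the common test workload (None if the testing operation could not be completed).
    Components whose output differs from the majority output are flagged as faulty."""
    faulty = [cid for cid, out in test_outputs.items() if out is None]
    completed = {cid: out for cid, out in test_outputs.items() if out is not None}
    if completed:
        majority_output, _ = Counter(completed.values()).most_common(1)[0]
        faulty.extend(cid for cid, out in completed.items() if out != majority_output)
    return faulty


# Hypothetical outputs of three redundant components executing the same test workload.
print(identify_faulty_components({"component-1": "0", "component-2": "0", "component-3": "1"}))
# -> ['component-3']
```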
In another example, if a test operation is not able to be completed on a given one of the redundant components102, the output analysis layer106identifies the given one of the redundant components102as faulty. Referring toFIG.1C, according to an embodiment, the edge device101performs a “circuit breaker” function by preventing new inputs from being routed to the redundant component3102-3identified as faulty, effectively deactivating the redundant component3102-3so that it does not receive new workloads and is prevented from being used in other operations. As shown inFIG.1C, new inputs (e.g., Input5and Input6) are routed to the redundant components1and2102-1and102-2, but not to deactivated redundant component3102-3. Processing by redundant components1and2102-1and102-2produces Output5and Output6. Accordingly, one or more remaining redundant components not found to be faulty are utilized in operations following the testing operation. In some embodiments, once the circuit breaker is effectively thrown and one or more redundant components102determined to be faulty are deactivated, depending on additional component availability, the deactivated redundant component(s) can be automatically swapped with working replacement component(s) so that the number of redundant components102performing parallel processing at a given time is not reduced. FIG.2Aillustrates normal operation of an edge device201in edge location200. The edge device201has a similar configuration to the edge device101. Like the edge device101, the edge device201comprises a plurality of redundant components1,2and3202-1,202-2and202-3(collectively, “redundant components202”) each comprising an SRAM203-1,203-2and203-3, respectively, (collectively, “SRAMs203”). The redundant components202and SRAMs203are configured the same as or similarly to the redundant components102and SRAMs103. Although three redundant components202and corresponding SRAMs203are shown, the embodiments are not limited thereto, and an edge device201may comprise more or less than three redundant components202and corresponding SRAMs203. Each of the redundant components202is connected to a counter204and another SRAM205, which are configured the same as or similar to the counter104and SRAM105. During normal operation of the edge device201, redundant components1,2and3202-1,202-2and202-3respectively receive Inputs11,12and13and produce Outputs11,12and13in the same manner as or in a similar manner to the receipt of Inputs1,2and3and the production of Outputs1,2and3. According to the embodiment inFIG.2A, when the Outputs11,12and13are generated, they are timestamped. For purposes of explanation, we assume that the Outputs11,12and13are timestamped with time t. However, the actual time of the timestamp may vary between Outputs11,12and13. In addition to the timestamp, the outputs are tagged with other metadata identifying, for example, the redundant component202that produced the output (e.g., component identifiers, component name, etc.). In illustrative embodiments, firmware of storage devices (e.g., SSDs) will add timestamps in data writes to storage. As can be seen inFIG.2A, the Outputs11,12and13are stored in a storage location of the edge device201such as, for example, a dedicated cache207or other location. As explained in more detail in connection withFIGS.2B,2C and2D, the Outputs11,12and13are stored until after a testing operation is performed and faulty redundant components202, if any, are identified. 
For example, referring toFIG.2B, at time t+1, similar to the testing operation described in connection withFIG.1B, a common test operation (Input14(Test)) is executed redundantly across each of the redundant components202and the respective outputs of the redundant components202from the testing operation are compared (Compare=). The output from redundant component3202-3(e.g., “1” inFIG.2B) differs from a plurality of same outputs from redundant components1and2202-1and202-2(e.g., “0” inFIG.2B). An output analysis layer206identifies redundant component3202-3corresponding to the different output (e.g., “1”) as a faulty component since its output is not consistent with the majority of outputs. Similar to the edge device101, timing of a testing operation for the edge device201is dynamically adjusted based on, for example, when context switching, refresh intervals and/or periods of low IO occur for the redundant components202. In addition, similar to the edge device101, techniques for comparing outputs and the content of the outputs may vary based on whether the redundant components202comprise, for example, CPUs, storage devices, NICs or other devices. Referring toFIG.2C, at time t+2, similar toFIG.1C, the edge device201performs a circuit breaker function by preventing new inputs from being routed to the redundant component3202-3identified as faulty, effectively deactivating the redundant component3202-3so that it does not receive new workloads and is prevented from being used in other operations. In addition, the previous result from redundant component3202-3from time t (Output13) is invalidated and discarded since redundant component3202-3may have had operational issues at time t when Output13was generated. In contrast, Outputs11and12corresponding to redundant components1and2202-1and202-2, which were determined to be healthy, are validated, released from the cache207and propagated. The Outputs11,12and13are identified in the cache207based on their timestamps and other identifying metadata. In illustrative embodiments, if a redundant component202is not identified as faulty (e.g., passes a test operation), the current time (e.g., time t+1) is recorded as the last known positive test result. If a redundant component202is identified as faulty (e.g., fails a test operation), outputs for the faulty redundant component202between a last known positive test result (e.g., a previous time a test operation was conducted) and the current time (e.g., time t+1) are discarded, or, if there was no previous test, outputs for the faulty redundant component202between and including the previous timestamp (e.g., time t) and the current time (e.g., time t+1) are discarded. Since Output13was discarded, the operation for Input13is re-executed by a healthy one of the redundant components202(e.g., redundant component2202-2) and similar to the operation inFIG.2A, the new output from processing Input13(Output13′) is stored in the cache207. Additionally, a new input (e.g., Input15) is routed to the redundant component1202-1. No inputs are routed to deactivated redundant component3202-3. Processing by redundant component1202-1produces Output15, which is also stored in cache207. The remaining redundant components1and2202-1and202-2not found to be faulty are utilized in operations following the testing operation at time t+2. FIG.2Dillustrates a scenario at time t+2, where no faulty redundant components202were found at time t+1. 
In the case where no faulty redundant components are found in a testing operation, the comparison of the outputs from the testing operation yields no inconsistent outputs. In other words, all of the outputs by the respective redundant components processing a test workload are the same. Referring toFIG.2D, since all of the redundant components202are deemed healthy as a result of a testing operation at time t+1, each of the Outputs11,12and13from time t are validated, released from the cache207and propagated. In addition, since there are no deactivated redundant components202, none of the redundant components202are excluded from receiving new inputs, such that new Inputs15,16and17are routed to the redundant components1,2and3202-1,202-2and202-3, respectively, which process the Inputs15,16and17to generate Outputs15,16and17, which are stored in cache207. FIG.3illustrates a testing operation of an edge device301in edge location300. The edge device301comprises a plurality of redundant components1and2302-1and302-2(collectively, “redundant components302”) each comprising an SRAM303-1and303-2, respectively, (collectively, “SRAMs303”). The redundant components302and SRAMs303are configured the same as or similarly to the redundant components102/202and SRAMs103/203inFIGS.1A-1C and2A-2D. Although two redundant components302and corresponding SRAMs303are shown, the embodiments are not limited thereto, and an edge device301may comprise more or less than two redundant components302and corresponding SRAMs303. Each of the redundant components302is connected to a counter304and another SRAM305, which are configured the same as or similar to the counters104and204, and SRAMs105and205. During a testing operation of the edge device301, a first instance of test data (e.g., input21(Test)) is transmitted from redundant component1302-1to redundant component2302-2, and a second instance of the test data is transmitted from the redundant component2302-2back to redundant component1302-1in a loopback operation. In this example, the redundant components302may be, for example, NICs. The first and second instances of the test data outputted from the redundant components1and2302-1and302-2, respectively, are compared (Compare=). Based on the comparing, a determination is made whether there are differences between the first and second instances of the test data. If there are differences, (e.g., “0” and “1”), an output analysis layer306identifies the redundant component2302-2as a faulty component (e.g., having an operational issue causing it to fail or malfunction) since its output is not consistent with the source output (i.e., the output from redundant component1302-1). In one or more embodiments, the transmission of test data is in packets including a timestamp to test receipt and measure latency. In the event of a failed test (e.g., a redundant component is identified as faulty) on a given NIC and/or port, the given NIC and/or port will be deactivated to prevent network traffic from being routed through the given NIC and/or port and instead the network traffic will be routed through NICs and/or ports determined to be healthy. Data transmitted from the given NIC and/or port found to be faulty between a last known positive test result and a current time, or between and including a previous timestamp and the current time will be retransmitted through an NIC and/or port determined to be healthy. 
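A hedged sketch of the loopback comparison described in connection with FIG. 3 is shown below; the transmit interface, the stub fault model, and the latency threshold are hypothetical and serve only to illustrate the compare-and-deactivate decision.

```python
import time

class StubPort:
    """Stand-in for a NIC/port under test; a simulated fault flips one bit on transmit."""
    def __init__(self, faulty: bool = False):
        self.faulty = faulty

    def transmit(self, data: bytes) -> bytes:
        return bytes([data[0] ^ 0x01]) + data[1:] if self.faulty else data


def loopback_test(component_1: StubPort, component_2: StubPort,
                  test_data: bytes = b"\xaa\x55" * 8, max_latency_s: float = 0.005) -> str:
    """Transmit a first instance of test data from component 1 to component 2, have
    component 2 transmit a second instance back, and compare the two instances."""
    sent_at = time.monotonic()                                  # timestamp used to measure latency
    first_instance = test_data
    received_at_peer = component_1.transmit(first_instance)
    second_instance = component_2.transmit(received_at_peer)    # loopback leg
    latency = time.monotonic() - sent_at
    if second_instance != first_instance:
        return "component_2_faulty"     # differences -> deactivate and reroute traffic
    if latency > max_latency_s:
        return "latency_exceeded"
    return "healthy"


print(loopback_test(StubPort(), StubPort(faulty=True)))  # -> component_2_faulty
```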
Similar to the edge devices101and201, timing of a testing operation for the edge device301is dynamically adjusted based on, for example, when context switching, refresh intervals and/or periods of low IO occur for the redundant components302. In addition, similar to the edge devices101and201, techniques for comparing outputs and the content of the outputs may vary based on whether the redundant components302comprise, for example, CPUs, storage devices, NICs or other devices. Other factors on which scheduling of testing operations can be based include, but are not necessarily limited to, maximum amounts of permitted latency, dynamically measured workload execution times and availability of resources for an edge device. For example, if a testing operation will result in exceeding a maximum amount of permitted latency, an edge device will schedule the testing operation for another time during which a maximum amount of permitted latency will not be exceeded. In the case of workload execution times, an edge device may schedule the testing operation during periods of relatively long workload execution times. With respect to resource availability, an edge device may schedule testing when there is sufficient cache space available to cache outputs during testing, which may also depend on the size of the outputs to be cached. The number of redundant components being tested at a given time may vary based on the availability of the respective redundant components to be part of a particular testing operation (e.g., whether the respective redundant components are performing context switching, refresh operations and/or are experiencing periods of low IO operations). According to one or more embodiments, a deactivated redundant component102,202or302is re-tested in a subsequent testing operation and can be reactivated if the deactivated redundant component passes the subsequent test. For example, in some cases, the reason a redundant component is determined to be faulty may be transient, such as, for example, due to weather conditions (e.g., temperature), network availability, power outages, etc., which may cause the redundant component to temporarily malfunction. Upon rectification of the transient condition, the redundant component may return to a healthy state and be reactivated for normal operations. In other scenarios, upon re-testing, a redundant component may continue to fail, possibly signaling a more serious issue with the redundant component. In this case, an alert to, for example, an administrator of the edge device and/or edge location, may be issued after a threshold number of test operation failures by a given redundant component. The edge devices101,201and301, redundant components102,202and302, counters104,204and304and other edge location or edge device elements in the embodiments are assumed to be implemented using at least one processing platform, with each processing platform comprising one or more processing devices each having a processor coupled to a memory. Such processing devices can illustratively include particular arrangements of compute, storage and network resources. For example, processing devices in some embodiments are implemented at least in part utilizing virtual resources such as VMs or containers (e.g., Linux containers (LXCs)), or combinations of both as in an arrangement in which Docker containers or other types of LXCs are configured to run on VMs. 
The edge devices101,201and301, redundant components102,202and302, and counters104,204and304, as well as other edge location or edge device elements may be implemented on respective distinct processing platforms, although numerous other arrangements are possible. For example, in some embodiments at least portions of one or more of the edge devices101,201and301, redundant components102,202and302, and counters104,204and304are implemented on the same processing platform. Additionally, the edge devices101,201and301, redundant components102,202and302, counters104,204and304and other edge location or edge device elements in some embodiments may be implemented as part of a cloud-based system (e.g., a cloud service provider). The edge devices101,201and301, redundant components102,202and302, counters104,204and304and other edge location or edge device elements can be part of what is more generally referred to herein as a processing platform comprising one or more processing devices each comprising a processor coupled to a memory. A given such processing device may correspond to one or more virtual machines or other types of virtualization infrastructure such as Docker containers or other types of LXCs. Communications to and from edge devices, redundant components and counters including, for example, workloads (normal and test), data packets and operational counts, may take place over one or more networks as described herein. The term “processing platform” as used herein is intended to be broadly construed so as to encompass, by way of illustration and without limitation, multiple sets of processing devices and one or more associated storage systems that are configured to communicate over one or more networks. Additional examples of processing platforms utilized to implement portions of the edge devices101,201and301, redundant components102,202and302, counters104,204and304and other edge location or edge device elements in illustrative embodiments will be described in more detail below in conjunction withFIGS.5and6. It is to be understood that the particular set of elements shown are presented by way of illustrative example only, and in other embodiments additional or alternative elements may be used. Thus, another embodiment may include additional or alternative systems, devices and other network entities, as well as different arrangements of modules and other components. It is to be appreciated that these and other features of illustrative embodiments are presented by way of example only, and should not be construed as limiting in any way. An exemplary process for managing component failure in edge hardware will now be described in more detail with reference to the flow diagram ofFIG.4. It is to be understood that this particular process is only an example, and that additional or alternative processes for managing component failure in edge hardware can be carried out in other embodiments. The process400as shown includes steps402through408, and is suitable for use in the edge devices101,201and/or301but is more generally applicable to other types of systems where component failure is to be managed. Other arrangements of edge devices, redundant components, counters and other edge location or edge device elements can be configured to perform at least portions of one or more of the steps in other embodiments. In step402, a testing operation is executed on a plurality of redundant components of an edge device. The testing operation may be encoded in at least one SRAM location of the edge device. 
The executing of the testing operation is performed during certain time periods and not during other time periods. For example, time periods for performing the executing of the testing operation are dynamically adjusted based, at least in part, on when at least some of the plurality of redundant components are performing context switching and/or are in a refresh interval. In one or more embodiments, the executing of the testing operation is performed responsive to one or more of the plurality of redundant components reaching a threshold number of at least one of context switching occurrences and refresh interval occurrences. The plurality of redundant components comprise, but are not necessarily limited to, a plurality of CPUs, a plurality of storage devices, a plurality of NICs or a plurality of GPUs. In step404, based, at least in part, on the testing operation, at least one redundant component of the plurality of the redundant components is identified as having an operational issue. In step406, the at least one redundant component is deactivated in response to the identifying, and, in step408, one or more remaining redundant components of the plurality of the redundant components are utilized in one or more operations following the testing operation. In illustrative embodiments, the executing of the testing operation comprises executing a test workload on the plurality of redundant components, comparing respective outputs from the execution of the test workload by respective ones of the plurality of redundant components, and determining, based on the comparing, whether one or more of the respective outputs differs from a plurality of same outputs comprising a majority of the respective outputs. One or more of the plurality of redundant components corresponding to the one or more of the respective outputs differing from the plurality of same outputs is identified as the at least one redundant component having the operational issue. In illustrative embodiments, the executing of the testing operation comprises storing test data in the plurality of redundant components, comparing respective instances of the test data stored by respective ones of the plurality of redundant components, and determining, based on the comparing, whether one or more of the respective instances of the test data differs from a plurality of same instances of the test data comprising a majority of the respective instances of the test data. One or more of the plurality of redundant components corresponding to the one or more of the respective instances of the test data differing from the plurality of same instances of the test data is identified as the at least one redundant component having the operational issue. In illustrative embodiments, the executing of the testing operation comprises transmitting a first instance of test data from a first redundant component of the plurality of redundant components to a second redundant component of the plurality of redundant components, transmitting a second instance of the test data from the second redundant component back to the first redundant component, and comparing the first and second instances of the test data to determine whether there are differences between the first and second instances of the test data. The second redundant component is identified as the at least one redundant component having the operational issue responsive to an affirmative determination of differences between the first and second instances of the test data. 
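For the output tagging, caching, validation, and re-execution described above in connection with FIGS. 2A-2D, a minimal Python sketch is provided below; the cache structure, the tag fields, and the re-execution callback are assumptions introduced purely for illustration.

```python
import time
from dataclasses import dataclass, field
from typing import Any, Callable, Iterable, List

@dataclass
class CachedOutput:
    component_id: str          # metadata identifying the redundant component
    workload_id: str           # metadata identifying the workload that produced the output
    output: Any
    timestamp: float = field(default_factory=time.time)

class OutputCache:
    """Holds tagged outputs until a testing operation determines which components are healthy."""

    def __init__(self) -> None:
        self._entries: List[CachedOutput] = []

    def store(self, component_id: str, workload_id: str, output: Any) -> None:
        self._entries.append(CachedOutput(component_id, workload_id, output))

    def resolve(self, faulty_component_ids: Iterable[str],
                reexecute: Callable[[str], None]) -> List[CachedOutput]:
        """Release (return) outputs of healthy components; discard outputs of faulty
        components and trigger re-execution of their workloads on a healthy component."""
        faulty = set(faulty_component_ids)
        entries, self._entries = self._entries, []
        released = [e for e in entries if e.component_id not in faulty]
        for e in entries:
            if e.component_id in faulty:
                reexecute(e.workload_id)   # callback may call store() again with the new output
        return released
```

A validated output is returned (released) for propagation, while an output tagged with a faulty component identifier never leaves the cache and its workload is re-executed on a healthy component, mirroring the handling of Output 13 and Output 13' in FIGS. 2B and 2C.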
In one or more embodiments, a plurality of respective workloads are executed on the plurality of redundant components, wherein the executing of the plurality of respective workloads yields a plurality of respective outputs. The plurality of respective outputs are timestamped with a first time, and are stored in a storage location of the edge device such as, for example, a cache of the edge device. The executing of the testing operation is performed at a second time after the first time. An output of the plurality of respective outputs corresponding to at least one redundant component identified as having an operational issue is invalidated. One or more outputs of the plurality of respective outputs corresponding to the one or more remaining redundant components are validated and released from the storage location. A workload of the plurality of respective workloads corresponding to the invalidated output is re-executed on a remaining redundant component of the one or more remaining redundant components. It is to be appreciated that theFIG.4process and other features and functionality described above can be adapted for use with other types of systems configured to manage component failure in edge hardware. The particular processing operations and other system functionality described in conjunction with the flow diagram ofFIG.4are therefore presented by way of illustrative example only, and should not be construed as limiting the scope of the disclosure in any way. Alternative embodiments can use other types of processing operations. For example, the ordering of the process steps may be varied in other embodiments, or certain steps may be performed at least in part concurrently with one another rather than serially. Also, one or more of the process steps may be repeated periodically, or multiple instances of the process can be performed in parallel with one another. Functionality such as that described in conjunction with the flow diagram ofFIG.4can be implemented at least in part in the form of one or more software programs stored in memory and executed by a processor of a processing device such as a computer or server. As will be described below, a memory or other storage device having executable program code of one or more software programs embodied therein is an example of what is more generally referred to herein as a “processor-readable storage medium.” Illustrative embodiments provide technical solutions that periodically execute health checks on redundant components of edge devices using an algorithm encoded as a test workload stored in hardware (e.g., specially added registers or on-chip SRAMs). The embodiments permit fast execution times of test operations, with limited impact on edge device workload processing. Technical problems exist with current approaches, which reduce throughput by continuously running health checks on redundant hardware. The embodiments address this concern by providing technical solutions which run periodic test operations on redundant components, while continuously preventing output of errant results from the redundant components. Advantageously, the embodiments provide techniques for tagging and saving the workload results until health checks pass to ensure that errant results are prevented, even though health checks are periodically executed. 
For example, in illustrative embodiments, if redundant components are found to be healthy following a testing operation, tags are removed, and the saved results of processing by redundant components determined to be healthy are released from storage and propagated. If, however, a redundant component is not found to be healthy following a testing operation, the saved result of processing by the unhealthy redundant component is discarded, and a workload corresponding to the saved result is re-executed on a healthy one of the redundant components. Advantageously, the embodiments also provide technical solutions which schedule testing operations while redundant components are performing non-workload related tasks (e.g., context switching, refresh operations, etc.) such that impact on throughput is reduced when compared with conventional approaches. As an additional advantage, the embodiments are applicable to multiple types of hardware, including, but not necessarily limited to, CPUs, GPUs, NICs and storage devices, and provide techniques for efficiently isolating faulty components from processing new workloads until the faulty components can be re-tested and deemed healthy. Advantageously, the embodiments provide technical solutions which compare outputs of redundant components from a testing operation to determine inconsistencies, and to deactivate the components producing the inconsistencies, while allowing healthy components to continue to process workloads. Accordingly, the embodiments facilitate the identification and isolation of faulty components without disabling an entire system. Technical problems exist in that access to and control of edge devices in certain edge locations (e.g., telecommunications towers, outer space, rooftops, power stations, hospitals, etc.) may be difficult, thereby requiring management of the health of edge device redundancies and techniques for addressing failure of redundant components. For example, within hospitals, there may be closets on each floor filled with gateway devices and other computers used to communicate with and process data from remote medical equipment used for each patient. Such remote equipment may include, for example, smart beds, blood pressure monitors, pulse oximeters and drug infusion pumps. Each medical device may have its own associated gateway, resulting in a proliferation of compute devices in an edge location without IT oversight, cooling, robust power, stable racking, and other benefits of a data center. The embodiments advantageously provide technical solutions which counter the increased chances and associated risks of component failure in adverse edge locations by providing techniques for maintaining redundancy within edge devices by isolating problematic components and continuing to use healthy components in parallel processing. An edge device and/or other edge hardware incorporating the technical solutions of the embodiments described herein includes built-in redundancies and, in the event of failure of one or more of its components, fails over to itself (e.g., to redundant components which are elements of the device) instead of to a separate device. As a result, the embodiments provide an efficient solution in terms of power and space consumption, management and orchestration, with improved overall reliability and performance. It is to be appreciated that the particular advantages described above and elsewhere herein are associated with particular illustrative embodiments and need not be present in other embodiments.
Also, the particular types of information processing systems and/or edge location and device features and functionality as illustrated in the drawings and described above are exemplary only, and numerous other arrangements may be used in other embodiments. As noted above, at least portions of the edge devices101,201and301, redundant components102,202and302, counters104,204and304and other edge location or edge device elements, may be implemented using one or more processing platforms. A given such processing platform comprises at least one processing device comprising a processor coupled to a memory. The processor and memory in some embodiments comprise respective processor and memory elements of a virtual machine or container provided using one or more underlying physical machines. The term “processing device” as used herein is intended to be broadly construed so as to encompass a wide variety of different arrangements of physical processors, memories and other device components as well as virtual instances of such components. For example, a “processing device” in some embodiments can comprise or be executed across one or more virtual processors. Processing devices can therefore be physical or virtual and can be executed across one or more physical or virtual processors. It should also be noted that a given virtual device can be mapped to a portion of a physical one. Some illustrative embodiments of a processing platform that may be used to implement at least a portion of an information processing system comprise a cloud infrastructure including virtual machines and/or container sets implemented using a virtualization infrastructure that runs on a physical infrastructure. The cloud infrastructure further comprises sets of applications running on respective ones of the virtual machines and/or container sets. These and other types of cloud infrastructure can be used to provide what is also referred to herein as a multi-tenant environment. One or more system components such as the edge devices101,201and301or portions thereof are illustratively implemented for use by tenants of such a multi-tenant environment. As mentioned previously, cloud infrastructure as disclosed herein can include cloud-based systems. Virtual machines provided in such systems can be used to implement at least portions of one or more of a computer system and a cloud service provider in illustrative embodiments. Illustrative embodiments of processing platforms utilized to implement functionality for managing component failure in edge hardware will now be described in greater detail with reference toFIGS.5and6. Although described in the context of edge locations100,200and300, and edge devices101,201and301or other edge locations and devices, these platforms may also be used to implement at least portions of other information processing systems in other embodiments. FIG.5shows an example processing platform comprising cloud infrastructure500. The cloud infrastructure500comprises a combination of physical and virtual processing resources that may be utilized to implement at least a portion of the edge locations100,200and300, and edge devices101,201and301or other edge locations and devices. The cloud infrastructure500comprises multiple virtual machines (VMs) and/or container sets502-1,502-2, . . .502-L implemented using virtualization infrastructure504. 
The virtualization infrastructure504runs on physical infrastructure505, and illustratively comprises one or more hypervisors and/or operating system level virtualization infrastructure. The operating system level virtualization infrastructure illustratively comprises kernel control groups of a Linux operating system or other type of operating system. The cloud infrastructure500further comprises sets of applications510-1,510-2, . . .510-L running on respective ones of the VMs/container sets502-1,502-2, . . .502-L under the control of the virtualization infrastructure504. The VMs/container sets502may comprise respective VMs, respective sets of one or more containers, or respective sets of one or more containers running in VMs. In some implementations of theFIG.5embodiment, the VMs/container sets502comprise respective VMs implemented using virtualization infrastructure504that comprises at least one hypervisor. A hypervisor platform may be used to implement a hypervisor within the virtualization infrastructure504, where the hypervisor platform has an associated virtual infrastructure management system. The underlying physical machines may comprise one or more distributed processing platforms that include one or more storage systems. In other implementations of theFIG.5embodiment, the VMs/container sets502comprise respective containers implemented using virtualization infrastructure504that provides operating system level virtualization functionality, such as support for Docker containers running on bare metal hosts, or Docker containers running on VMs. The containers are illustratively implemented using respective kernel control groups of the operating system. As is apparent from the above, one or more of the processing modules or other components of edge locations100,200and300, and edge devices101,201and301or other edge locations and devices may each run on a computer, server, storage device or other processing platform element. A given such element may be viewed as an example of what is more generally referred to herein as a “processing device.” The cloud infrastructure500shown inFIG.5may represent at least a portion of one processing platform. Another example of such a processing platform is processing platform600shown inFIG.6. The processing platform600in this embodiment comprises a portion of edge locations100,200and300, and edge devices101,201and301or other edge locations and devices or the components thereof, and includes a plurality of processing devices, denoted602-1,602-2,602-3, . . .602-K, which communicate with one another over a network604. The network604may comprise any type of network, including by way of example a global computer network such as the Internet, a WAN, a LAN, a satellite network, a telephone or cable network, a cellular network, a wireless network such as a WiFi or WiMAX network, or various portions or combinations of these and other types of networks. The processing device602-1in the processing platform600comprises a processor610coupled to a memory612. The processor610may comprise a microprocessor, a microcontroller, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a central processing unit (CPU), a graphical processing unit (GPU), a tensor processing unit (TPU), a video processing unit (VPU) or other type of processing circuitry, as well as portions or combinations of such circuitry elements. 
The memory612may comprise random access memory (RAM), read-only memory (ROM), flash memory or other types of memory, in any combination. The memory612and other memories disclosed herein should be viewed as illustrative examples of what are more generally referred to as “processor-readable storage media” storing executable program code of one or more software programs. Articles of manufacture comprising such processor-readable storage media are considered illustrative embodiments. A given such article of manufacture may comprise, for example, a storage array, a storage disk or an integrated circuit containing RAM, ROM, flash memory or other electronic memory, or any of a wide variety of other types of computer program products. The term “article of manufacture” as used herein should be understood to exclude transitory, propagating signals. Numerous other types of computer program products comprising processor-readable storage media can be used. Also included in the processing device602-1is network interface circuitry614, which is used to interface the processing device with the network604and other system components, and may comprise conventional transceivers. The other processing devices602of the processing platform600are assumed to be configured in a manner similar to that shown for processing device602-1in the figure. Again, the particular processing platform600shown in the figure is presented by way of example only, and the edge locations100,200and300, and edge devices101,201and301or other edge locations and devices may include additional or alternative processing platforms, as well as numerous distinct processing platforms in any combination, with each such platform comprising one or more computers, servers, storage devices or other processing devices. For example, other processing platforms used to implement illustrative embodiments can comprise converged infrastructure. It should therefore be understood that in other embodiments different arrangements of additional or alternative elements may be used. At least a subset of these elements may be collectively implemented on a common processing platform, or each such element may be implemented on a separate processing platform. As indicated previously, components of an information processing system as disclosed herein can be implemented at least in part in the form of one or more software programs stored in memory and executed by a processor of a processing device. For example, at least portions of the functionality for utilizing workload stealing to distribute edge workloads between edge nodes attached thereto as disclosed herein are illustratively implemented in the form of software running on one or more processing devices. It should again be emphasized that the above-described embodiments are presented for purposes of illustration only. Many variations and other alternative embodiments may be used. For example, the disclosed techniques are applicable to a wide variety of other types of information processing systems, edge locations, edge devices, redundant components, counters, etc. Also, the particular configurations of system and device elements and associated processing operations illustratively shown in the drawings can be varied in other embodiments. Moreover, the various assumptions made above in the course of describing the illustrative embodiments should also be viewed as exemplary rather than as requirements or limitations of the disclosure. 
Numerous other alternative embodiments within the scope of the appended claims will be readily apparent to those skilled in the art.
51,884
11860746
DETAILED DESCRIPTION The terminology used in this disclosure is intended to be interpreted broadly within the limits of subject matter eligibility. The terms “disk” and “drive” are used interchangeably to refer to non-volatile storage media and are not intended to refer to any specific type of non-volatile storage media. The terms “logical” and “virtual” are used to refer to features that are abstractions of other features, e.g., and without limitation abstractions of tangible features. The term “physical” is used to refer to tangible features that possibly include, but are not limited to, electronic hardware. For example, multiple virtual computers could operate simultaneously on one physical computer. The term “logic” is used to refer to special purpose physical circuit elements, firmware, software, computer instructions that are stored on a non-transitory computer-readable medium and implemented by multi-purpose tangible processors, and any combinations thereof. Aspects of the inventive concepts are described as being implemented in a data storage system that includes host servers and a storage array. Such implementations should not be viewed as limiting. Those of ordinary skill in the art will recognize that there are a wide variety of implementations of the inventive concepts in view of the teachings of the present disclosure. Some aspects, features, and implementations described herein may include machines such as computers, electronic components, optical components, and processes such as computer-implemented procedures and steps. It will be apparent to those of ordinary skill in the art that the computer-implemented procedures and steps may be stored as computer-executable instructions on a non-transitory computer-readable medium. Furthermore, it will be understood by those of ordinary skill in the art that the computer-executable instructions may be executed on a variety of tangible processor devices, i.e., physical hardware. For practical reasons, not every step, device, and component that may be part of a computer or data storage system is described herein. Those of ordinary skill in the art will recognize such steps, devices, and components in view of the teachings of the present disclosure and the knowledge generally available to those of ordinary skill in the art. The corresponding machines and processes are therefore enabled and within the scope of the disclosure. Aspects of the invention will be described in the context of a DAS storage system. However, the invention is not limited to DAS storage systems. FIG.1illustrates a rack100of converged, homogeneous, software-defined DAS nodes that are configured to rebuild the data of a failed storage node on remaining non-failed storage nodes. The rack will be described as a storage system although multiple racks could be interconnected and converged as a single storage system. A first group102of converged DAS storage nodes provides storage capacity. A second group104of converged DAS compute nodes provides compute capacity. Each converged DAS storage node is a DAS server106with specialized software components and associated attached non-volatile storage108. Each converged DAS compute node is a DAS server106with specialized software components. All of the DAS servers106are interconnected via a switch/fabric110. Host applications run on the DAS servers106of the second group104and use data stored on the non-volatile storage108of the first group102. 
The host applications may provide business services to client computers112that are in communication with the rack100via a network114. Examples of host applications may include, but are not limited to, software for email, accounting, sales, manufacturing, and inventory control. Although separate groups of converged DAS nodes that respectively provide compute and storage capacity are shown, those functions could be integrated into a single group of dual-function converged DAS nodes. The DAS servers106may be identical, general purpose server computers. As is known in the art, server computers include processors and volatile memory. The processors may include central processing units (CPUs), graphics processing units (GPUs), or both. The volatile memory may include dynamic random-access memory (DRAM) of any kind. The non-volatile storage108may include one or more solid-state drives (SSDs), hard disk drives (HDDs), or both. The DAS storage nodes, which are in the first group102, are homogenous in the sense that they all have the same total non-volatile storage capacity. Moreover, that same-size storage capacity is organized into same-size cells, so each storage node has the same number of cells available for maintenance of host application data. The cells may be partitions or allocations, for example, and without limitation, and multiple drives may be abstracted as a single logical volume. As will be explained in greater detail below, the cells are used to store members of data protection groups such that no more than one member of any single protection group is stored by any one of the storage nodes. Thus, a member that becomes inaccessible due to storage node failure can be rebuilt using the remaining (accessible) members. Spare cells are maintained for rebuilding inaccessible members in the event of storage node failure. More specifically, in response to failure of one of the storage nodes, the protection group members that were stored in cells of that failed storage node are rebuilt in spare cells on the remaining non-failed storage nodes. FIG.2illustrates software components of the converged DAS storage system ofFIG.1. Each of the DAS nodes is converged in the sense that software components enable multiple data access paths so the rack functions as a single storage system. Storage data client (SDC) components200running on the DAS servers106of the second group104(compute nodes) provide the underlying operating system (OS) or hypervisor, and thus the host application instances206, with access to logical blocks of data stored on logical volumes of storage204by sending data access commands to the DAS servers of the first group102(storage nodes). Storage data server (SDS) components202running on the DAS servers106of the first group102respond to the commands by accessing the non-volatile storage108that backs the logical volumes of storage204. The SDS components also provide storage-related services such as creating and maintaining data protection groups and spares and responding to storage node failures. Resiliency is based on redundant array of independent disks (RAID) or erasure coding (EC) protection groups. Each protection group has D data members and P parity members, where the values of D and P depend on the RAID level or EC type that is implemented. A protection group width W=D+P. A failed parity member of a group is rebuilt by using the data members, and a failed data member of a group is rebuilt by using the parity members. 
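To make the D+P arithmetic concrete, the following Python sketch rebuilds a lost member from the surviving members using a single XOR parity member (D=5, P=1, so W=6). This is only an illustration of the rebuild principle, not the specific RAID or erasure coding scheme implemented by the SDS components.

import functools
import operator

def xor_parity(members):
    # single-parity illustration: the parity member is the bitwise XOR of the data members
    return functools.reduce(operator.xor, members)

D, P = 5, 1
W = D + P                                        # protection group width, here 6
data = [0b1010, 0b0110, 0b1111, 0b0001, 0b1001]  # D data members
parity = xor_parity(data)                        # P parity member

# simulate loss of the data member at index 2 and rebuild it from the survivors
survivors = data[:2] + data[3:]
rebuilt = xor_parity(survivors + [parity])
assert rebuilt == data[2]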
FIG.3illustrates a "minimal configuration" of the converged DAS node rack storage system ofFIG.1with spares. The total storage capacity of all non-volatile storage in the rack is modeled as a matrix of indexed storage nodes and indexed cells. Only one member per protection group can be located on the non-volatile storage of any single storage node, so the members of each protection group are stored in W cells that are distributed across W storage nodes. Spare cells are reserved so that protection group members on a failed storage node can be rebuilt on non-failed storage nodes. A minimum of W+1 nodes are required to maintain one node's worth of spare cells to enable recovery from a single storage node failure. W protection groups are created in the minimal configuration, where W−1 protection groups have members distributed vertically, and one protection group has members distributed diagonally. With RAID-5 (5+1) or EC (4+2), for example, the protection group size W=6 and members of protection group 1 are distributed vertically in cell 1 of nodes 1 through 6, members of protection group 2 are distributed vertically in cell 2 of nodes 1 through 5 and 7, etc. Members of protection group 6 are distributed diagonally in cells 1 through 6 of nodes 7 through 2. The spare cells (unnumbered) are distributed in cell 6 of nodes 1 and 3 through 7. Referring toFIGS.4A, and4B, the minimal configuration is created from a simpler configuration with W−1 protection groups distributed vertically over W storage nodes, where the last cell per storage node is unused, as specifically shown inFIG.4A. One protection group member from each vertically-oriented protection group (2 to W−1) is relocated to storage node W+1, thereby freeing cells for the diagonally-oriented protection group (6 in the illustrated example) as specifically shown inFIG.4B. The transformation algorithm can be expressed as follows:
1. Add a new storage node N, where N=W+1:
   for (i=2; i<W; i++)
      node[N].cell[i] = node[N+1−i].cell[i]  // relocate group member to the new storage node
2. Create a new protection group (W) using the diagonally-oriented cells.
3. Reserve the unused cells as spares.
FIGS.5A and5Billustrate contemporaneous addition of multiple new storage nodes. New storage nodes can be added to the system individually using the transformation procedure described above to create new protection groups using diagonally-oriented cells that become free after relocating protection group members of the original (vertically distributed) protection groups. When contemporaneously adding multiple new storage nodes there will be W−2 protection group members moved to one storage node, W−3 protection group members moved to the next storage node, etc. Adding W−2 new storage nodes will incur a maximum data movement of (W−2)(W−1)/2 protection group members per Gauss' formula. Adding more than W−2 new storage nodes will incur the same amount of data movement as adding W−2 storage nodes. For example, two new groups 7 and 8 are created after two new storage nodes 8 and 9 are added, as shown in the figures. There are W spare cells (in last column) for recovery from a single storage node failure. The algorithm for adding K new storage nodes to M existing storage nodes can be expressed as follows:
1. N = Minimum(W−2, K)  // whichever is smaller
   for (i=0; i<N; i++) {
      A = i+2
      for (j=A; j<W; j++)
         node[M+K−i].cell[j] = node[M+A−j].cell[j]  // relocate to new node
   }
2. Create K new protection groups using the diagonally-oriented cells.
3. Reserve the unused cells as spare cells.
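A minimal Python sketch of the single-node transformation is given below; the dictionary-of-dictionaries matrix, the 1-based indexing and the function name are illustrative choices rather than part of the disclosed system. Adding K nodes iterates the same relocation pattern per the second listing above.

def minimal_configuration(D, P):
    # grid[node][cell] holds a protection group number or None (1-based indices)
    W = D + P
    grid = {n: {c: (c if c < W else None) for c in range(1, W + 1)}
            for n in range(1, W + 1)}            # W-1 vertical groups, last cell unused
    N = W + 1                                    # step 1: add a new storage node N
    grid[N] = {c: None for c in range(1, W + 1)}
    for i in range(2, W):                        # relocate one member per vertical group
        grid[N][i] = grid[N + 1 - i][i]
        grid[N + 1 - i][i] = None
    grid[N][1] = W                               # step 2: new diagonal protection group W
    for i in range(2, W):
        grid[N + 1 - i][i] = W
    grid[2][W] = W
    return grid                                  # step 3: remaining None cells are the spares

layout = minimal_configuration(5, 1)
spares = [(n, c) for n, row in layout.items() for c, g in row.items() if g is None]
# with W=6 the spares land in cell 6 of nodes 1 and 3 through 7, matching FIG. 3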
FIGS.6A,6B,7A, and7Billustrate storage node failure recovery. The existence of W spare cells is sufficient to rebuild the protection group members of any single storage node. However, some of the non-failed storage nodes already contain members of the same protection groups as the failed storage node so the protection group members must be rebuilt without locating multiple members of the same protection group on a single storage node. As shown in the simple example illustrated byFIGS.6A and6B, when storage node 4 fails, its protection group members 1, 2, 3, and 5 are rebuilt at spare cells on storage nodes 7, 6, 5, and 3 (in descending order), while its protection group member 6 is rebuilt on the spare cell on storage node 1. This is the only assignment of protection group members to spare node/cell combinations that satisfies the RAID/EC requirement that only one member per protection group can be located on any single storage node. The algorithm for assigning rebuilt members to spare cells can be expressed as follows:
1. Let N=the last storage node, and W=the first diagonally-oriented protection group.
2. Check protection group number (g) of each cell (from left to right) of the failed storage node:
   if (g<W) add protection group to list A;
   else add protection group to list Z.
3. List A will be naturally sorted with protection groups in ascending order.
4. Sort list Z so that the protection groups will be in descending order.
5. Create list L by appending list A to the end of list Z.
6. Create a list of spares (S) with the 1st spare from storage node 1, and subsequent spares from storage nodes N, N−1, N−2, etc.
7. Assign the spares of list S to the protection groups of list L in order: the 1st spare to the 1st protection group, the 2nd spare to the 2nd protection group, etc.
As shown in the more complex example illustrated byFIGS.7A and7B, when storage node 5 fails, list A={1, 2} and list Z={9, 8, 7, 6}. The combined list L={9, 8, 7, 6, 1, 2}. List S contains spare cells at storage nodes {1, 10, 9, 8, 7, 6}. As shown inFIG.7B, the protection groups of list L are assigned to the spare cells of list S in order. After the rebuild, the system does not have spare capacity until the failed storage node is repaired or replaced, but RAID/EC protection prevents data loss from occurring due to another storage node failure. After the failed storage node is repaired or replaced, all rebuilt protection group members are moved back to their original (pre-rebuild) locations, and the spare capacity is restored. FIG.8illustrates addition of more spare capacity for greater resiliency. The system is organized as independent subsets of storage nodes, where each storage node subset has enough spare cells to recover from one storage node failure. In the illustrated example, a first subset includes storage nodes 1 through 10 and a second subset includes storage nodes 11 through 19. If two storage nodes of the same subset are in a failed state at the same time, then the unused spare cells of a different subset may be shared for use in rebuilding the second failed storage node. Each storage node failure will consume the spares of just one subset.
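The spare-assignment listing above can be sketched directly in Python. In the sketch below, skipping the failed storage node when building list S is inferred from the worked examples (the node 4 and node 5 failures); the function name and argument layout are illustrative.

def assign_spares(failed_node, failed_cells, N, W):
    # failed_cells: protection group number held in each cell (left to right) of the failed node
    # N is the index of the last storage node
    A = [g for g in failed_cells if g < W]                         # steps 2-3: naturally ascending
    Z = sorted((g for g in failed_cells if g >= W), reverse=True)  # step 4: descending
    L = Z + A                                                      # step 5: append list A after list Z
    spare_nodes = [1] + [n for n in range(N, 1, -1)
                         if n != failed_node and n != 1]           # step 6 (failed node skipped)
    return list(zip(L, spare_nodes))                               # step 7: nth spare to nth group

# the FIG. 6A/6B example: node 4 fails in the W=6, seven-node minimal configuration
print(assign_spares(4, [1, 2, 3, 6, 5], N=7, W=6))
# -> [(6, 1), (1, 7), (2, 6), (3, 5), (5, 3)]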
FIG.9illustrates a method for transforming a system with W−1 protection groups into a "minimal configuration" with spares and adding new storage nodes individually. Step300is creating W same-size cells in W homogeneous, converged DAS storage nodes, where W=D+P. Step302is creating node and cell indices, thereby enabling the storage to be modeled as a W-by-W matrix of storage node and cell combinations. Step304is creating W−1 protection groups that are distributed vertically over the W storage nodes, where the last cell per node is unused. Step306is adding a new storage node N. Protection group members can then be selected and relocated. Step308is relocating the protection group members at storage node [N+1−i]. cell [i] to storage node [N]. cell [i] for incremental values of i that are less than W, starting with i=2. The result of the relocations is a group of diagonally oriented free cells. Step310is creating a new protection group in the diagonally-oriented free cells. The new protection group is assigned the next incremental index number, which will be W for the first new storage node. Step312is reserving the unused cells as spares. Steps306through312may be iterated for each new storage node, but the addition of a single new storage node may suffice to transform a system with W−1 protection groups into a minimal configuration with spares. FIG.10illustrates a method for adding multiple new storage nodes. As indicated in step400, K new storage nodes are added to a system with M storage nodes. As indicated in step402, N is selected as the lesser of W−2 and K. Then, for incremental values of i less than N, starting with i=0, and for incremental values of j less than W, starting with j=A where A=i+2, the protection group member at storage node [M+A−j]. cell [j] is relocated to storage node [M+K−i]. cell [j]. The result of the relocations is adjacent groups of diagonally-oriented free cells. Step404is creating K new protection groups using the adjacent groups of diagonally-oriented free cells. The new protection groups, from upper to lower, are assigned the next incremental protection group numbers. Step406is reserving the unused cells as spares. Steps400through406may be iterated for additional new nodes. FIG.11illustrates a method for recovering from node failure. Failure of a storage node is detected by the SDCs and/or SDSs in step500. If it is determined in step502that there is only one failed storage node in the subset, then the spare cells of that subset are used for rebuilding the protection group members of the failed storage node. If it is determined in step502that there is already an existing failed storage node in the subset, then the spare cells of a different subset are borrowed to be used for rebuilding the protection group members of the failed storage node, as indicated in step506. Step504is checking the protection group number (g) of each failed cell from left to right in the matrix model and adding that protection group to list A if g is less than W; otherwise adding the protection group to list Z. List A is naturally sorted with the protection groups in ascending order by index number. Step508is sorting list Z by protection group number in descending order. Step510is creating a list L by appending the members of list A to the end of list Z. Step512is creating a list S of spare cells in order beginning with the first spare cell from node 1 and proceeding with the spare cells of other storage nodes in descending order beginning with storage node N. The spare cells of list S are assigned to protection groups of list L in order by assigning the nth spare in list S to the nth protection group in list L. The protection groups are then rebuilt in the assigned spare cells.
Eventually, the failed storage node is repaired or rebuilt as indicated in step516. Responsive to rebuild or repair of the failed storage node, the original protection group member locations and spares are restored by relocations as indicated in step518. Although no specific advantages should be viewed as limiting the inventive aspects, at least some of the disclosed aspects offer improvements such as efficient use of storage resources and reduction or minimization of data movement. Non-deterministic heuristics and metadata are often required to manage the space of a resilient data storage system with RAID or Erasure Coding groups. The disclosed aspects based on deterministic algorithms achieve full (100%) space efficiency with minimum metadata requirement, as the space allocation and assignment can be calculated. The cells are fully utilized for member cells of RAID or EC protection groups and spare capacity, without unusable “wasted” cells, because of deterministic algorithm-based space management. Specific examples have been presented to provide context and convey inventive concepts. The specific examples are not to be considered as limiting. A wide variety of modifications may be made without departing from the scope of the inventive concepts described herein. Moreover, the features, aspects, and implementations described herein may be combined in any technically possible way. Accordingly, modifications and combinations are within the scope of the following claims.
18,916
11860747
DETAILED DESCRIPTION In order to enable those skilled in the art to better understand the solution of the present application, the present application will be further described in detail below in conjunction with the accompanying drawings and specific embodiments. Apparently, only a part of the embodiments, not all the embodiments of the present application, are described. All other embodiments obtained, based on the embodiments described in the present application, by those skilled in the art without paying creative efforts shall fall within the protection scope of the present application. The terms “first”, “second”, “third” and “fourth” as used in the description, claims and the above drawings of the present disclosure are used to distinguish different objects, rather than indicating a specific order. Furthermore, the terms “comprising” and “having”, and any variations thereof, are intended to indicate a non-exclusive inclusion. For example, a process, method, system, product, or device comprising a series of steps or units is not limited to the listed steps or units, but may include unlisted steps or units. After having introduced the technical solutions of the embodiments of the present disclosure, various non-limiting implementations of the present application will be described in detail below. First, referring toFIG.2that is a schematic flow diagram illustrating a method for performing a power stress test on an FPGA acceleration card according to an embodiment of the present disclosure, the embodiment of the present disclosure may include contents described below. In S201, the FPGA acceleration card is divided, according to a partial reconfiguration (PR) method, into a static region and a dynamic PR region in advance, and FPGA firmware with a partial reconfiguration function is burned to a Flash memory. In this step, internal hardware resources of the FPGA are divided into the static region and the dynamic PR region through FPGA partial reconfiguration. FPGA partial reconfiguration (PR) is a loading technology that may be used for reconfiguring local regions in the FPGA dynamically. This technology allows to redownload profiles in a partial reconfiguration region without affecting normal operation of other regions, realizing the function of switching among different services. This technology is quite suitable for time division multiplexing of internal hardware resources of the FPGA to implement a complex system with different functions, and may effectively reduce the hardware resource overheads for system implementation. For example, the FPGA chip may be divided into a region A and a region B, where A is the FPGA static region, and B is the dynamic PR region that may be partially reconfigured. The region B may reload PR profile1and PR profile2without affecting the normal operation of modules in the region A, realizing the function of switching between services running in the region B. The static region of the present disclosure is a hardware logic implementation region for functional tests other than the power stress tests, for example normal functional tests such as the signal integrity test, the power integrity test and the like. The dynamic PR region may include a blank mode occupying no hardware resource and a power test mode for performing the power stress test, and is used as a hardware logic implementation region for the power stress test. The blank mode is used for performing tests other than the power stress test in conjunction with the static region. 
That is, in the overall server environment, when the power stress test, such as a heat dissipation test, a system stress test, and a safety test, needs to be executed for the FPGA, the dynamic PR region is reconfigured as power stress test modules of different levels, and operates in the power test mode, so as to meet the needs of different test items. In other test items that do not require an FPGA power stress function, such as the signal integrity test, the power integrity test and a basic board ex-factory test, profiles of dynamic PR region in a default operating status, that is, the blank mode, may be used, or the dynamic PR region is configured as the blank mode in response to an upper-level instruction, and the power stress test modules are removed so as to perform other test items in an environment other than the overall server environment. In this way, test firmware for the FPGA acceleration card is unified, and development and maintenance of the test firmware are simplified. In this step, the dynamic PR region includes two operating modes, and the blank mode may be set as the default mode. The so-called default mode refers to that the operating status of the dynamic PR region is the blank mode by default unless the dynamic PR region is set. Of course, the operating mode of the dynamic PR region may be set according to the upper-level instruction. The dynamic PR region in the blank mode in conjunction with the static region may be used to compile a FPGA project as a project version including various functional test modules, so as to generate profiles of the entire FPGA and profiles of the dynamic PR region in the blank mode for use in test fields where the power stress is not required. The profiles of the entire FPGA are burned to the Flash and cured, so that the dynamic PR region is in the blank mode by default after being powered on, the FPGA is not stressed, and all other functions are normal, which is for tests that do not require power stress. Initial FPGA firmware with the PR function is burned to the Flash, so that whether to stress the FPGA may be flexibly controlled by an upper layer, avoiding using a JTAG cable to load and switch FPGA firmware with different functions. In S202, in response to receiving a request for power stress test, the operating mode of the dynamic PR region is set as the power test mode, and dynamic PR profiles burned into the Flash are loaded to the dynamic PR region. In this step, driver software may be pre-installed on the HOST and served as upper application software, and the dynamic PR region is reconfigured through peripheral component interconnect express (PCIe). The upper application software on the HOST may be used to load PR profiles for different operating modes to the dynamic PR region of the FPGA through PCIe, so as to meet different test items. For tests that require power stress, the operating mode of the dynamic PR region may be reconfigured through PCIe based on the driver software on the HOST, and power stress files of different levels are loaded according to power stress test requirements of users. In S203, the power stress test is executed in the dynamic PR region by calling power stress test modules. The power stress test modules of the embodiment are fixed in the dynamic PR region. When a power stress test, such as the heat dissipation test, needs to be executed, after the dynamic PR region is configured in S202, the power stress test is executed in the dynamic PR region. 
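The host-side ordering of S202 and S203 can be sketched as follows; the mode constants and the set_mode/load_profile/start_test callables stand in for whatever PCIe driver interface is actually used and are assumptions of this sketch, not an API defined by the disclosure.

BLANK_MODE, POWER_TEST_MODE = 0, 1

def run_power_stress(request, set_mode, load_profile, start_test):
    # set_mode/load_profile/start_test: callables supplied by a hypothetical PCIe driver
    # layer on the HOST; only the ordering of S202/S203 is captured here
    if not request.get("power_stress"):
        set_mode(BLANK_MODE)                   # other test items keep the default blank mode
        return
    set_mode(POWER_TEST_MODE)                  # S202: reconfigure the dynamic PR region
    load_profile(request["stress_level"])      # S202: load the dynamic PR profile from Flash
    start_test()                               # S203: call the power stress test modules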
In the technical solution provided by the embodiment of the present disclosure, the internal hardware resources of the FPGA are divided into the static region and the dynamic PR region through FPGA partial reconfiguration, and the power stress test modules are fixed in the dynamic PR region for implementation, facilitating replacement of power stress test modules of different levels. The FPGA static region may be compiled in conjunction with the power stress test modules of different levels, so as to generate power stress profiles of different levels. FPGA profiles compiled and generated with the dynamic PR region in the blank mode are applicable to test fields that do not require power stress, and the static power of a board will not be increased. A user only needs to burn the firmware once to the Flash, and later may flexibly control whether to stress the FPGA through an upper layer of the HOST, avoiding using a JTAG cable to load and switch FPGA firmware with different functions. Dynamic PR profiles jointly compiled and generated with the power stress test modules are applicable to test fields that require power stress. After the FPGA project is cured to the Flash, the dynamic PR region needs to be configured as the power stress mode through PCIe, realizing the purpose of applying unified test firmware to different test fields, shortening the test time of each test field, improving the efficiency of each test item, and simplifying the development and maintenance of the test firmware. It should be noted that the steps of the present application are not necessarily to be strictly performed in a sequential execution order; these steps may be executed simultaneously or in a pre-set order, as long as they conform to the logical order.FIG.1is merely schematic and does not mean that the steps can only be executed in the illustrated order. In the above embodiment, a partial reconfiguration region is divided from the internal hardware resources of the FPGA through FPGA partial reconfiguration and serves as the hardware logic implementation region of the power stress test modules; PR is introduced into the power stress test, and the power stress test modules are fixed in the PR region for implementation, facilitating replacement of the power stress test modules of different levels. On the other hand, other test items such as the signal integrity test and the power integrity test are implemented in the FPGA static region. In an implementation, in order to further improve the flexibility of the FPGA power stress test, on the basis of the above embodiment, the present application may further include the following steps. In response to receiving an instruction for configuring power stress test modules transmitted from the HOST, a plurality of power stress test modules occupying different amounts of logic resources are generated for the dynamic PR region and serve as hardware logic for the dynamic PR region operating in the power test mode. After the plurality of power stress test modules of different levels are generated, corresponding stress parameters may be automatically configured for power stress test modules of the same level in response to receiving an instruction for configuring power stress parameters transmitted from the HOST, so as to control a power value of the FPGA acceleration card.
The dynamic PR region operating in the blank mode and the static region are jointly compiled to generate FPGA profiles and profiles for dynamic PR region in the blank mode, which are served as the FPGA firmware with the partial reconfiguration function, and the FPGA firmware is burnt and cured to the Flash. The plurality of power stress test modules and the static region are jointly compiled to generate dynamic PR profiles of various power stress levels. The dynamic PR profiles may be burned and cured to the Flash in advance, or may be burned and cured to the Flash upon receiving the request for power stress test. In the embodiment, power stress test modules of different levels may be generated according to requirements, for example, the following two different levels are used: about 30% of hardware logic resources in the dynamic PR region; and about 80% of the hardware logic resources. The power stress test modules of the two different levels are used as hardware logic of the dynamic PR region, and are jointly compiled with the static region FPGA project to generate power stress profiles of two levels for the dynamic PR region. The power stress test modules of different levels may be flexibly configured through the upper driver software, and different stress parameters may be configured for power stress programs corresponding to the same level, so that FPGA power is more accurately controlled. In another implementation, the static region may include peripheral component interconnect express (PCIe), a double data rate (DDR) driver, an optical module driver, and a reconfiguration module. Only a connection interface is reserved between the static region and the dynamic PR region, and no other hardware logic resource in the regions is used. PCIe is used as a data communication interface and an instruction issuing interface between the HOST and the FPGA acceleration card. For example, the operating mode of the dynamic PR region may be configured, based on a mode adjustment instruction transmitted from the HOST, as the power test mode through PCIe. The DDR driver is configured to drive a double data rate memory. The optical module driver is configured to drive various optical modules in the FPGA acceleration card. The reconfiguration module is configured to cause the HOST, based on the partial reconfiguration method, to load the dynamic PR profiles to the dynamic PR region through PCIe. In this embodiment, the power stress program and a normal functional item test program are unified, the development and maintenance of FPGA firmware in different test fields during the development and test phase of FPGA boards is simplified, the FPGA power level are flexibly configured through the dynamic PR region, and the adjustable range of the FPGA power test is further expanded. By means of PR, the blank mode is regarded as power level 0, and may be used in other test fields that do not require FPGA power stress. An embodiment of the present disclosure further provides an apparatus corresponding to the method for performing the power stress test on the FPGA acceleration card, making the method more practical. The apparatus may be described in terms of functional modules and hardware. An apparatus for performing a power stress test on an FPGA acceleration card according to the embodiment of the present disclosure is described below. 
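One way to picture the level and parameter configuration is a small table mapping stress levels to target resource utilization, with per-level parameters layered on top. The 30% and 80% figures come from the example above; the toggle_rate knob is a hypothetical parameter added only for illustration.

STRESS_LEVELS = {
    0: {"resource_utilization": 0.0},   # blank mode, no power stress
    1: {"resource_utilization": 0.3},   # roughly 30% of the dynamic PR region logic resources
    2: {"resource_utilization": 0.8},   # roughly 80% of the dynamic PR region logic resources
}

def configure_stress(level, toggle_rate=1.0):
    # toggle_rate: illustrative per-level tuning parameter for finer power control
    params = dict(STRESS_LEVELS[level])
    params["toggle_rate"] = toggle_rate
    return params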
The apparatus for performing the power stress test on the FPGA acceleration card described below and the method for performing the power stress test on the FPGA acceleration card described above may be cross-referenced. In terms of functional modules, referring toFIG.4, which is a structural diagram illustrating a specific implementation of the apparatus for performing the power stress test on the FPGA acceleration card provided by the embodiment of the present disclosure, the apparatus may include a region pre-division module401, a stress test configuration module402and a power stress test execution module403. The region pre-division module401is configured to: divide the FPGA acceleration card into a static region and a dynamic PR region according to a partial reconfiguration method; and burn FPGA firmware with a partial reconfiguration function to the Flash, the static region being a hardware logic implementation region for a functional test other than the power stress test, the dynamic PR region including a blank mode occupying no hardware resource and a power test mode for performing the power stress test, and the dynamic PR region in the blank mode being used to execute tests other than the power stress test in conjunction with the static region. The stress test configuration module402is configured to: in response to receiving a request for power stress test, set an operating mode of the dynamic PR region as the power test mode; and load dynamic PR profiles burned into the Flash to the dynamic PR region. The power stress test execution module403is configured to call power stress test modules to execute the power stress test in the dynamic PR region. In some implementations of this embodiment, the region pre-division module401may include a power stress hardware logic setting sub-module and a firmware burning sub-module. The power stress hardware logic setting sub-module is configured to: in response to receiving an instruction for configuring power stress test modules transmitted from the HOST, generate a plurality of power stress test modules occupying different amounts of logic resources for the dynamic PR region, the power stress test modules being used as hardware logic for the dynamic PR region operating in the power test mode. The firmware burning sub-module is configured to: jointly compile the dynamic PR region operating in the blank mode with the static region to generate an FPGA profile and a profile of the dynamic PR region in the blank mode as the FPGA firmware with the partial reconfiguration function; and burn and cure the FPGA firmware to the Flash. In some implementations of the embodiment of the present disclosure, the region pre-division module401may further include a dynamic PR profile generation sub-module. The dynamic PR profile generation sub-module is configured to jointly compile the plurality of power stress test modules with the static region to generate dynamic PR profiles with various power stress levels. The dynamic PR profiles are burned and cured to the Flash in advance, or are burned and cured to the Flash upon receiving the request for power stress test. In other implementations of the embodiment of the present disclosure, the region pre-division module401may further include a stress parameter setting sub-module.
The stress parameter setting sub-module is configured to automatically configure, in response to receiving an instruction for configuring power stress parameters transmitted from the HOST, corresponding stress parameters for power stress test modules of the same level, so as to control a power value of the FPGA acceleration card. In some other implementations of this embodiment, the static region may include PCIe, a DDR driver, an optical module driver, and a reconfiguration module. Only a connection interface is reserved between the static region and the dynamic PR region. The reconfiguration module is configured to cause the HOST to load, according to the partial reconfiguration method, the dynamic PR profiles to the dynamic PR region through PCIe. In some implementations of the embodiment of the present disclosure, the stress test configuration module402may also be a module that configures, based on a mode adjustment instruction transmitted from the HOST, the operating mode of the dynamic PR region as the power test mode through PCIe. The functions of each functional module of the apparatus for the power stress test of the FPGA acceleration card according to the embodiment of the present disclosure may be specifically implemented according to the method in the above method embodiment. For the specific implementation process, please refer to the relevant description of the above method embodiment, which will not be repeated herein. It can be seen from the above that, according to the embodiment of the present disclosure, the firmware with the power stress function may be directly used to perform the power stress test on the FPGA acceleration card without increasing the static power of the FPGA acceleration card. The above apparatus for performing the power stress test on the FPGA acceleration card is described in term of the functional modules. Further, the present application further provides an apparatus for performing a power stress test on an FPGA acceleration card, which is described in term of hardware.FIG.5is a structural diagram illustrating another apparatus for performing a power stress test on an FPGA acceleration card according to an embodiment of the present application. The apparatus includes: a memory, configured to store a computer program; and a processor, configured to, when performing the computer program, implement the steps of the method for the power stress test of the FPGA acceleration card according to any one of the above embodiments. The memory may include one or more computer-readable storage media which may be non-transitory. The memory may further include a high-speed random access memory, and a non-volatile memory, such as one or more disk storage devices and Flash storage devices. In this embodiment, the memory is configured to store at least the following computer program. The computer program, after being loaded and executed by the processor, is capable of implementing relevant steps of the method for performing the power stress test on the FPGA acceleration card according to any one of the foregoing embodiments. In addition, resources stored by the memory may also include operating systems, data, etc., and may be stored temporarily or permanently. The operating systems may include Windows, Unix, Linux, etc. The data may include, but is not limited to, data corresponding to test results, etc. 
In some embodiments, the apparatus for performing the power stress test on the FPGA acceleration card may further include an input/output interface, a communication interface, a power supply, and a communication bus, for example, may further include a sensor. The functions of each functional module of the apparatus for performing the power stress test on the FPGA acceleration card according to the embodiment of the present disclosure may be specifically implemented according to the method in the above method embodiment. For the specific implementation process, please refer to the relevant description of the above method embodiment, which will not be repeated herein. It can be seen from the above that, according to the embodiment of the present disclosure, the firmware with the power stress function may be directly used to perform the power stress test on the FPGA acceleration card without increasing the static power of the FPGA acceleration card. It is to be understood that, if the method for performing the power stress test on the FPGA acceleration card according to the above embodiment is implemented in the form of a software functional unit and sold or used as a separate product, the method may be stored in a computer-readable storage medium. Based on this understanding, that part of the technical solution of the present application that essentially contributes to the related art or all or part of this technical solution may be embodied in the form of a software product. The computer software product is stored in a storage medium, and performs all or part of the steps of the method according to various embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), an electrically erasable programmable ROM, a register, a hard disk, a removable disk, a CD-ROM, a diskette or a CD-ROM, and various other media that may store program codes. Based on the above, an embodiment of the present disclosure further provides a computer-readable storage medium storing a program for performing power stress test on an FPGA acceleration card. The program for performing power stress test on the FPGA acceleration card, when executed by a processor, implements the steps of the method for performing the power stress test on the FPGA acceleration card according to any one of the above embodiments. The functions of each functional module of the computer-readable storage medium according to the embodiment of the present disclosure may be specifically implemented according to the method in the above method embodiment. For the specific implementation process, please refer to the relevant description of the above method embodiment, which will not be repeated herein. The firmware with the power stress function may be directly used to perform the power stress test on the FPGA acceleration card without increasing the static power of the FPGA acceleration card. Various embodiments described in the description are described in a progressive manner, each embodiment focuses on the difference from other embodiments, and the same or similar parts of each embodiment can be referred to each other. The apparatus disclosed in the embodiment corresponds to the method disclosed in the embodiment, thus the description is relatively simple, and for the related information, please refer to the description of the method. 
Those skilled in the art can further realize that the exemplary units and algorithm steps described in conjunction with the embodiments disclosed herein can be implemented by electronic hardware, computer software, or a combination of the two. In order to clearly illustrate the interchangeability between hardware and software, the composition and steps of each example have been generally described according to their functions in the above description. Whether these functions are executed by hardware or software depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be regarded as exceeding the scope of the present disclosure. The method and apparatus for performing a power stress test on an FPGA acceleration card, and computer-readable storage medium provided in the present application have been introduced in detail above. Herein, specific examples are used to illustrate the principle and implementation of the present disclosure, and the descriptions of the above embodiments are only used to help understand the method and core idea of the present disclosure. It should be pointed out that for those skilled in the art, without departing from the principle of the present disclosure, some improvements and modifications can also be made to the application, and these improvements and modifications also fall within the protection scope of the claims of the application.
25,548
11860748
DETAILED DESCRIPTION The following clearly and completely describes the technical solutions in the embodiments of the present disclosure with reference to the accompanying drawings in the embodiments of the present disclosure. Apparently, the described embodiments are some but not all of the embodiments of the present disclosure. All other embodiments obtained by those skilled in the art based on the embodiments of the present disclosure without creative efforts should fall within the protection scope of the present disclosure. It should be noted that the embodiments in the present disclosure and features in the embodiments may be combined with each other in a non-conflicting manner. A test of an LPDDR SDRAM in a system segment can be divided into, for example, three levels. A first level is, for example, a module of a memory built-in self-test (MBIST) included in a memory controller on an SoC. The MBIST can make some state-level tests on internal memory, which is a test for a DRAM, solidified in the SoC, and poorly configurable. A second level is, for example, a universal boot loader (Uboot) level. Test software at this level has better scalability and can test an entire memory array space. However, such a test generally requires an original support of the SoC and a development degree of an SoC manufacturer, and because the test runs on the SoC, a processing speed of the test is limited by a size of a static random access memory (SRAM) of the SoC. A third level is, for example, a test in a base of an Android/Linux system. The test on this level is closer to an applicable manner of a user, and test software have very good scalability and diversity, but cannot cover an entire memory space, because the Android/Linux system needs to occupy a large part of the memory space. There may be two types of stress tests related to an Android/Linux base. One type is, for example, a test at an Android native layer, and the other type is, for example, a test made by image software such as 3DMark. Both of the two software test methods have certain defects. Input/Output (IO) access of the test at the native layer cannot be fully loaded, while the 3DMark software is used to test a GPU and is not configured to test the memory. Therefore, a related memory test algorithm cannot be implemented, and IO access of the memory cannot be fully loaded. The present disclosure provides a memory test method, a CPU is driven to run a test program based on an accessible space of a CPU, to access a memory to-be-tested through a bus of memory to-be-tested, and when the CPU runs the test program, the CPU controls a GPU to access the memory to-be-tested based on an accessible space of a GPU through the bus of memory to-be-tested, thereby implementing a stress test with high access load on the memory and enhancing an effect of the memory test. FIG.1shows an exemplary system architecture10to which a memory test method or a memory test apparatus of the present disclosure is applicable. As shown inFIG.1, the system architecture10may include a CPU102and a GPU104, and may further include a memory bus106and a memory controller108. The parts of the system architecture10are described below. As shown inFIG.1, the CPU102may be configured to run a test program based on an accessible capacity of an allocated CPU to access a memory to-be-tested, when the CPU runs the test program, the CPU controls the GPU104to access the memory to-be-tested based on an accessible capacity of an allocated GPU. 
The memory (such as internal memory) test method performed by the CPU is a test performed by directly accessing the memory, and the entire test program can be run directly on the CPU. The CPU can operate the GPU through a universal interface of Open Graphics Library (OpenGL) or Open Computing Language (OpenCL). Test code run by the CPU uses such a universal interface to control the GPU to access the internal memory, to translate an internal memory test pattern of the CPU into image processing logic of the GPU, so that a test pattern of the GPU is consistent with that of the CPU. In some embodiments, for example, items of the CPU test may be divided into three aspects. A first aspect is, for example, an array test; a second aspect is, for example, an IO test; and a third aspect is, for example, a state switching test. The array test mainly focuses on whether an array has some hardware errors (hard fail), such as detecting whether a bit flip occurs. The IO test may be divided into, for example, two types, a first type is, for example, a stress test of a command mode (Command line), such as detecting whether an input/output address flip occurs; and a second type is, for example, a stress test of a data bus mode (DQ bus line), such as detecting whether a transmit data flip occurs. The test of the DQ bus line is a main item of the stress test. The state switching test may be divided into two types of test items, where a first type is, for example, a switching suspend test, and a second type is, for example, a reboot test. As shown inFIG.1, the GPU104may be configured to access the memory to-be-tested based on the accessible capacity of the GPU. A GPU memory test mode is an indirect test mode. The GPU does not directly run test code. Instead, the test code is run on the CPU. The CPU runs to instruct the GPU to perform image processing, and the GPU accesses the memory to perform the related operation. As shown inFIG.1, the CPU102access a low-power internal memory through the bus of memory to-be-tested106by the memory controller; and the GPU104accesses the low-power internal memory through the bus of memory to-be-tested106by the memory controller. Regardless of whether the GPU or the CPU accesses the memory alone, there is a bus allocation problem, that is, a part is reserved for the other part and the memory bus106cannot be fully occupied. A joint test mode of the GPU and the CPU can enable stress on the memory test to reach a maximum value. As shown inFIG.1, the memory controller108may be, for example, an internal memory controller, configured to exchange data between the CPU and/or the GPU and memory to-be-tested, by which a maximum memory capacity, a type and speed of memory, a data width, and the like of the memory controlled by the controller can be obtained. A general memory controller can be used for DDR memory of different generations and models. For example, when the memory to-be-tested is a low-power internal memory (LPDDR), an enhanced general DDR memory controller can be used. FIG.2is a flowchart of a memory test method according to an exemplary embodiment. The memory test method shown inFIG.2, may be, for example, applied to the system10. As shown inFIG.2, the memory test method20provided in this embodiment of the present disclosure may include the following steps. As shown inFIG.2, in step S202, an accessible space of a CPU of a memory to-be-tested is obtained. The memory may include an internal memory and an external memory. 
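The array and DQ-line checks can be illustrated with a simple write-then-verify pattern pass. The Python sketch below operates on an ordinary bytearray as a stand-in for a mapped physical region; a real test would run against the addresses actually allocated to the CPU.

def pattern_test(buf, patterns=(0x55, 0xAA, 0x00, 0xFF)):
    # write each pattern across the buffer and verify it, recording any bit flip;
    # 0x55/0xAA alternate adjacent bits, which also exercises the DQ lines
    errors = []
    for p in patterns:
        for i in range(len(buf)):
            buf[i] = p
        for i, v in enumerate(buf):
            if v != p:
                errors.append((i, p, v))
    return errors

buf = bytearray(1 << 20)   # 1 MiB stand-in for an allocated test region
assert pattern_test(buf) == []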
The internal memory may include a register, a cache memory, and a main memory (usually referred to as internal memory). The external memory may include a hard disk, a floppy disk, an optical disk, or the like. The internal memory has a small capacity and a high speed, and is usually configured to temporarily store data and programs currently being executed. The external memory has a large capacity and a low speed, and is usually configured to store data and programs for a long time or permanently. The test method in the present disclosure can be used in various memories and is not limited herein. As shown inFIG.2, in step S204, an accessible space of a GPU of the memory to-be-tested is obtained. In some embodiments, for example, when the memory to-be-tested is internal memory, memory information such as a passable address, a data volume, or a rate may be obtained before a test program is run, the internal memory may be allocated to the CPU and the GPU, and the CPU and the GPU may be set to be accessible to a capacity and an address of the memory to-be-tested. A CPU accessible capacity can be obtained based on the capacity of the memory to-be-tested, a GPU accessible capacity can be obtained based on the capacity of the memory to-be-tested, and addresses of corresponding internal memory can be allocated respectively. As shown inFIG.2, in step S206, the CPU is driven to run a test program based on the accessible space of the CPU, to access the memory to-be-tested through a bus of memory to-be-tested, when the CPU runs the test program, the CPU controls the GPU to access the memory to-be-tested based on the accessible space of the GPU through the bus of memory to-be-tested. An address access mode of the CPU can be converted, through a conversion interface, into a large block address access mode mapped by direct memory access (DMA) of the GPU, to implement conversion of test logic. In some embodiments, for example, the CPU, based on the OpenCL, controls the GPU to access the memory to-be-tested in a predetermined access mode according to the test program. The CPU can control, based on OpenCL, the GPU to access the memory to-be-tested in the predetermined access mode according to the test program. OpenCL is a working standard for writing programs on heterogeneous parallel computing platforms, and can map heterogeneous computing to a CPU, a GPU, a field programmable gate array (FPGA), and other computing devices. OpenCL provides an abstract model of an underlying hardware structure and can provide a universal interface for developing an application. OpenCL can be used to write a general-purpose computing program that runs on the GPU, without mapping an algorithm thereof to an application programming interface of 3D graphics such as OpenGL or DirectX. In some other embodiments, for example, the CPU can control, through an OpenCL or an OpenGL interface, the GPU to access the memory to-be-tested in a predetermined access mode according to the test program. OpenGL is a graphics application programming interface, including a software library that can access a graphics hardware device such as a GPU, and can implement an OpenGL interface on various different graphics hardware systems completely by software. GPU hardware developers need to provide implementations that meet OpenGL specifications, and these implementations are usually referred to as “drive”, configured to translate OpenGL-defined application programming interface commands into GPU instructions. 
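As an illustration of how test code running on the CPU can use such a universal interface to drive the GPU to exercise its share of the memory, the following is a minimal sketch in Python, assuming the third-party pyopencl binding and an OpenCL-capable device are available; the disclosure itself does not prescribe a language, binding, kernel, buffer size, or pattern value, all of which are illustrative here.

```python
# Minimal sketch: the CPU uses OpenCL (via pyopencl, assumed available) to make
# the GPU write a test pattern into its allocated buffer, then reads the buffer
# back and counts mismatches. Sizes, the pattern, and names are illustrative.
import numpy as np
import pyopencl as cl

KERNEL_SRC = """
__kernel void fill_pattern(__global uchar *buf, uchar value) {
    buf[get_global_id(0)] = value;   /* GPU-side write to the memory under test */
}
"""

def gpu_pattern_test(num_bytes: int = 64 * 1024 * 1024, value: int = 0xA5) -> int:
    ctx = cl.create_some_context()                  # pick an OpenCL device (e.g., the GPU)
    queue = cl.CommandQueue(ctx)
    program = cl.Program(ctx, KERNEL_SRC).build()
    gpu_buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE, size=num_bytes)

    # The CPU-side test logic is translated into GPU work: one work-item per byte.
    program.fill_pattern(queue, (num_bytes,), None, gpu_buf, np.uint8(value))

    host = np.empty(num_bytes, dtype=np.uint8)
    cl.enqueue_copy(queue, host, gpu_buf)           # read back for verification
    queue.finish()
    return int(np.count_nonzero(host != value))     # non-zero count suggests bit errors

if __name__ == "__main__":
    print("GPU pattern mismatches:", gpu_pattern_test())
```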
According to the memory test method provided in this embodiment of the present disclosure, the CPU and the GPU are used to access the internal memory simultaneously for a stress test, to fully occupy the bus of memory to-be-tested as much as possible, thereby implementing a stress test with high access load on the memory and enhancing an effect of the memory test. FIG.3is a flowchart of another memory test method according to an exemplary embodiment. The memory test method shown inFIG.3may be, for example, applied to the foregoing system10. As shown inFIG.3, the memory test method30provided in this embodiment of the present disclosure may include the following steps. As shown inFIG.3, in step S302, an accessible space of a CPU of a memory to-be-tested is obtained. As shown inFIG.3, in step S304, an accessible space of a GPU of the memory to-be-tested is obtained. For some specific implementations of steps S302and S304, reference may be made to steps S202and S204, and details are not described herein again. As shown inFIG.3, in step S306, the CPU is driven to run a test program based on the accessible space of the CPU, to access the memory to-be-tested through a bus of memory to-be-tested, when the CPU runs the test program, the CPU controls the GPU to access the memory to-be-tested based on the accessible space of the GPU through the bus of memory to-be-tested, and the CPU and the GPU serially access the memory to-be-tested through the bus of memory to-be-tested. The bus of memory to-be-tested may be, for example, an advanced extensible interface (AXI) bus including a predetermined transmission path, and the CPU and the GPU, according to the test program, serially access the memory to-be-tested through the predetermined transmission path. In some embodiments, for example, the AXI bus includes five independent transmission paths, that is, a read address path, a read data path, a write address path, a write data path, and a write reply path. The CPU and the GPU access the internal memory such as an LPDDR by the AXI bus in a serial manner, that is, transmission is performed in chronological order on a same transmission path. During firmware configuration of a memory chip, a part may be reserved for each of the CPU and the GPU, that is, if the internal memory is accessed by the CPU or the GPU separately for testing, a remaining part of the bus is not occupied. The CPU and the GPU simultaneously read and write to their respective memory spaces to access the internal memory at the same time, so that an AXI bus clock can be fully occupied to maximize an IO test of the internal memory. According to the memory test method provided in this embodiment of the present disclosure, the CPU and the GPU are used to access the internal memory simultaneously for a stress test, to fully occupy the bus of memory to-be-tested as much as possible, thereby maximizing the stress test of the memory and enhancing an effect of the memory test. FIG.4is a flowchart of a memory test according toFIG.1toFIG.3. As shown inFIG.4: After a procedure starts (S402), first obtain information of a memory (S404), then allocate a memory space for a CPU based on the information of the memory (S406), and allocate a memory space for a GPU (S408); after memory allocation is completed, trigger a joint test of the CPU and the GPU (S410), then check a test result (S412); and after the test result indicates that the test is completed, perform a next memory test (S414), and return to step S402. 
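To make the joint test flow of FIG. 4 concrete, the following is a minimal Python sketch under stated assumptions: the CPU-side test is a simple write/read-back pattern check over a host buffer, the GPU-side test is any callable that exercises the GPU's share (for example, an OpenCL-driven test like the sketch above), and the space is split evenly between the two masters. In this toy model, running both workers concurrently is what keeps both masters issuing traffic on the memory bus at the same time; a real implementation would run native test code and drive the GPU through OpenCL or OpenGL as described.

```python
# Minimal sketch of the joint CPU/GPU stress flow of FIG. 4: allocate a share to
# each master, run both tests concurrently, then check the results.
# Buffer sizes, patterns, and the 50/50 split are illustrative assumptions.
import threading
import numpy as np

def cpu_pattern_test(num_bytes: int, rounds: int = 3) -> int:
    """CPU-side write/read-back over its allocated share; returns byte mismatches."""
    buf = np.empty(num_bytes, dtype=np.uint8)
    patterns = [0x55, 0xAA] + [1 << i for i in range(8)]   # checkerboard + walking ones
    errors = 0
    for _ in range(rounds):
        for p in patterns:
            buf[:] = p                                      # stress the write path
            errors += int(np.count_nonzero(buf != p))       # read back and compare
    return errors

def run_joint_test(total_bytes: int, gpu_test) -> dict:
    cpu_share, gpu_share = total_bytes // 2, total_bytes - total_bytes // 2
    results = {}
    t_cpu = threading.Thread(target=lambda: results.update(cpu=cpu_pattern_test(cpu_share)))
    t_gpu = threading.Thread(target=lambda: results.update(gpu=gpu_test(gpu_share)))
    t_cpu.start(); t_gpu.start()                            # both masters access memory together
    t_cpu.join(); t_gpu.join()
    return results

if __name__ == "__main__":
    # Stand-in GPU worker; in practice this would be the OpenCL-driven GPU test.
    print(run_joint_test(128 * 1024 * 1024, gpu_test=lambda n: 0))
```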
FIG.5is an architectural diagram of an implementation platform for a memory test according to an exemplary embodiment. As shown inFIG.5, using an Android base as an example, a memory test can be implemented by calling a drive at a native layer by an Android application (APP). The entire implementation is divided into three parts. A first part is a main control App502of an Android application layer. This part is the main control of the entire test. A second part is to implement conversion of a test program of a GPU, and GPU test conversion508converts a test mode of a CPU into an operation procedure of the GPU by an OpenGL interface510. A third part is that a CPU test engine506runs a test program of a CPU. Because both the second part and the third part are at the native layer of Android, both parts need external interfaces to be called by the main control of the first part. The main control App502, through related application programming interfaces, controls the second part and the third part to perform related operations. The native layer may further include a configuration file504, a recording engine512, and the like. Because debugging and test results need to be retained, a log of the test needs to be stored for the test, and the log can be retained in a/data file of Android by the recording engine512. The implementation inFIG.5can be implemented only when root user (root) permission of Android is granted, and a developer mode is enabled. As shown inFIG.5, using a Linux base as an example, a Linux APP may be directly used to access a virtual address space, and a page lock of a Linux page mapping516at a kernel layer is used for testing, or an APP may be used to call a Linux underlying drive (kernel mode drive)514to complete the memory test. FIG.6is a block diagram of a memory test apparatus according to an exemplary embodiment. The apparatus shown inFIG.6may be, for example, applied to the foregoing system10. As shown inFIG.6, the apparatus60provided in this embodiment of the present disclosure may include a test preparation module602and a test running module604. As shown inFIG.6, the test preparation module602may be configured to obtain an accessible space of a CPU of a memory to-be-tested; and obtain an accessible space of a GPU of the memory to-be-tested. As shown inFIG.6, the test running module604may be configured to drive a CPU to run a test program based on the accessible space of the CPU, to access the memory to-be-tested through a bus of memory to-be-tested, when the CPU runs the test program, the CPU controls a GPU to access the memory to-be-tested based on the accessible space of the GPU through the bus of memory to-be-tested. As shown inFIG.6, the test running module604may be further configured to drive the CPU and the GPU to serially access the memory to-be-tested through the bus of memory to-be-tested. The bus of memory to-be-tested is an AXI bus including a predetermined transmission path. As shown inFIG.6, the test running module604may be further configured to drive the CPU and the GPU, according to the test program, to serially access the memory to-be-tested through the predetermined transmission path. As shown inFIG.6, the test running module604may be further configured to drive the CPU to control, based on an OpenCL, the GPU to access the memory to-be-tested in a predetermined access mode according to the test program. 
As shown inFIG.6, the test running module604may be further configured to control, by the CPU through an OpenGL interface, the GPU to access the memory to-be-tested in a predetermined access mode according to the test program. The memory to-be-tested is, for example, a low-power internal memory. As shown inFIG.6, the test running module604may be further configured to drive the CPU to access the low-power internal memory through the bus of memory to-be-tested by a memory controller. When the CPU runs the test program, the CPU controls the GPU to access the low-power internal memory through the bus of memory to-be-tested by the memory controller. For a specific implementation of each module in the apparatus provided in this embodiment of the present disclosure, reference may be made to the content in the foregoing method, and details are not described herein again. FIG.7is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The device shown inFIG.7only uses a computer system as an example, and should not bring any limitation to a function and a scope of use of this embodiment of the present disclosure. As shown inFIG.7, the device700includes a CPU701, which can perform various appropriate actions and processing based on a program stored in a read-only memory (ROM)702or a program loaded from a storage portion708into a random access memory (RAM)703, for example, can execute a test program to test a connected LPDDR. A GPU712may be further included, and the CPU701may control the GPU to test a connected memory. The RAM703further stores various programs and data required for operations of the device700. The CPU701, the ROM702, and the RAM703are connected to one another by a bus704. An input/output (I/O) interface705is also connected to the bus704. The CPU701, the ROM702, the RAM703, the I/O705, the GPU712, and the like may be integrated on an SoC as needed. As shown inFIG.7, the following components are connected to the I/O interface705: an input part706including a keyboard and a mouse; an output part707including a cathode ray tube (CRT), a liquid crystal display (LCD), and a speaker; a storage part708including a hard disk; and a communication part709including a network interface card such as a LAN card and a modem. The communication part709performs communication processing by a network such as the Internet. A drive710is also connected to the I/O interface705as needed. A removable medium711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive710as needed, so that a computer program read therefrom is installed into the storage part708as needed. As shown inFIG.7, according to this embodiment of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, this embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer readable medium, and the computer program includes program code configured to perform the memory test method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from the network through the communication part709, and/or may be installed from the removable medium711. When the computer program is executed by the CPU701, the foregoing functions defined in the system of the present disclosure are performed. 
The computer readable medium shown in the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination thereof. The computer readable storage medium, may be, for example, but not limited to, electrical, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any combination thereof. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection with one or more conducting wires, a portable computer disk, a hard disk, a RAM, a ROM, an erasable programmable ROM (an EPROM or a flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination thereof. In the present disclosure, the computer readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, the computer readable signal medium may include a data signal propagated in a baseband or as a part of a carrier, and computer readable program code is carried therein. The propagated data signal may be in various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. The computer readable signal medium may alternatively be any computer readable medium other than the computer readable storage medium. The computer readable medium may send, propagate, or transmit a program configured to be used by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer readable medium may be transmitted using any suitable medium, including but not limited to: wireless, wire, optical fiber, RF, or any suitable combination thereof. The flowcharts and block diagrams in the accompanying drawings illustrate architectures, functions, and operations of possible implementations of the system, method, and computer program product according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagram can represent one module, program segment, or part of code, and the module, program segment, or part of the code contains one or more executable instructions configured to implement a defined logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks may alternatively occur in a different order from that marked in the accompanying drawings. For example, two successively shown blocks actually may be executed in parallel substantially, or may be executed in reverse order sometimes, depending on the functions involved. It should also be noted that, each block in the block diagram or flowchart and the combination of the blocks in the block diagram or the flowchart may be implemented by a dedicated hardware-based system that performs a defined function or operation, or may be implemented by a combination of dedicated hardware and computer instructions. The modules described in the embodiments of the present disclosure may be implemented in a form of software or in a form of hardware. The describe modules may alternatively be disposed in the processor, and this, for example, may be described as follows: A processor includes a test preparation module and a test running module. 
Names of the modules do not constitute any limitation on the modules themselves. For example, the test preparation module may alternatively be described as "a module that obtains a memory allocation parameter from a connected terminal". In another aspect, the present disclosure further provides a computer readable medium. The computer readable medium may be included in the device described in the foregoing embodiment, or may exist alone without being assembled into the device. The computer readable medium carries one or more programs, and when the one or more programs are executed by a device, the device is caused to perform: obtaining an accessible space of a CPU of a memory to-be-tested; obtaining an accessible space of a GPU of the memory to-be-tested; and driving the CPU to run a test program based on the accessible space of the CPU, to access the memory to-be-tested through a bus of the memory to-be-tested, where, when the CPU runs the test program, the CPU controls the GPU to access the memory to-be-tested based on the accessible space of the GPU through the bus of the memory to-be-tested. In the description of the specification, the description with reference to terms such as "an embodiment", "an illustrative embodiment", "some implementations", "an illustrative implementation" and "an example" means that the specific feature, structure, material or characteristic described in combination with the implementation(s) or example(s) is included in at least one implementation or example of the present disclosure. In this specification, the schematic expression of the above terms does not necessarily refer to the same implementation or example. Moreover, the described specific feature, structure, material or characteristic may be combined in an appropriate manner in any one or more implementations or examples. It should be noted that in the description of the present disclosure, terms such as "center", "top", "bottom", "left", "right", "vertical", "horizontal", "inner" and "outer" indicate orientation or position relationships based on the drawings. These terms are merely intended to facilitate and simplify the description of the present disclosure, rather than to indicate or imply that the mentioned device or element must have a specific orientation or must be constructed and operated in a specific orientation. Therefore, these terms should not be construed as a limitation on the present disclosure. It can be understood that terms such as "first" and "second" used in the present disclosure can be used to describe various structures, but these structures are not limited by these terms. Instead, these terms are merely intended to distinguish one element from another. The same elements in one or more drawings are denoted by similar reference numerals. For the sake of clarity, various parts in the drawings are not drawn to scale. In addition, some well-known parts may not be shown. For the sake of brevity, the structure obtained by implementing multiple steps may be shown in one figure. To make the present disclosure more clearly understood, many specific details of the present disclosure, such as the structure, material, size, processing process and technology of the device, are described herein. However, as those skilled in the art can understand, the present disclosure may be implemented without these specific details. 
Finally, it should be noted that the above embodiments are merely intended to explain the technical solutions of the present disclosure, rather than to limit the present disclosure. Although the present disclosure is described in detail with reference to the above embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the above embodiments, or make equivalent substitutions of some or all of the technical features recorded therein, without departing from the essence and scope of the technical solutions of the embodiments of the present disclosure. INDUSTRIAL APPLICABILITY The present disclosure provides a memory test method, a memory test apparatus, a device and a storage medium. The method implements a stress test with a high access load on a memory and enhances the effect of the memory test.
28,775
11860749
DETAILED DESCRIPTION OF EMBODIMENTS Example embodiments of the present disclosure are described below in combination with the accompanying drawings, and various details of embodiments of the present disclosure are included in the description to facilitate understanding, and should be considered as illustrative only. Accordingly, it should be recognized by one of the ordinary skilled in the art that various changes and modifications may be made to embodiments described herein without departing from the scope and spirit of the present disclosure. Also, for clarity and conciseness, descriptions for well-known functions and structures are omitted in the following description. It should also be noted that some embodiments in the present disclosure and some features in the disclosure may be combined with each other on a non-conflict basis. Features of the present disclosure will be described below in detail with reference to the accompanying drawings and in combination with embodiments. In embodiments of the present disclosure, after acquiring a debugging instruction sent by an operating terminal, a debugged terminal and a first edge communication node corresponding to the debugged terminal are determined according to the debugging instruction, and a debugging communication link between the first edge communication node and the debugged terminal is determined, the first edge communication node is determined based on first edge communication node information sent by the debugged terminal, and the first edge communication node information is determined and obtained based on an edge node computing application locally installed on the debugged terminal. In embodiments of the present disclosure, by determining the corresponding first edge communication node through the edge node computing application installed on the side of the debugged terminal, and determining the corresponding debugging communication link based on the first edge communication node, local resources of an executing body of the method for sending a debugging instruction may be saved, and the efficiency of sending a debugging instruction may be improved. FIG.1shows an example system architecture100in which a method and apparatus for sending a debugging instruction, an electronic device and a computer readable storage medium of embodiments of the present disclosure may be applied. As shown inFIG.1, the system architecture100may include an edge computing network102composed of a plurality of edge nodes, a first terminal device101and a second terminal device103. Both of the first terminal device101and the second terminal device103may implement data transmission with the edge computing network102in various methods such as wired or wireless communication links, or optic fibers. A user may use the first terminal device101to implement various management and control on the second terminal device103through the edge computing network102. For example, the user may send a debugging instruction to the edge computing network102using the first terminal device101to remotely debug the second terminal device103, so that the debugging instruction is sent to the second terminal device103by using characteristics of the edge computing network. 
The above operations may be implemented with the help of applications installed in the first terminal device101, installed in any computing node that forms the edge computing network102, and installed in the second terminal device103, such as remote monitoring applications, remote control applications, or applications for remote debugging of terminals. The terminal devices101and103are generally embodied as hardware in different forms, for example, electronic devices such as smart phones, tablet computers, laptop computers, and desktop computers. It should be noted that the method for sending a debugging instruction provided by embodiments of the present disclosure is generally performed by the edge computing network102. Accordingly, an apparatus for sending a debugging instruction may generally be represented as any edge computing node in the edge computing network102. It should be understood that numbers of the first terminal device and the second terminal device inFIG.1are merely illustrative. In addition, the first terminal device that initiates the debugging instruction may also be the terminal device that receives the debugging instruction. With further reference toFIG.2, a flow200of a method for sending a debugging instruction according to an embodiment of the present disclosure is illustrated. The method for sending a debugging instruction includes the following steps. Step201, acquiring a debugging instruction sent by an operating terminal, and determining a debugged terminal according to the debugging instruction. In the present embodiment, an executing body of the method for sending a debugging instruction (for example, the edge computing network102shown inFIG.1) may generally directly acquire the debugging instruction sent by a user using a non-local human-computer interaction device as the operating terminal from the non-local human-computer interaction device (for example, the first terminal device101shown inFIG.1). In addition, the user may also pre-determine the debugging instruction and a predetermined triggering rule through the operating terminal, and then send the debugging instruction and the triggering rule to the executing body of the method for sending a debugging instruction. The executing body acquires the debugging instruction when the predetermined triggering rule is satisfied, which is not limited in the present disclosure. The debugging instruction is generally sent through an existing communication link between the operating terminal and the executing body. The debugging instruction includes related information used to indicate the debugged terminal, for example, information such as a terminal name, a terminal version or a terminal location of a target debugged terminal. After acquiring the debugging instruction, the executing body may determine the target, that is, the debugged terminal, based on the related information of the debugged terminal in the debugging instruction. It should be understood that the debugging instruction includes at least a debugging operation that hopes to designate the debugged terminal to perform the corresponding operation. Step202, determining a first edge communication node corresponding to the debugged terminal, and determining a debugging communication link between the first edge communication node and the debugged terminal. 
In the present embodiment, generally, a plurality of communication nodes may be established locally on the executing body in advance, and then these communication nodes are used to represent different debugging terminals, respectively. The nodes established locally may include some system modules and user-specified functional modules by default for communication between the deployed debugging terminals and the executing body, communication between the modules on the terminal side, and realization of application functions, etc. Through online installation, offline installation package, image burning, etc., an edge node computing application is deployed in the debugged terminal that supports remote debugging in advance. After the edge node computing application is deployed in the debugged terminal, the computing application may automatically start to build system and non-system modules, where a system core module may automatically establish a communication link with the executing body, that is, the debugging communication link. An establishment method of the debugging communication link includes but not limited to websocket, http, mqtt, http3 and other methods. It should be understood that the debugged terminal may determine the edge node used for communication between the executing body and the debugged terminal, that is, the first edge communication node, based on the locally installed edge node computing application, and the debugged terminal may send the determined first edge communication node to the executing body, so that the executing body may determine the debugging communication link with the debugged terminal based on the first edge communication node. The executing body may communicate with the debugged terminal through the first communication node and the debugging communication link. Through the debugging communication link, the debugged terminal may periodically report local system information of the debugged terminal, operating status of the functional modules, etc. to the executing body, and the debugged terminal may also receive resource data, command information, etc. sent by the executing body through the debugging communication link. After the establishment of the debugging communication link between at least one debugged terminal and the executing body is completed, subsequently, the corresponding debugging communication link may be determined based on the debugged terminal determined in the debugging instruction to implement communication. Step203, sending a debugging operation included in the debugging instruction to the debugged terminal through the debugging communication link. In the present embodiment, after the debugging communication link with the debugged terminal is established in step202above, the debugging operation in the received debugging instruction is sent to the corresponding debugged terminal through the debugging communication link, to realize the corresponding debugging operation. The debugging operation may be an operation such as a remote login, real-time monitoring, online debugging, system or application log collection for the debugged terminal locally, or may be an operation instructing the debugged terminal to extract a log and collect data from another device controlled and monitored by the debugged terminal, or may be an operation instructing the debugged terminal to control another device, which is not limited in the present disclosure. 
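As a concrete illustration of steps 202 and 203, the following is a minimal Python sketch of the edge-node (executing body) side of the debugging communication link, assuming the link is established over websocket (one of the options listed above) using the third-party websockets package. The message format, field names, and port are illustrative assumptions, not prescribed by the disclosure.

```python
# Minimal sketch (edge node side): the debugged terminal's edge node computing
# application connects, reports its node information, and then receives
# debugging operations over the established debugging communication link.
# The websocket transport, JSON format, and all names are illustrative.
import asyncio
import json
import websockets

TERMINALS = {}  # terminal_id -> websocket of the debugged terminal's edge application

async def terminal_link(websocket, path=None):
    hello = json.loads(await websocket.recv())      # first edge communication node information
    terminal_id = hello["terminal_id"]
    TERMINALS[terminal_id] = websocket
    try:
        async for message in websocket:             # periodic status / module reports
            print(f"report from {terminal_id}:", message)
    finally:
        TERMINALS.pop(terminal_id, None)

async def send_debugging_operation(terminal_id: str, operation: dict) -> None:
    link = TERMINALS.get(terminal_id)
    if link is None:
        raise LookupError(f"no debugging communication link for {terminal_id}")
    await link.send(json.dumps({"type": "debug_operation", "payload": operation}))

async def main():
    async with websockets.serve(terminal_link, "0.0.0.0", 8765):
        await asyncio.Future()                      # keep the edge node serving

if __name__ == "__main__":
    asyncio.run(main())
```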
According to the method for sending a debugging instruction provided by the present embodiment, after acquiring the debugging instruction sent by the operating terminal, the debugged terminal and the first edge communication node corresponding to the debugged terminal are determined according to the debugging instruction, and the debugging communication link between the first edge communication node and the debugged terminal is determined, the first edge communication node is determined based on the first edge communication node information sent by the debugged terminal, and the first edge communication node information is determined and obtained based on the edge node computing application locally installed on the debugged terminal, and the debugging operation included in the debugging instruction is sent to the debugged terminal through the debugging communication link. In embodiments of the present disclosure, by determining the corresponding first edge communication node through the edge node computing application installed on the side of the debugged terminal, and determining the corresponding debugging communication link based on the first edge communication node, local resources of the executing body of the method for sending a debugging instruction may be saved, and the efficiency of sending a debugging instruction may be improved. On the basis of the foregoing embodiment, another embodiment of the present disclosure also provides a flow300of the method for sending a debugging instruction throughFIG.3, which includes a specific step of determining the debugging communication link between the first edge communication node and the debugged terminal, including the following steps. Step301, acquiring a debugging instruction sent by an operating terminal, and determining a debugged terminal according to the debugging instruction. Step302, controlling the debugged terminal to determine first edge communication node information through an edge node computing application locally installed on the debugged terminal. After determining the debugged terminal according to the debugging instruction, the executing body generates corresponding control information for controlling the debugged terminal to execute the local edge node computing application of the debugged terminal, and sends the control information to the debugged terminal for controlling the debugged terminal to use the local edge node computing application of the debugged terminal to determine the first edge communication node and send the first edge communication node information to the executing body after generating the first edge communication node information. Step303, determining a corresponding first edge communication node locally, in response to receiving the first edge communication node information sent by the debugged terminal. Step304, determining a communication link between the first edge communication node and the debugged terminal, to obtain a debugging communication link. The operation of establishing the debugging communication link is performed in the above process to establish the debugging communication link for the communication between the executing body and the debugged terminal. Step305, sending a debugging operation included in the debugging instruction to the debugged terminal through the debugging communication link. 
In the present embodiment, a specific method for determining the debugging communication link is provided, in which the debugging communication link is established after the debugging instruction is received and the corresponding debugged terminal is determined, so as to avoid the debugging communication link remaining vacant for a long time and wasting resources. On the basis of the above embodiment shown in FIG. 2, yet another embodiment of the present disclosure also provides a flow 400 of the method for sending a debugging instruction through FIG. 4, to illustrate how a debugging instruction is subsequently sent to the debugged terminal on the basis of the embodiment shown in FIG. 2, which includes the following steps. Step 401, acquiring a debugging instruction sent by an operating terminal, and determining a debugged terminal according to the debugging instruction. Step 402, determining a first edge communication node corresponding to the debugged terminal, and determining a debugging communication link between the first edge communication node and the debugged terminal. Step 403, sending a debugging operation included in the debugging instruction to the debugged terminal through the debugging communication link. It should be understood that in the present embodiment, there is a subsequent debugging instruction sent to the debugged terminal, so that the debugging instruction sent in this step may be used only to instruct the executing body to determine the debugged terminal and then establish the debugging communication link with the debugged terminal, to facilitate sending the corresponding operation instruction based on the debugging communication link subsequently. In some embodiments, the debugging operation sent in step 403 may include a communication link establishment feedback operation, so that after the debugging communication link is established, the debugged terminal sends communication link establishment information as feedback. After receiving the feedback information, the executing body sends the feedback information to the operating terminal, so that the user may learn that the debugging communication link has been successfully established and may issue the subsequent debugging instruction. Step 404, determining an instruction communication link based on the debugging instruction. Acquiring the communication link for communication between the executing body and the operating terminal, that is, the instruction communication link currently used to acquire the debugging instruction, may be implemented based on technologies such as websocket, mqtt, or http3, and the instruction communication link is generally a bi-directional communication link. In addition, it is also necessary to acquire the communication node used locally by the executing body to realize communication with the operating terminal. The executing body and the operating terminal may perform content interaction using a method such as an in-memory message queue, a publish/subscribe mechanism based on Redis consistency, or a message transfer mechanism based on mqtt. Step 405, connecting the instruction communication link and the debugging communication link through a connection communication link to obtain a continuous communication link. 
The connection communication link between the communication node determined in step404used by the executing body locally to realize communication with the operating terminal and the first edge communication node determined in step402is established locally in the executing body, so as to realize the communication between the communication node that communicates with the operating terminal and the first edge communication node through the connection communication link, that is, the instruction communication link and the debugging communication link may be connected through the connection communication link to generate a continuous communication link that may be used to start from the operating terminal and pass through the executing body and finally arrive at the debugged terminal. It is also possible to set a keep-alive duration for the connection communication link to a preset duration, during which the debugging instruction received through the instruction communication link may be continuously sent to the debugging communication link through the connection communication link, or debugging result information received through the debugging communication link may be continuously sent to the instruction communication link. Step406, sending a new debugging instruction to the debugged terminal through the continuous communication link. That is, the executing body may receive the debugging instruction from the operating terminal through the instruction communication link in the continuous communication link generated in the above step405, and send the debugging instruction to the debugging communication link through the local connection communication link, and send the debugging instruction to the debugged terminal through the debugging communication link. In some embodiments, similarly, after the instruction communication link and the debugging communication link are connected by the connection communication link to form a complete information communication link, the debugged terminal may generate corresponding feedback information and returns the feedback information to the operating terminal, in order to facilitate the user to issue the subsequent debugging instruction. It should be understood that the above steps401-403are the same as steps201-203shown inFIG.2. For content of the same part, reference may be made to the corresponding part of the embodiment inFIG.2, and detailed description thereof will be omitted. In addition, the implementation provided in steps404-406in the present embodiment may also be combined with the content in the embodiment shown inFIG.3to obtain a further implementation that includes the corresponding effects of the embodiment inFIG.3and the embodiment inFIG.4, respectively. In the method for sending a debugging instruction provided in the present embodiment, the continuous communication link between the operating terminal, the executing body and the debugged terminal may be established, so that through the continuous communication link, the debugging instruction sent by the operating terminal may be quickly sent to the debugged terminal through the executing body, improving the efficiency of issuing a debugging instruction. 
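The following is a minimal Python sketch of the connection communication link of steps 404 to 406, assuming the instruction communication link and the debugging communication link are each modeled as a pair of asyncio queues inside the executing body; the bridge pumps messages in both directions for a configurable keep-alive duration. The queue model, direction names, and the 60-second default are illustrative assumptions rather than the disclosure's required implementation.

```python
# Minimal sketch: bridge the instruction communication link and the debugging
# communication link inside the executing body to form a continuous link.
# Queues stand in for the real links; durations and names are illustrative.
import asyncio

async def _pump(src: asyncio.Queue, dst: asyncio.Queue, deadline: float) -> None:
    loop = asyncio.get_running_loop()
    while True:
        remaining = deadline - loop.time()
        if remaining <= 0:
            break                                   # keep-alive duration elapsed
        try:
            message = await asyncio.wait_for(src.get(), timeout=remaining)
        except asyncio.TimeoutError:
            break
        await dst.put(message)                      # forward across the connection link

async def connect_links(instr_in: asyncio.Queue, debug_out: asyncio.Queue,
                        debug_in: asyncio.Queue, instr_out: asyncio.Queue,
                        keep_alive_s: float = 60.0) -> None:
    """Debugging instructions flow operator -> terminal; result information flows back."""
    deadline = asyncio.get_running_loop().time() + keep_alive_s
    await asyncio.gather(
        _pump(instr_in, debug_out, deadline),       # new debugging instructions
        _pump(debug_in, instr_out, deadline),       # debugging result information
    )
```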
In some alternative implementations of the present embodiment, the determining an instruction communication link based on the debugging instruction includes: determining a second edge communication node for establishing the instruction communication link locally, in response to receiving instruction communication link establishment information sent by the operating terminal; where the second edge communication node is determined based on second edge communication node information sent by the operating terminal, and the second edge communication node information is determined based on an edge node computing application locally installed on the operating terminal; and establishing a communication link between the second edge communication node and the operating terminal to obtain the instruction communication link. Similarly, when establishing the instruction communication link between the executing body and the operating terminal, it may refer to the method for establishing the debugging communication link between the executing body and the debugged terminal, that is, using the edge node computing application locally installed on the operating terminal to determine the corresponding second edge communication node, and then determine a new instruction communication link based on the second edge communication node, so as to update the original instruction communication link between the executing body and the operating terminal, to improve the efficiency of sending a debugging instruction between the executing body and the operating terminal. On the basis of any one of the foregoing embodiments, considering that after the debugged terminal completes the corresponding debugging operation, in order to better understand an execution status and/or debugging result of the debugging operation, the debugged terminal may feedback the debugging result information to the executing body. In this case, a method for sending a debugging instruction may further include: controlling the debugged terminal to return corresponding debugging result information after executing the debugging instruction; and returning the debugging result information to the operating terminal, in response to receiving the debugging result information returned by the debugged terminal through the debugging communication link. The debugging operation may include: instructing the debugged terminal to generate the corresponding debugging result information and feedback the corresponding debugging result information to the executing body after executing the debugging instruction, or determining a feedback time condition according to a predetermined time rule or a time rule recorded in the debugging instruction, etc., controlling the debugged terminal to feedback the corresponding debugging result information in response to determining that the feedback time condition is satisfied, and returning the debugging result information to the operating terminal through the communication link with the operating terminal after receiving the debugging result information, so that the operating terminal acquires the debugging result information, and then formulates a corresponding strategy based on the debugging result information. 
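For completeness, the following is a minimal Python sketch of the operating-terminal side, assuming its locally installed edge node computing application also talks to the executing body over websocket (mirroring the debugged-terminal link above): it sends a debugging instruction identifying the debugged terminal and then waits for the debugging result information fed back as described above. The endpoint URL, JSON fields, and timeout are illustrative assumptions.

```python
# Minimal sketch (operating terminal side): send a debugging instruction to the
# executing body over the instruction communication link and wait for the
# debugging result information. Endpoint, fields, and timeout are illustrative.
import asyncio
import json
import websockets

async def debug_terminal(edge_url: str, terminal_name: str, operation: dict,
                         timeout_s: float = 30.0) -> dict:
    async with websockets.connect(edge_url) as link:           # instruction communication link
        instruction = {"debugged_terminal": {"terminal_name": terminal_name},
                       "debugging_operation": operation}
        await link.send(json.dumps(instruction))
        reply = await asyncio.wait_for(link.recv(), timeout=timeout_s)
        return json.loads(reply)                               # debugging result information

if __name__ == "__main__":
    result = asyncio.run(debug_terminal("ws://edge-node.example:8765",
                                        "terminal-C",
                                        {"action": "collect_logs", "path": "/var/log"}))
    print(result)
```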
In addition, on the basis of any one of the foregoing embodiments, when considering improving the method for sending a debugging instruction from the perspective of the debugging instruction and the debugging result information, a method for sending a debugging instruction further includes: performing at least one information processing operation of validity authentication processing, message caching processing, or orderliness guarantee processing on an acquired content, in response to acquiring any one of the debugging instruction or the debugging result information. After receiving the debugging instruction sent through the instruction communication link and/or the debugging result information fed back through the debugging communication link, the executing body may perform at least one information processing operation of validity authentication processing, message caching processing, or orderliness guarantee processing on the received debugging instruction or the debugging result information, to achieve the ability to provide guarantees for the integrity and orderliness of the content. In order to deepen understanding, an embodiment of the present disclosure further provides a specific implementation in combination with a specific application scenario. In the specific application scenario, an operating terminal A sends a debugging instruction to a cloud server B, to realize remote debugging of a debugged terminal C through the cloud server. Referring toFIG.5, the specific process is as follows. Step501, a user may send the debugging instruction to the cloud server B through the operating terminal A, in order to debug the debugged terminal C. Step502, after receiving the debugging instruction sent by the operating terminal A, the cloud server B determines the debugged terminal C. Step503, a first edge communication node is determined based on the debugged terminal C. Step504, a debugging communication link is determined between the cloud server B and the debugged terminal C. Step505, the cloud server B sends a debugging operation in the debugging instruction to the debugged terminal through the debugging communication link. Step506, after the debugged terminal C is debugged, corresponding debugging result information is generated. Step507, after generating the debugging result information, the debugged terminal C sends the debugging result information to the cloud server B. Step508, after receiving the debugging result information sent by the debugged terminal C, the cloud server B returns the debugging result information to the operating terminal A. In addition, it may also include step509, the cloud server B generates a connection communication link locally, and connects an instruction communication link between the operating terminal A and the cloud server B and the debugging communication link between the cloud server B and the debugged terminal C through the connection communication link, to obtain a continuous communication link. Step510, the operating terminal A may send a new debugging instruction to the debugged terminal C through the continuous communication link generated in the above step509. 
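The following is a minimal Python sketch of the information processing operations described before the application scenario above (validity authentication, message caching, and orderliness guarantee), assuming each debugging instruction or debugging result message carries a monotonically increasing sequence number and an HMAC tag over its body; the key handling and field layout are illustrative assumptions, since the disclosure does not prescribe a particular mechanism.

```python
# Minimal sketch: authenticate, cache, and re-order debugging instructions or
# debugging result information before delivery. Sequence numbers and HMAC tags
# are illustrative assumptions.
import hashlib
import hmac

class MessageProcessor:
    def __init__(self, secret: bytes):
        self._secret = secret
        self._pending = {}      # seq -> body, caches out-of-order messages
        self._next_seq = 0      # next sequence number expected for in-order delivery

    def _authentic(self, body: bytes, tag: str) -> bool:
        expected = hmac.new(self._secret, body, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)    # validity authentication

    def accept(self, seq: int, body: bytes, tag: str) -> list:
        """Return the messages that can now be delivered in order (possibly empty)."""
        if not self._authentic(body, tag):
            return []                                # drop messages failing authentication
        self._pending[seq] = body                    # message caching
        ready = []
        while self._next_seq in self._pending:       # orderliness guarantee
            ready.append(self._pending.pop(self._next_seq))
            self._next_seq += 1
        return ready
```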
It can be seen from this application scenario that in the method for sending a debugging instruction provided in the present embodiment, after acquiring the debugging instruction sent by the operating terminal, the debugged terminal and the first edge communication node corresponding to the debugged terminal are determined according to the debugging instruction, and the debugging communication link between the first edge communication node and the debugged terminal is determined, the first edge communication node is determined based on the first edge communication node information sent by the debugged terminal, and the first edge communication node information is determined and obtained based on the edge node computing application locally installed on the debugged terminal, and the debugging operation included in the debugging instruction is sent to the debugged terminal through the debugging communication link. The communication link is determined based on the edge node computing application installed on the debugged terminal to improve the efficiency of sending a debugging instruction. As shown inFIG.6, an apparatus600for sending a debugging instruction of the present embodiment may include: a debugged terminal determination unit601, configured to acquire a debugging instruction sent by an operating terminal, and determine a debugged terminal according to the debugging instruction; a communication link determination unit602, configured to determine a first edge communication node corresponding to the debugged terminal, and determine a debugging communication link between the first edge communication node and the debugged terminal; the first edge communication node being determined based on first edge communication node information sent by the debugged terminal, and the first edge communication node information being determined and obtained based on an edge node computing application locally installed on the debugged terminal; and a debugging operation sending unit603, configured to send an debugging operation included in the debugging instruction to the debugged terminal through the debugging communication link. In some alternative implementations of the present embodiment, the communication link determination unit602further includes: a first communication node determination subunit, configured to control the debugged terminal to determine the first edge communication node information through the edge node computing application locally installed on the debugged terminal; a first communication node determination subunit, configured to determine the corresponding first edge communication node locally, in response to receiving the first edge communication node information sent by the debugged terminal; and a first communication link determination subunit, configured to determine a communication link between the first edge communication node and the debugged terminal, to obtain the debugging communication link. 
In some alternative implementations of the present embodiment, the apparatus for sending a debugging instruction further includes: a connection communication link establishment unit; and the communication link establishment unit includes: an instruction communication link acquisition subunit, configured to determine an instruction communication link based on the debugging instruction; a continuous communication link generation subunit, configured to connect the instruction communication link and the debugging communication link through a connection communication link to obtain a continuous communication link; and the debugging operation sending unit is further configured to send a new debugging instruction to the debugged terminal through the continuous communication link. In some alternative implementations of the present embodiment, the continuous communication link generation subunit includes: a second communication node establishment submodule, configured to determine a second edge communication node for establishing the instruction communication link locally, in response to receiving instruction communication link establishment information sent by the operating terminal; where, the second edge communication node is determined based on second edge communication node information sent by the operating terminal, and the second edge communication node information is determined based on an edge node computing application locally installed on the operating terminal; and a second communication link establishment submodule, configured to establish a communication link between the second edge communication node and the operating terminal to obtain the instruction communication link. In some alternative implementations of the present embodiment, the apparatus for sending a debugging instruction further includes: a debugged terminal controlling unit, configured to control the debugged terminal to return corresponding debugging result information after executing the debugging instruction; and a debugging result returning unit, configured to return the debugging result information to the operating terminal, in response to receiving the debugging result information returned by the debugged terminal through the debugging communication link. In some alternative implementations of the present embodiment, the apparatus for sending a debugging instruction further includes: an information processing unit, configured to perform at least one information processing operation of validity authentication processing, message caching processing, or orderliness guarantee processing on an acquired content, in response to acquiring any one of the debugging instruction or the debugging result information. The present embodiment serves as an apparatus embodiment corresponding to the foregoing method embodiments, and for the same content, reference may be made to the description of the foregoing method embodiments, detailed description thereof will be omitted. Through the apparatus for sending a debugging instruction provided in embodiments of the present disclosure, the communication link is determined based on the edge node computing application installed on the debugged terminal to improve the efficiency of sending a debugging instruction. As shown inFIG.7, a block diagram of an electronic device for implementing the method for sending a debugging instruction according to an embodiment of the present disclosure is illustrated. 
The electronic device is intended to represent various forms of digital computers, such as laptop computers, desktop computers, workbenches, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. The electronic device may also represent various forms of mobile apparatuses, such as personal digital processors, cellular phones, smart phones, wearable devices, and other similar computing apparatuses. The components shown herein, their connections and relationships, and their functions are merely examples, and are not intended to limit the implementation of the present disclosure described and/or claimed herein. As shown inFIG.7, the electronic device includes: one or more processors701, a memory702, and interfaces for connecting various components, including high-speed interfaces and low-speed interfaces. The various components are connected to each other using different buses, and may be installed on a common motherboard or in other methods as needed. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphic information of GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, a plurality of processors and/or a plurality of buses may be used together with a plurality of memories if desired. Similarly, a plurality of electronic devices may be connected, and the devices provide some necessary operations (for example, as a server array, a set of blade servers, or a multi-processor system). InFIG.7, one processor701is used as an example. The memory702is a non-transitory computer readable storage medium provided by embodiments of the present disclosure. The memory stores instructions executable by at least one processor, so that the at least one processor performs the method for sending a debugging instruction provided by embodiments of the present disclosure. The non-transitory computer readable storage medium of embodiments of the present disclosure stores computer instructions for causing a computer to perform the method for sending a debugging instruction provided by embodiments of the present disclosure. The memory702, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs and modules, such as program instructions/modules corresponding to the method for sending a debugging instruction in embodiments of the present disclosure (for example, the debugged terminal determination unit601, the communication link determination unit602and the debugging operation sending unit603as shown inFIG.6). The processor701executes the non-transitory software programs, instructions, and modules stored in the memory702to execute various functional applications and data processing of the server, that is, to implement the method for sending a debugging instruction in the foregoing method embodiments. The memory702may include a storage program area and a storage data area, where the storage program area may store an operating system and an application program required by at least one function; and the storage data area may store data created by the use of the electronic device for sending a debugging instruction, etc. 
In addition, the memory702may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or other non-transitory solid-state storage devices. In some embodiments, the memory702may optionally include memories remotely provided with respect to the processor701, and these remote memories may be connected to the electronic device for sending a debugging instruction through a network. Examples of the above network include but are not limited to the Internet, intranet, local area network, mobile communication network, and combinations thereof. The electronic device for performing the method for sending a debugging instruction may further include: an input apparatus703and an output apparatus704. The processor701, the memory702, the input apparatus703, and the output apparatus704may be connected through a bus or in other methods. InFIG.7, connection through a bus is used as an example. The input apparatus703may receive input digital or character information, and generate key signal inputs related to user settings and function control of the electronic device for sending a debugging instruction, such as touch screen, keypad, mouse, trackpad, touchpad, pointing stick, one or more mouse buttons, trackball, joystick and other input apparatuses. The output apparatus704may include a display device, an auxiliary lighting apparatus (for example, LED), a tactile feedback apparatus (for example, a vibration motor), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some embodiments, the display device may be a touch screen. Various implementations of the systems and techniques described herein may be implemented in a digital electronic circuit system, an integrated circuit system, an application specific integrated circuit (ASIC), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include the implementation in one or more computer programs. The one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, and the programmable processor may be a dedicated or general-purpose programmable processor, may receive data and instructions from a storage system, at least one input apparatus and at least one output apparatus, and transmit the data and the instructions to the storage system, the at least one input apparatus and the at least one output apparatus. These computing programs, also referred to as programs, software, software applications or codes, include a machine instruction of the programmable processor, and may be implemented using a high-level procedural and/or an object-oriented programming language, and/or an assembly/machine language. As used herein, the terms “machine readable medium” and “computer readable medium” refer to any computer program product, device and/or apparatus (e.g., a magnetic disk, an optical disk, a storage device and a programmable logic device (PLD)) used to provide a machine instruction and/or data to the programmable processor, and include a machine readable medium that receives the machine instruction as a machine readable signal. The term “machine readable signal” refers to any signal used to provide the machine instruction and/or data to the programmable processor. 
To provide an interaction with a user, the systems and techniques described here may be implemented on a computer having a display apparatus (e.g., a cathode ray tube (CRT) or an LCD monitor) for displaying information to the user, and a keyboard and a pointing apparatus (e.g., a mouse or a track ball) by which the user may provide the input to the computer. Other kinds of apparatuses may also be used to provide the interaction with the user. For example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input. The systems and techniques described here may be implemented in a computing system (e.g., as a data server) that includes a backend part, implemented in a computing system (e.g., an application server) that includes a middleware part, implemented in a computing system (e.g., a user computer having a graphical user interface or a Web browser through which the user may interact with an implementation of the systems and techniques described here) that includes a frontend part, or implemented in a computing system that includes any combination of the backend part, the middleware part or the frontend part. The parts of the system may be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of the communication network include a local area network (LAN), a wide area network (WAN) and the Internet. The computer system may include a client and a server. The client and the server are generally remote from each other and typically interact through the communication network. The relationship between the client and the server is generated through computer programs running on the respective computers and having a client-server relationship to each other. Cloud computing refers to an elastic and scalable pool of shared physical or virtual resources accessed through a network. Resources may include servers, operating systems, networks, software, applications, storage devices, etc., together with technical systems that may deploy and manage the resources in an on-demand, self-service way. Cloud computing technology may provide efficient and powerful data processing capabilities for artificial intelligence, blockchain, and other technology applications and for model training. According to the technical solution of embodiments of the present disclosure, after the debugging instruction sent by the operating terminal is acquired, the debugged terminal and the first edge communication node corresponding to the debugged terminal are determined according to the debugging instruction, and the debugging communication link between the first edge communication node and the debugged terminal is determined. The first edge communication node is determined based on the first edge communication node information sent by the debugged terminal, and the first edge communication node information is determined based on the edge node computing application locally installed on the debugged terminal. The debugging operation included in the debugging instruction is then sent to the debugged terminal through the debugging communication link. Because the communication link is determined based on the edge node computing application installed on the debugged terminal, the efficiency of sending a debugging instruction is improved. 
It should be understood that the various forms of processes shown above may be used to reorder, add, or delete steps. For example, the steps described in embodiments of the present disclosure may be performed in parallel, sequentially, or in a different order. As long as the desired result of the technical solution disclosed in embodiments of the present disclosure can be achieved, no limitation is made herein. The foregoing embodiments do not constitute a limitation to the scope of protection of the present disclosure. It should be appreciated by those skilled in the art that various modifications, combinations, sub-combinations, and substitutions may be made depending on design requirements and other factors. Any modifications, equivalents, replacements, and improvements falling within the spirit and the principle of embodiments of the present disclosure should be included within the scope of protection of the present disclosure.
43,993
11860750
DETAILED DESCRIPTION OF EMBODIMENTS Various embodiments of the present disclosure relate generally to stress testing of payment processing systems using multiple autonomous simulated clients. The terminology used below may be interpreted in its broadest reasonable manner, even though it is being used in conjunction with a detailed description of certain specific examples of the present disclosure. Indeed, certain terms may even be emphasized below; however, any terminology intended to be interpreted in any restricted manner will be overtly and specifically defined as such in this Detailed Description section. As described above, a merchant may provide systems and infrastructure for processing electronic payment requests related to the purchase of goods and services. The proper functioning of these systems, especially at times of high transaction volume or other circumstances that put a strain on those systems, may be important to the merchant in ensuring customer satisfaction and avoiding costs associated with failed transactions. Proper functioning of these systems may be ensured by thorough testing prior to and during deployment.FIGS.1and2depict an exemplary electronic payment transactions processing system and an exemplary merchant payment processing infrastructure that may be subject to such testing.FIGS.3-5,6A, and6Bdepict systems and methods for cloud-based testing of an electronic merchant payment processing infrastructure. As shown inFIG.1, in an electronic payment processing system, a consumer102, during the checkout process with a merchant110, pays for goods or services from merchant110through a POS terminal112, such as, for example, at a PIN pad114associated with POS terminal112. Consumer102may use a payment card as payment, and the transaction is processed through a payment environment100. Because merchant110generally uses a different bank or financial institution140than consumer102, an acquirer processor130handles the financial transactions that transfer payment between the financial institution140of consumer102and that of merchant110. If required, consumer102may submit payment information at the PIN pad114associated with POS terminal112of merchant110, such as by swiping his or her payment card, inserting his or her chip-based payment card, through wireless near field communication (NFC), etc., or by any other suitable means. Merchant110sends a payment request by way of a computer network125to an acquirer processor130. Such a payment request may be sent by PIN pad114or POS terminal112. Alternatively, such a request may be sent by a component that controls a flow of a transaction, such as point of sale (POS) engine250depicted inFIG.2. Acquirer processor130requests, by way of payment network120, an electronic transfer of the received funds to the financial institution140associated with merchant110. Merchant110may provide an infrastructure for processing electronic payment requests.FIGS.1and2depict an exemplary system infrastructure for payment processing within a merchant environment, according to one or more embodiments. FIG.2depicts an exemplary system infrastructure for payment processing using a payment network environment, according to one or more embodiments. As shown inFIG.2, payment network environment200processes electronic payment requests between one or more POS devices112and one or more PIN pad devices114via network communications260. 
In general, network environment200may include a PIN pad database235, a PIN pad registry230, a configuration service220, a PIN pad API270, a POS engine250, a PIN pad actor240, and a socket gateway210. In one embodiment, one or more point of sale (POS) devices112may be in communication with the POS engine250of network environment200via network260, and one or more PIN pads114may be in communication with the socket gateway210of network environment200via network260. POS engine250may be embodied, for example, as middleware that may command and control each PIN pad and may send a payment request from a POS device112to a PIN pad114. POS engine250may be embodied as a semi-integrated solution and may further control each PIN pad114on behalf of the POS device software. Such control may include controlling a transaction flow or sequence including, for example, prompting for payment card swipe or insert, sending a transaction request for authorization, prompting for a consumer signature, etc. As described above, network environment200may further include PIN pad actor240, configuration service220, PIN pad registry230, and PIN pad database235. Socket gateway210may send commands to one or more PIN pad(s)114and may receive responses from the PIN pad(s)114. PIN pad actor240may provide a virtual representation of the PIN pad114and may maintain a current state of the PIN pad114. Configuration service220may, if necessary, configure the PIN pad114upon connection of the PIN pad114to the infrastructure. PIN pad registry230and PIN pad database235may maintain configuration data for each PIN pad114. According to one or more embodiments, the components of network environment200may be connected by the computer network260, such as, for example, a local area network (LAN) or a wireless network, such as, for example, a Wi-Fi network. However, other network connections among the components of network environment200may be used, such as, for example, a wide area network (WAN), the internet, or the cloud. According to one or more embodiments, the components of network environment200may be tested to determine the limitations and strengths of the infrastructure in providing a good customer experience. Methods of cloud-based testing of the components of network environment200according to one or more embodiments will be discussed with respect toFIGS.3-5,6A, and6Bbelow. FIG.3depicts a system for cloud-based testing of a payment network, according to one or more embodiments. As shown inFIG.3, a system300for testing of a network environment200may include a plurality of simulated workers, or "sim workers,"320, one or more sim worker generators or "hives"310, a queen340, a command bank330, and a results gallery350, which may store results of the testing. Sim workers320may simulate commands and messages generated by a client device according to a programmed command. Each of hives310may generate a plurality of simulated workers320. Queen340may manage and coordinate the swarm of sim workers320, and may instruct hives310on creating sim workers320. Command bank330may store commands to be executed by a sim worker320to generate synthetic transaction requests submitted to network environment200. Results gallery350may store results of the testing. The swarm of sim workers320may simulate the traffic of many clients simultaneously "attacking" the network environment200to ensure that the network environment200can handle the stress without unexpected consequences. 
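By way of illustration only, the roles of queen340, hives310, and sim workers320 might be sketched in software roughly as follows. The class names, method signatures, and the submit callable are assumptions introduced for this sketch and are not part of the disclosure; Python is used purely for readability.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class SimWorker:
    """Simulates one client device (e.g., a PIN pad) submitting transactions."""
    worker_id: int
    device_type: str                      # e.g., a particular PIN pad or POS terminal type
    target_url: str                       # identifier of the network environment under test
    commands: List[str] = field(default_factory=list)
    results: List[str] = field(default_factory=list)

    def run(self, transactions: int, submit: Callable[[str, str], str]) -> None:
        # Each worker acts autonomously, submitting synthetic requests and
        # recording the responses it receives.
        for _ in range(transactions):
            for command in self.commands:
                self.results.append(submit(self.target_url, command))


@dataclass
class Hive:
    """Factory that creates a swarm of sim workers with a common configuration."""
    device_type: str
    target_url: str
    commands: List[str]

    def spawn(self, count: int) -> List[SimWorker]:
        return [SimWorker(i, self.device_type, self.target_url, list(self.commands))
                for i in range(count)]


class Queen:
    """Coordinates hives and workers and gathers results for the results gallery."""
    def __init__(self, hives: List[Hive]):
        self.hives = hives
        self.swarm: List[SimWorker] = []

    def start(self, workers_per_hive: int, transactions: int,
              submit: Callable[[str, str], str]) -> None:
        for hive in self.hives:
            self.swarm.extend(hive.spawn(workers_per_hive))
        for worker in self.swarm:
            worker.run(transactions, submit)

    def collect_results(self) -> List[str]:
        return [r for worker in self.swarm for r in worker.results]

In this sketch the submit callable simply stands in for whatever network transport carries a synthetic request to network environment200 and returns its response.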
Queen340may operate based on a plan or schedule of testing parameters to coordinate the testing of network environment200. The plan or schedule of testing parameters may be stored as a configuration file in a format readable by queen340(plain text, XML, etc.), may be directly initialized by an operator, or may be driven through an application programming interface (API), such as a RESTful API. Based on the plan or schedule of testing parameters, queen340may obtain appropriate commands to implement the plan or schedule of testing parameters from command bank330, so as to initialize hives310to create sim workers320with the desired client commands from command bank330. Command bank330may store commands to be executed by sim workers320according to the type of merchant infrastructure the sim worker320is to emulate. For example, sim worker320may emulate different types of PIN pad114and/or different POS terminals112. Each PIN pad114or POS terminal112may have a different hardware and software configuration and may communicate by a different set of communications protocols and messages. Accordingly, each sim worker320may operate according to the hardware and software parameters of the merchant infrastructure to be simulated by the sim worker320. Commands stored in command bank330may be in a client-specific language. New client types of PIN pad114and POS terminal112, including associated hardware, software, and communication parameters, may be added to command bank330as needed. Once hives310are initialized, queen340may start the swarm of sim workers320to begin testing network environment200. The commands to be executed by sim workers320may be specified to apply differing loads to different network environments200, PIN pads114, and POS terminals112to be tested, and may be specified to apply differing loads over time. In addition, the mix of commands directed to these components may be designed to apply stress to expected processing bottlenecks or to determine the effect of contradictory or conflicting commands. Each hive310may operate as a "factory" to generate sim workers320as needed according to the quantity of sim workers320, PIN pad and POS terminal type(s) to be simulated by sim workers320, an identity, URL address, or other identifier of the network environment200to be tested, and the commands each sim worker320is to simulate. Sim workers320may submit transaction requests and other commands to network environment200according to the configuration of each sim worker320. Each sim worker320may receive responses from network environment200and may submit additional transaction requests or other commands according to the response from network environment200. Once the testing is complete, queen340may capture the results from sim workers320and send the results to results gallery350. Commands stored in command bank330may include, for example, a command ID, device ID, a command name, a command request, command fixed parameters, command variable parameters, a command response, command response fixed parameters, and command response variable parameters, etc. Commands that are necessary for each transaction used in testing may be stored in command bank330. When sim workers320are created, all commands may be stored in the local memory associated with each sim worker320. Each sim worker may act autonomously, just as an actual PIN pad would. Command request and command response may include triggers to determine what command is being received by sim worker320and what response sim worker320may send. 
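One command bank entry containing the fields listed above might be modeled as follows. This is a hedged sketch only: the field types, the example values, and the notion of a request trigger paired with a response template are assumptions, and the command bank could equally be a database table or configuration file keyed by a command bank key.

from dataclasses import dataclass, field
from typing import Dict


@dataclass
class CommandBankEntry:
    command_id: str
    device_id: str
    command_name: str
    command_request: str                              # trigger pattern the sim worker matches
    command_fixed_params: Dict[str, str] = field(default_factory=dict)
    command_variable_params: Dict[str, str] = field(default_factory=dict)
    command_response: str = ""                        # response template the sim worker sends
    response_fixed_params: Dict[str, str] = field(default_factory=dict)
    response_variable_params: Dict[str, str] = field(default_factory=dict)


# Example entry for a hypothetical "authorize" command on an illustrative PIN pad type.
AUTH_ENTRY = CommandBankEntry(
    command_id="CMD-0001",
    device_id="pinpad-model-x",
    command_name="authorize",
    command_request="AUTH <amount>",
    command_fixed_params={"currency": "USD"},
    command_variable_params={"amount": "random(1.00, 500.00)"},
    command_response="AUTH_OK <auth_code>",
)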
Queen340may receive commands to implement testing of network environment200through a plan or schedule of testing parameters or by way of an API published by queen340. The commands received by queen340may include commands to, for example: configure hives310according to a number of sim workers320to generate, a type of PIN pad or POS terminal to be simulated by each sim worker320(e.g., Verifone PIN pad, Ingenico PIN Pad, etc.), a command bank key identifying a command stored in command bank330, and an identity, URL address, or other identifier of the network environment200to be tested; configure the swarm of sim workers320according to a test duration, a number of transactions to be simulated by each sim worker320, and an increase in load on network environment200which is continuous, random, or linear over time; start, stop, or reset a current test of network environment200; read or receive by way of the API a new plan or schedule of testing parameters; read results of a test from sim workers320; or terminate. System300for testing of a network environment200may be employed to implement a method for cloud-based testing of a payment network.FIGS.4and5depict flowcharts of methods for cloud-based testing of a payment network, according to one or more embodiments. As shown inFIG.4, queen340may implement a method for cloud-based testing of a payment network, according to one or more embodiments. In operation405, queen340may read or receive a test configuration including a plan or schedule of testing parameters. The test configuration may be read, for example from a configuration file on disk or a database, such as command bank330depicted inFIG.3, or may be received directly through operator inputs or the API of queen340. In operation410, queen340may configure hive310for generating sim workers320according to the test configuration. The configuration may include, for example, quantity of sim workers320, PIN pad type(s) to be simulated by sim workers320, an identity, URL address, or other identifier of the network environment200to be tested, and the commands each sim worker320is to simulate. In operation415, queen340may read commands from command bank330that are to be used in implementing the test plan. In operation420, queen340may configure swarm of sim workers320for testing parameters. The configuration of the swarm may include, for example, the commands read from command bank330and parameters including a test duration, a number of transactions to be simulated by each sim worker320, and an increase in load on the payment network which may be continuous, random, or linear over time. In operation425, queen340may start testing. In operation430, queen340may stop testing. In operation435, queen340may read test results from swarm by, for example, obtaining the results of test commands and transaction from sim workers320or hives310. In operation440, queen340may save the test results to results gallery. The test results may be saved to the test gallery in any suitable form including, for example, raw data such as spreadsheets, comma separated value (CSV) files, databases such as SQL databases, or other raw data file formats, or as aggregated data such as charts and graphs, etc. In operation445, queen340may terminate. As shown inFIG.5, sim worker320may implement a method for cloud-based testing of a payment network, according to one or more embodiments. In operation505, sim worker320may receive testing commands from hive310. 
Commands may include an identity or address of a payment network to be tested, parameters of one or more test transactions to be submitted to the payment network. In operation510, sim worker320may receive a testing configuration from queen340. In operation515, sim worker320may generate a synthetic transaction request based on the test configuration. In operation520, sim worker320may submit the synthetic transaction request to the payment network. In operation525, sim worker320may receive a result of the synthetic transaction request from the payment network. For example, the result may be an acceptance of the transaction request, a denial of the transaction request, or another response requiring further actions by sim worker320. In operation530, sim worker320may generate an additional command based on transaction result. In operation535, sim worker320may receive a result of the additional command from the payment network. In operation540, sim worker320may determine whether testing has been completed based on the testing configuration. If testing has not been completed, then sim worker320may return to operation515to generate an additional synthetic transaction request based on the test configuration. If testing has been completed, then sim worker320may continue with operation545and return test results to queen340. FIGS.6A and6Bdepict a communication diagram in a method for cloud-based testing of a payment network, according to one or more embodiments. As shown inFIGS.6A and6B, in operation605, queen340may read or obtain a test configuration. In operation610, queen may send the configuration command and test parameters to hive310for generating sim workers320. In operation615, hive310may read test commands from command bank330. In operation620, hive310may initialize a number of sim workers320. In operation625, queen340may configure the swarm of sim workers320for testing parameters. In operation630, queen340may start testing by sim workers320. In operation625, sim worker320may submit a synthetic transaction request to network environment200. In operation630, network environment200may process the synthetic transaction request submitted by sim worker320. In operation635, network environment200may send a result message to sim worker320. In operation640, sim worker320may submit an additional command to network environment200based on the processing result of the synthetic transaction request. In operation645, network environment200may process the additional command submitted by sim worker320. In operation650, network environment200may send a result message to sim worker320. In operation655, queen340may stop testing by sim workers320. In operation660, queen340may read test results from swarm of sim workers320. In operation665, queen340may save test results of testing to results gallery350. In operation670, queen340may terminate. Any suitable system infrastructure may be put into place for cloud-based testing of a payment network.FIGS.1-3and the discussion above provide a brief, general description of a suitable computing environment in which the present disclosure may be implemented. In one embodiment, any of the disclosed systems, methods, and/or graphical user interfaces may be executed by or implemented by a computing system consistent with or similar to that depicted inFIGS.1-3. 
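Before turning to the computing environment, the sim worker flow of FIG.5 described above can be summarized in a minimal sketch. The function names, the transport callables, and the structure of the test configuration are assumptions made only to make the control flow concrete; the conditional follow-up command is likewise illustrative rather than prescribed by the disclosure.

from typing import Callable, Dict, List


def sim_worker_loop(test_config: Dict,
                    submit_request: Callable[[Dict], Dict],
                    submit_command: Callable[[Dict], Dict]) -> List[Dict]:
    """Runs synthetic transactions against the payment network until the
    configured number of transactions has been reached, then returns results."""
    results: List[Dict] = []
    for _ in range(test_config["transactions"]):               # operation 540: completion check
        request = {"type": "synthetic_transaction",            # operation 515: generate request
                   "params": test_config["transaction_params"]}
        result = submit_request(request)                       # operations 520/525: submit, get result
        results.append(result)
        if result.get("status") != "accepted":                 # operation 530: additional command
            follow_up = {"type": "follow_up", "reference": result.get("reference")}
            results.append(submit_command(follow_up))          # operation 535: follow-up result
    return results                                             # operation 545: return results to queen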
Although not required, aspects of the present disclosure are described in the context of computer-executable instructions, such as routines executed by a data processing device, e.g., a server computer, wireless device, and/or personal computer. Those skilled in the relevant art will appreciate that aspects of the present disclosure can be practiced with other communications, data processing, or computer system configurations, including: Internet appliances, hand-held devices (including personal digital assistants (“PDAs”)), wearable computers, all manner of cellular or mobile phones (including Voice over IP (“VoIP”) phones), dumb terminals, media players, gaming devices, virtual reality devices, multi-processor systems, microprocessor-based or programmable consumer electronics, set-top boxes, network PCs, mini-computers, mainframe computers, and the like. Indeed, the terms “computer,” “server,” and the like, are generally used interchangeably herein, and refer to any of the above devices and systems, as well as any data processor. Aspects of the present disclosure may be embodied in a special purpose computer and/or data processor that is specifically programmed, configured, and/or constructed to perform one or more of the computer-executable instructions explained in detail herein. While aspects of the present disclosure, such as certain functions, are described as being performed exclusively on a single device, the present disclosure may also be practiced in distributed environments where functions or modules are shared among disparate processing devices, which are linked through a communications network, such as a Local Area Network (“LAN”), Wide Area Network (“WAN”), and/or the Internet. Similarly, techniques presented herein as involving multiple devices may be implemented in a single device. In a distributed computing environment, program modules may be located in both local and/or remote memory storage devices. Aspects of the present disclosure may be stored and/or distributed on non-transitory computer-readable media, including magnetically or optically readable computer discs, hard-wired or preprogrammed chips (e.g., EEPROM semiconductor chips), nanotechnology memory, biological memory, or other data storage media. Alternatively, computer implemented instructions, data structures, screen displays, and other data under aspects of the present disclosure may be distributed over the Internet and/or over other networks (including wireless networks), on a propagated signal on a propagation medium (e.g., an electromagnetic wave(s), a sound wave, etc.) over a period of time, and/or they may be provided on any analog or digital network (packet switched, circuit switched, or other scheme). For example, the systems and processes described above may be performed on or between one or more computing devices, e.g. configuration service.FIG.7illustrates an example computing device. A computing device700may be a server, a computing device that is integrated with other systems or subsystems, a mobile computing device such as a smart phone, a cloud-based computing ability, and so forth. 
The computing device700may be any suitable computing device as would be understood in the art, including, without limitation, a custom chip, an embedded processing device, a tablet computing device, a POS terminal associated with the merchant110, a back-office system of a merchant110, a personal data assistant (PDA), a desktop, laptop, microcomputer, or minicomputer, a server, a mainframe, or any other suitable programmable device. In various embodiments disclosed herein, a single component may be replaced by multiple components and multiple components may be replaced by a single component to perform a given function or functions. Except where such substitution would not be operative, such substitution is within the intended scope of the embodiments. The computing device700may include a processor710that may be any suitable type of processing unit, for example, a general-purpose central processing unit (CPU), a reduced instruction set computer (RISC), a processor that has a pipeline or multiple processing capability including having multiple cores, a complex instruction set computer (CISC), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a programmable logic device (PLD), or a field programmable gate array (FPGA), among others. The computing resources may also include distributed computing devices, cloud computing resources, and virtual computing resources in general. The computing device700may also include one or more memories730, for example, read-only memory (ROM), random access memory (RAM), cache memory associated with the processor710, or other memory such as dynamic RAM (DRAM), static RAM (SRAM), programmable ROM (PROM), electrically erasable PROM (EEPROM), flash memory, a removable memory card or disc, a solid-state drive, and so forth. The computing device700also includes storage media such as a storage device that may be configured to have multiple modules, such as magnetic disk drives, floppy drives, tape drives, hard drives, optical drives and media, magneto-optical drives and media, compact disk drives, Compact Disc Read Only Memory (CD-ROM), compact disc recordable (CD-R), Compact Disk Rewritable (CD-RW), a suitable type of Digital Versatile Disc (DVD) or BluRay disc, and so forth. Storage media such as flash drives, solid-state hard drives, redundant arrays of independent disks (RAID), virtual drives, networked drives, and other memory means, including storage media on the processor710or memories730, are also contemplated as storage devices. It may be appreciated that such memory may be internal or external with respect to operation of the disclosed embodiments. It may be appreciated that certain portions of the processes described herein may be performed using instructions stored on a computer readable medium or media that direct a computer system to perform the process steps. Non-transitory computer-readable media, as used herein, comprise all computer-readable media except for transitory, propagating signals. Network communication interfaces740may be configured to transmit data to, or receive data from, other computing devices700across a network760. The network and communication interfaces740may be, for example, an Ethernet interface, a radio interface, a Universal Serial Bus (USB) interface, or any other suitable communications interface and may include receivers, transmitters, and transceivers. 
For purposes of clarity, a transceiver may be referred to as a receiver or a transmitter when referring to only the input or only the output functionality of the transceiver. Example communication interfaces740may include wired data transmission links such as Ethernet and TCP/IP. The communication interfaces740may include wireless protocols for interfacing with private or public networks760. For example, the network and communication interfaces740and protocols may include interfaces for communicating with private wireless networks such as a Wi-Fi network, one of the IEEE 802.11x family of networks, or another suitable wireless network. The network and communication interfaces740may include interfaces and protocols for communicating with public wireless networks760, using, for example, wireless protocols used by cellular network providers, including Code Division Multiple Access (CDMA) and Global System for Mobile Communications (GSM). A computing device700may use network and communication interfaces740to communicate with hardware modules such as a database or data store, or one or more servers or other networked computing resources. Data may be encrypted or protected from unauthorized access. In various configurations, the computing device700may include a system bus750for interconnecting the various components of the computing device700, or the computing device700may be integrated into one or more chips such as a programmable logic device or an application specific integrated circuit (ASIC). The system bus750may include a memory controller, a local bus, or a peripheral bus for supporting input and output devices720and communication interfaces740. Example input and output devices720include keyboards, keypads, gesture or graphical input devices, motion input devices, touchscreen interfaces, one or more displays, audio units, voice recognition units, vibratory devices, computer mice, and any other suitable user interface. The processor710and memory730may include nonvolatile memory for storing computer-readable instructions, data, data structures, program modules, code, microcode, and other software components in non-transitory computer-readable media, in connection with the other hardware components, for carrying out the methodologies described herein. Software components may include source code, compiled code, interpreted code, executable code, static code, dynamic code, encrypted code, or any other suitable type of code or computer instructions implemented using any suitable high-level, low-level, object-oriented, visual, compiled, or interpreted programming language. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
26,864
11860751
DETAILED DESCRIPTION A DUT which includes DFT circuitry may be tested by applying test patterns (which may be generated by using Automatic Test Pattern Generation (ATPG)) to the DUT, and comparing responses generated by the DUT with expected responses. A test fails when a response generated by the DUT does not match an expected response. Some ATE may drive test patterns into the DUT and sample the responses via general purpose I/O (GPIO) pins. The responses may be sampled and compared at specific cycles. One or more cycles can have one or more bits of the response value as “don't care,” i.e., these bits are not compared. The frequency of operation of the GPIOs may be low (e.g., in the order of 100 megabits/sec). The size of a test pattern may refer to the number of bits in the test pattern. With growing chip sizes and complexity of the DUTs, the size of test patterns is increasing. An ATE that uses GPIOs may need a large number of GPIO pins to handle large-sized test patterns and keep the test time reasonable. However, a limited number of GPIO pins may be available on the chip. Thus, an ATE may not be able to test a large and complex DUT in a reasonable amount of time by driving large-sized patterns via GPIO pins. An alternative to using GPIO pins is to use high speed I/Os, such as SERDES based I/O, which may already be part of the functional operation of the chip. Using a SERDES-based I/O, an ATE can drive signals at high speeds (e.g., gigabits/sec). However, a SERDES-based I/O may operate on packetized data. The term “packetized data” may refer to data that is represented and/or stored using a packet format. In addition, the data may be encoded to achieve transition density and electrical balance (e.g., using encoding schemes such as 8b/10b). A SERDES-based I/O may use encoding and packetizing circuitry at the transmitter, and decoding and depacketization circuitry at the receiver. Packet-based protocols used by a SERDES-based I/O may inherently cause non-deterministic flow of packets across the SERDES-based link between the ATE and the DUT because of built-in mechanisms in the packet-based protocols to handle traffic conditions. Thus, some ATE systems that use SERDES-based I/O may use specialized circuitry at transmitters and receivers and may generate responses to test stimuli that are non-deterministic in time (due to packet-based protocols) and value (due to encoding schemes that make it impossible or difficult to use “don't care” bits). Embodiments described herein feature circuitry to provide deterministic data latency in SERDES-based DFT systems. Embodiments described herein may use a combination of techniques which include, but are not limited to, (1) an encoding technique which eliminates the need for decoding circuitry, and (2) a packet processing technique to achieve deterministic timing of response packets. Advantages of embodiments described herein include, but are not limited to, (1) allowing a DUT to be tested using large test patterns via a high speed I/O interface that uses a small number of pins on the DUT, and (2) not requiring the ATE to include packet processing circuitry. The number of pins on the DUT is limited. Thus, using fewer pins on the DUT for testing is advantageous because it allows more pins to be used for other purposes. Requiring ATEs to use packet processing circuitry increases the cost and complexity of the ATE and prevents ATEs that do not include packet processing circuitry from being used. 
Thus, it is advantageous to enable the ATE to test DUTs via a high speed I/O interface without requiring the ATE to use packet processing circuitry because it allows existing ATEs (which may not include packet processing circuitry) to be used and avoids increasing the cost and complexity of ATEs. FIG.1illustrates a SERDES-based DFT system in accordance with some embodiments described herein. SERDES-based DFT system100may include ATE102that is used to test DUT104. The connection between ATE102and DUT104may include a set of high speed I/O lanes106, which may be connected to SERDES-based physical layer (PHY) circuit108in DUT104. Set of high speed I/O lanes106may include one or more connections that can be operated at a high speed (e.g., gigabits/sec). Controller110may interface SERDES-based PHY108and scan chain circuit112. Scan chain circuit112may include, among other things, a set of scan chains, circuitry to scan-in data into the set of scan chains, and circuitry to scan-out data from the set of scan chains. ATE102may provide packetized test patterns to DUT104and receive packetized test responses from DUT104over high speed I/O lanes106. FIG.2illustrates a packet format in accordance with some embodiments described herein. Packet200may generally include control section202(which may also be referred to as a “header”) and data payload section204. Control section202may include information that may be used for configuring controller110or scan chain circuit112, and data payload section204may include the test patterns or test responses. The sizes of control section202and data payload section204and the interpretation of the bits in control section202may depend on the packetization format. Although not shown inFIG.2, a packet may also include an integrity section (e.g., cyclic redundancy check bits) and packet delineation sections, e.g., start of packet (SOP) bytes to indicate the beginning of a packet and end of packet (EOP) bytes to indicate the end of the packet. Packets may be of varying sizes and the gap between packets (inter-packet gap or IPG) may also vary. In some embodiments described herein, the packet format may conform to the Institute of Electrical and Electronic Engineers (IEEE) 1149.10 standard. InFIG.1, the interface circuit in ATE102that drives high speed I/O lanes106may not include packet processing circuitry. Instead, the interface circuit in ATE102may be unaware of the packetization format and treat the packetized data as a set of bits. Packetized test patterns sent by ATE102to DUT104may be unpacked by controller110. Controller110may use the control information (which may be in control section202) in the packet to determine the type of packet, and perform one or more configuration operations and/or provide the unpacked test patterns (which may be in the data payload section204) to scan chain circuit112. Next, controller110may receive test responses from scan chain circuit112. Controller110may then packetize the test responses and send the packetized test responses to ATE102over high speed I/O lanes106. FIG.3illustrates a controller circuit in accordance with some embodiments described herein. Controller circuit300(which may correspond to controller110) may include physical layer interface circuit338, packet processing circuit340, and scan controller circuit342. 
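For reference, the packet layout of FIG.2 and the delineation and integrity fields mentioned above could be modeled roughly as follows. This is a hedged sketch: the concrete field widths, delimiter values, and CRC width are assumptions, and in some embodiments the actual format is defined by the IEEE 1149.10 standard rather than by this illustration.

from dataclasses import dataclass


@dataclass
class ScanPacket:
    control: bytes        # control section 202 ("header"): configuration info for the controller
    payload: bytes        # data payload section 204: test patterns or test responses
    crc: int              # integrity section, e.g., a cyclic redundancy check over control + payload

    SOP = b"\xfb"         # start-of-packet delimiter (placeholder value)
    EOP = b"\xfd"         # end-of-packet delimiter (placeholder value)

    def serialize(self) -> bytes:
        # Packets may vary in size; the inter-packet gap is handled by the link layer.
        return self.SOP + self.control + self.payload + self.crc.to_bytes(4, "big") + self.EOP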
On the inbound path, controller circuit300may receive packetized data302from a PHY circuit (e.g., PHY circuit108inFIG.1) and use control and data signals322to provide data to a scan chain circuit (e.g., scan chain circuit112). Control and data signals322may include, but are not limited to, a clock signal and test pattern data signals that are scanned into the scan chains. On the outbound path, controller circuit300may use control and data signals324to receive data from the scan chain circuit (e.g., scan chain circuit112) and provide packetized data336to the PHY circuit (e.g., PHY circuit108inFIG.1). Control and data signals324may include, but are not limited to, a clock signal and test response data signals that are scanned out of the scan chains. On the inbound path, physical layer interface circuit338may include symbol detection circuit304, decoder306, and lane alignment circuit308. Packetized data302may be received from a PHY as a voltage signal. Symbol detection circuit304may detect symbols in the voltage signal and provide a sequence of symbols (e.g., bits) to decoder306. Decoder306may decode the symbols to undo any encoding that was performed by an ATE (e.g., ATE102). The symbols may be received on a set of parallel lanes, and lane alignment circuit308may synchronize the symbols across the set of parallel lanes. Lane alignment circuit308may provide packetized data to packet processing circuit340for further processing. Packet processing circuit340may include an input first-in-first-out (FIFO) buffer310to store packetized data received from physical layer interface circuit338(e.g., from lane alignment circuit308). Packetized data stored in input FIFO310may be processed by packet processing controller circuit312. Specifically, packet processing controller circuit312may perform integrity checks on the packetized data (e.g., a cyclic redundancy check) and extract information about the packet contents. Packet processing controller circuit312may use a control section in the packet (e.g., control section202inFIG.2) to determine the type of the packet. Depending on the type of packet, the packet may be consumed by packet processing controller circuit312or forwarded to scan controller circuit342. For example, packet processing controller circuit312may determine whether the packet is a "CONFIG/TARGET" packet (e.g., as defined in the IEEE 1149.10 standard). If packet processing controller circuit312determines that the packet is a "CONFIG/TARGET" packet, then packet processing controller circuit312may consume the packet, i.e., the contents of the packet (e.g., control section202and/or data payload section204) may be used by packet processing controller circuit312to configure one or more registers in controller circuit300. For such packets, an appropriate response packet may be generated and sent back to the ATE via the outbound path. On the other hand, if packet processing controller circuit312determines that the packet includes information for the scan controller circuit342, then the contents of the packet may be extracted and provided to write FIFO controller314, which may write the packet contents to input FIFO316in scan controller circuit342. Scan controller circuit342may include control logic318which may use the packet contents stored in input FIFO316to operate the scan chain circuit (e.g., scan chain circuit112inFIG.1) via scan bridge320. 
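Summarizing the inbound dispatch just described, the behavior of packet processing controller circuit312 might be sketched as follows: configuration packets are consumed locally, while scan packets are forwarded toward the scan controller's input FIFO. The packet field names, the crc_ok helper, and the use of Python data structures are assumptions for illustration only.

from collections import deque
from typing import Dict

registers: Dict[str, int] = {}          # stands in for controller configuration registers
scan_input_fifo: deque = deque()        # stands in for input FIFO 316


def crc_ok(packet: Dict) -> bool:
    # Placeholder integrity check standing in for the cyclic redundancy check.
    return packet.get("crc_valid", True)


def dispatch(packet: Dict) -> None:
    if not crc_ok(packet):
        return                                          # drop packets that fail the integrity check
    if packet["type"] == "CONFIG/TARGET":
        registers.update(packet["payload"])             # packet is consumed by circuit 312
    else:
        scan_input_fifo.append(packet["payload"])       # forwarded toward scan controller circuit 342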
The packet contents may include scan data (e.g., data that is desired to be scanned into the scan cells) and scan control and clocking information (e.g., information about how many shift cycles to use for scanning in the data). Specifically, control logic318may read data from input FIFO316and generate scan data, control signals, clock signals. The rate at which control logic318reads out data from input FIFO316may depend on the rate at which the scan chain circuit (e.g., scan chain circuit112inFIG.1) can consume the data. Control logic318may insert no operation (NOP) or stall cycles if the scan chain circuit (e.g., scan chain circuit112inFIG.1) does not consume the scan content quickly enough. In some embodiments, control logic318may use tap bridge326to sample pattern responses (which may be referred to “scan results”) from control and data signals324. In other embodiments, control logic318may directly sample pattern responses from control and data signals324. The scan result data may be stored in output FIFO328. Depending on the scan configuration, each shift cycle may cause one or more scan data items to be written into output FIFO328. Specifically, in some configurations, multiple scan outputs may be combined into a single scan data item which may be written to the output FIFO328. Thus, different amounts of time may be used to sample and store scan results in output FIFO328. In other words, the scan result data may become available in output FIFO328with non-deterministic timing (which is shown inFIG.3with the label “non-deterministic timing”). Some ATEs may require that a fixed time duration exist between the time when a test pattern is provided to a DUT and a test response is received from the DUT. For example, inFIG.1, ATE102may begin transmitting a test pattern to DUT at clock cycle C1, and expect to receive a test response in cycle C2(multiple clock cycles may exist between C1and C2). ATE102may require cycle-level synchronization between the test pattern and test response to ensure that the test response is compared with the corresponding expected response. ATE102may not be able to test DUT104if the timing of the test response is off by even one cycle. In a SERDES-based DFT system, scan data is communicated and processed using packets. Packet-based protocols may not rely on packets arriving at fixed time intervals. Instead, packet-based protocols may expect packets to arrive within a predictable time window. Thus, packet-based protocols may inherently create non-deterministic packet arrival times. Other factors may also cause the packet arrival times to be non-deterministic. For example, the process of recovering the data from a SERDES interface may involve clock recovery. The data received from the SERDES interface may be interpreted using the recovered clock and may be further processed using an internal clock which may be different from recovered clock. Thus, the time between when an ATE transmits a test pattern to a DUT over an SERDES interface and when the DUT recovers the received data may not be fixed. Thus, an ATE may not be able to use packet-based protocols if the ATE requires a fixed time duration (e.g., at a cycle-level resolution) between the time when a test pattern is provided to a DUT and a test response is received from the DUT. 
Additionally, SERDES interfaces may encode data transmitted over a high speed I/O lane to maintain desirable electrical characteristics on the lines (e.g., DC balance and noise suppression) and to ensure that enough transitions occur within a given duration to facilitate clock and data recovery at the receiver. A side benefit of these encoding schemes is to provide special control characters, which may be used to demarcate packet boundaries (e.g., start of packet and end of packet). Some special control characters may be used to implement specific SERDES functionality (e.g., during an initial training phase in a SERDES protocol, special characters may be used as markers for lane alignment). Additionally, the special characters may also be used for clock correction and synchronization. If data is desired to be communicated between the ATE and the DUT using such encoding, then both the ATE and the DUT may require encoding and decoding circuitry. For example, a popular encoding scheme called "8b/10b" maps each 8-bit data word to a corresponding 10-bit word. In this scheme, modifying a single bit in an 8-bit word may cause the corresponding 10-bit word to be completely different. The scan result may include one or more bits which are "don't care" or "X" bits. The scan result bits at don't care positions are ignored when the scan result is compared with the expected response. In other words, the scan result is valid regardless of whether a don't care bit is a 0 or a 1. However, because 8b/10b encoding can completely transform the bit values, the 10b words corresponding to a don't care being a 0 or a 1 can be completely different. Thus, an ATE may require the test response (which may be received as a 10b word encoded according to the 8b/10b encoding scheme) to be decoded back into an 8b word before the ATE can compare the test response with an expected response. However, this approach requires the ATE to include 8b/10b encoding and decoding circuitry, which is undesirable. Some embodiments described herein use a data encoding technique which allows don't care bits to be compared using the encoded word (e.g., a 10-bit word), i.e., the data encoding technique does not require the encoded word to be decoded (e.g., without decoding the 10-bit word into an 8-bit word) for comparison with an expected response. In other words, the encoded test response (e.g., 10-bit word test response) may be directly compared (i.e., without performing a decoding operation) with an expected response (e.g., a 10-bit word that represents an 8-bit expected response), and the comparison allows don't care bits to be used. These embodiments do not require a decoding circuit in the ATE because the encoded test responses can be directly compared with corresponding expected responses. The data encoding technique may support a desired transition density to allow clock-and-data recovery to operate (e.g., the data encoding technique may limit the run-lengths to be less than a threshold value). Specifically, some embodiments described herein may perform the encoding as follows. Let an 8-bit word be represented by the letters "abcdefgh," where each letter corresponds to a bit. Specifically, the letter "a" may represent the first bit (which may be the most significant bit) and the letter "h" may represent the eighth bit (which may be the least significant bit). The encoded 10-bit word may be generated by inserting an inverted copy of the first bit before the first bit and an inverted copy of the fifth bit before the fifth bit. 
Thus, the 10-bit word may be represented by “āabcdēefgh,” where “ā” may refer to the inverse of bit “a” and “ē” may refer to the inverse of bit “e.” In this encoding technique, special symbols may be generated by not inverting the first and fifth bits. Thus, special symbols may have the format “aabcdeefgh.” For example, let the 8-bit word “00010001” represent a test pattern and the 8-bit word “X000X000” represent the expected response. Using the above-described technique, the corresponding 10-bit encoded words for the test pattern and the expected response are “1000110001” and “XX000XX000,” respectively. An ATE (e.g., ATE102inFIG.1) may send the 10-bit word “1000110001” to a DUT (e.g., DUT104inFIG.1), and may receive a 10-bit test response, which may be directly compared with the expected response “XX000XX000.” In general, embodiments described herein may use an encoding technique that maps each bit in an m-bit word to one or more corresponding bits in an n-bit word (where n is greater than m), so that changing the value of a single bit in the m-bit word only changes the values of the corresponding bits in the n-bit word. The encoding technique ensures that a don't care bit in the m-bit word (which corresponds to a single bit in the m-bit word having either a 0 or a 1 value) directly maps to the corresponding bits in the n-bit word which may be interpreted as don't care bits. Thus, don't care bits in the test response may be directly compared with the expected response without decoding the test response. InFIG.3, on the outbound path, the scan results stored in output FIFO328may be read by read FIFO control circuit330and provided to packet processing and control circuit312for packetization, which may store the packet in output FIFO332. Encoding circuit334may read the packet from output FIFO332, encode the packet, and provide packetized data336to the PHY circuit (e.g., PHY circuit108inFIG.1). On the outbound path, the delay from the output of read FIFO control circuit330to PHY circuit is deterministic. In some embodiments described herein, read FIFO control circuit330reads a specific amount of data (e.g., the amount of data required for creating a single scan response packet) from output FIFO328at fixed intervals, thereby causing the scan response packets to be sent back to the ATE with deterministic timing, i.e., at fixed intervals. Thus, read FIFO control circuit330ensures that packets are sent to the PHY circuit with deterministic timing even though the scan response data may arrive at output FIFO328with non-deterministic timing. In some embodiments described herein, read FIFO control circuit330may include a timer circuit to control when data is read from output FIFO328. The timer circuit may be programmable. Specifically, a value stored in a register may control the duration of the timer, and the register value may be programmed as part of the initial setup. FIGS.4-5illustrate a timing diagram with deterministic latency in accordance with some embodiments described herein. The X-axis inFIGS.4-5corresponds to time. InFIG.4, at time T1, a timer circuit may be started, and inbound and scan packet processing402may be performed, which may complete at time T2. Safety margin404may be a time duration between the end of inbound and scan packet processing402(at time T2) and the beginning of writing scan response data to output FIFO328(at406). 
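Before continuing with the timing diagram of FIG.4, the bit-inversion encoding and the don't-care comparison described above can be illustrated with a minimal sketch. The function names are assumptions introduced for this sketch; the assertions simply reproduce the worked example given above, with a hypothetical received response added to show a match against don't-care positions.

def invert(bit: str) -> str:
    if bit == "X":
        return "X"          # a don't-care bit stays don't-care when inverted
    return "1" if bit == "0" else "0"


def encode_8b_to_10b(word: str) -> str:
    """Encodes abcdefgh -> (not a) a b c d (not e) e f g h."""
    assert len(word) == 8
    a, e = word[0], word[4]
    return invert(a) + word[0:4] + invert(e) + word[4:8]


def matches(encoded_response: str, encoded_expected: str) -> bool:
    """Compares an encoded test response with an encoded expected response,
    treating 'X' positions in the expected word as don't-care."""
    return all(exp == "X" or exp == got
               for got, exp in zip(encoded_response, encoded_expected))


assert encode_8b_to_10b("00010001") == "1000110001"          # encoded test pattern from the example above
assert encode_8b_to_10b("X000X000") == "XX000XX000"          # encoded expected response with don't-care bits
assert matches(encode_8b_to_10b("10000000"), "XX000XX000")   # hypothetical received response that matches

Because each input bit maps only to its own positions in the encoded word, the comparison can be performed on the encoded test response directly, which is why no decoding circuit is needed in the ATE.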
InFIG.4, a small delay is shown between the time the scan response packet starts to be written to output FIFO328and time T3when the scan response packet begins transmission to the ATE (at408). The response packet may end transmission before the next response packet is transmitted, and the gap between two response packets may be filled using filler data410(which is interpreted by the ATE as such). Safety margin404may be adjusted by increasing or decreasing the timer duration. Specifically, the system may determine the desired safety margin404and set the timer register value accordingly. Deterministic latency is achieved because read FIFO control circuit330guarantees that the ATE receives response packets at fixed intervals. Specifically, read FIFO control circuit330achieves this by reading data from output FIFO328at fixed intervals. During a test run, the ATE (e.g., ATE102) may execute a training procedure during which the high speed I/O interface with the DUT (e.g., DUT104) is initialized. After training, the ATE may begin transmitting test pattern packets to the DUT. Meanwhile, the ATE may monitor the data received from the DUT to detect the first response packet. After the first response packet has been received, the ATE can expect to receive every subsequent response packet in the test run at guaranteed and fixed time intervals. In some embodiments described herein, the reference clock for the ATE and the DUT may be the same. The amount of data that is read from the output FIFO328may be programmable. For example, the amount of data may be determined by a value stored in a register which may be initialized during an initial setup phase. The outbound path may be isolated from the inbound path. Any bubbles (e.g., absence of useful data) on the inbound path may not affect timing if the output FIFO328in the outbound path is never empty. In general, safety margin404may be increased to ensure that output FIFO328is never empty and decreased to improve performance (reducing safety margin404increases the packet throughput of the system). At time T3, the timer circuit may be restarted, and inbound and scan packet processing412for the next test pattern packet may be performed. The resulting scan response packet may begin transmission at time T4when the timer expires and is restarted again. FIG.5illustrates how variations in the inbound traffic processing times does not affect the deterministic transmission of response packets in the outbound path. Specifically, inFIG.5, scan response data is written to output FIFO328(at406) sooner than it was inFIG.4. However, the response packet transmission still begins at a fixed time, e.g., time T3. Additionally, inFIG.5, the time used to process the first test pattern packet (e.g., inbound and scan packet processing402) is less than the time used to process the second test pattern packet (e.g., inbound and scan packet processing412). However, this difference does not impact the deterministic response packet transmission schedule. Specifically, the first response packet is transmitted at time T3and the second response packet is transmitted at time T4, where the time duration between time T1and T3is equal to the time duration between time T3and T4. In some embodiments described herein, test packets may be received from an ATE, where the test packets may include test pattern data to test a DUT. 
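The deterministic outbound timing of FIGS.4-5 described above can be sketched as follows; this is a minimal illustration rather than the disclosed circuit. The timer period, the per-packet data amount, and the send_packet callable are assumptions, standing in for values programmed during initial setup (e.g., via a configuration packet) and for the outbound path toward the PHY.

from collections import deque
from typing import Callable, Dict

TIMER_PERIOD_CYCLES = 100        # programmable timer value (assumed units: clock cycles)
RESPONSE_DATA_WORDS = 8          # programmable amount of data read per response packet

output_fifo: deque = deque()     # scan results arrive here with non-deterministic timing


def on_timer_expired(send_packet: Callable[[Dict], None]) -> None:
    """Called once every TIMER_PERIOD_CYCLES; emits exactly one response packet."""
    payload = [output_fifo.popleft() for _ in range(RESPONSE_DATA_WORDS)]
    send_packet({"type": "scan_response", "payload": payload})
    # Between packets the link carries filler data, which the ATE ignores.

As long as the safety margin keeps the output FIFO from running empty, response packets leave at fixed intervals T, 2T, 3T, ... regardless of jitter in the inbound processing, which is the deterministic behavior the ATE relies on.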
Specifically, the test packets may be received over a SERDES connection between the ATE and the DUT, where the SERDES connection may include one or more high speed I/O lanes. For example, inFIG.1, test packets may be received from ATE102over a SERDES connection, which may include high speed I/O lanes106. A circuit in the DUT may apply the test pattern data to the DUT using a set of scan chains to obtain test response data corresponding to the test pattern data. For example, inFIG.1, controller110may apply the test pattern data to DUT104by using scan chain circuit112. Scan chain circuit112may capture response data corresponding to the test pattern data (the test response data may be received by controller110from scan chain circuit112). The test response data may be received by controller110at irregular time intervals. For example, inFIG.3, controller circuit300may correspond to controller110, and the test response data may be received at output FIFO328at irregular intervals. Next, the circuit in the DUT may send the response packets (which include a portion of the test response data) to the ATE at regular time intervals, where the response packets may be sent to the ATE over the SERDES connection. The term “portion of the test response data” may include the entire test response data or only a subset of the test response data. For example, at regular time intervals, read FIFO control circuit330inFIG.3may read a portion of test response data from output FIFO328, packetize the portion of the test response data, and send the packetized portion of the test response data (e.g., the response packets) to the ATE. In some embodiments described herein, the response packets may include encoded test response data which is obtained by encoding the portion of the test response data. For example, inFIG.3, encoding circuit334may encode the portion of the test response data. The encoding may map a first set of bits to a second set of bits, where the number of bits in the second set of bits is greater than the number of bits in the first set of bits, where each bit in the first set of bits is mapped to one or more corresponding bits in the second set of bits, and where a don't care bit in the first set of bits is mapped to corresponding one or more don't care bits in the second set of bits. In these embodiments, the ATE may directly compare the encoded test response data with a corresponding encoded expected response data. In some embodiments described herein, the regular time intervals are determined based on a timer circuit in the DUT. For example, inFIG.3, read FIFO control circuit330may use a timer circuit to determine when to generate response packets, which then deterministically determines when the response packet begins transmission to the ATE. As shown inFIGS.4-5, transmission of response packets may begin at times T3and T4based on the timer circuit. In some embodiments described herein, a configuration packet may be received from the ATE, where the configuration packet may include a timer value. The timer value may be used to configure and/or program the timer circuit (e.g., by storing the timer value in a register associated with the timer circuit). For example, packet processing and control circuit312may determine that a received packet is a configuration packet. 
Next, packet processing and control circuit312may extract the timer value from the configuration packet and configure and/or program the timer circuit based on the extracted timer value (e.g., by storing the extracted timer value in a register associated with the timer circuit). The timer value may include a safety margin (e.g., safety margin404shown inFIGS.4-5) which may be based on an amount of irregularity of the irregular time intervals. Specifically, if the test response data is expected to be received at very irregular time intervals, then a larger safety margin may be used. A configuration packet may also include a response data size value. The portion of the test response data may be determined based on the response data size value. Specifically, inFIG.3, test response data may be dumped into output FIFO328at irregular intervals. However, only the portion of test response data that is to be included in the next response packet may be read by read FIFO control circuit330. The amount of data that is to be read may be configured based on the response data size value received in a configuration packet. FIG.6illustrates an example flow for the design, verification, and fabrication of an integrated circuit in accordance with some embodiments described herein. EDA processes612(the acronym "EDA" refers to "Electronic Design Automation") can be used to transform and verify design data and instructions that represent the integrated circuit. Each of these processes can be structured and enabled as multiple modules or operations. Flow600can start with the creation of a product idea610with information supplied by a designer, information which is transformed and verified by using EDA processes612. When the design is finalized, the design is taped-out634, which is when artwork (e.g., geometric patterns) for the integrated circuit is sent to a fabrication facility to manufacture the mask set, which is then used to manufacture the integrated circuit. After tape-out, a semiconductor die is fabricated 636 and packaging and assembly638are performed to produce the manufactured IC chip640. Specifications for a circuit or electronic structure may range from low-level transistor material layouts to high-level description languages. A high level of abstraction may be used to design circuits and systems, using a hardware description language ("HDL") such as VHDL, Verilog, SystemVerilog, SystemC, MyHDL or OpenVera. The HDL description can be transformed to a logic-level register transfer level ("RTL") description, a gate-level description, a layout-level description, or a mask-level description. Each lower abstraction level that is a less abstract description adds more detail into the design description. The lower levels of abstraction that are less abstract descriptions can be generated by a computer, derived from a design library, or created by another design automation process. An example of a specification language at a lower level of abstraction for specifying more detailed descriptions is SPICE (which stands for "Simulation Program with Integrated Circuit Emphasis"). Descriptions at each level of abstraction contain details that are sufficient for use by the corresponding tools of that layer (e.g., a formal verification tool). During system design614, functionality of an integrated circuit to be manufactured is specified. The design may be optimized for desired characteristics such as power consumption, performance, area (physical and/or lines of code), and reduction of costs, etc. 
Partitioning of the design into different types of modules or components can occur at this stage. During logic design and functional verification616, modules or components in the circuit are specified in one or more description languages and the specification is checked for functional accuracy. For example, the components of the circuit may be verified to generate outputs that match the requirements of the specification of the circuit or system being designed. Functional verification may use simulators and other programs such as test-bench generators, static HDL checkers, and formal verifiers. In some embodiments, special systems of components referred to as ‘emulators’ or ‘prototyping systems’ are used to speed up the functional verification. During synthesis and design for test618, HDL code is transformed to a netlist. In some embodiments, a netlist may be a graph structure where edges of the graph structure represent components of a circuit and where the nodes of the graph structure represent how the components are interconnected. Both the HDL code and the netlist are hierarchical articles of manufacture that can be used by an EDA product to verify that the integrated circuit, when manufactured, performs according to the specified design. The netlist can be optimized for a target semiconductor manufacturing technology. Additionally, the finished integrated circuit may be tested to verify that the integrated circuit satisfies the requirements of the specification. During netlist verification620, the netlist is checked for compliance with timing constraints and for correspondence with the HDL code. During design planning622, an overall floor plan for the integrated circuit is constructed and analyzed for timing and top-level routing. During layout or physical implementation624, physical placement (positioning of circuit components such as transistors or capacitors) and routing (connection of the circuit components by multiple conductors) occurs, and the selection of cells from a library to enable specific logic functions can be performed. As used herein, the term ‘cell’ may specify a set of transistors, other components, and interconnections that provides a Boolean logic function (e.g., AND, OR, NOT, XOR) or a storage function (such as a flip-flop or latch). As used herein, a circuit ‘block’ may refer to two or more cells. Both a cell and a circuit block can be referred to as a module or component and are enabled as both physical structures and in simulations. Parameters are specified for selected cells (based on ‘standard cells’) such as size and made accessible in a database for use by EDA products. During analysis and extraction626, the circuit function is verified at the layout level, which permits refinement of the layout design. During physical verification628, the layout design is checked to ensure that manufacturing constraints are correct, such as DRC constraints, electrical constraints, lithographic constraints, and that circuitry function matches the HDL design specification. During resolution enhancement630, the geometry of the layout is transformed to improve how the circuit design is manufactured. During tape-out, data is created to be used (after lithographic enhancements are applied if appropriate) for production of lithography masks. During mask data preparation632, the ‘tape-out’ data is used to produce lithography masks that are used to produce finished integrated circuits. 
A storage subsystem of a computer system (such as computer system700inFIG.7) may be used to store the programs and data structures that are used by some or all of the EDA products described herein, and products used for development of cells for the library and for physical and logical design that use the library. FIG.7illustrates an example machine of a computer system700within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In alternative implementations, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine may operate in the capacity of a server or a client machine in client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The example computer system700includes a processing device702, a main memory704(e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM), a static memory706(e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device718, which communicate with each other via a bus730. Processing device702represents one or more processors such as a microprocessor, a central processing unit, or the like. More particularly, the processing device may be complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device702may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device702may be configured to execute instructions726for performing the operations and steps described herein. The computer system700may further include a network interface device708to communicate over the network720. The computer system700also may include a video display unit710(e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device712(e.g., a keyboard), a cursor control device714(e.g., a mouse), a graphics processing unit722, a signal generation device716(e.g., a speaker), graphics processing unit722, video processing unit728, and audio processing unit732. The data storage device718may include a machine-readable storage medium724(also known as a non-transitory computer-readable medium) on which is stored one or more sets of instructions726or software embodying any one or more of the methodologies or functions described herein. 
The instructions726may also reside, completely or at least partially, within the main memory704and/or within the processing device702during execution thereof by the computer system700, the main memory704and the processing device702also constituting machine-readable storage media. In some implementations, the instructions726include instructions to implement functionality corresponding to the present disclosure. While the machine-readable storage medium724is shown in an example implementation to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine and the processing device702to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm may be a sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Such quantities may take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. Such signals may be referred to as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the present disclosure, it is appreciated that throughout the description, certain terms refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage devices. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may include a computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. 
Various other systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the method. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the disclosure as described herein. The present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc. In the foregoing disclosure, implementations of the disclosure have been described with reference to specific example implementations thereof. It will be evident that various design modifications may be made thereto without departing from the broader spirit and scope of implementations of the disclosure as set forth in the following claims. Where the disclosure refers to some elements in the singular tense, more than one element can be depicted in the figures and like elements are labeled with like numerals. The disclosure and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
11860752
DETAILED DESCRIPTION It is important to note that the embodiments disclosed herein are only examples of the many advantageous uses of the innovative teachings herein. In general, statements made in the specification of the present application do not necessarily limit any of the various claimed embodiments. Moreover, some statements may apply to some inventive features but not to others. In general, unless otherwise indicated, singular elements may be in plural and vice versa with no loss of generality. In the drawings, like numerals refer to like parts through several views. A system and method for agentless discovery and inspection of applications and services in a compute environment includes establishing a connection with a first workload in a first compute environment. The first compute environment includes a plurality of workloads. The system causes installation of a collector on the first workload, wherein the collector, when executed, is configured to collect data from the first workload. Collected data is received from the collector, and the collector is configured to remove the executable code of the collector upon completing data collection. Access is provided to the collected data for an asset monitoring system, wherein the asset monitoring system is configured to discover from the collected data at least an application executed on the first workload. FIG.1is an example of a schematic illustration of an asset monitoring system100monitoring various compute environments, implemented according to an embodiment. The asset monitoring system110(hereinafter ‘system110’) is described in more detail in U.S. Non-Provisional application Ser. No. 17/513,117 filed on Oct. 28, 2021, the contents of which are hereby incorporated by reference. The system110is communicatively coupled with a plurality of compute environments, such as first compute environment120and to a cloud computing environment130. While a plurality of compute environments are described in this embodiment, it should be readily apparent that the system110may be implemented to communicate with a single compute environment, multiple systems110may each be configured to communicate with a single type of compute environment, a system may be implemented in a compute environment, or any combination thereof. A compute environment, such as compute environment120may be a cloud computing environment, or other networked environment in which a plurality of workloads, computer devices, such as servers, and the like, may communicate with each other. In certain embodiments, the system110may connect to a compute environment via a relay115, which is used to direct network traffic. The system110may be implemented as a virtual appliance, for example an Open Virtualization Appliance (OVA) in VMware®. Such a virtual appliance may be deployed in a cloud environment, such as the cloud environments described below. In an embodiment the system110may be executed on a container running in a Kubernetes® cluster. For example, the system110is connected to a first compute environment120through a relay115. The first compute environment120is a closed network, and includes a plurality of computer servers125, or individually server125-1through125-N, where ‘N’ is an integer having a value of ‘2’ or more. Each server125is a computer, and includes at least a processing circuitry, memory, and network interface. 
Some compute environments similar to the first compute environment120may require an asset monitoring system110to be installed physically in the compute environment120. In an embodiment, the system110may communicate with a server125utilizing a secure network protocol, such as SSH (secure shell), using account login information provided to the system110. The login information may include a username and password, wherein the account is authorized to install executable code files on the server125. The system110is further communicatively connected to a cloud computing environment130. The cloud computing environment130may be, or may be implemented using, for example, Amazon® Web Services (AWS), Microsoft® Azure, Google® Cloud Platform (GCP), and the like. For example, the cloud computing environment130may be a virtual private cloud (VPC) environment, implemented on top of infrastructure provided by AWS or Azure. The asset monitoring system110is operative for collecting data from compute environments, analyzing the collected data, and discovering applications and communications between such applications. An example implementation of such a system is further discussed in U.S. patent application Ser. No. 17/513,117, the entire contents of which are incorporated herein by reference. The cloud computing environment130may include various workloads. A workload may be a virtual machine132, a container cluster134, a serverless function136, and the like. Virtual machines may be implemented, for example utilizing VMware®. Container clusters can be implemented utilizing Kubernetes®. A serverless function can be implemented, for example using Amazon® Lambda. The cloud computing environment130further includes an application programming interface (API) through which various functions of the cloud computing environment130may be accessed or requested. The system110may further be communicatively connected to an orchestrator140, and a server manager150. The orchestrator140is a component of a cloud computing environment. An orchestrator may be, for example, Amazon® Elastic Container Service (ECS), or Azure App Service. A server manager (or server management system) may be, for example, Chef® EAS, Puppet®, Ansible®, Azure® Automation, and the like. The asset monitoring system110is configured to communicate with each compute environment and extract data from the workloads thereon, for example using collector applications. For example, the system110may initiate an SSH connection to a server125, and cause the server125to install a collector application (not shown). The collector application (or simply "collector") is programmed to open a communication channel to the system110and provide over the communication channel data collected from the server125on which it is installed. When the collector has finished sending the required data, the collector is configured to remove itself from the server125. Different methods of operating collectors are discussed below. FIG.2is an example diagram of a container cluster134utilizing a collector, implemented in accordance with an embodiment. A container cluster134may include a plurality of nodes220, individually referenced as nodes220-1through220-L, where 'L' is an integer having a value of '2' or greater. Each node220includes a daemonset pod, such as daemonset pod222-1, and a plurality of pods224, such as pod224-1through224-M, where 'M' is an integer having a value of '2' or greater. 
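As a concrete illustration of the agentless flow described above (SSH connection, temporary collector, data returned over the channel, self-removal), the following sketch uses the paramiko SSH library as one possible client; the host, credentials, file paths, and collector invocation are placeholders rather than details of the embodiments.

    import paramiko

    def run_collector_once(host, username, password,
                           local_collector="collector.py",
                           remote_collector="/tmp/collector.py"):
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=username, password=password)
        try:
            # Push the collector's executable code to the workload.
            sftp = client.open_sftp()
            sftp.put(local_collector, remote_collector)
            sftp.close()
            # Run it; the collector removes its own executable code when it finishes,
            # so nothing persistent is left behind on the server.
            _, stdout, _ = client.exec_command(f"python3 {remote_collector}")
            collected = stdout.read()   # data provided back over the communication channel
        finally:
            client.close()
        return collected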
A daemonset collector210is configured, for example by the system110ofFIG.1, to install a collector on each node in the cluster. In an embodiment, collector installation may occur periodically. The daemonset collector210may further configure the collector for each node to delete itself once collection of data is complete. In an embodiment, the daemonset collector210may install a collector on a node, wherein the collector is configured to collect application data. Application data may be collected from multiple sources, utilized to execute, develop, or otherwise deploy in the compute environment. The collected data may include, but is not limited to, the binary code of each application, software libraries, error logs, script code, configuration files (environment variables, command line, etc.), credentials, and the like. Binary code, or any code, may be collected by collectors executed on the servers hosting an application. The data is collected per each application and may include binary code and configurations. The collection of such data can be triggered at predefined time intervals, or upon receiving an event from a software deployment tool (e.g., a CI/CD tool). For example, using Harness® or Jenkins™ to deploy an application in a cloud environment may trigger a webhook in an asset monitoring system to begin collection. In certain embodiments the daemonset collector210is configured to communicate with the system110, for example in order to receive version updates. This is discussed in more detail inFIG.4below. FIG.3is an example of a flowchart300of a method for collecting data artifacts from compute environments, implemented in accordance with an embodiment. At S310, a connection is established between an asset monitoring system and a workload. Establishing a connection may include, for example, opening an SSH communication between a workload and the asset monitoring system. A workload may be a physical computer device, such as server125-1ofFIG.1above, or a virtual workload. A virtual workload may be a virtual machine, container, serverless function, and the like. In some embodiments, a connection may be established from the workload to the asset monitoring system. For example, a daemonset collector may install a collector on a node, whereby the collector then establishes a communication path to the asset monitoring system based on predetermined instructions provided by the daemonset collector. At S320, a collector is installed on the workload, wherein installation is under the control of the asset monitoring system. In an embodiment, the collector is provided as executable code and the workload is configured to execute the code as it is received, or at a future time. Installing the collector may further include downloading the executable code, for example from the asset monitoring system. At S330data is received from the collector. In an embodiment the collector is configured to collect application data. Application data may be collected from multiple sources, utilized to execute, develop, or otherwise deploy in the compute environment. The collected data may include, but is not limited to, the binary code of each application, software libraries, error logs, script code, configuration files (environment variables, command line, etc.), credentials, and the like. Binary code, or any code, may be collected by collectors executed on the servers hosting an application. The data is collected per each application and may include binary code and configurations. 
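A minimal sketch of such a non-persistent collector is shown below; it gathers a few of the artifact types listed above (environment variables, command line, an error log), emits them for the asset monitoring system, and then removes its own executable code. The paths, field names, and output format are assumptions made for this example.

    import json
    import os
    import sys

    def read_if_present(path, limit=65536):
        try:
            with open(path, "rb") as f:
                return f.read(limit).decode("utf-8", errors="replace")
        except OSError:
            return None

    def collect():
        return {
            "environment": dict(os.environ),                          # configuration: environment variables
            "command_line": sys.argv,                                 # configuration: command line
            "error_log": read_if_present("/var/log/app/error.log"),   # placeholder error log path
        }

    if __name__ == "__main__":
        print(json.dumps(collect()))              # sent back to the asset monitoring system
        os.remove(os.path.abspath(__file__))      # self-removal: the collector is not persistent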
The collection of such data can be triggered at predefined time intervals, or upon receiving an event from a software deployment tool (e.g., a CI/CD tool). At S340, the collector is removed from the workload, wherein removal (or uninstallation) is performed under control of the asset monitoring system. In an embodiment, the collector may be predefined with instructions to remove the executable code once collection has concluded. In an embodiment, collection may be determined to be concluded once certain predetermined searches are performed, once all the collected data has been sent to the asset monitoring system, or a combination thereof. In all use cases, the collector is not persistent. Having a non-persistent application is beneficial, as deployment does not require complex integration. For example, in agent-based systems, it is typically required that the agent be installed in all instances in the compute environment, which requires R&D integration, and each update to the agent again requires integration. Contrasted, a non-persistent collector which is deployed on an as-needed basis requires little to no integration and may be updated frequently without involving R&D or making changes to the CI/CD stage. Additionally, having a non-persistent application provides increased security, as the most up to date version will always be provided from the asset monitoring system. Certain embodiments may include collecting data from serverless functions, such as Amazon® Lambda. A collector for a serverless function may reside as an application in the asset monitoring system (e.g., system110,FIG.1), and collect data artifacts from the serverless function by requesting data from an API of the serverless function, for example the AWS API which can receive custom HTTP requests through which data can be received. Certain other embodiments may include platform as a service (PaaS) instance, which may be accessed similarly utilizing an API of the cloud computing environment. FIG.4is an example flowchart400of a method for updating a collector deployed in a compute environment, implemented in accordance with an embodiment. In this example the compute environment is a cloud computing environment implementing container clusters. The method may be performed by an asset monitoring system, which installs a daemonset collector in the container cluster in order to manage and install collectors in each node of the cluster. At S410a collector is scheduled to collect data from a workload. For example, a daemonset collector as described above may be configured to generate a schedule, which includes at least one future time point, at which a collector will be installed on a node in the cluster in which the daemonset collector is operative. The daemonset collector is always present in the cluster, and installs collector applications on the nodes on a predefined basis. In an embodiment, the collector may be installed by the daemonset collector in response to receiving a collection request, for example from an asset monitoring system. As the collector is removed from the node upon completing collection, the collector is a non-persistent application. At S420, a check is performed to determine if a version of the executable collector application present thereon is a current version. S420may be performed by the daemonset collector. In an embodiment, the check may be performed by querying the asset monitoring system to determine what a current version of the collector executable code is. 
If the versions do not match, a request to download the current version of the collector executable code is sent to the asset monitoring system. A version is generally a unique identifier of the application, and typically version numbers ascend, so that if the application version of the daemonset collector is lower than the current version, the daemonset collector is configured to request a download of the current version. If a newer version is available execution continues at S430, if a newer version is not available execution continues at S440. At S430, the collector version is updated. Updating the collector version may include sending a request to a server, such as the asset monitoring system, to receive a current version of the collector executable code. A connection, such as SSH (secure shell) or HTTPS (hypertext transfer protocol secure) may be established in order to transfer the file from the server to the cluster over a network. In an embodiment, the daemonset collector may retain one or more older versions of the collector application, which is useful if a rollback is required. A rollback is when a current software version is found to be lacking or defective in some way, and therefore an older version, which is proven to be stable, is regressed to while the current version undergoes correction. In certain embodiments the daemonset collector stores only the current version of the collector application. At S440, data is collected from the workload. The data is collected by the collector, which is installed on the workload, in this example a Kubernetes® node, by a daemonset collector. Collected data is sent to the asset monitoring system for further processing. Once the data has been collected, the daemonset collector configures the node to remove the collector application. If a communication channel is open to the asset monitoring system the communication channel is closed. Collected data may include binary code of an application on the workload, a software library, an error log, a script code, a configuration file, credentials, and the like. FIG.5is an example schematic diagram of an asset monitoring system500according to an embodiment. The system500includes a processing circuitry510coupled to a memory520, a storage530, and a network interface540. In an embodiment, the components of the system500may be communicatively connected via a bus550. The processing circuitry510may be realized as one or more hardware logic components and circuits. For example, and without limitation, illustrative types of hardware logic components that can be used include field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), Application-specific standard products (ASSPs), system-on-a-chip systems (SOCs), graphics processing units (GPUs), tensor processing units (TPUs), general-purpose microprocessors, microcontrollers, digital signal processors (DSPs), and the like, or any other hardware logic components that can perform calculations or other manipulations of information. The memory520may be volatile (e.g., random access memory, etc.), non-volatile (e.g., read only memory, flash memory, etc.), or a combination thereof. In one configuration, software for implementing one or more embodiments disclosed herein may be stored in the storage530. In another configuration, the memory520is configured to store such software. 
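The version check and update steps (S420 and S430) described above can be sketched roughly as follows; the HTTP endpoints, JSON fields, destination path, and the use of the requests library are assumptions of this example, not part of the embodiments.

    import requests

    MONITOR_URL = "https://asset-monitoring.example.com"   # placeholder address of the asset monitoring system

    def ensure_current_collector(local_version, dest="/opt/daemonset/collector.py"):
        # S420: ask the asset monitoring system what the current collector version is.
        current = requests.get(f"{MONITOR_URL}/collector/version").json()["version"]
        if local_version == current:
            return local_version                  # versions match; proceed to collection (S440)
        # S430: download the current version of the collector executable code.
        resp = requests.get(f"{MONITOR_URL}/collector/download", params={"version": current})
        with open(dest, "wb") as f:
            f.write(resp.content)                 # an older copy could be retained for rollback
        return current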
Software shall be construed broadly to mean any type of instructions, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Instructions may include code (e.g., in source code format, binary code format, executable code format, or any other suitable format of code). The instructions, when executed by the processing circuitry510, cause the processing circuitry510to perform the various processes described herein. The storage530may be magnetic storage, optical storage, and the like, and may be realized, for example, as flash memory or other memory technology, or any other medium which can be used to store the desired information. The network interface540allows the system500to communicate with, for example, various workloads, and collectors installed on the various workloads across different compute environments. It should be understood that the embodiments described herein are not limited to the specific architecture illustrated inFIG.5, and other architectures may be equally used without departing from the scope of the disclosed embodiments. The various embodiments disclosed herein can be implemented as hardware, firmware, software, or any combination thereof. Moreover, the software is preferably implemented as an application program tangibly embodied on a program storage unit or computer readable medium consisting of parts, or of certain devices and/or a combination of devices. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPUs”), a memory, and input/output interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU, whether or not such a computer or processor is explicitly shown. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit. Furthermore, a non-transitory computer readable medium is any computer readable medium except for a transitory propagating signal. All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the principles of the disclosed embodiment and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosed embodiments, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure. It should be understood that any reference to an element herein using a designation such as “first,” “second,” and so forth does not generally limit the quantity or order of those elements. Rather, these designations are generally used herein as a convenient method of distinguishing between two or more elements or instances of an element. 
Thus, a reference to first and second elements does not mean that only two elements may be employed there or that the first element must precede the second element in some manner. Also, unless stated otherwise, a set of elements comprises one or more elements. As used herein, the phrase “at least one of” followed by a listing of items means that any of the listed items can be utilized individually, or any combination of two or more of the listed items can be utilized. For example, if a system is described as including “at least one of A, B, and C,” the system can include A alone; B alone; C alone; 2A; 2B; 2C; 3A; A and B in combination; B and C in combination; A and C in combination; A, B, and C in combination; 2A and C in combination; A, 3B, and 2C in combination; and the like.
11860753
DETAILED DESCRIPTION Various embodiments of the present disclosure now will be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all, embodiments of the disclosures are shown. Indeed, these disclosures may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. The term "or" is used herein in both the alternative and conjunctive sense, unless otherwise indicated. The terms "illustrative" and "exemplary" are used to be examples with no indication of quality level. Like numbers refer to like elements throughout. Moreover, while certain embodiments of the present disclosure are described with reference to predictive data analysis, one of ordinary skill in the art will recognize that the disclosed concepts can be used to perform other types of data analysis. I. Overview and Technical Improvements Various embodiments of the present disclosure introduce techniques for efficient and scalable monitoring of a multi-node system by using a distributed ledger to describe candidate events associated with a monitored node and a hierarchical validation workflow to validate candidate events (e.g., obtain consensus) associated with ledger entry requests. Using a distributed ledger to describe candidate events associated with a monitored node provides a scalable and efficient way to store candidate event data associated with the monitored node. In some embodiments, one or more monitoring node computing entities (e.g., miner node computing entities and validator node computing entities) can execute operations associated with a ledger entry request to add a candidate event to a distributed ledger in a distributed manner. This means that the distributed ledger system computing entity can maintain the distributed ledger without the need to request data from the monitored node computing entities, the miner node computing entities, or the validator node computing entities; instead, the miner node computing entities initiate ledger entries into distributed ledgers upon detecting candidate events associated with a monitored node that is associated with the multi-node system (e.g., distributed ledger system), and in turn, the one or more validator nodes can initiate a validation request to validate the candidate events. In some embodiments, using a distributed ledger to maintain candidate events associated with a monitored node and using a hierarchical validation workflow that is based at least in part on a validator effect measure associated with the validator nodes lead to more efficient use of computational and network resources because they require fewer network transmissions between the distributed ledger system computing entity and monitoring node computing entities (e.g., miner node computing entities and validator node computing entities). 
In some embodiments, without using the ledger-based techniques described herein, establishing occurrence of candidate events associated with a particular monitored node would require significant network traffic in the form of multiple request network transmissions and multiple response network transmissions, where each request network transmission is from a central computing entity to a respective monitored node computing entity to inquire about the occurrence and/or details of candidate events associated with the particular monitored node, and where each response network transmission is a response by a monitored node computing entity to a request network transmission that is received by the central computing entity and describes data regarding occurrence and/or details of candidate events associated with the particular monitored node. In contrast, using various ledger-based techniques described herein eliminates the need for the noted request network transmissions and response network transmissions to establish the occurrence of a candidate event of the particular monitored node and requires fewer network transmissions, where each network transmission is either a ledger entry request for a candidate event associated with a monitored node that is transmitted by a miner node computing entity to a central computing entity, such as the distributed ledger system computing entity, or a validation request to validate a candidate event that is transmitted by a validator node computing entity to the central computing entity. This means that, by using the various ledger-based techniques described herein, the number of network transmissions needed to determine/maintain data about candidate events associated with a monitored node is decreased, which in turn means that using the noted ledger-based techniques leads to more efficient use of computational/networking resources. Moreover, using the hierarchical validation workflow techniques described herein reduces the computational complexity associated with performing effective monitoring by using a set of operations that have lower computational complexity. II. Definitions A "distributed ledger" may refer to a data construct that describes a digital ledger that comprises a growing list of data objects in the form of records or blocks, and that may be used to record events across multiple computing entities. According to various embodiments of the present disclosure, a distributed ledger may be a blockchain that comprises a growing list of blocks that may be linked together based at least in part on cryptography. Each block, for example, may include a cryptographic hash of the previous block, a timestamp, and event data. In some embodiments, a distributed ledger may be maintained by a distributed ledger system computing entity that is associated with miner node computing entities and validator node computing entities associated with a plurality of nodes adhering to a consensus protocol for committing candidate events associated with monitored nodes to a distributed ledger. In some embodiments, an example distributed ledger may be configured to maintain/store candidate events. In some embodiments, a candidate event maintained/stored in an example distributed ledger may represent a block in a blockchain and may establish proof of occurrence of the candidate event. 
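A minimal sketch of the block structure just described (a cryptographic hash of the previous block, a timestamp, and event data) is given below; the serialization and field names are assumptions made for illustration.

    import hashlib
    import json
    import time

    def block_hash(block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def append_block(ledger, event_data):
        previous_hash = block_hash(ledger[-1]) if ledger else "0" * 64
        block = {
            "previous_hash": previous_hash,   # cryptographic hash of the previous block
            "timestamp": time.time(),
            "event_data": event_data,         # e.g., a committed candidate event
        }
        ledger.append(block)
        return block

    ledger = []
    append_block(ledger, {"monitored_node": "node-1", "event_type": "physical_activity"})
    append_block(ledger, {"monitored_node": "node-1", "event_type": "medical_visit"})
    assert ledger[1]["previous_hash"] == block_hash(ledger[0])   # blocks are linked by hashes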
The term “distributed ledger system” may refer to a data construct that describes, for a monitored node associated with the distributed ledger system, one or more events associated with the monitored node that is committed to a distributed ledger maintained by a distributed ledger system computing entity associated with the distributed ledger system. In some embodiments, the one or more events may include one or more candidate events associated with a monitored node, where each committed candidate event (e.g., candidate event added to a distributed ledger) may be indicative of proof of occurrence of the candidate event. As an example, in some operational implementations, where candidate events may comprise health-related events, each committed candidate event to a distributed ledger associated with a monitored node may be indicative of proof of occurrence of the candidate event, and individually or collectively indicative of proof of health with respect to the entity associated with the monitored node. As another example, in some operational implementations, where candidate events may comprise E-commerce-related events, each committed candidate event to a distributed ledger associated with a monitored node may be indicative of proof of occurrence of the candidate event, and individually or collectively indicative of proof of occurrence of Business-to-Business (B2B) transaction, Business-to-Customer (B2C) transaction, Customer-to-Business (C2B) transaction, Customer-to-Customer (C2C) transaction, and/or the like. A distributed ledger system may comprise a plurality of nodes that includes one or more miner nodes and one or more validator nodes adhering to a consensus protocol for committing candidate events associated with monitored nodes to corresponding distributed ledgers. In some embodiments, the plurality of nodes comprises at least some nodes being miner nodes and validator nodes. In some embodiments, the distributed ledger system computing entity106may maintain a distributed ledger associated with a plurality of monitored nodes. In some embodiments, the distributed ledger system computing entity106may maintain a plurality of distributed ledgers, where each distributed ledger is associated with a monitored node of a plurality of monitored nodes. The term “monitored node” may refer to a data construct that describes a digital identifier associated with a computing entity (e.g., also referred to herein as monitored node computing entity) representative of a real-world entity or virtual entity whose candidate events may be committed to a distributed ledger that is maintained by a distributed ledger system computing entity. An example of a monitored node is a digital identifier associated with a computing entity representative of an individual, a group of individuals, an organization, a device, and/or the like. In an operational example, each monitored node may comprise a digital identifier associated with a computing entity representative of a health insurance member of a health insurance provider (e.g., an individual enrolled in a health insurance plan that is offered by the health insurance provider), where for each digital identifier, each candidate event committed to a distributed ledger may represent proof of occurrence of a health event with respect to the health insurance member, and individually or collectively represent proof of health for the health insurance member. A monitored node may be monitored by one or more miner nodes and/or validator nodes. 
In some embodiments, a monitored node may be monitored by a miner node and/or validator node in that a miner node and/or validator node (via associated computing entity) may collect data associated with (or otherwise that describes) candidate events. Accordingly, a monitored node may also describe a digital identifier associated with a computing entity representative of a real-world entity and/or virtual entity, where the monitored node is associated with one or more miner nodes and/or validator nodes that can collect (and/or otherwise access), via one or more computing entities associated with the monitored node, data associated with (or otherwise that describes) candidate events associated with the monitored node. A monitored node, for example, may be associated with one or more computing entities that may in turn be associated with a miner node and/or validator node, wherein the miner node and/or validator node can collect data associated with the monitored node based at least in part on the one or more computing entities associated with the monitored node and the miner node and/or validator node. The one or more computing entities, for example, may include a smart phone, a smart watch, a smart refrigerator, Internet of Things device, a smart home device, a global positioning system (GPS) device and/or the like. A monitored node, for example, may be associated with one or more computing entities that may enable a miner node and/or validator node to collect candidate event data associated with the monitored node via one or more software applications, software programs and/or the like installed therein, where the noted one or more software applications, software programs and/or the like may be associated with the monitoring entity (e.g., an entity associated with a miner node or a validator node, for example, may comprise a software provider of the noted software application, software program, and/or the like, and/or a provider of the computing entity itself.). The term “miner node” may refer to a data construct that describes a digital identifier associated with a computing entity representative (also referred to herein as miner node computing entity) of a real-world entity and/or a virtual entity that can initiate/propose a ledger entry for a candidate event associated with a monitored node. A miner node may be associated with one or more monitored nodes in that the miner node can collect, via one or more computing entities, associated with the miner node, miner event data for candidate events associated with the one or more monitored nodes. In some embodiments, when a miner node computing entity can initiate a ledger entry, a successful ledger entry request by the miner node computing entity may cause a distributed ledger system computing entity to generate a validation score that comprises a baseline score, where, for example, the baseline score may be generated based at least in part on a miner effect measure for the miner node. The term “validator node” may refer to a data construct that describes a digital identifier associated with a computing entity representative (also referred to herein as validator node computing entity) of a real-world entity and/or a virtual entity that can contribute to validation (e.g., endorsement) of a candidate event associated with a monitored node, in order to commit the candidate event to a distributed ledger maintained by a distributed ledger system computing entity. 
A validator node may be associated with one or more monitored nodes in that the validator node can collect, via one or more computing entities associated with the validator node, validator event data for candidate events associated with the one or more monitored nodes. In some embodiments, when a validator node computing entity can contribute to a validation score for a candidate event, a distributed ledger system computing entity can apply a validator effect measure associated with the validator node computing entity to the validation score generated for the candidate event. In some embodiments, applying a validator effect measure to a validation score for a candidate event comprises increasing the validation score for the candidate event based at least in part on the validator effect measure, where the validator effect measure, for example, may comprise and/or may be translated to a numerical value/score. The term "miner event data" may refer to data (e.g., that may include metadata) that describes attributes of a candidate event that is proposed by a miner node to commit to a distributed ledger. Miner event data, for example, may comprise data collected by the miner node during occurrence of an event (e.g., candidate event) associated with a monitored node, such as data collected during a physical activity event that describes attributes of the event. The term "validator event data" may refer to data (e.g., that may include metadata) that describes attributes of a candidate event that is used by a validator node in a validation iteration to validate the candidate event. Validator event data, for example, may describe attributes of a candidate event presented by a validator node (e.g., via an associated computing entity) as representative of the candidate event, and can be used in a validation iteration for validating the candidate event. The term "validator effect measure" may refer to a data construct that describes an estimated measure, such as a score, assigned to a validator node, that can be applied to a validation score for a candidate event. For example, a validation score associated with a candidate event may be updated based at least in part on a validator effect measure of a validator node associated with a validation iteration that satisfies one or more criteria. In some embodiments, a validator node may be associated with a validator effect measure for each event type of a plurality of event types. In some embodiments, a validator effect measure for a given validator node may be indicative of a trust level and/or confidence level associated with the validator node based at least in part on one or more validation effect measure criteria. The term "miner effect measure" may refer to a data construct that describes an estimated measure, such as a score, assigned to a miner node, that may be used by a distributed ledger system to initialize a validation score for a candidate event. For example, in some embodiments, a distributed ledger system may generate a validation score for a candidate event based at least in part on a baseline score, where the baseline score may be generated based at least in part on the miner effect measure associated with the miner node for the ledger entry request. In some embodiments, a miner node may be associated with a miner effect measure for each event type of a plurality of event types. 
In some embodiments, where a node may represent both a miner node and a validator node, a miner effect measure for a miner node may comprise a validator effect measure for the miner node. The term “validation score” may refer to a data construct that describes a score assigned to a candidate event proposed (e.g., by a miner node computing entity associated with a miner node) to be added to a distributed ledger, and that is used by a distributed ledger system to determine if the candidate event should be added to the distributed ledger based at least in part on comparing the validation score with a consensus threshold. In various embodiments, in response to a ledger entry request for a candidate event, a validation score is generated (e.g., initialized) for the candidate event, where the validation score comprises a baseline score that may be incrementally updated based at least in part on applying a validator effect measure of one or more validator nodes that satisfy one or more criteria (e.g., having monitoring data that matches attributes of the candidate event). The term “consensus threshold” may refer to a data construct that describes criteria, such as a predetermined score, that may be used by a distributed ledger system to determine if a proposed candidate event should be committed to a distributed ledger associated with the distributed ledger system. A distributed ledger system may compare the output (e.g., an updated validation score) of each validation iteration of a hierarchical validation workflow with the consensus threshold to determine if a candidate event (e.g., described by miner event data) should be committed to (e.g., added to) a distributed ledger. The term “candidate event” may refer to a data construct that describes an event (e.g., activity, incident, interaction, and/or the like) having a designation, such as an event type designation that is within an event type space associated with a distributed ledger system, where an event type space associated with a distributed ledger system may comprise a plurality of event types, and may indicate events that may be committed to a distributed ledger associated with the distributed ledger system. In some embodiments, an event type space may be associated with a category of events. For example, a distributed ledger system may be associated with an event type space, where each event type within the event type space comprises a health-related event type (e.g., physical activity event type, medical visit event type, laboratory visit event type, and/or the like). In the noted example, each candidate event committed to a distributed ledger may be used to establish proof of health for the associated monitored node. As another example, a distributed ledger system may be associated with an event type space, where each event type within the event type space comprises an E-commerce-related event type (e.g., a purchase event type, a supply event type, a delivery event type, and/or the like). In the noted example, each candidate event committed to a distributed ledger may be used to establish proof of transaction. In some embodiments, each event type of a plurality of event types may be associated with a consensus threshold. In some embodiments, various event types may be associated with different consensus thresholds. For example, in some embodiments, a candidate event may be recorded in a distributed ledger based at least in part on its associated validation score satisfying (e.g., exceeding) a consensus threshold for the event type of the candidate event. 
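Continuing the non-limiting illustration above, a validation score may be initialized from a baseline score derived from a miner effect measure and later compared against a per-event-type consensus threshold to produce a ledger entry determination. In the Python sketch below, the threshold values, the additive update, and the function names are assumptions made solely for illustration.

CONSENSUS_THRESHOLDS = {
    "physical_activity": 0.75,
    "medical_visit": 0.60,
}

def initialize_validation_score(miner_effect_measure: float) -> float:
    # The baseline score is generated from the miner effect measure for the ledger entry request.
    return miner_effect_measure

def ledger_entry_determination(validation_score: float, event_type: str) -> bool:
    # Affirmative (True) when the validation score satisfies the event type's consensus threshold.
    return validation_score >= CONSENSUS_THRESHOLDS[event_type]

score = initialize_validation_score(0.30)   # baseline from the miner effect measure
score += 0.25                               # a validator effect measure applied after a matching iteration
print(ledger_entry_determination(score, "physical_activity"))  # False: 0.55 is below 0.75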
The term “ledger entry request” may refer to a data construct that describes a request that is transmitted to a distributed ledger system computing entity by a miner node computing entity associated with a miner node, where the request may comprise a request to commit a candidate event to a distributed ledger associated with a monitored node. In some embodiments, a ledger entry request comprises one or more identifying fields associated with the corresponding monitored node associated with the ledger entry request. For example, where the monitored node is associated with a computing entity representative of an individual, the ledger entry request may include identifying data (e.g., a digital identifier) configured to uniquely identify the individual. Additionally, and/or alternatively, in some embodiments, the ledger entry request may include identifying data (e.g., a digital identifier) configured to uniquely identify the miner node. Additionally, and/or alternatively, in some embodiments, the ledger entry request may include authentication data (e.g., a password, a passcode, a PIN number, and/or the like) that is configured to, if verified, demonstrate that the miner node is authorized to commit candidate events to a distributed ledger associated with the monitored node. In some embodiments, each ledger entry request may be broadcast (e.g., made accessible) to one or more other miner nodes and/or validator nodes via one or more validation queues (e.g., a primary validation queue, a secondary validation queue, a tertiary validation queue, and/or the like). The term “hierarchical validation workflow” may refer to a process comprising an ordered sequence of validation iterations that may be executed in order to determine if a proposed candidate event should be recorded in a distributed ledger maintained by a distributed ledger system computing entity. In some embodiments, a hierarchical validation workflow may be executed in response to a ledger entry request by a miner node computing entity associated with a miner node. For example, a hierarchical validation workflow may be triggered in response to a validation request, wherein the validation request is triggered based at least in part on the ledger entry request. In some embodiments, a hierarchical validation workflow may comprise an ordered sequence of L validation iterations, where each of one or more ith non-initial validation iterations is performed based at least in part on an association of an (i−1)th validation iteration with a non-affirmative validation determination. 
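As a non-limiting illustration of the ordered sequence just described, the following Python sketch performs validation iterations in order and stops as soon as an iteration yields an affirmative validation determination; the callable-based interface, the example effect measures, and the threshold value are assumptions made for this sketch only.

from typing import Callable, List

def hierarchical_validation_workflow(
    baseline_score: float,
    iterations: List[Callable[[float], float]],  # each iteration returns an updated validation score
    consensus_threshold: float,
) -> bool:
    # Returns True (affirmative) as soon as the validation score satisfies the consensus threshold;
    # each non-initial iteration runs only because the preceding one was non-affirmative.
    score = baseline_score
    for run_iteration in iterations:
        score = run_iteration(score)
        if score >= consensus_threshold:   # affirmative validation determination
            return True                    # later iterations are not performed
    return False                           # non-affirmative after all L iterations

# Example: three validator nodes whose effect measures are applied only when their data matches.
iterations = [
    lambda s: s + 0.20,   # first validator matched; its effect measure is applied
    lambda s: s,          # second validator had no matching monitoring data
    lambda s: s + 0.30,   # third validator matched
]
print(hierarchical_validation_workflow(0.30, iterations, consensus_threshold=0.75))  # True after the third iteration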
For example, where a first validation iteration is associated with a non-affirmative validation determination output, a second validation iteration may be performed based at least in part on association of the first validation iteration with a non-affirmative validation determination output for the first validation iteration; a third validation iteration may be performed based at least in part on association of the second validation iteration with a non-affirmative validation determination output for the second validation iteration; a fourth validation iteration may be performed based at least in part on association of the third validation iteration with a non-affirmative validation determination output for the third validation iteration; and a fifth validation iteration may not be performed based at least in part on association of the fourth validation iteration with an affirmative validation determination output for the fourth validation iteration. As another example, where a first validation iteration is associated with a non-affirmative validation determination output, a second validation iteration may be performed based at least in part on association of the first validation iteration with a non-affirmative validation determination output for the first validation iteration; a third validation iteration may be performed based at least in part on association of the second validation iteration with a non-affirmative validation determination output for the second validation iteration; and a fourth validation iteration may not be performed based at least in part on association of the third validation iteration with an affirmative validation determination output. In some embodiments, each validation iteration may be associated with a unique validator node. The term “validation iteration” may refer to a validation operation (e.g., a computer-executed operation) that may be performed in response to a ledger entry request to commit a candidate event to a distributed ledger for a monitored node associated with a distributed ledger system. In some embodiments, a validation iteration is configured to incrementally update a validation score for a candidate event based at least in part on miner event data reported by a miner node computing entity and validator event data reported by a validator node, where the updated validation score may be used by a distributed ledger system computing entity to determine a ledger entry determination for the candidate event. In some embodiments, a validation iteration, for example, may include comparing miner event data with validator event data to generate a matching score that may in turn be used to determine if a validator effect measure associated with the validator node should be applied. A validation iteration, for example, may include determining if the validator node is associated with monitoring data (e.g., miner event data) indicative of the occurrence of the candidate event, and in response to determining that the validator node is associated with such data, comparing the monitoring data (e.g., validator event data) with the miner event data to determine if the miner event data and validator event data match. In some embodiments, the validation score for a candidate event can be incrementally updated (e.g., increased) based at least in part on a validator effect measure associated with the validator node if it is determined that the validator event data matches the miner event data. 
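A single validation iteration of the kind described above might, purely as an illustration, be sketched as follows; the attribute names, the time and duration tolerances, and the reduction of the matching score to a simple match/no-match decision are assumptions of this sketch rather than requirements of the disclosure.

from typing import Dict, Optional

def events_match(miner_event: Dict, validator_event: Dict, tolerance_s: int = 300) -> bool:
    # A simple matching rule over location, temporal, and duration attributes of the candidate event.
    return (
        miner_event["location"] == validator_event["location"]
        and abs(miner_event["start_ts"] - validator_event["start_ts"]) <= tolerance_s
        and abs(miner_event["duration_s"] - validator_event["duration_s"]) <= tolerance_s
    )

def validation_iteration(validation_score: float,
                         miner_event: Dict,
                         validator_event: Optional[Dict],
                         validator_effect_measure: float) -> float:
    # Incrementally update the validation score only if the validator corroborates the candidate event.
    if validator_event is None:                     # validator has no monitoring data for this event
        return validation_score
    if events_match(miner_event, validator_event):  # validator event data matches the miner event data
        return validation_score + validator_effect_measure
    return validation_score

miner_event = {"location": "gym-42", "start_ts": 1_700_000_000, "duration_s": 1800}
validator_event = {"location": "gym-42", "start_ts": 1_700_000_060, "duration_s": 1750}
print(validation_iteration(0.30, miner_event, validator_event, 0.25))  # 0.55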
Examples of attributes of a candidate event may include an environmental attribute, such as a location attribute (e.g., location of occurrence of the candidate event), temporal data (e.g., date and/or time of occurrence of the candidate event), duration of the candidate event, and/or the like. The term “non-affirmative validation determination” may refer to a data construct that describes output of a validation iteration performed with respect to a candidate event, and that indicates a validation score that fails to satisfy the consensus threshold for the candidate event. The term “affirmative validation determination” may refer to a data construct that describes output of a validation iteration performed with respect to a candidate event, and that indicates a validation score for the candidate event that satisfies the consensus threshold for the candidate event. The term “ledger entry determination” may refer to a data construct that describes a result of a ledger entry request based at least in part on a validation score for the candidate event associated with the ledger entry request. In some embodiments, an affirmative ledger entry determination may describe a ledger entry request for a candidate event associated with a validation score that satisfies the consensus threshold for the candidate event, and a non-affirmative ledger entry determination may describe a ledger entry request for a candidate event associated with a validation score that fails to satisfy a consensus threshold for the candidate event. In some embodiments, in response to an affirmative ledger entry determination, a distributed ledger system computing entity may initiate execution of a write command to commit the miner event data for the candidate event to a corresponding distributed ledger.

III. Computer Program Products, Methods, and Computing Entities

Embodiments of the present disclosure may be implemented in various ways, including as computer program products that comprise articles of manufacture. Such computer program products may include one or more software components including, for example, software objects, methods, data structures, or the like. A software component may be coded in any of a variety of programming languages. An illustrative programming language may be a lower-level programming language such as an assembly language associated with a particular hardware architecture and/or operating system platform. A software component comprising assembly language instructions may require conversion into executable machine code by an assembler prior to execution by the hardware architecture and/or platform. Another example programming language may be a higher-level programming language that may be portable across multiple architectures. A software component comprising higher-level programming language instructions may require conversion to an intermediate representation by an interpreter or a compiler prior to execution. Other examples of programming languages include, but are not limited to, a macro language, a shell or command language, a job control language, a script language, a database query or search language, and/or a report writing language. In one or more example embodiments, a software component comprising instructions in one of the foregoing examples of programming languages may be executed directly by an operating system or other software component without having to be first transformed into another form. A software component may be stored as a file or other data storage construct. 
Software components of a similar type or functionally related may be stored together such as, for example, in a particular directory, folder, or library. Software components may be static (e.g., pre-established or fixed) or dynamic (e.g., created or modified at the time of execution). A computer program product may include a non-transitory computer-readable storage medium storing applications, programs, program modules, scripts, source code, program code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like (also referred to herein as executable instructions, instructions for execution, computer program products, program code, and/or similar terms used herein interchangeably). Such non-transitory computer-readable storage media include all computer-readable media (including volatile and non-volatile media). In one embodiment, a non-volatile computer-readable storage medium may include a floppy disk, flexible disk, hard disk, solid-state storage (SSS) (e.g., a solid state drive (SSD), solid state card (SSC), solid state module (SSM), enterprise flash drive), magnetic tape, or any other non-transitory magnetic medium, and/or the like. A non-volatile computer-readable storage medium may also include a punch card, paper tape, optical mark sheet (or any other physical medium with patterns of holes or other optically recognizable indicia), compact disc read only memory (CD-ROM), compact disc-rewritable (CD-RW), digital versatile disc (DVD), Blu-ray disc (BD), any other non-transitory optical medium, and/or the like. Such a non-volatile computer-readable storage medium may also include read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory (e.g., Serial, NAND, NOR, and/or the like), multimedia memory cards (MMC), secure digital (SD) memory cards, SmartMedia cards, CompactFlash (CF) cards, Memory Sticks, and/or the like. Further, a non-volatile computer-readable storage medium may also include conductive-bridging random access memory (CBRAM), phase-change random access memory (PRAM), ferroelectric random-access memory (FeRAM), non-volatile random-access memory (NVRAM), magnetoresistive random-access memory (MRAM), resistive random-access memory (RRAM), Silicon-Oxide-Nitride-Oxide-Silicon memory (SONOS), floating junction gate random access memory (FJG RAM), Millipede memory, racetrack memory, and/or the like. In one embodiment, a volatile computer-readable storage medium may include random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), fast page mode dynamic random access memory (FPM DRAM), extended data-out dynamic random access memory (EDO DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), double data rate type two synchronous dynamic random access memory (DDR2 SDRAM), double data rate type three synchronous dynamic random access memory (DDR3 SDRAM), Rambus dynamic random access memory (RDRAM), Twin Transistor RAM (TTRAM), Thyristor RAM (T-RAM), Zero-capacitor (Z-RAM), Rambus in-line memory module (RIMM), dual in-line memory module (DIMM), single in-line memory module (SIMM), video random access memory (VRAM), cache memory (including various levels), flash memory, register memory, and/or the like. 
It will be appreciated that where embodiments are described to use a computer-readable storage medium, other types of computer-readable storage media may be substituted for or used in addition to the computer-readable storage media described above. As should be appreciated, various embodiments of the present disclosure may also be implemented as methods, apparatus, systems, computing devices, computing entities, and/or the like. As such, embodiments of the present disclosure may take the form of an apparatus, system, computing device, computing entity, and/or the like executing instructions stored on a computer-readable storage medium to perform certain steps or operations. Thus, embodiments of the present disclosure may also take the form of an entirely hardware embodiment, an entirely computer program product embodiment, and/or an embodiment that comprises a combination of computer program products and hardware performing certain steps or operations. Embodiments of the present disclosure are described below with reference to block diagrams and flowchart illustrations. Thus, it should be understood that each block of the block diagrams and flowchart illustrations may be implemented in the form of a computer program product, an entirely hardware embodiment, a combination of hardware and computer program products, and/or apparatus, systems, computing devices, computing entities, and/or the like carrying out instructions, operations, steps, and similar words used interchangeably (e.g., the executable instructions, instructions for execution, program code, and/or the like) on a computer-readable storage medium for execution. For example, retrieval, loading, and execution of code may be performed sequentially such that one instruction is retrieved, loaded, and executed at a time. In some exemplary embodiments, retrieval, loading, and/or execution may be performed in parallel such that multiple instructions are retrieved, loaded, and/or executed together. Thus, such embodiments can produce specifically-configured machines performing the steps or operations specified in the block diagrams and flowchart illustrations. Accordingly, the block diagrams and flowchart illustrations support various combinations of embodiments for performing the specified instructions, operations, or steps.

IV. Exemplary System Architecture

FIG. 1 is a schematic diagram of an example architecture 100 for a distributed ledger system. As shown in FIG. 1, the distributed ledger system 100 comprises a plurality of monitored node computing entities 102, a plurality of monitoring node computing entities 104 (each monitoring node computing entity 104 may represent a miner node computing entity and/or a validator node computing entity), and a distributed ledger system computing entity 106. The distributed ledger system computing entity 106 may be configured to communicate with the plurality of monitoring node computing entities 104. In some embodiments, the distributed ledger system computing entity 106 can communicate with the plurality of monitored node computing entities 102. As further described below, a miner node may describe a digital identifier associated with a computing entity (e.g., miner node computing entity) representative of a real-world entity and/or a virtual entity that can initiate (e.g., propose) data entry of a candidate event associated with a monitored node in a distributed ledger maintained by the distributed ledger system computing entity 106. 
As further described below, a validator node may describe a digital identifier associated with a computing entity (e.g., validator node computing entity) representative of a real-world entity and/or a virtual entity that can contribute to validation (e.g., confirmation, endorsement, and/or the like) of initiated/proposed data entry of a candidate event in a distributed ledger maintained by the distributed ledger system computing entity 106. As further described below, a monitored node may describe a digital identifier associated with a computing entity (e.g., monitored node computing entity) representative of a real-world entity or virtual entity whose candidate events may be committed to a distributed ledger that is maintained by the distributed ledger system computing entity 106. The distributed ledger system computing entity 106 may be configured to, for a monitored node, maintain, in a distributed ledger associated with the monitored node, one or more candidate events associated with the monitored node that satisfy corresponding consensus thresholds for the respective candidate events. The distributed ledger system computing entity 106 may be associated with one or more validation queues (e.g., a primary validation queue, a secondary validation queue, a tertiary validation queue, and/or the like) and may be configured to enable access to one or more validator nodes in order to execute a validation iteration. In some embodiments, the distributed ledger system computing entity 106 may maintain, in a repository, system data associated with the distributed ledger system computing entity 106. In some embodiments, the system data may include: data that describes a plurality of event types; data that describes, for each event type, a consensus threshold; data that describes, for each miner node and validator node, a corresponding miner effect measure and/or validator effect measure; data that describes, for each miner node and validator node, permissions associated with the miner node and validator node, that is representative of types of access within the distributed ledger system that a respective miner node and respective validator node is associated with; and data that describes, for a monitored node, permissions associated with the monitored node, that is representative of types of access within the distributed ledger system that a respective monitored node is associated with. For example, a given miner node and/or validator node may be associated with a particular level of access with respect to a given distributed ledger, whereby the miner node and/or validator node may be granted access to certain data but may not be granted access to other data.

A. Exemplary Distributed Ledger System Computing Entity

FIG. 2 provides a schematic of a distributed ledger system computing entity 106 according to one embodiment of the present disclosure. In general, the terms computing entity, computer, entity, device, system, and/or similar words used herein interchangeably may refer to, for example, one or more computers, computing entities, desktops, mobile phones, tablets, phablets, notebooks, laptops, distributed systems, kiosks, input terminals, servers or server networks, blades, gateways, switches, processing devices, processing entities, set-top boxes, relays, routers, network access points, base stations, the like, and/or any combination of devices or entities adapted to perform the functions, operations, and/or processes described herein. 
Such functions, operations, and/or processes may include, for example, transmitting, receiving, operating on, processing, displaying, storing, determining, creating/generating, monitoring, evaluating, comparing, and/or similar terms used herein interchangeably. In one embodiment, these functions, operations, and/or processes can be performed on data, content, information, and/or similar terms used herein interchangeably. As indicated, in one embodiment, the distributed ledger system computing entity106may also include one or more communications interfaces220for communicating with various computing entities, such as by communicating data, content, information, and/or similar terms used herein interchangeably that can be transmitted, received, operated on, processed, displayed, stored, and/or the like. As shown inFIG.2, in one embodiment, the distributed ledger system computing entity106may include, or be in communication with, one or more processing elements205(also referred to as processors, processing circuitry, and/or similar terms used herein interchangeably) that communicate with other elements within the distributed ledger system computing entity106via a bus, for example. As will be understood, the processing element205may be embodied in a number of different ways. For example, the processing element205may be embodied as one or more complex programmable logic devices (CPLDs), microprocessors, multi-core processors, coprocessing entities, application-specific instruction-set processors (ASIPs), microcontrollers, and/or controllers. Further, the processing element205may be embodied as one or more other processing devices or circuitry. The term circuitry may refer to an entirely hardware embodiment or a combination of hardware and computer program products. Thus, the processing element205may be embodied as integrated circuits, application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), programmable logic arrays (PLAs), hardware accelerators, other circuitry, and/or the like. As will therefore be understood, the processing element205may be configured for a particular use or configured to execute instructions stored in volatile or non-volatile media or otherwise accessible to the processing element205. As such, whether configured by hardware or computer program products, or by a combination thereof, the processing element205may be capable of performing steps or operations according to embodiments of the present disclosure when configured accordingly. In one embodiment, the distributed ledger system computing entity106may further include, or be in communication with, non-volatile media (also referred to as non-volatile storage, memory, memory storage, memory circuitry and/or similar terms used herein interchangeably). In one embodiment, the non-volatile storage or memory may include one or more non-volatile storage or memory media210, including, but not limited to, hard disks, ROM, PROM, EPROM, EEPROM, flash memory, MMCs, SD memory cards, Memory Sticks, CBRAM, PRAM, FeRAM, NVRAM, MRAM, RRAM, SONOS, FJG RAM, Millipede memory, racetrack memory, and/or the like. As will be recognized, the non-volatile storage or memory media may store databases, database instances, database management systems, data, applications, programs, program modules, scripts, source code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like. 
The term database, database instance, database management system, and/or similar terms used herein interchangeably may refer to a collection of records or data that is stored in a computer-readable storage medium using one or more database models, such as a hierarchical database model, network model, relational model, entity-relationship model, object model, document model, semantic model, graph model, and/or the like. In one embodiment, the distributed ledger system computing entity106may further include, or be in communication with, volatile media (also referred to as volatile storage, memory, memory storage, memory circuitry and/or similar terms used herein interchangeably). In one embodiment, the volatile storage or memory may also include one or more volatile storage or memory media215, including, but not limited to, RAM, DRAM, SRAM, FPM DRAM, EDO DRAM, SDRAM, DDR SDRAM, DDR2 SDRAM, DDR3 SDRAM, RDRAM, TTRAM, T-RAM, Z-RAM, RIMM, DIMM, SIMM, VRAM, cache memory, register memory, and/or the like. As will be recognized, the volatile storage or memory media may be used to store at least portions of the databases, database instances, database management systems, data, applications, programs, program modules, scripts, source code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like being executed by, for example, the processing element205. Thus, the databases, database instances, database management systems, data, applications, programs, program modules, scripts, source code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like may be used to control certain aspects of the operation of the distributed ledger system computing entity106with the assistance of the processing element205and operating system. As indicated, in one embodiment, the distributed ledger system computing entity106may also include one or more communications interfaces220for communicating with various computing entities, such as by communicating data, content, information, and/or similar terms used herein interchangeably that can be transmitted, received, operated on, processed, displayed, stored, and/or the like. Such communication may be executed using a wired data transmission protocol, such as fiber distributed data interface (FDDI), digital subscriber line (DSL), Ethernet, asynchronous transfer mode (ATM), frame relay, data over cable service interface specification (DOCSIS), or any other wired transmission protocol. Similarly, the distributed ledger system computing entity106may be configured to communicate via wireless external communication networks using any of a variety of protocols, such as general packet radio service (GPRS), Universal Mobile Telecommunications System (UMTS), Code Division Multiple Access 2000 (CDMA2000), CDMA2000 1× (1×RTT), Wideband Code Division Multiple Access (WCDMA), Global System for Mobile Communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), Evolved Universal Terrestrial Radio Access Network (E-UTRAN), Evolution-Data Optimized (EVDO), High Speed Packet Access (HSPA), High-Speed Downlink Packet Access (HSDPA), IEEE 802.11 (Wi-Fi), Wi-Fi Direct, 802.16 (WiMAX), ultra-wideband (UWB), infrared (IR) protocols, near field communication (NFC) protocols, Wibree, Bluetooth protocols, wireless universal serial bus (USB) protocols, and/or any other wireless protocol. 
Although not shown, the distributed ledger system computing entity 106 may include, or be in communication with, one or more input elements, such as a keyboard input, a mouse input, a touch screen/display input, motion input, movement input, audio input, pointing device input, joystick input, keypad input, and/or the like. The distributed ledger system computing entity 106 may also include, or be in communication with, one or more output elements (not shown), such as audio output, video output, screen/display output, motion output, movement output, and/or the like.

B. Exemplary Monitored Node Computing Entity

FIG. 3 provides an illustrative schematic representative of a monitored node computing entity 102 that can be used in conjunction with embodiments of the present disclosure. In general, the terms device, system, computing entity, entity, and/or similar words used herein interchangeably may refer to, for example, one or more computers, computing entities, desktops, mobile phones, tablets, phablets, notebooks, laptops, distributed systems, kiosks, input terminals, servers or server networks, blades, gateways, switches, processing devices, processing entities, set-top boxes, relays, routers, network access points, base stations, the like, and/or any combination of devices or entities adapted to perform the functions, operations, and/or processes described herein. Monitored node computing entities 102 can be operated by various parties. As shown in FIG. 3, the monitored node computing entity 102 can include an antenna 312, a transmitter 304 (e.g., radio), a receiver 306 (e.g., radio), and a processing element 308 (e.g., CPLDs, microprocessors, multi-core processors, coprocessing entities, ASIPs, microcontrollers, and/or controllers) that provides signals to and receives signals from the transmitter 304 and receiver 306, correspondingly. The signals provided to and received from the transmitter 304 and the receiver 306, correspondingly, may include signaling information/data in accordance with air interface standards of applicable wireless systems. In this regard, the monitored node computing entity 102 may be capable of operating with one or more air interface standards, communication protocols, modulation types, and access types. More particularly, the monitored node computing entity 102 may operate in accordance with any of a number of wireless communication standards and protocols, such as those described above with regard to the distributed ledger system computing entity 106. In a particular embodiment, the monitored node computing entity 102 may operate in accordance with multiple wireless communication standards and protocols, such as UMTS, CDMA2000, 1×RTT, WCDMA, GSM, EDGE, TD-SCDMA, LTE, E-UTRAN, EVDO, HSPA, HSDPA, Wi-Fi, Wi-Fi Direct, WiMAX, UWB, IR, NFC, Bluetooth, USB, and/or the like. Similarly, the monitored node computing entity 102 may operate in accordance with multiple wired communication standards and protocols, such as those described above with regard to the distributed ledger system computing entity 106 via a network interface 320. Via these communication standards and protocols, the monitored node computing entity 102 can communicate with various other entities using concepts such as Unstructured Supplementary Service Data (USSD), Short Message Service (SMS), Multimedia Messaging Service (MMS), Dual-Tone Multi-Frequency Signaling (DTMF), and/or Subscriber Identity Module Dialer (SIM dialer). 
The monitored node computing entity102can also download changes, add-ons, and updates, for instance, to its firmware, software (e.g., including executable instructions, applications, program modules), and operating system. According to one embodiment, the monitored node computing entity102may include location determining aspects, devices, modules, functionalities, and/or similar words used herein interchangeably. For example, the monitored node computing entity102may include outdoor positioning aspects, such as a location module adapted to acquire, for example, latitude, longitude, altitude, geocode, course, direction, heading, speed, universal time (UTC), date, and/or various other information/data. In one embodiment, the location module can acquire data, sometimes known as ephemeris data, by identifying the number of satellites in view and the relative positions of those satellites (e.g., using global positioning systems (GPS)). The satellites may be a variety of different satellites, including Low Earth Orbit (LEO) satellite systems, Department of Defense (DOD) satellite systems, the European Union Galileo positioning systems, the Chinese Compass navigation systems, Indian Regional Navigational satellite systems, and/or the like. This data can be collected using a variety of coordinate systems, such as the Decimal Degrees (DD); Degrees, Minutes, Seconds (DMS); Universal Transverse Mercator (UTM); Universal Polar Stereographic (UPS) coordinate systems; and/or the like. Alternatively, the location information/data can be determined by triangulating the monitored node computing entity's102position in connection with a variety of other systems, including cellular towers, Wi-Fi access points, and/or the like. Similarly, the monitored node computing entity102may include indoor positioning aspects, such as a location module adapted to acquire, for example, latitude, longitude, altitude, geocode, course, direction, heading, speed, time, date, and/or various other information/data. Some of the indoor systems may use various position or location technologies including RFID tags, indoor beacons or transmitters, Wi-Fi access points, cellular towers, nearby computing entities (e.g., smartphones, laptops) and/or the like. For instance, such technologies may include the iBeacons, Gimbal proximity beacons, Bluetooth Low Energy (BLE) transmitters, NFC transmitters, and/or the like. These indoor positioning aspects can be used in a variety of settings to determine the location of someone or something to within inches or centimeters. The monitored node computing entity102may also comprise a user interface (that can include a display316coupled to a processing element308) and/or a user input interface (coupled to a processing element308). For example, the user interface may be a user application, browser, user interface, and/or similar words used herein interchangeably executing on and/or accessible via the monitored node computing entity102to interact with and/or cause display of information/data from the distributed ledger system computing entity106, as described herein. The user input interface can comprise any of a number of devices or interfaces allowing the monitored node computing entity102to receive data, such as a keypad318(hard or soft), a touch display, voice/speech or motion interfaces, or other input device. 
In embodiments including a keypad 318, the keypad 318 can include (or cause display of) the conventional numeric (0-9) and related keys (#, *), and other keys used for operating the monitored node computing entity 102 and may include a full set of alphabetic keys or set of keys that may be activated to provide a full set of alphanumeric keys. In addition to providing input, the user input interface can be used, for example, to activate or deactivate certain functions, such as screen savers and/or sleep modes. The monitored node computing entity 102 can also include volatile storage or memory 322 and/or non-volatile storage or memory 324, which can be embedded and/or may be removable. For example, the non-volatile memory may be ROM, PROM, EPROM, EEPROM, flash memory, MMCs, SD memory cards, Memory Sticks, CBRAM, PRAM, FeRAM, NVRAM, MRAM, RRAM, SONOS, FJG RAM, Millipede memory, racetrack memory, and/or the like. The volatile memory may be RAM, DRAM, SRAM, FPM DRAM, EDO DRAM, SDRAM, DDR SDRAM, DDR2 SDRAM, DDR3 SDRAM, RDRAM, TTRAM, T-RAM, Z-RAM, RIMM, DIMM, SIMM, VRAM, cache memory, register memory, and/or the like. The volatile and non-volatile storage or memory can store databases, database instances, database management systems, data, applications, programs, program modules, scripts, source code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like to implement the functions of the monitored node computing entity 102. As indicated, this may include a user application that is resident on the entity or accessible through a browser or other user interface for communicating with the distributed ledger system computing entity 106 and/or various other computing entities. In another embodiment, the monitored node computing entity 102 may include one or more components or functionality that are the same or similar to those of the distributed ledger system computing entity 106, as described in greater detail above. As will be recognized, these architectures and descriptions are provided for exemplary purposes only and are not limiting to the various embodiments. In various embodiments, the monitored node computing entity 102 may be embodied as an artificial intelligence (AI) computing entity, such as an Amazon Echo, Amazon Echo Dot, Amazon Show, Google Home, and/or the like. Accordingly, the monitored node computing entity 102 may be configured to provide and/or receive information/data from a user via an input/output mechanism, such as a display, a camera, a speaker, a voice-activated input, and/or the like. In certain embodiments, an AI computing entity may comprise one or more predefined and executable program algorithms stored within an onboard memory storage module, and/or accessible over a network. In various embodiments, the AI computing entity may be configured to retrieve and/or execute one or more of the predefined program algorithms upon the occurrence of a predefined trigger event.

C. Exemplary Monitoring Node Computing Entity

FIG. 4 provides a schematic of a monitoring node computing entity 104 according to one embodiment of the present disclosure. As previously noted, each monitoring node computing entity 104 may represent a miner node computing entity and/or a validator node computing entity. 
In general, the terms computing entity, computer, entity, device, system, and/or similar words used herein interchangeably may refer to, for example, one or more computers, computing entities, desktops, mobile phones, tablets, phablets, notebooks, laptops, distributed systems, kiosks, input terminals, servers or server networks, blades, gateways, switches, processing devices, processing entities, set-top boxes, relays, routers, network access points, base stations, the like, and/or any combination of devices or entities adapted to perform the functions, operations, and/or processes described herein. Such functions, operations, and/or processes may include, for example, transmitting, receiving, operating on, processing, displaying, storing, determining, creating/generating, monitoring, evaluating, comparing, and/or similar terms used herein interchangeably. In one embodiment, these functions, operations, and/or processes can be performed on data, content, information, and/or similar terms used herein interchangeably. As indicated, in one embodiment, the monitoring node computing entity104may also include one or more communications interfaces420for communicating with various computing entities, such as by communicating data, content, information, and/or similar terms used herein interchangeably that can be transmitted, received, operated on, processed, displayed, stored, and/or the like. As shown inFIG.4, in one embodiment, the monitoring node computing entity104may include, or be in communication with, one or more processing elements405(also referred to as processors, processing circuitry, and/or similar terms used herein interchangeably) that communicate with other elements within the monitoring node computing entity104via a bus, for example. As will be understood, the processing element405may be embodied in a number of different ways. For example, the processing element405may be embodied as one or more complex programmable logic devices (CPLDs), microprocessors, multi-core processors, coprocessing entities, application-specific instruction-set processors (ASIPs), microcontrollers, and/or controllers. Further, the processing element405may be embodied as one or more other processing devices or circuitry. The term circuitry may refer to an entirely hardware embodiment or a combination of hardware and computer program products. Thus, the processing element405may be embodied as integrated circuits, application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), programmable logic arrays (PLAs), hardware accelerators, other circuitry, and/or the like. As will therefore be understood, the processing element405may be configured for a particular use or configured to execute instructions stored in volatile or non-volatile media or otherwise accessible to the processing element405. As such, whether configured by hardware or computer program products, or by a combination thereof, the processing element405may be capable of performing steps or operations according to embodiments of the present disclosure when configured accordingly. In one embodiment, the monitoring node computing entity104may further include, or be in communication with, non-volatile media (also referred to as non-volatile storage, memory, memory storage, memory circuitry and/or similar terms used herein interchangeably). 
In one embodiment, the non-volatile storage or memory may include one or more non-volatile storage or memory media410, including, but not limited to, hard disks, ROM, PROM, EPROM, EEPROM, flash memory, MMCs, SD memory cards, Memory Sticks, CBRAM, PRAM, FeRAM, NVRAM, MRAM, RRAM, SONOS, FJG RAM, Millipede memory, racetrack memory, and/or the like. As will be recognized, the non-volatile storage or memory media may store databases, database instances, database management systems, data, applications, programs, program modules, scripts, source code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like. The term database, database instance, database management system, and/or similar terms used herein interchangeably may refer to a collection of records or data that is stored in a computer-readable storage medium using one or more database models, such as a hierarchical database model, network model, relational model, entity-relationship model, object model, document model, semantic model, graph model, and/or the like. In one embodiment, the monitoring node computing entity104may further include, or be in communication with, volatile media (also referred to as volatile storage, memory, memory storage, memory circuitry and/or similar terms used herein interchangeably). In one embodiment, the volatile storage or memory may also include one or more volatile storage or memory media415, including, but not limited to, RAM, DRAM, SRAM, FPM DRAM, EDO DRAM, SDRAM, DDR SDRAM, DDR2 SDRAM, DDR3 SDRAM, RDRAM, TTRAM, T-RAM, Z-RAM, DIMM, SIMM, VRAM, cache memory, register memory, and/or the like. As will be recognized, the volatile storage or memory media may be used to store at least portions of the databases, database instances, database management systems, data, applications, programs, program modules, scripts, source code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like being executed by, for example, the processing element405. Thus, the databases, database instances, database management systems, data, applications, programs, program modules, scripts, source code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like may be used to control certain aspects of the operation of the monitoring node computing entity104with the assistance of the processing element405and operating system. As indicated, in one embodiment, the monitoring node computing entity104may also include one or more communications interfaces420for communicating with various computing entities, such as by communicating data, content, information, and/or similar terms used herein interchangeably that can be transmitted, received, operated on, processed, displayed, stored, and/or the like. Such communication may be executed using a wired data transmission protocol, such as fiber distributed data interface (FDDI), digital subscriber line (DSL), Ethernet, asynchronous transfer mode (ATM), frame relay, data over cable service interface specification (DOCSIS), or any other wired transmission protocol. 
Similarly, the monitoring node computing entity 104 may be configured to communicate via wireless external communication networks using any of a variety of protocols, such as general packet radio service (GPRS), Universal Mobile Telecommunications System (UMTS), Code Division Multiple Access 2000 (CDMA2000), CDMA2000 1× (1×RTT), Wideband Code Division Multiple Access (WCDMA), Global System for Mobile Communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), Evolved Universal Terrestrial Radio Access Network (E-UTRAN), Evolution-Data Optimized (EVDO), High Speed Packet Access (HSPA), High-Speed Downlink Packet Access (HSDPA), IEEE 802.11 (Wi-Fi), Wi-Fi Direct, 802.16 (WiMAX), ultra-wideband (UWB), infrared (IR) protocols, near field communication (NFC) protocols, Wibree, Bluetooth protocols, wireless universal serial bus (USB) protocols, and/or any other wireless protocol. Although not shown, the monitoring node computing entity 104 may include, or be in communication with, one or more input elements, such as a keyboard input, a mouse input, a touch screen/display input, motion input, movement input, audio input, pointing device input, joystick input, keypad input, and/or the like. The monitoring node computing entity 104 may also include, or be in communication with, one or more output elements (not shown), such as audio output, video output, screen/display output, motion output, movement output, and/or the like.

V. Exemplary System Operations

As described below, various embodiments of the present disclosure introduce techniques for efficient and scalable monitoring of a multi-node system by using a distributed ledger to describe candidate events associated with a monitored node and a hierarchical validation workflow to validate candidate events associated with ledger entry requests. Using a distributed ledger to describe candidate events associated with a monitored node provides a scalable and efficient way to store candidate event data associated with the monitored node. In some embodiments, one or more miner node computing entities and validator node computing entities can execute operations associated with ledger entry requests to add a candidate event to a distributed ledger in a distributed manner. This means that the distributed ledger system computing entity can maintain the distributed ledger without the need to request data from the monitored node computing entities and/or the one or more miner node computing entities and validator node computing entities, as the miner node computing entities initiate ledger entries into distributed ledgers upon detecting candidate events associated with a monitored node that is associated with the multi-node system, and, in turn, the one or more validator nodes can initiate a validation request to validate the candidate event. In some embodiments, using a distributed ledger to maintain candidate events associated with a monitored node and using a hierarchical validation workflow that is based at least in part on a validator effect measure associated with the validator nodes lead to more efficient use of computational and network resources because they require fewer network transmissions between the distributed ledger system computing entity and monitoring node computing entities (e.g., miner node computing entities and validator node computing entities). 
In some embodiments, without using the ledger-based techniques described herein, establishing occurrence of candidate events associated with a particular monitored node would require significant network transmissions in the form of multiple request network transmissions and multiple response network transmissions, where each request network transmission is from a central computing entity to a respective monitored node computing entity to inquire about the occurrence and/or details of candidate events associated with the particular monitored node, and where each response network transmission is a response by a monitored node computing entity to a request network transmission that is received by the central computing entity and describes data regarding occurrence and/or details of candidate events associated with the particular monitored node. In contrast, using the various ledger-based techniques described herein eliminates the need for the noted request network transmissions and response network transmissions to establish the occurrence of a candidate event of a particular monitored node and requires fewer network transmissions, where each network transmission is a ledger entry request for a candidate event for a monitored node that is transmitted by a miner node computing entity to a central computing entity, such as the distributed ledger system computing entity, or a validation request to validate the candidate event that is transmitted by a validator node computing entity to the central computing entity, such as the distributed ledger system computing entity. This means that, by using the various ledger-based techniques described herein, the number of network transmissions needed to determine/maintain data about candidate events associated with a monitored node is decreased, which in turn means that using the noted ledger-based techniques leads to more efficient use of computational/networking resources. Moreover, using the hierarchical validation workflow techniques described herein reduces the computational complexity associated with performing effective monitoring by using a set of operations that have lower computational complexity. FIG. 5 is an example flowchart diagram of an example process 500 for performing ledger-based monitoring of a multi-node system using a hierarchical validation workflow. Via the various steps/operations of the process 500, the distributed ledger system computing entity 106 can execute a hierarchical validation workflow to establish proof of occurrence of a candidate event associated with a monitored node. The process 500 begins at step/operation 501 when the distributed ledger system computing entity 106 receives, from a miner node computing entity associated with a miner node, a ledger entry request for a candidate event, where the candidate event is associated with a monitored node. The ledger entry request may comprise a request to commit the candidate event to a distributed ledger maintained by the distributed ledger system computing entity 106 (e.g., a request to add a data entry for the candidate event in an associated distributed ledger). The candidate event may describe an event (e.g., activity, incident, interaction, and/or the like) having a designation, such as an event type designation that is within an event type space associated with the distributed ledger system computing entity 106. In some embodiments, a candidate event may be used to establish a proof of an object of interest for an associated monitored node. 
For example, where an event type space comprises a health-related event type as described above, each candidate event committed to a distributed ledger may be used to establish proof of health for the associated monitored node or may otherwise represent proof of health for the associated monitored node. As another example, where an event type space comprises an E-commerce-related event type as described above, each candidate event committed to a distributed ledger may be used to establish proof of transaction. The distributed ledger system computing entity 106 may be associated with a distributed ledger system comprising at least one or more miner nodes and one or more validator nodes, where a miner node may describe a digital identifier associated with a computing entity representative of a real-world entity and/or a virtual entity that can initiate/propose entry of a candidate event associated with a monitored node to a corresponding distributed ledger. A miner node may be associated with one or more computing entities that may in turn be associated with a monitored node, and that may enable the miner node to collect data (such as event data) associated with the monitored node. An example of a miner node may include a digital identifier associated with a computing entity representative of a software provider. Another example of a miner node may include a digital identifier associated with a computing entity representative of a provider of an electronic device, such as a smart watch, a smart phone, an Internet of Things (IoT) device, and/or the like. A monitored node may describe a digital identifier associated with a computing entity (e.g., monitored node computing entity 102) associated with a real-world entity or virtual entity whose candidate events may be monitored by one or more miner nodes and/or validator nodes, and whose noted candidate events may be committed to a distributed ledger maintained by the distributed ledger system computing entity 106. In an operational example, each monitored node may comprise a digital identifier associated with a computing entity representative of a health insurance member of a health insurance provider (e.g., an individual enrolled in a health insurance plan that is offered by the health insurance provider), where, for each digital identifier, each candidate event committed to a distributed ledger may represent proof of occurrence of a health event with respect to the health insurance member, and individually or collectively represent proof of health for the health insurance member. A monitored node may be monitored by one or more miner nodes and/or validator nodes. In some embodiments, a monitored node may be monitored by a miner node and/or validator node in that a miner node and/or validator node can collect data associated with certain events associated with the monitored node, where the certain events may include candidate events. A monitored node, for example, may be associated with one or more computing entities that may in turn be associated with a miner node and/or validator node, wherein the miner node and/or validator node (e.g., associated entity) can collect data associated with the monitored node based at least in part on the one or more computing entities associated with the monitored node and the miner node and/or validator node. The one or more computing entities, for example, may include a smart phone, a smart watch, a smart refrigerator, an IoT device, a smart home device, a global positioning system (GPS) device, and/or the like. 
A monitored node, for example, may be associated with one or more computing entities that may enable a miner node and/or validator node to collect data (e.g., miner event data describing a candidate event) associated with the monitored node via one or more software applications, software programs and/or the like associated with the one or more computing entities (e.g., installed on the one or more computing entities), where the one or more software applications, software programs and/or the like may be associated with the monitoring entity. At step/operation502, the distributed ledger system computing entity106identifies one or more of: (i) the miner node associated with the miner node computing entity, (ii) the monitored node associated with the ledger entry request, or (iii) the distributed ledger associated with the ledger entry request. In some embodiments, a ledger entry request may comprise one or more identifying fields for the miner node that is configured to uniquely identify the miner node. In some embodiments, the one or more identifying fields may include a digital identifier (e.g., representing the miner node). In some embodiments, identifying the miner node associated with the miner node computing entity may comprise identifying the digital identifier from the one or more identifying fields associated with the ledger entry request. Additionally, and/or alternatively, in some embodiments, the ledger entry request may comprise one or more identifying fields for the monitored node that is configured to uniquely identify the monitored node. In some embodiments, the one or more identifying fields may include a digital identifier (e.g., representing the monitored node). In some embodiments, identifying the monitored node associated with the ledger entry request may comprise identifying the digital identifier from the one or more identifying fields associated with the ledger entry request. In some embodiments, the distributed ledger system computing entity106may identify the distributed ledger associated with a ledger entry request based at least in part on the one or more identifying fields for the monitored node. At step/operation503, the distributed ledger system computing entity106authenticates the ledger entry request. In some embodiments, the ledger entry request is authenticated based at least in part on permissions data associated with the miner node associated with the miner node computing entity. For example, a given monitored node may be associated with permissions data that describes access level for the miner node. A particular miner node, for example, may have the requisite permissions with respect to a first monitored node (e.g., to commit a candidate event for the first monitored node in associated distributed ledger) but may not have the requisite permission with respect to a second monitored node. In some embodiments authenticating the ledger entry request may comprise retrieving from a repository storing permissions data for miner nodes, permissions data for the miner node to determine if the miner node is associated with the requisite permissions to commit the candidate event associated with the ledger entry request to the corresponding distributed ledger. In some embodiments, authenticating the ledger entry request may comprise verifying authentication data (e.g., a password, a passcode, a pin number, and/or the like) associated with the ledger entry request. 
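As a non-limiting illustration of steps/operations 502 and 503, and building on the illustrative data model sketched above, the following Python snippet identifies the miner node and the monitored node from the identifying fields of a ledger entry request and checks the miner node's permissions for that monitored node. The repository layout, identifiers, and passcode check are illustrative assumptions rather than requirements of this disclosure.

```python
# Illustrative only: a simple in-memory permissions repository keyed by the
# monitored node identifier; a real deployment could use any persistent store.
PERMISSIONS_REPOSITORY = {
    "monitored-001": {
        "permitted_miners": ["miner-42"],
        "passcodes": {"miner-42": "s3cret"},
    },
}

def authenticate_ledger_entry_request(request: "LedgerEntryRequest") -> bool:
    """Return True when the requesting miner node holds the requisite permissions."""
    permissions = PERMISSIONS_REPOSITORY.get(request.monitored_node_id)
    if permissions is None:
        return False  # unknown monitored node: no corresponding distributed ledger
    if request.miner_node_id not in permissions["permitted_miners"]:
        return False  # miner node lacks permission for this monitored node
    # Verify authentication data (e.g., a passcode) carried by the request.
    expected = permissions["passcodes"].get(request.miner_node_id)
    return expected is not None and request.authentication_data == expected
```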
In some embodiments, the ledger entry request may include authentication data that is configured to, if verified, demonstrate that the miner node associated with the ledger entry request possesses the requisite permission to access the corresponding distributed ledger for the monitored node associated with the ledger entry request or otherwise possesses the requisite permission to propose entry of the candidate event in the corresponding distributed ledger. At step/operation504, the distributed ledger system computing entity106identifies a consensus threshold for the ledger entry request. In some embodiments, the consensus threshold for the ledger entry request may be identified based at least in part on the event type associated with the candidate event. The candidate event may be associated with an event type of a plurality of event types. As described above, a candidate event may be associated with an event type that is within an event type space associated with the distributed ledger system computing entity106, where an event type space may comprise a plurality of event types that indicate events that may be committed to a distributed ledger maintained by the distributed ledger system computing entity106. In some embodiments, an event type space may be associated with a category of events. For example, a distributed ledger system may be associated with an event type space, where each event type within the event type space comprises a health-related event type (e.g., physical activity event type, medical visit event type, laboratory visit event type, and/or the like). As another example, a distributed ledger system may be associated with an event type space, where each event type within the event type space comprises an E-commerce-related event type (e.g., a purchase event type, a supply event type, a delivery event type, and/or the like). In some embodiments, each event type of the plurality of event types is associated with a consensus threshold. In some embodiments, a given event type may be associated with a consensus threshold that is different from a consensus threshold for one or more other event types. In some embodiments, to identify the consensus threshold for the ledger entry request, the distributed ledger system computing entity106: (i) identifies the event type associated with the ledger entry request based at least in part on the candidate event associated with the ledger entry request, and (ii) identifies the consensus threshold for the candidate event based at least in part on the identified event type. In some embodiments, the distributed ledger system computing entity106may identify the event type for the candidate event associated with the ledger entry request based at least in part on an event classification machine learning model. The event classification machine learning model, for example, may be a trained machine learning model that is configured to generate an event type classification for a candidate event based at least in part on miner event data associated with the candidate event. In some embodiments, the event classification machine learning model may be a natural language-based machine learning model. A miner event data may describe monitoring data for the candidate event proposed by a miner node for entry into a distributed ledger. Miner event data, for example, may comprise data collected by the miner node during a candidate event associated with the monitored node, where the data may describe attributes of the candidate event. 
An example of miner event data may include data collected during a physical activity event, where the data describes attributes of the physical activity event. For example, miner event data for a marathon event associated with a monitored node associated with a computing entity representative of an individual may comprise attributes of the marathon event that may include the distance covered by the individual, the location of the marathon event, the date and/or time of the marathon event, and/or the like. FIG. 6 depicts an operational example of event types associated with a proof of health event type space. As depicted in FIG. 6, each event type 601 is associated with a consensus threshold 602 in a possible consensus threshold range of [0, 1]. As depicted in FIG. 6, each event type 601 may be associated with a different consensus threshold 602. FIG. 6 further depicts example steps 603 associated with exemplary event types 601. Returning to FIG. 5, at step/operation 505, the distributed ledger system computing entity 106 generates a validation score (e.g., initializes a validation score) for the candidate event. The validation score may comprise a baseline score (e.g., an initial validation score) for the candidate event. In some embodiments, the baseline score may be generated based at least in part on a miner effect measure associated with the miner node (e.g., the miner node associated with the miner node computing entity that transmitted the ledger entry request). A miner effect measure may describe an estimated measure, such as a score, assigned to a miner node, that may be used by the distributed ledger system computing entity 106 to initialize a validation score for a candidate event, which in turn may be used to determine if a candidate event should be committed to a distributed ledger. In some embodiments, a miner effect measure for a given miner node may be reflective of an estimated mining efficacy of the miner node. Additionally, and/or alternatively, in some embodiments, a miner effect measure for a given miner node may be indicative of a confidence level and/or a trust level associated with the miner node with respect to ledger entry requests (e.g., a ratio of ledger entry requests resulting in candidate event entries in the corresponding distributed ledger). In some embodiments, the miner effect measure for a given miner node may be determined based at least in part on miner effect measure criteria. In some embodiments, examples of miner effect measure criteria may include a data collection size measure (e.g., that describes the amount of data currently maintained by a miner node), a geospatial prevalence measure (e.g., that describes a population percentage with respect to which a miner node has data collection access), association with a public API, past miner performance (e.g., a ratio of the number of times candidate events proposed by a miner node were committed into a distributed ledger by validator nodes of the distributed ledger system), past ratings for the miner node, and/or the like. In some embodiments, the miner effect measure may be adjusted (e.g., increased or decreased) based at least in part on one or more of the noted miner effect measure criteria. In some embodiments, the miner effect measure for a miner node may be determined using a machine learning model based at least in part on one or more features/variables associated with the miner node, which, for example, may include the noted miner effect measure criteria for the miner node. 
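For illustration purposes only, and consistent with the per-event-type consensus thresholds depicted in FIG. 6, steps/operations 504 and 505 might be sketched as a lookup of the consensus threshold for the candidate event's event type followed by initialization of the validation score from the miner effect measure (reusing the illustrative MinerNode sketch above). The threshold values below are assumptions made purely for illustration.

```python
# Assumed, illustrative thresholds in the range [0, 1]; per FIG. 6, each event
# type may be associated with a different consensus threshold.
CONSENSUS_THRESHOLDS = {
    "physical_activity": 0.6,
    "medical_visit": 0.8,
    "laboratory_visit": 0.7,
}

def identify_consensus_threshold(event_type: str) -> float:
    # In some embodiments the event type could instead be inferred from the
    # miner event data by an event classification machine learning model.
    return CONSENSUS_THRESHOLDS[event_type]

def initialize_validation_score(miner: "MinerNode", event_type: str) -> float:
    """Baseline validation score seeded from the miner effect measure for the event type."""
    return miner.miner_effect_measure.get(event_type, 0.0)
```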
In some embodiments, a miner node may be associated with a miner effect measure for each event type of a plurality of event types. For example, in some embodiments, the baseline score may be generated based at least in part on a miner effect measure associated with the miner node for the event type of the candidate event. For example, in some embodiments, generating a baseline score may comprise identifying, for the miner node, the miner effect measure associated with the miner node for the particular event type of the candidate event, and generating the baseline score to comprise the miner effect measure associated with the miner node for the identified event type (e.g., initializing a validation score for the candidate event based at least in part on the miner effect measure associated with the miner node for the event type of the candidate event). It should be understood, however, that in some embodiments, the miner effect measure for a given miner node may be the same for each event type of the plurality of event types. Furthermore, it should be understood that in some embodiments, the baseline score may be generated based at least in part on other criteria/measures that may not include applying (e.g., may not comprise) the miner effect measure associated with the miner node. In some embodiments, where a given node may represent both a miner node and a validator node, a miner effect measure for the miner node may comprise a validator effect measure for the miner node. A validator effect measure may describe an estimated measure (such as a score) assigned to a validator node that can be applied to a validation score for a candidate event and that is indicative of validation (e.g., confirmation, endorsement, and/or the like) of the occurrence of the candidate event. In some embodiments, a validator effect measure for a given validator node may be reflective of an estimated validation efficacy of the validator node. Additionally, and/or alternatively, in some embodiments, a validator effect measure for a given validator node may be indicative of a confidence level and/or a trust level associated with the validator node with respect to validating (e.g., endorsing) candidate events. In some embodiments, the validator effect measure for a given validator node may be determined based at least in part on validator effect measure criteria. In some embodiments, examples of validator effect measure criteria may include a data collection size measure (e.g., that describes the amount of data currently maintained by a validator node), a geospatial prevalence measure (e.g., that describes a population percentage with respect to which a validator node has data collection access), association with a public API, past validation performance, past ratings for the validator node, and/or the like. FIG. 7 depicts an operational example of validator effect measure criteria 701 and the corresponding effect on the validator effect measure 702 for a given validator node. As shown in FIG. 7, each validator effect measure criterion 701 may contribute to the validator effect measure 702 for a given validator node (e.g., its effect on a validation score). In some embodiments, the validator effect measure may be adjusted (e.g., increased or decreased) based at least in part on one or more of the noted validator effect measure criteria. 
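One non-limiting way to derive a validator effect measure from criteria such as those depicted in FIG. 7 is a weighted combination of normalized criterion scores, as sketched below; the weights, criterion names, and clamping are illustrative assumptions, and the measure could equally be produced by a machine learning model as described in the following paragraph.

```python
# Illustrative criterion weights; actual contributions per FIG. 7 may differ.
VALIDATOR_CRITERION_WEIGHTS = {
    "data_collection_size": 0.2,
    "geospatial_prevalence": 0.2,
    "public_api_association": 0.1,
    "past_validation_performance": 0.3,
    "past_ratings": 0.2,
}

def validator_effect_measure(criteria_scores: dict) -> float:
    """Combine per-criterion scores (each assumed to be in [0, 1]) into a single measure in [0, 1]."""
    total = 0.0
    for criterion, weight in VALIDATOR_CRITERION_WEIGHTS.items():
        total += weight * criteria_scores.get(criterion, 0.0)
    return min(max(total, 0.0), 1.0)

# Example usage with assumed scores:
# validator_effect_measure({"past_validation_performance": 0.9, "past_ratings": 0.8})
```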
In some embodiments, the validator effect measure for a validator node may be determined using a machine learning model based at least in part on one or more features/variables associated with the validator node, which for example, may include the noted validator effect measure criteria for the validator node. Returning toFIG.5at step/operation506, the distributed ledger system computing entity106causes the ledger entry request to be accessible to one or more validator node computing entities. In some embodiments, causing the ledger entry request to be accessible comprises adding the candidate event to a first validation queue (e.g., a primary validation queue). In some embodiments, adding the candidate event to a first validation queue (e.g., primary validation queue) may comprise transmitting the miner event data to the first validation queue (e.g., primary validation queue). As described above, the miner event data may comprise data reported by the miner node computing entity as representing the candidate event proposed by the miner node computing entity for entry into a distributed ledger associated with the corresponding monitored node. In some embodiments, a primary validation queue may describe a pipeline that is configured to temporarily maintain candidate events (e.g., miner event data) associated with ledger entry requests that have been authenticated but not associated with a validation iteration (e.g., yet to be validated or attempted to be validated by a validator node). In some embodiments, the primary validation queue may be accessible to, based at least in part on the event type associated with the candidate event, one or more validator node computing entities associated with one or more validator nodes. In some embodiments, a validator node may be associated with the requisite permissions to access data associated with candidate events having certain event types and may not be associated with the requisite permissions to access data associated with candidate events having certain other event types. As an operational example, consider a first validator node that is associated with a smart watch and a second validator node that is associated with a doctor visit record, the first validator node may be granted the requisite permissions to access data associated with physical activity events, whereby the validator node may validate (or attempt to validate) physical activity events, but may not be granted the requisite permissions to access data associated with doctor office visit event. In some embodiments, causing the ledger entry request to be accessible to the one or more validator node computing entities may comprise transmitting a ledger entry request notification to one or more validator node computing entities. At step/operation507, the distributed ledger system computing entity106generates a ledger entry determination for the candidate event. A ledger entry determination may describe a result of a ledger entry request that is determined based at least in part on a validation score for the candidate event proposed via the ledger entry request, to be committed into a distributed ledger. In some embodiments, an affirmative ledger entry determination may describe a ledger entry request associated with a validation score that satisfies the consensus threshold for the candidate event, and a non-affirmative ledger entry determination may describe a ledger entry request associated with a validation score that fails to satisfy the consensus threshold for the candidate event. 
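For illustration purposes only, and returning briefly to step/operation 506, the following non-limiting Python sketch models the primary validation queue as a simple in-memory queue whose entries are exposed to validator node computing entities according to assumed per-event-type access permissions. The queue structure, identifiers, and permission mapping are illustrative assumptions rather than requirements of this disclosure.

```python
from collections import deque

# Illustrative primary validation queue of pending ledger entry requests.
primary_validation_queue = deque()

# Assumed mapping of validator node id -> event types it may access.
VALIDATOR_EVENT_TYPE_PERMISSIONS = {
    "validator-smartwatch": {"physical_activity"},
    "validator-clinic": {"medical_visit", "laboratory_visit"},
}

def make_request_accessible(request: "LedgerEntryRequest") -> None:
    """Step/operation 506: add the candidate event (miner event data) to the primary queue."""
    primary_validation_queue.append(request)

def visible_requests_for(validator_node_id: str):
    """Yield queued requests whose event type the validator node is permitted to access."""
    permitted = VALIDATOR_EVENT_TYPE_PERMISSIONS.get(validator_node_id, set())
    for request in primary_validation_queue:
        if request.candidate_event.event_type in permitted:
            yield request
```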
In some embodiments, in response to an affirmative ledger entry determination, the distributed ledger system computing entity 106 initiates execution of a write command to add the candidate event to the distributed ledger associated with the monitored node associated with the candidate event. In some embodiments, generating the ledger entry determination comprises executing a hierarchical validation workflow. In some embodiments, the hierarchical validation workflow comprises an ordered sequence of L validation iterations, where (i) each validation iteration is associated with a corresponding validation score based at least in part on the output of the validation iteration, and (ii) each of one or more ith non-initial validation iterations is performed based at least in part on association of an (i−1)th validation iteration with a non-affirmative validation determination. In some embodiments, a given hierarchical validation workflow comprises an ordered sequence of L validation iterations that is less than and/or equal to a maximum number of validation iterations (M). For example, consider where M=5 and a first validation iteration is associated with a non-affirmative validation determination: a second validation iteration may be performed based at least in part on association of the first validation iteration with a non-affirmative validation determination output for the first validation iteration, a third validation iteration may be performed based at least in part on association of the second validation iteration with a non-affirmative validation determination output for the second validation iteration, a fourth validation iteration may be performed based at least in part on association of the third validation iteration with a non-affirmative validation determination output for the third validation iteration, and a fifth validation iteration may not be performed based at least in part on association of the fourth validation iteration with an affirmative validation determination output for the fourth validation iteration. Accordingly, in the noted example, L=4 and is less than the maximum number of validation iterations (M). Moreover, each of the first, second, third, and fourth validation iterations is a non-final validation iteration for the particular executed hierarchical validation workflow. As another example, consider where M=3 and a first validation iteration is associated with a non-affirmative validation determination: a second validation iteration may be performed based at least in part on association of the first validation iteration with a non-affirmative validation determination output for the first validation iteration, a third validation iteration may be performed based at least in part on association of the second validation iteration with a non-affirmative validation determination output for the second validation iteration, and a fourth validation iteration may not be performed because the third validation iteration is the final validation iteration. In the noted example, L=3 and is equal to the maximum number of validation iterations (M). Moreover, in the noted example, the first and second validation iterations are non-final validation iterations, and the third validation iteration is the final validation iteration for the particular executed hierarchical validation workflow. In some embodiments, the maximum number of validation iterations (M) that may be performed for a given ledger entry request may be configurable. 
For example, M may be selected to be 5, 10, 20, 32, 40 and/or the like. A validation iteration may describe a computer executed operation that may be performed, in response to a ledger entry request. A validation iteration may be associated with a particular validator node and may comprise applying a validator effect measure for the validator node to the validator score for the candidate event based at least in part on comparing miner event data with validator event data. A validator event data may describe monitoring data that describes attributes of a candidate event that is used by a validator node to validate or attempt to validate a candidate event. For example, a validator event data may comprise data (e.g., monitoring data) that is deemed by a validator node as reflective of the candidate event. Validator event data, for example, may comprise data collected by the validator node during a candidate event associated with the monitored node, where the data may describe attributes of the candidate event. An example of a validator event data may include data collected during a physical activity event, where the data describes attributes of the physical activity event. For example, validator event data for a marathon event associated with a monitored node associated with a computing entity representative of an individual may comprise attributes of the marathon event that may include distance covered by the individual, the location of the marathon event, date and/or time of the marathon event, and/or the like. In some embodiments, where one or more nodes may comprise a miner node, as well as a validator node, for a given ledger entry request, each validation iteration can be associated with a validator node that is not the miner node associated with the miner node computing entity that transmitted the ledger entry request. In some embodiments, for each non-final validation iteration of a hierarchical validation workflow associated with a non-affirmative validation determination, the distributed ledger system computing entity106may be configured to make the ledger entry request accessible to one or more validator node computing entities based at least in part on adding the candidate event to one or more validation queues (e.g., transmitting the miner event data to a second validation queue, a third validation, a fourth validation queue, and/or the like). For example, in some embodiments, in response to an initial validation iteration that is associated with a non-affirmative validation determination, data (e.g., miner event data) associated with the candidate event is added to a second validation queue (e.g., a secondary validation queue). In some embodiments, in response to a non-initial validation iteration that is associated with a non-affirmative validation determination, data (e.g., miner event data) associated with the candidate event is added to another validation queue (e.g., a third validation queue, a fourth validation queue, and/or the like) that is a secondary validation queue. 
A secondary validation queue may describe a pipeline that is configured to temporarily maintain candidate events (e.g., miner event data) associated with at least one validation iteration but having a validation score that fails to satisfy the corresponding consensus threshold for the respective candidate event (e.g., candidate events that are yet to achieve a qualifying validation score after at least one validation iteration, where a qualifying validation score describes a validation score that satisfies the corresponding validation threshold for the candidate event). In some embodiments, data (e.g., miner event data) associated with the candidate event is added to different validation queues for each validation iteration that is associated with a non-affirmative validation determination. In some embodiments, data (e.g., miner event data) associated with the candidate event is added to the same validation queue for each validation iteration that is associated with a non-affirmative validation determination. In some embodiments, multiple validation iterations (e.g., L>1) may be performed, where each validation iteration is associated with a different validator node computing entity, and for each validation iteration, the distributed ledger system computing entity106incrementally updates the validation score associated with the immediately preceding validation iteration in response to matching score satisfying the matching score threshold until: (i) a maximum number of iterations (M) has been performed (e.g., L=M) or (ii) a validation iteration is associated with an affirmative validation determination (e.g., validation score that satisfies the consensus threshold for the candidate event). In some embodiments, a ledger entry request may be associated with an affirmative ledger entry determination based at least in part on a validation iteration that is associated with an affirmative validation determination. In some embodiments, a ledger entry request may be associated with a non-affirmative ledger entry determination based at least in part on a validation iteration that is associated with a non-affirmative validation determination. In some embodiments, the step/operation507may be performed in accordance with the example process800that is depicted inFIG.8, which is an example process for executing a validation iteration. The process800begins at step/operation801when the distributed ledger system computing entity106receives a validation request from a validator node computing entity. A validation request may describe a request by a validator node computing entity associated with a validator node, where the request may comprise a request to validate (e.g., endorse) a candidate event associated with a ledger entry request. In some embodiments, the validation request may comprise retrieving, by the validator node computing entity, a miner event data from a first validation queue (e.g., a primary validation queue). At step/operation802, the distributed ledger system computing entity106identifies a matching score that is generated based at least in part on miner event data for the candidate event and validator event data for the candidate event. 
In some embodiments the matching score is generated based at least in part on comparing the miner event data to the validator event data (e.g., comparing attributes of the candidate event according to the collected/recorded data of the validator node for the candidate event with attributes of the candidate event according to the collected/recorded data of the miner node for the candidate event). At step operation803, the distributed ledger system computing entity106determines whether the matching score satisfies a matching score threshold. At step/operation803a, the distributed ledger system computing entity106, in response to a matching score that satisfies the matching score threshold, incrementally updates the validation score based at least in part on the validator effect measure for the validator node associated with the validator node computing entity. In some embodiments, incrementally updating a validation score comprises applying a validator effect measure associated with the validator node for the event type to the validation score (e.g., validation score associated with the immediately preceding validation iteration) to generate the updated validation score. In some embodiments, applying a validator effect measure to a validation score for a candidate event comprise increasing the validation score for the candidate event based at least in part on the validator effect measure, where the validator effect measure, for example, may comprise a numerical value/score. At step/operation803b, the distributed ledger system computing entity106, in response to a matching score that fails to satisfy the matching score threshold, causes the ledger entry request to be accessible to one or more validator node computing entities based at least in part on association of the validation iteration with a non-final validation iteration. In some embodiments, causing the ledger entry request to be accessible comprises adding the candidate event to a secondary validation queue (e.g., transmitting the miner event data to a secondary validation queue). In some embodiments, in response to determining that the validation iteration is a final validation iteration (e.g., maximum number of iterations (M) has been performed), the distributed ledger system computing entity106may store the miner event data representative of the candidate event data in a repository (e.g., external repository) associated with the distributed ledger system computing entity106. At step/operation804, the distributed ledger system computing entity106, determines whether the updated validation score satisfies a consensus threshold for the event type associated with the candidate event. At step/operation804a, the distributed ledger system computing entity106, in response to the updated validation score satisfying the consensus threshold for the event type associated with the candidate event, generates an affirmative validation determination, and (ii) initiates execution of a commit command, write command, and/or similar words used herein interchangeably to add the candidate event to the distributed ledger. 
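The following non-limiting Python sketch illustrates one validation iteration as described by steps/operations 802 through 804: a matching score is computed by comparing the miner event data with the validator event data, the validation score is incrementally updated by the validator effect measure when the matching score satisfies its threshold, and an affirmative validation determination triggers a commit. The attribute-overlap matching score, the numeric threshold default, and the callback name are assumptions made purely for illustration.

```python
def matching_score(miner_event_data: dict, validator_event_data: dict) -> float:
    """Fraction of miner-reported attributes that the validator's data agrees with (illustrative)."""
    if not miner_event_data:
        return 0.0
    matches = sum(1 for key, value in miner_event_data.items()
                  if validator_event_data.get(key) == value)
    return matches / len(miner_event_data)

def run_validation_iteration(miner_event_data, validator_event_data,
                             validation_score, validator_effect, consensus_threshold,
                             commit_to_ledger, matching_threshold=0.8):
    """One validation iteration of process 800; returns (determination, updated_validation_score)."""
    score = matching_score(miner_event_data, validator_event_data)    # step/operation 802
    if score >= matching_threshold:                                   # step/operation 803/803a
        validation_score += validator_effect                          # incremental update
    if validation_score >= consensus_threshold:                       # step/operation 804/804a
        commit_to_ledger(miner_event_data)                            # commit/write command
        return "affirmative", validation_score
    return "non-affirmative", validation_score                        # step/operation 804b
```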
At step/operation804b, the distributed ledger system computing entity106, in response to the updated validation score failing to satisfy the consensus threshold for the event type associated with the candidate event: (i) generates a non-affirmative validation determination, and (ii) causes the ledger entry request to be accessible to one or more validator node computing entities based at least in part on association with a non-final validation iteration, where the one or more validator node computing entities may attempt to validate the candidate event associated with the ledger request. In some embodiments, causing the ledger entry request to be accessible comprises adding the candidate event to a secondary validation queue (e.g., transmitting the miner event data to a secondary validation queue). In some embodiments, in response to determining that the validation iteration is a final validation iteration (e.g., maximum number of iterations (M) has been performed), the distributed ledger system computing entity106may store the miner event data representative of the candidate event data in a repository (e.g., external repository) associated with the distributed ledger system computing entity106. In some embodiments, multiple validation iterations may be performed, repeating steps801-804with each validation iteration associated with a different validator node computing entity and incrementally updating the validation score associated with the immediately preceding validation iteration in response to matching score satisfying the matching score threshold until: (i) a maximum number of iterations (M) has been performed or (ii) a validation iteration is associated with an affirmative validation determination. In some embodiments, in response to determining that the validation iteration is a final validation iteration and determining that the candidate event is associated with (or otherwise comprise) data that is deemed critical, the distributed ledger system computing entity106may cause the ledger entry request to be accessible to one or more validator node computing entities. In some embodiments, causing the ledger entry request to be accessible to one or more validator node computing entities may comprise adding the candidate event to a tertiary validation queue (e.g., transmitting the miner event data to a tertiary validation queue), where one or more validator nodes may attempt to validate the candidate event. A tertiary validation queue may describe a pipeline that is configured to temporarily maintain candidate events (e.g., miner event data) having a validation score that fails to satisfy corresponding consensus threshold after execution of a hierarchical validation workflow for a candidate event that is deemed to be associated with critical data. In some embodiments, for an executed hierarchical validation workflow whose final validation iteration is associated with a non-affirmative validation determination, the distributed ledger system computing entity106may decrease the miner effect measure associated with the miner node and decrease the validator effect measure for each validator node associated with a non-affirmative validation determination. As noted above, in some embodiments, miner effect measure may be the same as validator effect measure. 
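Putting the pieces together, the hierarchical validation workflow of step/operation 507 might be sketched, as a non-limiting illustration, as a loop of at most M validation iterations that stops early on an affirmative validation determination and otherwise demotes the candidate event to a secondary validation queue. The helper callables are placeholders for the operations described with respect to FIGS. 5 and 8, and the default value of M and the queue handling are assumptions.

```python
def execute_hierarchical_validation_workflow(request, baseline_score, consensus_threshold,
                                             run_iteration, demote_to_secondary_queue,
                                             max_iterations=5):
    """Run up to M validation iterations for one ledger entry request (illustrative).

    run_iteration(request, current_score) is assumed to perform one validation
    iteration with a distinct validator node (see process 800) and to return
    (determination, updated_score); demote_to_secondary_queue(request) re-exposes
    the candidate event to further validator node computing entities.
    """
    validation_score = baseline_score
    for iteration in range(1, max_iterations + 1):          # at most M iterations (L <= M)
        determination, validation_score = run_iteration(request, validation_score)
        if determination == "affirmative":                  # consensus threshold satisfied
            return "affirmative", validation_score
        if iteration < max_iterations:                       # non-final iteration: demote
            demote_to_secondary_queue(request)
    return "non-affirmative", validation_score               # final iteration without consensus
```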
In some embodiments, upon determining that a ledger entry request is not associated with a validation iteration (e.g., where no validation request is transmitted to the distributed ledger system computing entity 106), the distributed ledger system computing entity 106 may be configured to decrease the validator effect measure (e.g., for the particular event type) for the miner node. In some embodiments, as a non-limiting example, a validator effect measure may be decreased based at least in part on:

V_new = V_orig * e^(−1/(n+1)),   Equation 1

where V_new is the decreased validator effect measure, V_orig is the original (e.g., current) validator effect measure, and n is the number of validator nodes associated with the hierarchical validation workflow. As noted above, in some embodiments, a miner effect measure for a given miner node may be the same as the validator effect measure for the given miner node. Accordingly, in some embodiments, the miner effect measure may be decreased based at least in part on Equation 1 above. In some embodiments, for each ledger entry request where the corresponding candidate event is committed to a distributed ledger, the distributed ledger system computing entity 106 may be configured to increase the miner effect measure for: (i) the miner node, and (ii) each validator node associated with a validation iteration with an affirmative validation determination output. Additionally, and/or alternatively, in some embodiments, for each ledger entry request where the corresponding candidate event is committed to a distributed ledger, the distributed ledger system computing entity 106 may be configured to apply a reward to: (i) the miner node, and (ii) each validator node associated with a validation iteration with an affirmative validation determination. In some embodiments, the reward may comprise a fraction of a coin/value. Additionally, and/or alternatively, in some embodiments, the reward may comprise an increase in a trust measure (e.g., indicative of a trust level) associated with the corresponding miner node and validator nodes. For example, in some embodiments, each miner node and validator node may be associated with a trust measure, where for each ledger entry request where the corresponding candidate event is committed to a distributed ledger, the distributed ledger system computing entity 106 may be configured to increase the trust measure associated with: (i) the miner node, and (ii) each validator node associated with a validation iteration with an affirmative validation determination output.

VI. Conclusion

Many modifications and other embodiments will come to mind to one skilled in the art to which this disclosure pertains having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the disclosure is not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
111,192
11860754
It is emphasized that, in the drawings, various features are not drawn to scale. In fact, in the drawings, the dimensions of the various features have been arbitrarily increased or reduced for clarity of discussion. DETAILED DESCRIPTION The following detailed description refers to the accompanying drawings. Wherever possible, same reference numbers are used in the drawings and the following description to refer to the same or similar parts. It is to be expressly understood that the drawings are for the purpose of illustration and description only. While several examples are described in this document, modifications, adaptations, and other implementations are possible. Accordingly, the following detailed description does not limit disclosed examples. Instead, the proper scope of the disclosed examples may be defined by the appended claims. The terminology used herein is for the purpose of describing particular examples and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. The term “another,” as used herein, is defined as at least a second or more. The term “coupled,” as used herein, is defined as connected, whether directly without any intervening elements or indirectly with at least one intervening element, unless indicated otherwise. For example, two elements can be coupled mechanically, electrically, or communicatively linked through a communication channel, pathway, network, or system. Further, the term “and/or” as used herein refers to and encompasses any and all possible combinations of the associated listed items. It will also be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms, as these terms are only used to distinguish one element from another unless stated otherwise or the context indicates otherwise. As used herein, the term “includes” means includes but not limited to, the term “including” means including but not limited to. The term “based on” means based at least in part on. Typically, computing systems function using an operating system (OS), such as but not limited to, the Linux OS, the Windows OS, the MAC OS, the android OS and the like. The OS includes a kernel—a central module of the OS that connects a hardware (e.g., CPU, storage, input-output ports, etc.) of the computing system to an application software (e.g., user-level applications) executing on the OS. In particular, the kernel acts as an interface between the application software and the hardware and manages communication between the application software and hardware. In some applications, a system may include several integral computing systems (e.g., servers implemented as server blades or modules) each running its own OS and storage devices (e.g., storage systems implemented as storage blades or modules). The system may also include one or more management systems that provide management capabilities and certain out-of-band services for the integral computing systems and storage devices of the system. These management systems typically include a volatile memory such as random access memory (RAM) and a non-volatile memory such as non-volatile RAM (NVRAM). In addition, such management systems may operate by executing lightweight or compact OS, for example, embedded Linux (eLinux) and the like. 
Sometimes, the compact OS may not have capability to access disk memory, as the management system does not require physical storage disk for its operation. Typically, a kernel of the compact OS executing on a given management system manages the RAM by allocating desired memory to user applications running on the management system. Such user applications that run on the given management system may include applications that manage out-of-band services for the integral computing systems and the storage devices, applications that manage communication between any other standby management systems within the given system, applications that monitor the integral computing systems and the storage devices, applications that manage communications with management systems of other systems connected to the given system. To aid in the allocation of memory to various user applications running on the given management system, the kernel maintains a mapping of the RAM into a used memory, a free memory, and a loosely reserved memory. The used memory may be defined as a space from the RAM that is already occupied by the user applications running on the management system. The free memory may refer to a space from the RAM that is unoccupied excluding the loosely reserved memory. The loosely reserved memory may include a cache memory and a buffer memory. The buffer memory may include memory that is used by kernel buffers. The kernel stores references to particular inodes (e.g., files) that are referenced frequently by the user applications and/or the OS in the cache memory. In some examples, the cache memory may use memory that can be allocated slab wise, wherein a slab may represent a set of one or more contiguous pages of memory. The cache memory may include a reclaimable memory and a non-reclaimable memory. The reclaimable memory may refer to a memory that may be reclaimed for other purposes like user applications when the kernel is running low on the free memory. The non-reclaimable memory may refer to a memory that cannot be allocated to user applications even though the kernel is running low on the free memory. In particular, the non-reclaimable memory is reserved for the kernel. During operation of the management system, when no sufficient free memory is available to fulfill memory demand to start a new user application or any additional memory demanded by already running user application, the kernel may start releasing the loosely reserved memory to meet the memory demand. In particular, the kernel may first release the buffer memory followed by releasing the reclaimable memory. This indicates that the kernel is releasing its own cache memory to keep the management system running. This may cause the management system to become sluggish. In extreme situations, excessive use of the reclaimable memory may cause the OS running on the management system to crash. As will be appreciated, in accordance with the aspects of the present disclosure, a system is presented that includes a first management system. In particular, the first management system may obviate the above-mentioned challenges of the traditional systems by continuously monitoring memory usage and comparing the monitored memory usage with respective predetermined thresholds. To achieve these outcomes, the first management system may include a primary memory comprising a free memory, a used memory, and a loosely reserved memory, wherein the loosely reserved memory includes cache memory having a reclaimable memory. 
Further, the first management system may include a processing resource coupled to the primary memory. The processing resource may monitor an amount of the used memory and an amount of an available memory during runtime of the first management system, wherein the available memory is an estimate of an amount of memory from the primary memory that is available to start an application. Further, the processing resource may enable a synchronized reboot of the first management system if the amount of the used memory is greater than a memory exhaustion first threshold or the amount of the available memory is less than a memory exhaustion second threshold different from the memory exhaustion first threshold. In some examples, the memory exhaustion first threshold and the memory exhaustion second threshold are determined based on usage of the reclaimable memory and a number of major page faults (described later). In some examples, the synchronized reboot of the first management system may include backing up data stored in the primary memory, fully or partially, into a non-volatile memory and then rebooting the first management system. In certain examples, in a system having a redundant second management system acting as a standby management system and the first management system acting as an active management system, the synchronized reboot of the first management system may include changing a role of the second management system to the active management system and changing the role of the first management system to the standby management system. Further, the synchronized reboot of the first management system may also include rebooting the first management system after the role change of the second management system is completed. Furthermore, in some examples, the processing resource may also determine whether the amount of the used memory is greater than a used memory decay threshold or the amount of the available memory is less than an available memory decay threshold. The used memory decay threshold and the available memory decay threshold are respectively representative of the amount of the used memory and the amount of the available memory at which the amount of the reclaimable memory begins to decrease below a certain threshold amount. Accordingly, by comparing the amount of the used memory with the used memory decay threshold or by comparing the amount of the available memory with the available memory decay threshold, the processing resource may identify that the reclaimable memory has begun to decrease. Accordingly, the processing resource may generate a warning, which may help a user take corrective memory management action. In addition, in some examples, the processing resource can back up the data stored in the primary memory into a non-volatile memory. The comparison of the amount of the used memory and the amount of the available memory respectively with the memory exhaustion first threshold and the memory exhaustion second threshold may help determine whether a drastic increase in the number of major page faults is imminent. As soon as the amount of the used memory reaches the memory exhaustion first threshold or the amount of the available memory reaches the memory exhaustion second threshold, the processing resource may initiate the synchronized reboot of the first management system. Consequently, chances of the OS of the first management system crashing abruptly may be minimized. 
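As a non-limiting illustration of the monitoring described above, the following Python sketch periodically compares the amounts of used and available memory against the decay thresholds (UMDTH, AMDTH) and the memory exhaustion thresholds (METH1, METH2). The sampling interval, threshold dictionary, and the warning/backup/reboot hooks are assumptions introduced for illustration only.

```python
import time

def monitor_primary_memory(read_used_and_available, thresholds,
                           warn, back_up_primary_memory, synchronized_reboot,
                           interval_seconds=5.0):
    """Runtime monitoring loop for the first management system (illustrative).

    read_used_and_available() is assumed to return (used_bytes, available_bytes);
    thresholds is assumed to be a dict with keys 'UMDTH', 'AMDTH', 'METH1', 'METH2'.
    """
    while True:
        used, available = read_used_and_available()
        # Decay (warning) thresholds: the reclaimable memory has begun to decrease.
        if used > thresholds["UMDTH"] or available < thresholds["AMDTH"]:
            warn(used, available)
            back_up_primary_memory()          # optional partial or full backup
        # Memory exhaustion thresholds: initiate the synchronized reboot.
        if used > thresholds["METH1"] or available < thresholds["METH2"]:
            synchronized_reboot()
            return
        time.sleep(interval_seconds)
```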
Referring now to the drawings, inFIG.1, a system100is presented, in accordance with an example. In some examples, the system100may be any electronic device capable of storing data, processing data, and/or communicating data with external devices over a network. Examples of the system100may include, but are not limited to, a server, a storage device, a network switch, a router, a mobile communication device, a desktop computer, a portable computer, a computing system resource enclosure, and a composable computing system including one or more servers and/or one or more storage devices. The server may be a blade server, for example. The storage device may be a storage blade, for example. Further, in some examples, the computing system enclosure may be a blade enclosure housing one or more blades (e.g., blade servers, storage blades, etc.). As depicted inFIG.1, in some examples, the system100may include a first management system102. The first management system102may implement various accessibility services for the system100and any server and storage blades (not shown) that are installed within the system100. The first management system102may be implemented using a processing resource that is separate from the processing resources of the server blades and the storage blades that are installed within the system100. In some examples, the first management system102may allow a user (such as a system administrator) to perform management operations on the system100irrespective of whether the server blades and the storage blades are operational. The first management system102may also have management capabilities for sub-systems (e.g., cooling system) of the system100. Moreover, in certain examples, the first management system102may provide so-called “out-of-band” (OOB) services, such as remote console access, remote reboot and power management functionality, monitoring health of the system100, access to system logs, and the like. The term OOB services as used herein may refer to any service provided by the first management system102execution of which does not interfere with instructions or workloads running on the storage blades and/or the server blades installed in the system100. The first management system102may include an interface (also referred to as a management channel) such as a network interface, and/or serial interface to enable communication with the first management system102. The first management system102may include a primary memory104, a processing resource106, and a non-volatile memory108. The processing resource106may be a physical device, for example, one or more central processing unit (CPU), one or more semiconductor-based microprocessors (single core or multicore), application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), other hardware devices capable of retrieving and executing instructions, or combinations thereof. The processing resource106may fetch, decode, and execute the instructions to enable a synchronized reboot of the first management system102under low-memory situations. As an alternative or in addition to executing the instructions, the processing resource106may include at least one integrated circuit (IC), control logic, electronic circuits, or combinations thereof that include a number of electronic components for performing the functionalities intended to be performed by the first management system102by executing an operating system (OS)105. In some examples, the OS105may be compact, lightweight, and/or embedded OS. 
In some examples, the compact, lightweight, and/or embedded OS may be designed to perform specific set of tasks and need not manage disk storage. One example of the OS105may be the embedded Linux OS. The processing resource106may run the OS105by executing respective instructions stored in the non-volatile memory108that are fetched into the primary memory104. The OS105may also include a kernel (not shown). The kernel of the OS105connects a hardware (e.g., the processing resource106, the primary memory104, the non-volatile memory108, input-output ports (not shown), etc.) of the first management system102to an application software (e.g., user applications) executing on the OS105. In particular, the kernel of the OS105manages communication between the application software and the hardware by acting as an interface between the application software and the hardware. In some examples, the kernel of the OS105may occupy certain storage space (e.g., a loosely reserved memory and at least some portion of the used memory, described later) in the primary memory104. User applications that may be executed on the first management system102may include applications that manage out-of-band services for the server blades and/or the storage blades; applications that manage communication between any other standby management systems, if any (not shown inFIG.1) within the system100; applications that monitor the server blades and/or the storage blades; and/or applications that manage communications with management systems of other systems (not shown inFIG.1) connected to the system100. Both the primary memory104and the non-volatile memory108are non-transitory. In some examples, the primary memory104may be an electronic memory such as a Random Access Memory (RAM). Further, in some examples, the non-volatile memory108may be an electronic memory such as an Electrically Erasable Programmable Read-Only Memory (EEPROM), a flash memory, or a non-volatile RAM. In some examples, the non-volatile memory108may be encoded with executable instructions. These instructions may include instructions that are part of an operating system105(described later) and instructions that cause the processing resource106to perform one or more methods, for example, methods described inFIGS.3and8-10. During operation of the first management system102, these instructions are loaded into the primary memory104from the non-volatile memory108for execution by the processing resource106. Reference numeral110may represent a virtualized memory (hereinafter referred to as a virtual memory110) formed by the processing resource106that may include pointers to instructions112and data114that are loaded into the primary memory104from the non-volatile memory108. In some examples, memory space of the primary memory104may be categorized into a used memory116, a free memory118, and a loosely reserved memory120. The used memory116may be defined as a space from the primary memory104that is already occupied by the user applications running on the first management system102. The free memory118may refer to a space from the primary memory104that is unoccupied excluding the loosely reserved memory120. The loosely reserved memory120may include a buffer memory (BM)122and a cache memory124. The buffer memory122may include memory that is used by kernel buffers. The kernel stores references to particular inodes (e.g., files) that are referenced frequently by the user applications and the OS105in the cache memory124. 
In some examples, the cache memory124may use memory that can be allocated slab wise, wherein a slab may represent a set of one or more contiguous pages of memory. The cache memory124may include a reclaimable memory (RM)126and a non-reclaimable memory (NRM)128. The reclaimable memory126may refer to a memory that may be reclaimed for other purposes like user applications when the kernel is running low on the free memory118. The non-reclaimable memory128may refer to a memory that cannot be allocated to user applications even though the kernel is running low on the free memory. In particular, the non-reclaimable memory128is reserved for the kernel. In some implementations, the loosely reserved memory120may also include certain additional memory regions in addition to the buffer memory122and the cache memory124, without limiting the scope of the present disclosure. Further, in certain implementations, the cache memory124may also include certain additional memory regions in addition to the reclaimable memory126and the non-reclaimable memory128, without limiting the scope of the present disclosure. In addition to above mentioned categorization of the primary memory104, the kernel of the OS105may also keep a track of an available memory within the primary memory104. The available memory may represent an estimation of an amount of memory that is available for starting a new user application considering a fact that the kernel is capable of releasing a portion of the loosely reserved memory120when needed. For example, during an operation of the first management system102, if the primary memory104is running-out of free memory118, the kernel may release at least a portion of the buffer memory122followed by the reclaimable memory126. Such portion of the loosely reserved memory120that is releasable by the kernel is also accounted in the estimation of the available memory. Therefore, at any given time, the amount of the available memory may be higher than the amount of the free memory118. As briefly noted hereinabove, during the operation of the first management system102, as the user applications are executed and/or when the new user application(s) are initiated, more and more amount of the free memory118is occupied causing the amount of the used memory116to increase. In certain instances, the first management system102may experience low-memory situations if amount of the used memory116increases beyond certain value. In such low-memory situation, the kernel of the operating system105may start releasing its own memory (e.g., the loosely reserved memory120) to meet the memory demand. In particular, the kernel may first release the buffer memory122followed by releasing the reclaimable memory126. In a conventional management system, when an amount of used memory increases and no sufficient free memory is available to fulfill memory demanded by a new user application or any additional memory demanded by already running user applications, a kernel may start releasing its own cache memory to keep the conventional management system running. This may cause the conventional management system to become sluggish. In extreme situations, excessive use of the reclaimable memory may cause an OS running on the conventional management system to crash if too much additional memory is demanded by the user applications. 
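On a Linux-based compact OS such as the eLinux example mentioned above, the categorization of the primary memory described here can typically be observed through the kernel's /proc/meminfo interface. The following sketch parses a few of those fields; treating these particular counters as the counterparts of the used, free, buffer, cache, reclaimable, and non-reclaimable memory discussed above is an assumption made for illustration only.

```python
def read_meminfo(path="/proc/meminfo"):
    """Parse /proc/meminfo into a dict of values in kilobytes (Linux-specific)."""
    values = {}
    with open(path) as f:
        for line in f:
            key, rest = line.split(":", 1)
            values[key.strip()] = int(rest.strip().split()[0])  # value in kB
    return values

def memory_categories(meminfo):
    """Map kernel counters onto the categories discussed above (illustrative mapping)."""
    return {
        "free": meminfo["MemFree"],                # free memory
        "available": meminfo["MemAvailable"],      # kernel's estimate of available memory
        "buffers": meminfo["Buffers"],             # buffer memory
        "cache": meminfo["Cached"],                # page cache portion of the cache memory
        "reclaimable": meminfo["SReclaimable"],    # reclaimable slab memory
        "non_reclaimable": meminfo["SUnreclaim"],  # non-reclaimable slab memory
        "used": meminfo["MemTotal"] - meminfo["MemFree"]
                - meminfo["Buffers"] - meminfo["Cached"],  # approximate used memory
    }
```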
In certain situations, one possible reason for such sluggishness and crashing of the conventional management system is an exponential increase in the number of major page faults as a result of an increase in the amount of the used memory and/or a reduction in the amount of the available memory during operation. Typically, a page fault may refer to an event or an exception raised by the kernel when one or more pages (containing instructions and/or data) that are requested by a user application and/or an operating system are not found in the virtual memory, causing the one or more pages to be fetched from the primary memory into the virtual memory. On the other hand, the term major page fault may be defined as an event or an exception raised by the kernel when one or more pages (containing instructions and/or data) that are requested by the user application and/or the operating system are found neither in the virtual memory nor in the primary memory. When such an event occurs, the requested page(s) may need to be retrieved from the non-volatile memory separate from the primary memory. Moreover, since an embedded OS does not have a swap capability with the storage disks, all the requested pages may need to be copied to the primary memory from the non-volatile memory without removing certain pages pertaining to the user applications from the primary memory. Since the kernel cannot swap out the pages that have been allocated to the user applications, the kernel may end up swapping in and out the code segments that are to be executed. Accordingly, the free memory may be depleted and the used memory may be full, while the kernel keeps swapping in and out, from the non-volatile memory, the code segments (e.g., programming instructions pertaining to the OS105) that have to be executed. Moreover, such major page faults may cause increased utilization of the primary memory. Accordingly, the amount of the available memory may drastically reduce and the amount of the used memory may quickly increase. As the amount of the used memory increases, the number of major page faults may also increase exponentially, which may in turn slow down the management system and cause the operating system to crash. In accordance with aspects of the present disclosure, the processing resource106of the first management system102may monitor the primary memory104during a runtime of the first management system102in order to obviate the above-mentioned shortcomings of the conventional systems and to enable a synchronized reboot of the first management system102. The term runtime may refer to an operation of the first management system102when the first management system102is functioning in a real-time field application of the system100. In some examples, during the runtime of the first management system102, the processing resource106may monitor the amount of the used memory116and the amount of the available memory. Further, the processing resource106may determine whether the amount of the used memory116is greater than a memory exhaustion first threshold (METH1) or the amount of the available memory is less than a memory exhaustion second threshold (METH2). The memory exhaustion second threshold (METH2) is different from the memory exhaustion first threshold (METH1).
In some examples, the memory exhaustion first threshold (METH1) and the memory exhaustion second threshold (METH2) are determined by the processing resource106during a test phase of the first management system102based on usage of the reclaimable memory126and a number of major page faults observed during the test phase. In certain other examples, the processing resource106, during the test phase, may also determine additional thresholds such as a used memory decay threshold (UMDTH) and an available memory decay threshold (AMDTH) that may serve as warning thresholds indicative of a start of depletion of the reclaimable memory126. The term “test phase” may refer to an operation of the first management system102during which the processing resource106determines various memory thresholds, such as the used memory decay threshold (UMDTH), the available memory decay threshold (AMDTH), the memory exhaustion first threshold (METH1), and the memory exhaustion second threshold (METH2). The test phase may include a memory allocation and monitoring phase that is described in greater detail in conjunction withFIGS.3A and4-7. Further, the test phase may include a threshold determination phase that is described in greater detail in conjunction withFIGS.3B and4-7. Further, the processing resource106may enable a synchronized reboot of the first management system102in response to determining that the amount of the used memory is greater than the memory exhaustion first threshold (METH1) or the amount of the available memory is less than the memory exhaustion second threshold (METH2). In one example, the synchronized reboot of the first management system102may include backing-up data stored in the primary memory104, fully or partially, into the non-volatile memory108and then rebooting the first management system102. Additional details regarding the synchronized reboot of the first management system102are described in conjunction with the methods described inFIGS.8-10. FIG.2depicts a system200, in accordance with another example. The system200may be representative of one example of the system100ofFIG.1. Examples of the system200may include, but are not limited to, a server, a storage device, a network switch, a router, a mobile communication device, a desktop computer, a portable computer, a computing system resource enclosure, and a composable computing system including one or more servers and/or one or more storage devices. The server may be a blade server, for example. The storage device may be a storage blade, for example. Further, in some examples, the computing system enclosure may be a blade enclosure housing one or more blades (e.g., blade servers, storage blades, etc.). Further, the system200may include certain components that are similar to the ones described inFIG.1, description of which is not repeated herein. For example, the system200includes the first management system102that is already described inFIG.1. In addition, the system200may include a second management system202and electronic devices204,206, and208(hereinafter collectively referred to as electronic devices204-208). The second management system202may be representative of one example of the first management system102and may include components similar to those included in the first management system102. Further, in some examples, the electronic devices204-208may include server blades, storage blades, network switches, routers, and the like.
In some examples, in the system200, at any given time, one of the first management system102or the second management system202may be operated as an active management system and the other one may be operated as a standby management system. In the description hereinafter, the first management system102is illustrated as being the active management system and the second management system202is illustrated as being the standby management system. Both the first management system102and the second management system202are capable of communicating with the electronic devices204-208and providing management services for the electronic devices204-208. However, at any given time, only the active management system may have active communication links with the electronic devices204-208. Certain details of the first management system102that are described inFIG.1are not repeated herein. During the runtime of the first management system102, the processing resource106may monitor the amount of the used memory116and the amount of the available memory. Further, the processing resource106may determine whether the amount of the used memory116is greater than the used memory decay threshold (UMDTH) or whether the amount of the available memory is less than the available memory decay threshold (AMDTH). If it is determined that the amount of the used memory116is greater than the used memory decay threshold (UMDTH) or the amount of the available memory is less than the available memory decay threshold (AMDTH), the processing resource106may synchronize data stored in the primary memory104with the non-volatile memory108. In some examples, such synchronization of the data may include backing-up certain log files or backing-up all the data stored in the primary memory104into the non-volatile memory108. In fact, a situation in which the amount of the used memory116is greater than the used memory decay threshold (UMDTH) or the amount of the available memory is less than the available memory decay threshold (AMDTH) is indicative of the fact that the kernel of the OS105has started releasing the reclaimable memory126and there are chances that the first management system102may reboot if the reclaimable memory126is used beyond a certain limit. Therefore, detection of such a start of the depletion of the reclaimable memory126by monitoring the amount of the used memory116and the amount of the available memory may aid in proactively securing, fully or partially, the data stored in the primary memory104by creating its backup in the non-volatile memory108. The processing resource106may continue to monitor the amount of the used memory116and the amount of the available memory during the runtime. Further, the processing resource106may determine whether the amount of the used memory116is greater than the memory exhaustion first threshold (METH1) or whether the amount of the available memory is less than the memory exhaustion second threshold (METH2). In some examples, the situation of the amount of the used memory116being greater than the memory exhaustion first threshold (METH1) or the amount of the available memory being less than the memory exhaustion second threshold (METH2) may be indicative of the fact that the reclaimable memory126has dropped and any excessive memory allocation from the primary memory104may cause the OS105to crash.
At any point in time, if the processing resource106determines that the amount of the used memory116is greater than the memory exhaustion first threshold (METH1) or the amount of the available memory is less than the memory exhaustion second threshold (METH2), the processing resource106may enable a synchronized reboot of the first management system102. To effect such a synchronized reboot of the first management system102that is currently operating as the active management system, the processing resource106may first initiate a role change of the second management system202from the standby management system to the active management system. Once the second management system202becomes the active management system, the processing resource106may reboot the first management system102. However, in some examples, if the first management system102is operational as the standby management system and the second management system202is operational as the active management system, and it is determined that the amount of the used memory is greater than the memory exhaustion first threshold (METH1) or the amount of the available memory is less than the memory exhaustion second threshold (METH2), the processing resource106may notify the second management system202that the first management system102is going to reboot. After the second management system202is notified, the processing resource106may reboot the first management system102. A method of enabling the synchronized reboot is described in conjunction withFIG.10. As will be appreciated, the processing resource106may enable a synchronized reboot of the first management system102in low-memory situations (e.g., when the amount of the used memory116is greater than the memory exhaustion first threshold (METH1), or the amount of the available memory is less than the memory exhaustion second threshold (METH2)). This is achieved at least in part due to monitoring of the primary memory104, in particular, monitoring of the used memory116and the available memory and comparing the amounts of the used memory116and the available memory with the respective thresholds. In particular, comparison of the amount of the used memory116and the amount of the available memory respectively with the used memory decay threshold (UMDTH) and the available memory decay threshold (AMDTH) may help determine the start of the depletion of the reclaimable memory126. Such determination of the start of the depletion of the reclaimable memory126may help generate a warning message so that a user can perform a memory management operation for lowering memory consumption if the user desires to do so. Also, upon determining that the amount of the used memory116is greater than the used memory decay threshold (UMDTH) or the amount of the available memory is less than the available memory decay threshold (AMDTH), the processing resource106may proactively start backing-up the data stored in the primary memory104into the non-volatile memory108. Moreover, the comparison of the amount of the used memory116and the amount of the available memory respectively with the memory exhaustion first threshold (METH1) and the memory exhaustion second threshold (METH2) may help determine whether there is going to be any drastic increase in the number of major page faults.
As soon as the amount of the used memory116reaches the memory exhaustion first threshold (METH1) or the amount of the available memory reaches the memory exhaustion second threshold (METH2), the processing resource106may initiate the synchronized reboot of the first management system102. Consequently, chances of the OS105of the first management system102crashing abruptly may be minimized. Further, the synchronized reboot of the first management system102may aid in switching the role of the second management system202when the first management system102that is operational as the active management system experiences the low-memory situations. In such a situation, the role of the second management system202may be changed to the active management system. Advantageously, while the first management system102undergoes the reboot, the second management system202starts to perform operations that the first management system102used to perform. Accordingly, performance of the system200may not be impacted due to the low-memory situations encountered by the first management system102. FIGS.3A and3Brespectively depict flow diagrams of methods300A and300B that collectively define a test phase of the first management system102, in accordance with an example. By way of example, the method300A illustrates a memory allocation and monitoring phase of the test phase, in accordance with an example. Further, the method300B illustrates a threshold determination phase of the test phase in which various thresholds for the used memory116and the available memory may be determined. In some examples, the method300B may be performed concurrently with the method300A. In certain other examples, the method300B may be performed after the method300A is executed. For illustration purposes, the methods300A and300B will be described in conjunction withFIGS.1and2. Referring now toFIG.3A, the method300A may include method blocks302,304,306,308, and310(hereinafter collectively referred to as blocks302-310) which may be performed by a processor-based system, for example, the first management system102. In particular, operations at each of the method blocks302-310may be performed by the processing resource106of the first management system102. At block302, the processing resource106may allocate memory to a test process from the primary memory104. For example, the processing resource106may allocate a first amount of memory to the test process at block302. The test process may be a dummy process that may consume the allocated memory from the primary memory104by executing dummy instructions. Further, at block304, the processing resource106may monitor and log/record an amount of the reclaimable memory126, the amount of the used memory116, the amount of the available memory, and the number of the major page faults as the primary memory104is utilized by the test process. In order to effect such monitoring at block304, the processing resource106may execute instructions which when executed cause the processing resource106to command the kernel of the OS105to provide the amount of the reclaimable memory126, the amount of the used memory116, the amount of the available memory, and the number of the major page faults as the primary memory104is utilized by the test process. In certain other examples, the processing resource106may execute one or more predefined application programming interfaces (APIs) that can obtain such information from the kernel of the OS105. The processing resource106may store/log such monitored amounts in the non-volatile memory108.
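As a purely illustrative sketch of blocks302and304, the following Python fragment incrementally allocates memory to a dummy test process and logs the monitored quantities after each allocation step. The chunk size, the use of /proc/meminfo and of the pgmajfault counter of /proc/vmstat, the log file location, and the simple termination check (foreshadowing block306described below) are assumptions made for illustration only.

    import json, time

    CHUNK_KIB = 1024    # amount of memory allocated to the test process per step (assumed)

    def sample_memory_state():
        meminfo = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":", 1)
                meminfo[key.strip()] = int(value.split()[0])
        with open("/proc/vmstat") as f:
            vmstat = dict(line.split() for line in f)
        return {
            "total_kib": meminfo["MemTotal"],
            "used_kib": meminfo["MemTotal"] - meminfo["MemFree"]
                        - meminfo["Buffers"] - meminfo["Cached"],
            "available_kib": meminfo["MemAvailable"],
            "reclaimable_kib": meminfo["SReclaimable"],
            "major_page_faults": int(vmstat["pgmajfault"]),
        }

    def memory_allocation_and_monitoring_phase(log_path="/var/log/memtest.log"):
        allocations = []                                         # dummy allocations held by the test process
        with open(log_path, "a") as log:
            while True:
                allocations.append(bytearray(CHUNK_KIB * 1024))  # blocks 302/308: allocate (more) memory
                record = sample_memory_state()                   # block 304: monitor the primary memory
                log.write(json.dumps(record) + "\n")             # block 304: log the monitored amounts
                log.flush()
                if record["used_kib"] >= 0.9 * record["total_kib"]:   # example termination criterion
                    break
                time.sleep(0.1)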
Further, in some examples, at block306, a check may be performed by the processing resource106to determine whether a monitoring termination criterion is satisfied. In one example, the monitoring termination criterion may include the amount of the used memory116being equal to or greater than a used memory test threshold. The used memory test threshold may be set to a value close to about 90% of the total memory size of the primary memory104, for example. In another example, the monitoring termination criterion may include the amount of the available memory being equal to or less than an available memory test threshold. The available memory test threshold may be set to a value close to about 10% of the total memory size of the primary memory104, for example. In yet another example, the monitoring termination criterion may include the number of the major page faults increasing above a certain value, for example, a major page fault threshold (described later). Accordingly, at block306, if the monitoring termination criterion is not satisfied, at block308, the processing resource106may further allocate additional memory to the test process and continue to perform monitoring at block304. However, at block306, if the monitoring termination criterion is satisfied, at block310, the memory allocation and monitoring phase (e.g., the method300A) may be terminated. In certain situations, the memory allocation and monitoring phase (e.g., the method300A) may also be terminated if the OS105crashes due to a low-memory condition, and the data monitored at block304may be saved in the non-volatile memory108for later use by the processing resource106. Once the method300A is executed, the data regarding the amount of the reclaimable memory126, the amount of the used memory116, the amount of the available memory, and the number of the major page faults is stored in the non-volatile memory108. Table-1 depicted below represents example data stored by the processing resource106at various time instances as the primary memory104is utilized by the test process. TABLE 1: Example data logged during the memory allocation and monitoring phase, in which each row records, for a given time instance, the amount of the used memory (KiB), the amount of the available memory (KiB), the amount of the reclaimable memory (KiB), and the number of major page faults. For ease of illustration, such data collected during the memory allocation and monitoring phase of the test phase is shown in various graphical representations depicted inFIGS.4-7.FIGS.4-7are briefly described herein prior to moving on to the threshold determination phase described inFIG.3B. Referring now toFIG.4, a graphical representation400showing variations in the amount of the reclaimable memory126with reference to the amount of the used memory116is depicted, in accordance with one example. In the graphical representation400, the X-axis402represents the amount of the used memory116in KiB and the Y-axis404represents the amount of the reclaimable memory126in KiB. Further, a line406represents variations in the amount of the reclaimable memory126with reference to the amount of the used memory116observed during the memory allocation and monitoring phase. As depicted in the graphical representation400ofFIG.4, the amount of reclaimable memory126starts to decrease when the amount of the used memory116is at about 72000 KiB.
Further,FIG.5depicts a graphical representation500showing variations in the major page faults with reference to the amount of the used memory116, in accordance with one example. In the graphical representation500, the X-axis502represents the amount of the used memory116in KiB and the Y-axis504represents major page faults. Further, a line506represents variations in the number of the major page faults with reference to the amount of the used memory116observed during the memory allocation and monitoring phase. As depicted in the graphical representation500ofFIG.5, the number of major page faults starts to rise when the amount of the used memory116is about 82000 KiB. Furthermore,FIG.6depicts a graphical representation600showing variations in the amount of the reclaimable memory126with reference to the amount of the available memory, in accordance with one example. In the graphical representation600, the X-axis602represents the amount of the available memory in KiB and the Y-axis604represents the amount of the reclaimable memory126in KiB. Further, a line606represents variations in the amount of the reclaimable memory126with reference to the amount of the available memory observed during the memory allocation and monitoring phase. As depicted in the graphical representation600ofFIG.6, the amount of reclaimable memory126starts to decrease when the amount of the available memory is reduced to about 190000 KiB. Moreover,FIG.7depicts a graphical representation700showing variations in the major page faults with reference to the amount of the available memory, in accordance with one example. In the graphical representation700, the X-axis702represents the amount of the available memory in KiB and the Y-axis704represents major page faults. Further, a line706represents variations in the number of the major page faults with reference to the amount of the available memory observed during the memory allocation and monitoring phase. As depicted in the graphical representation700ofFIG.7, the number of major page faults starts to rise when the amount of the available memory is reduced to about 50000 KiB. Referring now toFIG.3B, a flow diagram of the threshold determination phase of the test phase is depicted, in accordance with one example. The method300B may include method blocks312,314,316, and318(hereinafter collectively referred to as blocks312-318) which may be performed by a processor-based system, for example, the first management system102. In particular, operations at each of the method blocks312-318may be performed by the processing resource106of the first management system102. At block312, the processing resource106may determine the used memory decay threshold (UMDTH) as the amount of the used memory116at which the amount of the reclaimable memory126starts to decrease below a predefined threshold (RMTH). The predefined threshold (RMTH) may define the beginning of a decline of the amount of the reclaimable memory126. Referring again toFIG.4, the predefined threshold (RMTH) may be about 10700 KiB, for example. In the example ofFIG.4, it is observed that the amount of the reclaimable memory126starts to decrease below the predefined threshold (RMTH) when the amount of the used memory116is at about 72000 KiB. Accordingly, in one example, the used memory decay threshold (UMDTH) may be set to 72000 KiB.
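A minimal illustration of block312, assuming the samples logged during the memory allocation and monitoring phase are available as a list of records with the assumed key names used in the sketches above, is given below; applied to data resembling the graphical representation400ofFIG.4, it would return a value of about 72000 KiB.

    def used_memory_decay_threshold(samples, rm_threshold_kib=10700):
        # Block 312 sketch: scan the logged samples, ordered by increasing used memory, and
        # return the amount of the used memory at which the reclaimable memory first drops
        # below the predefined threshold (RMTH).
        for record in samples:
            if record["reclaimable_kib"] < rm_threshold_kib:
                return record["used_kib"]
        return None   # the reclaimable memory never dropped below RMTH in the logged data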
Turning back toFIG.3B, at block314, the processing resource106may determine the memory exhaustion first threshold (METH1) based on the used memory decay threshold (UMDTH) and the amount of the used memory116at which the number of major page faults starts to rise above a major page fault threshold (MPFTH). Referring to the example ofFIG.5, it may be observed that the number of major page faults starts to increase exponentially once the amount of the used memory116reaches a certain value, hereinafter referred to as a used memory page fault impact value (UMPFimpact). For example, as depicted inFIG.5, when the amount of the used memory116increases beyond about 82000 KiB, the number of major page faults starts to rise exponentially. In some examples, the memory exhaustion first threshold (METH1) may be higher than the used memory decay threshold (UMDTH) and may be determined using the following equation (1).

METH1 = UMDTH + (UMPFimpact − UMDTH)/2        Equation (1)

In one example, considering the used memory page fault impact value (UMPFimpact) being 82000 KiB, the memory exhaustion first threshold (METH1) may be determined as being 77000 KiB using equation (1). It may be noted that equation (1) presented hereinabove represents an example calculation for illustration purposes and should not be construed as limiting the scope of the present disclosure. Furthermore, at block316, the processing resource106may determine the available memory decay threshold (AMDTH) as the amount of the available memory at which the amount of the reclaimable memory126starts to decrease below the predefined threshold (RMTH). In the example graphical representation600presented inFIG.6, it is observed that the amount of the reclaimable memory126starts to decrease below the predefined threshold (RMTH) when the amount of the available memory is at about 190000 KiB. Accordingly, in one example, the available memory decay threshold (AMDTH) may be set to 190000 KiB. Moreover, at block318, the processing resource106may determine the memory exhaustion second threshold (METH2) based on the available memory decay threshold and the amount of the available memory at which the number of major page faults starts to rise above the major page fault threshold (MPFTH). In the example graphical representation700presented inFIG.7, it may be observed that the number of major page faults starts to increase exponentially once the amount of the available memory drops to a certain value, hereinafter referred to as an available memory page fault impact value (AMPFimpact). For example, as depicted inFIG.7, when the amount of the available memory decreases below about 50000 KiB, the number of major page faults starts to rise exponentially. In some examples, the memory exhaustion second threshold (METH2) may be lower than the available memory decay threshold (AMDTH) and may be determined using the following equation (2).

METH2 = AMDTH − (AMDTH − AMPFimpact)/2        Equation (2)

In one example, considering the available memory page fault impact value (AMPFimpact) being 50000 KiB, the memory exhaustion second threshold (METH2) may be determined as being 120000 KiB using equation (2). It may be noted that equation (2) presented hereinabove represents an example calculation for illustration purposes and should not be construed as limiting the scope of the present disclosure. Turning now toFIG.8, a flow diagram depicting a method800for operating a management system such as the first management system102is presented, in accordance with one example.
For illustration purposes, the method800will be described in conjunction with the first management system102described inFIGS.1and2. The method800may include method blocks802,804,806, and808(hereinafter collectively referred to as blocks802-808) which may be performed by a processor-based system, for example, the first management system102, during its runtime. In particular, operations at each of the method blocks802-808may be performed by the processing resource106of the first management system102. At block802, the processing resource106may monitor an amount of the used memory116and an amount of an available memory during the runtime of the first management system102. As previously noted, the available memory is an estimate of an amount of memory from the primary memory104that is available to start a new application. In order to effect such monitoring at block802, the processing resource106may execute instructions which when executed command the kernel of the OS105to provide the amount of the used memory116and the amount of the available memory during the runtime. In certain other examples, the processing resource106may execute one or more predefined application programming interfaces (APIs) that can obtain such information from the kernel of the OS105. Further, at block804, the processing resource106may perform a check to determine whether the amount of the used memory116is greater than the memory exhaustion first threshold (METH1). In particular, at block804, the processing resource106may compare the amount of the used memory116monitored at block802with the memory exhaustion first threshold (METH1) to determine whether the amount of the used memory116is greater than the memory exhaustion first threshold (METH1). At block804, if it is determined that the amount of the used memory116is greater than the memory exhaustion first threshold (METH1), the processing resource106may execute an operation at block808(described later). However, at block804, if it is determined that the amount of the used memory116is not greater than the memory exhaustion first threshold (METH1), the processing resource106may perform another check at block806. At block806, the processing resource106may perform a check to determine whether the amount of the available memory is less than the memory exhaustion second threshold (METH2). In particular, at block806, the processing resource106may compare the amount of the available memory monitored at block802with the memory exhaustion second threshold (METH2) to determine whether the amount of the available memory is less than the memory exhaustion second threshold (METH2). At block806, if it is determined that the amount of the available memory is greater than or equal to the memory exhaustion second threshold (METH2), the processing resource106may again continue to monitor the amount of the used memory116and the amount of the available memory at block802. However, at block806, if it is determined that the amount of the available memory is less than the memory exhaustion second threshold (METH2), at block808, the processing resource106may enable a synchronized reboot of the first management system102. Detailed method steps of enabling the synchronized reboot of the first management system102are described inFIG.10(described later). In the method800, although block806is shown as being performed after the execution of the method at block804inFIG.8, in some examples, the methods at blocks804and806may be performed in parallel.
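A behavioral sketch of blocks802-808is given below; the two callables passed in are placeholders (a routine that returns the monitored used and available memory amounts in KiB, and a routine that performs the synchronized reboot ofFIG.10), and the example threshold values and polling interval are assumptions rather than prescribed values.

    import time

    METH1_KIB = 77000     # memory exhaustion first threshold determined during the test phase (example)
    METH2_KIB = 120000    # memory exhaustion second threshold determined during the test phase (example)

    def runtime_monitor(read_used_and_available, enable_synchronized_reboot, poll_seconds=5):
        while True:
            used_kib, available_kib = read_used_and_available()       # block 802
            if used_kib > METH1_KIB:                                   # block 804
                enable_synchronized_reboot()                           # block 808
                return
            if available_kib < METH2_KIB:                              # block 806
                enable_synchronized_reboot()                           # block 808
                return
            time.sleep(poll_seconds)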
In certain other examples, the order of blocks804and806may be reversed. Referring now toFIG.9, a flow diagram depicting a method900for operating the first management system102is presented, in accordance with another example. The method900may be representative of one example of the method800ofFIG.8and includes certain blocks that are similar to those described inFIG.8, description of which is not repeated herein. For illustration purposes, the method900will be described in conjunction with the first management system102described inFIGS.1and2. The method900may include method blocks902,904,906,908,910,912,914,916,918, and920(hereinafter collectively referred to as blocks902-920) which may be performed by a processor-based system, for example, the first management system102. In particular, operations at each of the method blocks902-920may be performed by the processing resource106of the first management system102during its runtime. At block902, the processing resource106may monitor the amount of the used memory116and the amount of an available memory during the runtime of the first management system102in a similar fashion as described with reference to the block802ofFIG.8. Further, at block904, the processing resource106may perform a check to determine whether the amount of the used memory116is greater than the used memory decay threshold (UMDTH). In particular, at block904, the processing resource106may compare the amount of the used memory116monitored at block902with the used memory decay threshold (UMDTH) to determine whether the amount of the used memory116is greater than the used memory decay threshold (UMDTH). At block904, if it is determined that the amount of the used memory116is greater than the used memory decay threshold (UMDTH), the processing resource106may execute the method at block908(described later). However, at block904, if it is determined that the amount of the used memory116is not greater than the used memory decay threshold (UMDTH), the processing resource106may perform another check at block906. At block906, the processing resource106may perform a check to determine whether the amount of the available memory is less than the available memory decay threshold (AMDTH). In particular, at block906, the processing resource106may compare the amount of the available memory monitored at block902with the available memory decay threshold (AMDTH) to determine whether the amount of the available memory is less than the available memory decay threshold (AMDTH). Although the operation at block906is shown as being performed after the operation at block904is performed, in some other examples, the operations at blocks904and906may be performed in parallel. In certain other examples, the order of the execution of blocks904and906may be reversed. At block906, if it is determined that the amount of the available memory is greater than or equal to the available memory decay threshold (AMDTH), the processing resource106may continue to monitor the amount of the used memory116and the amount of the available memory at block902. However, at block906, if it is determined that the amount of the available memory is less than the available memory decay threshold (AMDTH), at block908, the processing resource106may synchronize data stored in the primary memory104with the non-volatile memory108. In some examples, such synchronization performed by the processing resource106may include storing all the content of the primary memory104into the non-volatile memory108.
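One possible, purely illustrative realization of the synchronization at block908is sketched below, assuming that the data to be secured consists of log files held in a primary-memory-backed directory (e.g., a tmpfs mount) and that the non-volatile memory108is exposed as a mounted file system; the paths are hypothetical.

    import pathlib, shutil

    def synchronize_primary_memory(src_dir="/var/log", dst_dir="/mnt/nvram/primary_backup"):
        # Block 908 sketch: copy the log files from primary-memory-backed storage into the
        # non-volatile memory. Repeated calls overwrite the copies so that the primary
        # memory backup remains synchronized with the primary memory.
        dst = pathlib.Path(dst_dir)
        dst.mkdir(parents=True, exist_ok=True)
        for path in pathlib.Path(src_dir).glob("*.log"):
            shutil.copy2(path, dst / path.name)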
When the processing resource106detects that the amount of the used memory116is greater than the used memory decay threshold (UMDTH) or that the amount of the available memory is less than the available memory decay threshold (AMDTH) for the first time, the processing resource106may dump certain logs or, in some examples, the entire content of the primary memory104into the non-volatile memory108as a primary memory backup. For future instances of the amount of the used memory116being greater than the used memory decay threshold (UMDTH) or of the amount of the available memory being less than the available memory decay threshold (AMDTH), the processing resource106may incrementally update the primary memory backup stored in the non-volatile memory108so that the primary memory backup remains synchronized with the primary memory104. Furthermore, optionally, in some examples, at block910, the processing resource106may determine a time remaining (TR) for the primary memory104to be exhausted. In one example, the time remaining for the primary memory104to be exhausted may refer to a time duration for the used memory116to reach the memory exhaustion first threshold (METH1) during the runtime of the first management system102. In some examples, the time remaining (TR) may be determined based on a rate of change in the amount of the used memory116during the runtime as described below. In order to determine the time remaining (TR), the processing resource106may log the amount of the used memory at several intervals for each day, in one example. Table-2 presented below represents an example log of the amount of the used memory116for six days. TABLE 2: Example log of the amount of the used memory (KiB), recorded at several hours of each of six days (Day 0 through Day 5). Additionally, in some examples, the processing resource106may also log all the latest highs for any day that had a higher amount of the used memory116compared to the previous day, along with the number of processes running on the first management system102when the high was reached. For example, Table-3 presented below shows daily highs of the used memory116along with the number of the processes running on the first management system102when the respective high was reached, and an average memory utilization for one process. In particular, Table-3 represents data corresponding to the days (e.g., day 0, day 1, day 2, and day 5) that have shown an increase in the amount of the used memory116compared to the previous day.

TABLE 3: Example log of data corresponding to days showing increased utilization of the used memory

                                                Day 0     Day 1     Day 2     Day 5
Highest amount of the used memory
for a given day (KiB)                          340364    344164    346292    368856
Number of processes                               431       432       431       439
Average memory used by a single
process (KiB)                                   789.7     796.7     803.5     840.2

As observed from Table-3, not only is there an increase in the number of processes from Day 0 to Day 5, but the average amount of the used memory116occupied per process has also increased.
Such an increase in the average amount of the used memory116occupied per process may also indicate that overall memory usage is increasing. In some examples, for a given day, the processing resource106may determine a time duration for the amount of used memory116to reach the memory exhaustion first threshold (METH1) based on a rate of increase of the amount of the used memory116. The rate of change (e.g., increase) of the amount of the used memory116(RATEX) for a given day X may be determined using the peak utilization of previous days that observed an increase in the amount of the used memory116. For example, based on the data of Table-3, the rate of increase in the amount of the used memory116on day 2 (i.e., RATEDAY2) may be determined as 344164−340364=3800 KiB/day. Similarly, the rate of increase in the amount of the used memory116on day 6 (i.e., RATEDAY6) may be determined as (368856−346292)/(5−2)=7521 KiB/day. In some examples, the processing resource106may determine the time remaining (TR) for the primary memory104to be exhausted (e.g., the time duration for the used memory to reach the memory exhaustion first threshold (METH1)) using the following equation (3).

TR = (METH1 + UMLatest Peak)/RATEX days        Equation (3)

where UMLatest Peak may represent the latest peak amount of the used memory observed before Day X and METH1=77000 KiB. For example, for Day 2 with UMLatest Peak being 344164 KiB and RATEDAY2 being 3800 KiB/day, the time remaining (TR) may be determined as ((77000+344164)/3800)=70.3 days. Similarly, on Day 6 with UMLatest Peak being 368856 KiB and RATEDAY6 being 7521 KiB/day, the time remaining (TR) may be determined as ((77000+368856)/7521)=38.8 days. Furthermore, optionally, in some examples, at block912, the processing resource106may issue a warning indicating that the primary memory104has started running out of memory and the user may want to take any suitable action (e.g., terminating certain low priority user applications) so that some amount of memory may be freed up. In some examples, the warning issued at block912may also include information regarding the time remaining (TR) to give the user an idea of how much time is left for the used memory116to reach the memory exhaustion first threshold (METH1). The warning may be issued to the user of the system200(or the system100) via one or more messaging techniques, including but not limited to, displaying a warning message on a display associated with the system100,200, via a text message such as an SMS, an MMS, and/or an email, via an audio, video, or an audio-visual alarm, and the like. In certain instances, if the condition that has led to creation of the primary memory backup no longer exists due to release of any used memory116, the processing resource106may erase the primary memory backup from the non-volatile memory108, thereby efficiently managing the storage space in the non-volatile memory108. In some examples, the operations at blocks914,916,918, and920ofFIG.9are respectively similar to the operations at blocks802,804,806, and808of method800ofFIG.8, description of which is not repeated herein. FIG.10is a flow diagram depicting a method1000for enabling the synchronized reboot of the first management system102, in accordance with another example. In particular, the method1000may represent an example method to execute the operation that is intended to be performed at block808ofFIG.8or the block920ofFIG.9. For illustration purposes, the method1000will be described in conjunction with the system200ofFIG.2.
The method1000may include method blocks1002,1004,1006,1008, and1010(hereinafter collectively referred to as blocks1002-1010) which may be performed by a processor-based system, for example, the first management system102. In particular, operations at each of the method blocks1002-1010may be performed by the processing resource106of the first management system102. At block1002, the processing resource106may determine whether the first management system102is an active management system or a standby management system. In some examples, the first management system102may store role information (e.g., a flag) indicative of the role of the first management system102as either the active management system or the standby management system in a predefined memory region of the primary memory104and/or the non-volatile memory108. The processing resource106may verify such role information to ascertain whether the first management system102is the active management system or the standby management system. At block1002, if it is determined that the first management system102is the active management system, the processing resource106, at block1004, may initiate a role change of the second management system202. In some examples, to effect such a role change, the processing resource106of the first management system102(which is currently the active management system) may send a role change command to the second management system202(which is currently the standby management system) requesting the second management system202to become the active management system. Upon receipt of such a command from the first management system102, the second management system202may change its role to the active management system. In order to effect such a change, the second management system202may update respective role information (e.g., a flag) to indicate that the role of the second management system202is changed to the active management system. Moreover, in some examples, the first management system102may also update respective role information (e.g., a flag) to indicate that its role is changed from the active management system to the standby management system. Further, at block1006, the processing resource106may reboot the first management system102after the role change of the second management system202is completed (i.e., after the second management system202becomes the active management system). At block1002, if it is determined that the first management system102is not the active management system (e.g., the first management system102being the standby management system), the processing resource106, at block1008, may notify the second management system202that the first management system102is going to reboot. Moreover, at block1010, the processing resource106may reboot the first management system102after notifying the second management system202. Turning now toFIG.11, a block diagram1100depicting a processing resource1102and a machine-readable medium1104encoded with example instructions to determine various memory thresholds for the first management system102is presented, in accordance with an example. The machine-readable medium1104may be non-transitory and is alternatively referred to as a non-transitory machine-readable medium1104. In some examples, the machine-readable medium1104may be accessed by the processing resource1102. In some examples, the processing resource1102may represent one example of the processing resource106of the first management system102.
Further, the machine-readable medium1104may represent one example of the primary memory104or the non-volatile memory108. The machine-readable medium1104may be encoded with executable instructions1106,1108,1110,1112,1114, and1116(hereinafter collectively referred to as instructions1106-1116) for performing the methods300A and300B described inFIGS.3A and3B, respectively. The processing resource1102may be a physical device, for example, one or more CPUs, one or more semiconductor-based microprocessors, ASICs, FPGAs, other hardware devices capable of retrieving and executing the instructions1106-1116stored in the machine-readable medium1104, or combinations thereof. In some examples, the processing resource1102may fetch, decode, and execute the instructions1106-1116stored in the machine-readable medium1104to determine various memory thresholds, such as the used memory decay threshold (UMDTH), the available memory decay threshold (AMDTH), the memory exhaustion first threshold (METH1), and the memory exhaustion second threshold (METH2). In certain examples, as an alternative or in addition to retrieving and executing the instructions1106-1116, the processing resource1102may include at least one IC, other control logic, other electronic circuits, or combinations thereof that include a number of electronic components for determining the abovementioned memory thresholds. In some examples, the instructions1106when executed by the processing resource1102may cause the processing resource1102to cause an incremental utilization of the primary memory104by a test process during the memory allocation and monitoring phase (seeFIG.3A). Further, the instructions1108when executed by the processing resource1102may cause the processing resource1102to monitor an amount of the reclaimable memory126, an amount of the used memory116, an amount of the available memory, and a number of the major page faults as the primary memory104is utilized by the test process during the memory allocation and monitoring phase. Furthermore, in some examples, the instructions1110-1116may be performed during the threshold determination phase (seeFIG.3B) of the test phase. The instructions1110when executed by the processing resource1102may cause the processing resource1102to determine the used memory decay threshold (UMDTH) as the amount of the used memory116at which the amount of the reclaimable memory126starts to decrease below the predefined threshold (RMTH). Moreover, the instructions1112when executed by the processing resource1102may cause the processing resource1102to determine the memory exhaustion first threshold (METH1) based on the used memory decay threshold (UMDTH) and the amount of the used memory (e.g., UMPFimpact) at which the number of major page faults starts to rise above a major page fault threshold (MPFTH). Moreover, in some examples, the instructions1114when executed by the processing resource1102may cause the processing resource1102to determine the available memory decay threshold (AMDTH) as the amount of the available memory at which the amount of the reclaimable memory126starts to decrease below the predefined threshold (RMTH). Additionally, the instructions1116when executed by the processing resource1102may cause the processing resource1102to determine the memory exhaustion second threshold (METH2) based on the available memory decay threshold (AMDTH) and the amount of the available memory (e.g., AMPFimpact) at which the number of major page faults starts to rise above the major page fault threshold (MPFTH).
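For illustration only, the computations performed by the instructions1112and1116(corresponding to blocks314and318) may be sketched as follows, assuming that the decay thresholds and the page fault impact values have already been extracted from the logged test-phase data; with the example values ofFIGS.4-7(72000, 82000, 190000, and 50000 KiB), the sketch returns 77000 KiB and 120000 KiB.

    def memory_exhaustion_thresholds(umdth_kib, umpf_impact_kib, amdth_kib, ampf_impact_kib):
        # Instructions 1112/1116 sketch: compute METH1 and METH2 from the decay thresholds
        # and the page fault impact values using equations (1) and (2).
        meth1_kib = umdth_kib + (umpf_impact_kib - umdth_kib) / 2    # equation (1)
        meth2_kib = amdth_kib - (amdth_kib - ampf_impact_kib) / 2    # equation (2)
        return meth1_kib, meth2_kib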
FIG.12is a block diagram1200depicting a processing resource1202and a machine-readable medium1204encoded with example instructions to operate the first management system102, in accordance with an example. The processing resource1202may be representative of one example of the processing resource1102and the machine-readable medium1204may be representative of one example of the machine-readable medium1104. In some examples, the machine-readable medium1204may be encoded with executable instructions1206,1208, and1210(hereinafter collectively referred to as instructions1206-1210) for performing the method800described inFIG.8. Although not shown, in some examples, the machine-readable medium1204may be encoded with certain additional executable instructions to perform the methods900and1000described inFIGS.9and10, without limiting the scope of the present disclosure. The instructions1206when executed by the processing resource1202may cause the processing resource1202to monitor an amount of the used memory116and an amount of an available memory during the runtime of the first management system102. Further, the instructions1208when executed by the processing resource1202may cause the processing resource1202to determine whether the amount of the used memory116is greater than the memory exhaustion first threshold (METH1) or the amount of the available memory is less than a memory exhaustion second threshold (METH2) different from the memory exhaustion first threshold (METH1). Moreover, the instructions1210when executed by the processing resource1202may cause the processing resource1202to enable a synchronized reboot of the first management system102. As will be appreciated, the processing resource106may enable a synchronized reboot of the first management system102in the low-memory situation caused by the amount of the used memory116reaching the memory exhaustion first threshold (METH1) or the amount of the available memory reaching the memory exhaustion second threshold (METH2). Moreover, comparison of the amount of the used memory116and the amount of the available memory respectively with the used memory decay threshold (UMDTH) and the available memory decay threshold (AMDTH) may help determine the start of the depletion of the reclaimable memory126. Such determination of the start of the depletion of the reclaimable memory126may help generate a warning message so that a user can perform a memory management operation for lowering memory consumption if the user desires to do so. Also, upon determining that the amount of the used memory116is greater than the used memory decay threshold (UMDTH) or the amount of the available memory is less than the available memory decay threshold (AMDTH), the processing resource106may proactively start backing-up the data stored in the primary memory104into the non-volatile memory108. Moreover, the comparison of the amount of the used memory116and the amount of the available memory respectively with the memory exhaustion first threshold (METH1) and the memory exhaustion second threshold (METH2) may help determine whether there is going to be any drastic increase in the number of major page faults. As soon as the amount of the used memory116reaches the memory exhaustion first threshold (METH1) or the amount of the available memory reaches the memory exhaustion second threshold (METH2), the processing resource106may initiate the synchronized reboot of the first management system102.
Consequently, chances of the OS105of the first management system102crashing abruptly may be minimized. Further, the synchronized reboot of the first management system102may aid in switching over the role of the second management system202when the first management system102that is operational as the active management system experiences the low-memory situation caused by the amount of the used memory116reaching the memory exhaustion first threshold (METH1) or the amount of the available memory reaching the memory exhaustion second threshold (METH2). In such a situation, the role of the second management system202may be changed to the active management system so that while the first management system102undergoes the reboot, the second management system202starts to perform operations that the first management system102used to perform. Accordingly, performance of the system200may not be impacted due to the low-memory situation encountered by the first management system102. While certain implementations have been shown and described above, various changes in form and details may be made. For example, some features and/or functions that have been described in relation to one implementation and/or process can be related to other implementations. In other words, processes, features, components, and/or properties described in relation to one implementation can be useful in other implementations. Furthermore, it should be appreciated that the systems and methods described herein can include various combinations and/or sub-combinations of the components and/or features of the different implementations described. Moreover, in the foregoing description, numerous details are set forth to provide an understanding of the subject matter disclosed herein. However, implementation may be practiced without some or all of these details. Other implementations may include modifications, combinations, and variations from the details discussed above. It is intended that the following claims cover such modifications and variations.
DETAILED DESCRIPTION
In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments. It will be apparent, however, to one skilled in the art that the embodiments may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the embodiments.
I. Overview
II. Architecture
III. Memory Profiling Aggregator
IV. Microcode Execution Examples
I. Overview
An approach is provided for implementing memory profiling aggregation. A hardware aggregator provides memory profiling aggregation by controlling the execution of a plurality of hardware profilers that monitor memory performance in a system. For each hardware profiler of the plurality of hardware profilers, a hardware counter value is compared to a threshold value. When a threshold value is satisfied, execution of a respective hardware profiler of the plurality of hardware profilers is initiated to monitor memory performance. When a threshold value is no longer satisfied, execution of the respective hardware profiler is stopped. Multiple hardware profilers of the plurality of hardware profilers may execute concurrently and each generate a result counter value. The result counter values generated by each hardware profiler of the plurality of hardware profilers are aggregated to generate an aggregate result counter value. The aggregate result counter value is stored in memory that is accessible by software processes for use in optimizing memory-management policy decisions. Techniques discussed herein support multiple key hardware monitoring features available across modern processor architectures and achieve low profiling overhead while maximizing informativeness. In addition, techniques discussed herein allow multiple memory profiling methods to be used in unison to provide standardized profiling statistics for optimizing memory-management policy decisions.
II. Architecture
FIG.1is a block diagram that depicts a hardware aggregator120for implementing memory profiling aggregation as described herein. Hardware aggregator120includes processing logic122and local storage124. In one embodiment, such as shown inFIG.1, hardware aggregator120exists as part of microprocessor110. The microprocessor110may be any type of Central Processing Unit (CPU), Graphics Processing Unit (GPU) or logic capable of processing commands. The microprocessor110may include any number of cores that may vary depending upon a particular implementation and embodiments are not limited to any particular number of cores. Processing logic122may be any type of Central Processing Unit (CPU), Graphics Processing Unit (GPU) or logic capable of processing and/or executing microcode126. Local storage124includes microcode126, counter addresses128, and threshold addresses130. Microcode126includes hardware-level instructions that can control, coordinate, and in some cases perform operations required by hardware aggregator120. For example, microcode126may comprise microcode for a plurality of hardware profilers that each monitor performance of memory. Examples of microcode126are further discussed herein. Counter addresses128includes a plurality of memory addresses or IDs for accessing counter values. For example, counter addresses128may store one or more memory addresses that correspond to different hardware counter values.
Threshold addresses130includes a plurality of memory addresses or IDs for accessing threshold values. For example, threshold addresses130may store one or more memory addresses that correspond to different threshold values. In some embodiments, counter addresses128and threshold addresses130may be preloaded and periodically updated via basic input/output system (BIOS). In some embodiments, counter addresses128and threshold addresses130may be set using software. In other embodiments, hardware aggregator may include other elements not described herein. For example, aggregator may include a bank of local registers for storing counter values and/or threshold values. In some embodiments, hardware aggregator120is associated with a memory controller or a last level cache that exists between processing logic and memory. III. Memory Profiling Aggregator TABLE 1 describes various hardware profilers that perform memory monitoring operations in a system:

TABLE 1 - Hardware Profilers
Performance Monitoring Counters (PMCs): overhead very low; granularity coarse (per thread/last level cache); descriptiveness very low (single number for many accesses).
PTE's Accessed-bit tracking: overhead depends on PIDs tracked + may require a TLB shootdown; granularity medium (per page); descriptiveness low (detects first TLB miss since last reset).
Trace-based: overhead depends on sampling rate; granularity fine (per access); descriptiveness depends on sampling rate.
Memory Access Monitor: overhead low; granularity medium (per page); descriptiveness medium (access count per page).

As shown in TABLE 1, each hardware profiler may specialize in different aspects of memory monitoring. For example, the Memory Access Monitor (MAM) hardware profiler specializes in low overhead for GPU memory accesses but only has medium granularity and descriptiveness. As another example, the Performance Monitoring Counters (PMCs) hardware profiler specializes in very low overhead for CPUs and GPUs but has a low descriptiveness. A hardware aggregator120can be used to combine multiple hardware profilers to maximize informativeness and minimize overhead of memory monitoring. In one embodiment, hardware aggregator120is configured to determine whether a first threshold value is satisfied by comparing a first hardware counter value to the first threshold value. The first threshold value defines a threshold value that, when satisfied, triggers execution of a hardware profiler that is associated with the first threshold value. For example, the first threshold value or a threshold value in general may indicate a threshold count of cache misses, a threshold count of page faults, or a threshold count of TLB misses stored in a register accessible by hardware aggregator120. The first hardware counter value defines a hardware counter value. For example, the first hardware counter value or a hardware counter value in general may be a count of cache misses, a count of page faults, or a count of TLB misses stored in a register accessible by hardware aggregator120. The first threshold value and first hardware counter value are retrieved by the hardware aggregator120using threshold addresses130and counter addresses128stored in local storage124of the hardware aggregator120. Once the first hardware counter value and the first threshold value are retrieved, hardware aggregator120compares the first threshold value to the first hardware counter value. If hardware aggregator120determines that the first threshold value is satisfied by the first hardware counter value, hardware aggregator120initiates execution of a first hardware profiler of a plurality of hardware profilers to monitor memory performance.
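As a rough illustration of the comparison step just described, the sketch below reads a hardware counter value and its associated threshold value through addresses held in local storage and enables the corresponding profiler when the threshold is satisfied. The register-read helpers and the ">=" satisfaction test are illustrative assumptions; they are not taken from the embodiments.

```python
# Minimal sketch: enable a profiler when its hardware counter value satisfies
# the associated threshold value, and stop it when the threshold is no longer
# satisfied. HW_REGISTERS stands in for registers reached through counter
# addresses 128 and threshold addresses 130; it is a hypothetical stand-in.
HW_REGISTERS: dict[int, int] = {0x100: 0, 0x200: 1000}   # counter at 0x100, threshold at 0x200

def read_register(addr: int) -> int:
    return HW_REGISTERS[addr]

def threshold_satisfied(counter_addr: int, threshold_addr: int) -> bool:
    return read_register(counter_addr) >= read_register(threshold_addr)

running_profilers: set[str] = set()

def maybe_enable(name: str, counter_addr: int, threshold_addr: int) -> None:
    """Initiate or stop execution of the named profiler based on its threshold."""
    if threshold_satisfied(counter_addr, threshold_addr):
        running_profilers.add(name)        # e.g. dispatch the profiler's microcode here
    else:
        running_profilers.discard(name)    # stop it when the threshold is not satisfied
```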
Each hardware profiler of the plurality of hardware profilers comprises a set of instructions, such as microcode126stored in local storage124of hardware aggregator120. In one embodiment, the execution of a hardware profiler of the plurality of hardware profilers is initiated by processing logic122causing instructions for the first hardware profiler to be executed. In an embodiment, because hardware counter values such as the first hardware counter value may change as the first hardware profiler is executing, hardware aggregator120performs periodic comparisons of the first threshold value to the first hardware counter value while the first hardware profiler is executing. If hardware aggregator120determines, during execution of the first hardware profiler, that the first threshold value is not satisfied, hardware aggregator causes execution of the first hardware profiler to stop. Additional hardware counter values can be compared to additional threshold values to determine whether to initiate execution of additional hardware profilers of the plurality of hardware profilers. For example, the hardware aggregator may determine whether a second threshold value is satisfied by comparing a second hardware counter value to the second threshold value. In response to determining that the second threshold value is satisfied, hardware aggregator120initiates execution of a second hardware profiler of the plurality of hardware profilers to monitor memory performance. In some embodiments, multiple hardware profilers are executed concurrently. For example, a second hardware profiler may be executed concurrently with a first hardware profiler. In some embodiments, in a scenario where a first hardware profiler is executing, when a threshold value associated with a second hardware profiler is satisfied, execution of the second hardware profiler is initiated, and execution of the first hardware profiler is stopped. For example, if the second hardware profiler requires resources to execute that are currently being consumed by other hardware profilers such as the first hardware profiler, when the threshold value associated with the second hardware profiler is satisfied, execution of the second hardware profiler is initiated and execution of the first hardware profiler is stopped. Using this technique, resources that are consumed by the execution of the first hardware profiler can be allocated to the execution of the second hardware profiler. To incorporate new profiling methods for memory monitoring, hardware aggregator120, which in some embodiments may be associated with a memory controller, maintains local storage124that includes hardware profilers available on a given system. Local storage124includes (a) microcode126for performing memory monitoring via a given hardware profiler and (b) memory addresses for hardware counter values128and memory addresses for the associated threshold values130that determine when to enable/disable various hardware profilers. In one embodiment, the local storage124may be preloaded and periodically updated via BIOS. In some embodiments, hardware profilers are configured to store result counter values. For example, by executing a first hardware profiler, the first hardware profiler generates a result counter value that comprises a memory performance metric. Hardware aggregator120aggregates the result counter values that result from the execution of multiple hardware profilers to generate an aggregate result counter value.
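One way to read the start/stop and preemption behavior described above is as a small scheduling rule: on each periodic check, profilers whose thresholds are no longer satisfied are stopped, and a newly triggered profiler preempts any running profiler that holds the resources it needs. The sketch below is a hypothetical illustration of that rule; the resource sets and profiler names are assumptions made for the example.

```python
# Hypothetical sketch of the start/stop policy: stop profilers whose thresholds
# are no longer satisfied and, when a newly triggered profiler contends for the
# same resources as a running one, stop the running profiler so the new one can
# execute while non-conflicting profilers keep running concurrently.
RESOURCES = {                        # illustrative resource needs per profiler
    "accessed_bit": {"page_walker"},
    "trace_based": {"sampling_unit"},
    "mam": {"sampling_unit"},        # assumed to conflict with trace_based
}

def rebalance(running: set[str], satisfied: dict[str, bool]) -> set[str]:
    """Return the new set of running profilers after one periodic check."""
    # Stop profilers whose thresholds are no longer satisfied.
    running = {p for p in running if satisfied.get(p, False)}
    # Start newly satisfied profilers, preempting resource conflicts.
    for name, ok in satisfied.items():
        if not ok or name in running:
            continue
        conflicts = {p for p in running if RESOURCES[p] & RESOURCES[name]}
        running -= conflicts         # stop profilers holding the needed resources
        running.add(name)            # initiate execution of the new profiler
    return running

# Example: trace_based is running, mam's threshold becomes satisfied ->
# trace_based is stopped, mam starts, and accessed_bit keeps running.
print(rebalance({"trace_based", "accessed_bit"},
                {"accessed_bit": True, "trace_based": True, "mam": True}))
```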
The aggregate result counter value is an aggregate memory performance metric that can be used by software processes to perform memory management operations such as optimizing memory-management policy decisions. In some embodiments, a result counter value generated by a hardware profiler is assigned a weight based on the hardware profiler that the result counter value is associated with. For example, a first result counter value generated by a first hardware profiler may be assigned a first weight and a second result counter value generated by a second hardware profiler may be assigned a second weight. The weighted values are then combined into the aggregate result counter value and made accessible to a software process. FIG.2is a flow diagram200that depicts an approach for performing memory profiling aggregation using hardware aggregator120. In this example, specific hardware profilers and hardware counter value types are used to illustrate an example of hardware profiler aggregation, but embodiments are not limited to this example and embodiments include using other types of hardware profilers and associated hardware counter value types. In step205, translation lookaside buffer (TLB) misses are collected and stored as a hardware counter value in a register that is accessible by hardware aggregator120. In step210, hardware aggregator120compares the TLB misses to a threshold value that is associated with an accessed-bit tracking hardware profiler to determine whether the TLB misses satisfy the threshold value. The threshold value is stored in a register that is accessible by hardware aggregator120. In step215, if hardware aggregator120determines in step210that the TLB misses exceed or satisfy the threshold value that is associated with the accessed-bit tracking hardware profiler, the accessed-bit tracking hardware profiler is enabled and begins execution. By executing the accessed-bit tracking hardware profiler, a result counter value is generated that indicates a memory performance metric. Execution of the accessed-bit tracking hardware profiler is further described inFIG.3. In step220, if hardware aggregator120determines in step210that the TLB misses do not exceed or satisfy the threshold value that is associated with the accessed-bit profiling hardware profiler, the accessed-bit tracking hardware profiler is disabled. In a situation where the accessed-bit tracking hardware profiler is already executing, the execution of the accessed-bit tracking hardware profiler is stopped. In step225, cache misses are collected and stored as a hardware counter value in a register that is accessible by hardware aggregator120. In step230, hardware aggregator120compares the cache misses to a threshold value that is associated with a trace-based hardware profiler to determine whether the cache misses satisfy the threshold value. The threshold value is stored in a register that is accessible by hardware aggregator120. In step235, if hardware aggregator120determines in step230that the cache misses exceed or satisfy the threshold value that is associated with the trace-based hardware profiler, the trace-based hardware profiler is enabled and begins execution. By executing the trace-based hardware profiler, a result counter value is generated that indicates a memory performance metric. Execution of the trace-based hardware profiler is further described inFIG.4. 
In step240, if hardware aggregator120determines in step230that the cache misses do not exceed or satisfy the threshold value that is associated with the trace-based hardware profiler, the trace-based hardware profiler is disabled. In a situation where the trace-based hardware profiler is already executing, the execution of the trace-based hardware profiler is stopped. In step245, page faults are collected from a GPU/FPGA and stored as a hardware counter value in a register that is accessible by hardware aggregator120. In step250, hardware aggregator120compares the page faults to a threshold value that is associated with a memory access monitor hardware profiler to determine whether the page faults satisfy the threshold value. The threshold value is stored in a register that is accessible by hardware aggregator120. In step255, if hardware aggregator120determines in step250that the page faults exceed or satisfy the threshold value that is associated with the memory access monitor hardware profiler, the memory access monitor hardware profiler is enabled and begins execution. By executing the memory access monitor hardware profiler, a result counter value is generated that indicates a memory performance metric. In step260, if hardware aggregator120determines in step250that the page faults do not exceed or satisfy the threshold value that is associated with the memory access monitor hardware profiler, the memory access monitor hardware profiler is disabled. In a situation where the memory access monitor hardware profiler is already executing, the execution of memory access monitor hardware profiler is stopped. In step265, the result counter values that are generated by each of the accessed-bit tracking hardware profiler, the trace-based hardware profiler, and the memory access monitor hardware profiler are aggregated to generate an aggregate result counter value. The aggregate result counter value is used by a page placement policy implemented by software to perform memory management operations. In step270, hardware aggregator120determines whether to stop memory profiling operations. If hardware aggregator120determines to stop memory profiling operations, flow200stops. If hardware aggregator120determines to continue memory profiling operations, the flow proceeds to step205. IV. Microcode Execution Examples FIG.3is a flow diagram300that depicts an approach for executing an accessed-bit tracking hardware profiler. In one embodiment, executing the accessed-bit tracking hardware profiler comprises processing logic122causing a set of instructions from microcode126that corresponds to the accessed-bit tracking hardware profiler to be executed. In step305, the accessed-bit tracking hardware profiler collects all IDs of running processes (PIDs) from a system. In step310, the accessed-bit tracking hardware profiler generates a list of processes that are occupying system resources such as CPU or memory and filters the list of processes by resource usage. For example, the accessed-bit tracking hardware profiler selects processes with at least 5% CPU or 10% memory in order to reduce the number of page tables traversed for accessed-bit collection. In step315, the accessed-bit tracking hardware profiler determines whether the filtered PID list is empty. If the filtered PID list is empty, the flow proceeds back to step305where the accessed-bit tracking hardware profiler collects PIDs of running processes from the system. 
If the filtered PID list is not empty, the flow proceeds to step320where the accessed-bit tracking hardware profiler selects a PID from the filtered PID list. In step325, the accessed-bit tracking hardware profiler iterates the page mapping of the virtual memory space for the process corresponding to the PID selected in step320. In step330, the accessed-bit tracking hardware profiler determines whether the iteration of page table entries is finished. If the iteration is finished, the flow proceeds back to step315. If the iteration is not finished, the flow proceeds to step335where the accessed-bit tracking hardware profiler checks the accessed-bit of the page table entry for the current iteration. In step340, the accessed-bit tracking hardware profiler determines whether the accessed-bit is checked. If the accessed-bit is checked, i.e., if the accessed-bit is set to 1, the flow proceeds to step345where the accessed-bit result counter value of the page is incremented inside the hardware aggregator120. In step350, the accessed-bit tracking hardware profiler determines whether the accessed-bit tracking is finished. If the accessed-bit tracking is not finished iterating through page table entries, the flow proceeds back to step325. If the accessed-bit tracking is finished iterating through page table entries, the accessed-bit result counter value is then shared with a software process. FIG.4is a flow diagram400that depicts an approach for executing a trace-based hardware profiler. In one embodiment, executing the trace-based hardware profiler comprises processing logic122causing a set of instructions from microcode126that corresponds to the trace-based hardware profiler to be executed. In step405, the trace-based hardware profiler is enabled. In step410, the trace-based hardware profiler selects an event to collect pages based on. For example, events such as L1 cache, L2 cache or last level cache (LLC) misses may be selected. In step415, the trace-based hardware profiler waits for an interrupt. For example, the trace-based hardware profiler waits for an Instruction-Based Sampling (IBS) and/or Processor Event-Based Sampling (PEBS) interrupt. In step420, the trace-based hardware profiler determines if an interrupt is generated. If the trace-based hardware profiler determines that an interrupt is generated, the flow proceeds to step425where an IRQ handler is invoked by the trace-based hardware profiler. If the trace-based hardware profiler determines that an interrupt is not generated, the flow proceeds to step430where a buffer is read by the trace-based hardware profiler. For example, the trace-based hardware profiler may use machine-specific registers (MSR) to collect memory-trace samples into a buffer and use a register interrupt handler to indicate when the tracing buffer is full. In step435, the trace-based hardware profiler filters samples based on the selected event. In step440, the trace-based hardware profiler determines if an event occurred. If the trace-based hardware profiler determines that an event occurred, the flow proceeds to step445where the trace-based result counter value that is stored for each page is incremented. If the trace-based hardware profiler determines that an event has not occurred, the flow proceeds to step450where the trace-based hardware profiler determines whether to stop the execution of the trace-based hardware profiler. In step450, the trace-based hardware profiler determines whether the trace-based tracking is finished.
If the trace-based tracking is not finished, the flow proceeds back to step415. If the trace-based tracking is finished, the trace-based result counter value is then shared with a software process. In one embodiment, hardware aggregator120takes advantage of the trace-based hardware profiler as shown inFIG.4to inspect memory accessed from the unified last level caches (i.e., if the data source is out of local, combined level 3 LLCs). The hardware aggregator120supplements this information with the accessed-bit profiling hardware profiler as shown inFIG.3to gain visibility into memory accesses from the TLB caches (a.k.a. cache misses of the address translation path). The trace-based result counter value is aggregated with the accessed-bit result counter value to generate an aggregate result counter value, which is then made accessible to a software process. The aggregate result counter value comprises memory profiling statistics for optimizing memory-management policy decisions.
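To make the final aggregation step concrete, the following sketch combines a trace-based result counter value and an accessed-bit result counter value into a single aggregate result counter value using per-profiler weights, roughly as described above. The specific weights and the weighted-sum formula are illustrative assumptions, not values taken from the embodiments.

```python
# Minimal sketch: weight each profiler's result counter value and combine the
# weighted values into an aggregate result counter value that a software
# page-placement policy could consume. The weights are illustrative.
PROFILER_WEIGHTS = {
    "trace_based": 0.6,     # visibility into LLC-miss traffic
    "accessed_bit": 0.4,    # visibility into address-translation-path accesses
}

def aggregate(result_counters: dict[str, int]) -> float:
    """Aggregate per-profiler result counter values into one metric."""
    return sum(PROFILER_WEIGHTS.get(name, 0.0) * value
               for name, value in result_counters.items())

# Example: per-page result counters from the two profilers for one page.
page_counters = {"trace_based": 12, "accessed_bit": 3}
aggregate_result_counter = aggregate(page_counters)   # exposed to the software process
print(aggregate_result_counter)
```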
11860756
DETAILED DESCRIPTION FIG.1depicts a system100for automatically alerting system activities and status based on cognitive sentiment analysis on logs of a computing platform, in accordance with one or more embodiments set forth herein. In a conventional system maintenance and administration context, all hardware and software components in computing platforms generate some form of records on status and operations, recorded as respective logs in the computing platforms. Most of the computing platforms monitor operations of the computing platforms at the operating system level, and individual devices and software applications generate preconfigured error codes or predefined text messages for the computing platforms. The computing platforms generate various types of logs including, at the operating system level, logs on system functionalities, the operational status of each system functionality, resource utilization, and status information on the devices of the computing platform, and, at the application level, logs on application functionalities and any embedded software functionalities when used in a framework of another software application, etc. The logs generated by the computing platforms are of numerous characteristics and have corresponding parameters and various predefined text messages. Some of the logs convey information that can benefit system performance and efficiency in operations of the computing platform when remedial measures according to the logs are timely applied to the computing platforms. However, because the logs generated from the computing platforms are quite large in number and because the information of the logs is often presented in preconfigured codes and short form descriptions intended for machine interface internal to the computing platforms, it is hard for a human administrator of the computing platforms to notice the log messages, understand the logs in time, and take remedial actions accordingly. The system100includes a cognitive alert system110reporting a system alert199to a user101based on inputs of indexed log files107. The indexed log files107are generated by a generic log parser105running in a computing platform as raw system logs103of the computing platform are processed. In certain embodiments of the present invention, the indexed log files107are collected from respective Elasticsearch indices or Apache Kafka topics. (Elasticsearch is a trademark of Elasticsearch B. V., registered in the U.S. and in other countries; Apache is a trademark of the Apache Software Foundation in the United States and/or other countries.) The cognitive alert system110includes a message processor120, a distinct message table130, a sentiment analyzer140, a message classification model150, a cognitive alert curator160, and an alert generator170. The cognitive alert system110is operatively coupled to external tools including, but not limited to, natural language processing (NLP) tools113and cognitive analytics/machine learning (CA/ML) tools115. The cognitive alert system110also includes a configuration file specifying an account of the user101to which the system alert199is transferred, a type of the system alert199, types of input log sources, and other parameters and corresponding values to configure operations of the cognitive alert system110.
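The configuration file mentioned above can be pictured as a small set of key/value settings covering the receiving account, the alert delivery type, and the input log sources. The snippet below is a hypothetical example only; the field names and values are assumptions for illustration, not a format defined by the embodiments.

```python
# Hypothetical configuration for the cognitive alert system: the user account
# that receives system alerts, the alert delivery type, and the input log sources.
CONFIG = {
    "user_account": "ops-team@example.com",          # account receiving the system alert
    "alert_type": "kafka",                           # e.g. a Kafka topic or an Elasticsearch index
    "input_log_sources": [
        {"kind": "elasticsearch_index", "name": "syslog-*"},
        {"kind": "kafka_topic", "name": "platform-logs"},
    ],
    "extracted_entities": ["HostName", "IP", "JobName", "ApplicationName"],
}
```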
The message processor120processes the log messages in the indexed log files107into text log messages and data by removing special characters and noises inserted for data separation and formatting according to a configuration file for the cognitive alert system110. The message processor120then extracts data entities required for operations of the cognitive alert system110and the system alert199. The data entities extracted by the message processor120can be configured individually per account holder, based on the areas of interest on which the user101wishes to be reported. The message processor120generates distinct messages based on the extracted data entities and stores the distinct messages in the distinct message table130with a newly created message index, also referred to as an identifier, for each of the distinct messages. In certain embodiments of the present invention, the message processor120creates a dataframe (DF) to organize data entities and other information in the distinct messages. In this specification, the term “dataframe” indicates a data structure based on Apache Spark DataFrame, which is the most common structured API that represents a table of data with rows and columns, which can be partitioned across many data centers to store a very large amount of data, often referred to as big data, and to process the big data efficiently. In the same embodiments of the present invention, the message processor120removes special characters and duplicates from the log messages and converts timestamps in the log messages to a date-time form for the Python API for Apache Spark to easily integrate the Resilient Distributed Dataset (RDD) of Spark into Python programs. In the same embodiments of the present invention, the message processor120creates columns additional to the DF by extracting data entities such as Host Name, Internet Protocol Address (IP), Job Name, Application Name, etc., relevant to the log message. The message processor120then creates a column “CleanMessage” by keeping the text portion of the log messages that is constant, as the numeric and/or alpha numeric portions of the log message have been extracted as separate data entities and respective columns have been previously created. As all variable portions of the log messages have been separated, the text of the “CleanMessage” column can repeatedly appear in many log messages. Accordingly, the message processor120takes the distinct message texts from the “CleanMessage” column and creates another dataframe (DF1) of unique messages, which will be a data structure for the distinct message table130of the cognitive alert system110. The message processor120adds a new column “ID” in the DF1as an identifier for each distinct message in the distinct message table130. The sentiment analyzer140cognitively analyzes the distinct messages stored in the distinct message table130and determines a sentiment value expressed in each of the distinct messages. The sentiment analyzer140updates the respective log messages in the distinct message table130with the newly assigned sentiment values associated with respective log messages in the distinct message table130. The sentiment analyzer140creates and utilizes a log message lexicon indicating how a particular word would have a certain sentiment value when used in log messages. The sentiment analyzer140also weighs each word in the log message lexicon based on how critical an issue is being addressed by the word in the log message lexicon.
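Returning to the message processor's dataframe handling described above, the PySpark snippet below is a rough sketch: it extracts a host name and IP address into their own columns, strips the variable portions to form a CleanMessage column, and builds a second dataframe of distinct messages with an ID column before merging it back. It assumes a local Apache Spark installation, and the regular expressions and sample data are illustrative assumptions.

```python
# Sketch: build the log dataframe DF with extracted entities and a CleanMessage
# column, then the distinct message dataframe DF1 with an ID per unique text.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cognitive-alert-sketch").getOrCreate()

df = spark.createDataFrame(
    [("2023-01-02 03:04:05", "Disk usage 91% on host abc123 10.0.0.7"),
     ("2023-01-02 03:05:10", "Disk usage 95% on host abc123 10.0.0.7")],
    ["timestamp", "message"])

df = (df
      .withColumn("HostName", F.regexp_extract("message", r"host (\S+)", 1))
      .withColumn("IP", F.regexp_extract("message", r"(\d+\.\d+\.\d+\.\d+)", 1))
      # Keep only the constant text portion: drop numeric/alphanumeric variables.
      .withColumn("CleanMessage",
                  F.trim(F.regexp_replace("message", r"\d+(\.\d+)*%?", ""))))

df1 = (df.select("CleanMessage").distinct()
         .withColumn("ID", F.monotonically_increasing_id()))   # distinct message table

df_with_ids = df.join(df1, on="CleanMessage", how="left")       # merge DF and DF1
```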
The log message lexicon would be continuously readjusted and updated with the weights and a custom sentiment when certain user feedback affects the sentiment previously associated with a word in the log message lexicon. Detailed operations of the sentiment analyzer140are presented inFIG.2and corresponding description. In certain embodiments of the present invention, the sentiment analyzer140utilizes a method described inFIG.2in combination with currently available sentiment analysis tools. For example, the sentiment analyzer140may employ SentiWordNet which is an opinion lexicon derived from the WordNet database where each term is associated with numerical scores indicating positive and negative sentiment information often used for sentiment classification in NLP along with the log message lexicon. Accordingly, ordinary words in log messages can be readily scored for a sentiment and classified based on SentiWordNet, while for a sentiment of any context specific words in the log messages a SentiWordNet score would be assessed and/or weighted based on a context of a log message for accuracy. In certain embodiments of the present invention, a sentiment value associated with a distinct message is preconfigured to one of binary values {Positive, Negative}, where the sentiment value of the distinct message is assigned to “Positive” if the distinct message represents a positive event or activity in the computing platform, and if the distinct message represents a negative event or activity in the computing system, then the sentiment value of the distinct message would be assigned to “Negative”. The determination on negative/positive activities and events is trained into a machine learning model based on historical logs and user responses corresponding to each of the historical logs. The message classification model150is a hybrid classification model utilized for the distinct log messages in the distinct message table130. In certain embodiments of the present invention, the message classification model150utilizes a classical part-of-speech method and a similarity method to classify the distinct log messages stored in the distinct message table130. The categories for the message classification model150can be configured with a priority referring to a level of urgency of the service/issue on a function of the computing platform, a class referring to a relevant functionality and/or resource(s) addressed by the log message, a frequency of the log message within a predefined time window if not addressed in real time, etc. In this specification, the terms “class”, “issue”, or “label” of a log message indicate a functionality of the computing platform that is being addressed by the log message. In the same embodiments of the present invention in which the message processor120creates the two (2) dataframes DF and DF1as above, the sentiment analyzer140creates a new column “Sentiment” in DF1and assigns a sentiment value, one of {“Positive”, “Negative”} to each distinct message stored in the distinct message table130. The sentiment analyzer140determines the sentiment value respective to each of the distinct messages stored in the distinct message table130by applying a hybrid method for sentiment value assessment described in block230ofFIG.2below. The message classification model150creates an additional column “Category” in DF1and instantiates it with a label phrase combining negative words in the distinct message.
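The part-of-speech side of the hybrid classification can be sketched as follows: tag the clean message, group immediately consecutive nouns into noun phrases, and take the noun phrase with the most characters as the class label, in line with the part-of-speech analysis described later in this description. The sketch assumes NLTK with its tokenizer and tagger data installed; it is an illustration of the idea, not the message classification model150itself.

```python
# Sketch of a custom part-of-speech (POS) classification: collect nouns, join
# immediately consecutive nouns into noun phrases, and use the longest noun
# phrase (most characters) as the class label for the log message.
# Assumes NLTK plus the 'punkt' and 'averaged_perceptron_tagger' data.
import nltk

def pos_class_label(clean_message: str) -> str:
    tokens = nltk.word_tokenize(clean_message)
    tagged = nltk.pos_tag(tokens)
    phrases, current = [], []
    for word, tag in tagged:
        if tag.startswith("NN"):          # noun or noun variant
            current.append(word)
        elif current:
            phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return max(phrases, key=len) if phrases else "unclassified"

print(pos_class_label("High disk space utilization on the server"))
```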
The terms “category”, “label”, “issue”, and “class” are used interchangeably to address the problem that is being alerted with the distinct message. In the same embodiments of the present invention, the cognitive alert system110utilizes the message classification model150that employs a custom part-of-speech (POS) classification method in combination with a similarity-based classification. The cognitive alert system110then merges DF and DF1to add columns “ID” for identifying distinct messages and “Sentiment” as assessed to the log message dataframe DF. The cognitive alert system110filters input log messages with a “Negative” sentiment value from the log message dataframe DF and sends them to the cognitive alert curator160for alerting. The cognitive alert curator160examines the sentiment value and the class of the log message and determines a type of alert for the log message based on the class and the sentiment value associated with the log message. Detailed operations of the cognitive alert curator160are presented inFIG.3and corresponding description. As shown inFIG.3, the cognitive alert curator160utilizes three (3) types of qualifications for alerts based on a combination of a priority, a class, and a sentiment value associated with the log message. The cognitive alert curator160defines the types of alert qualification as {Real-time alert, Frequency based alert}, where the Frequency based alert type has two (2) subcategories of {Static window, Dynamic window}. In certain embodiments of the present invention, the cognitive alert curator160implements two (2) sets of key performance indicators (KPIs) from negative intent logs based on urgency of alerting the user101. A first set of KPIs is utilized to alert for a type of negative intent logs that needs attention by the user101immediately. Accordingly, a real time alert will be reported with the first set of KPIs. Exemplary alerts based on the first set of KPIs are presented inFIG.4and corresponding description. The real time alerts are reported in real time as individual log messages with modified priorities according to respective error codes appearing in the negative intent logs such as “Info”, “Warn”, and “Error”. In the same embodiment of the present invention as above, the cognitive alert curator160configures a second set of KPIs to report a group of negative intent logs that would be alerted based on a frequency of each of the negative intent logs. The cognitive alert curator160determines a time window, in a range from a few to several minutes, and counts how many of the same negative intent logs are generated. Based on a preconfigured threshold count of the log messages, the cognitive alert curator160qualifies the negative intent logs for alerting if the negative intent logs had been generated more times than the preconfigured threshold count. In the same embodiments of the present invention, the cognitive alert curator160determines how long the time window would be for new log observation, if the time window should be fixed or should be adjusted such that some log messages would be observed for more time than other log messages, and if the preconfigured threshold count would be fixed or adjusted. For example, the cognitive alert curator160can be configured to qualify a log message arriving three (3) times within a one (1) minute static time window as an alert.
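The frequency-based qualification just described, for example three identical messages within a one-minute static window, can be sketched as a simple per-message counter over a time window. The window length and threshold below are illustrative and configurable, as the description notes; the helper names are assumptions.

```python
# Sketch of frequency-based alert qualification: count identical negative
# intent log messages (by distinct-message ID) inside a time window and
# qualify an alert once the count reaches the configured threshold.
from collections import defaultdict, deque

WINDOW_SECONDS = 60        # e.g. a one-minute static window
THRESHOLD_COUNT = 3        # e.g. qualify after three identical messages

arrivals: dict[int, deque] = defaultdict(deque)   # message ID -> arrival times

def observe(message_id: int, arrival_time: float) -> bool:
    """Record an arrival; return True when the message qualifies as an alert."""
    times = arrivals[message_id]
    times.append(arrival_time)
    while times and arrival_time - times[0] > WINDOW_SECONDS:
        times.popleft()                           # drop arrivals outside the window
    return len(times) >= THRESHOLD_COUNT

# Example: the same message ID arriving three times within a minute qualifies.
print([observe(42, t) for t in (0.0, 20.0, 50.0)])   # [False, False, True]
```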
In the same embodiments of the present invention, the cognitive alert curator160qualifies grouped messages in related issues using statistical analysis-based baselines from machine learning/deep learning anomaly detection models. Embodiments of the present invention improve accuracy in sentiment values and overcome drawbacks in conventional attempts to analyze sentiment of log messages caused by literature-based lexicon or other generic vocabulary common in conventional NLP by creating and adopting a custom lexicon specific to log messages. Conventional sentiment analysis based on generic vocabulary often produces an inaccurate sentiment value for a log message, as the context and usage of words in log messages are different from regular literature. For example, a phrase “high utilization” may be analyzed as having a positive sentiment value in regular literature, but “high utilization” in a log message indicates a low availability of the same resource and thus a negative sentiment value should be assigned for the log message having “high utilization”. Conventionally, lexicon-based sentiment analysis and prelabelled data-based sentiment analysis are widely used for sentiment analysis of log messages but the sentiment values produced for log messages are not accurate enough to be usable. On the other hand, simply performing big data research on the log data does not produce many results in identifying patterns of abnormal system activities or in diagnosing root causes of certain outages, because such research does not employ sentiment analysis specific to log messages. Further, conventional log analysis methods often focus on filtering log messages having high-risk terminologies such as ‘error’, ‘information’, and ‘warning’ for alerts, but do not analyze the log messages for information on subtle abnormalities in system operations and root causes of certain outages that could be prevented. Embodiments of the present invention facilitate fully automated analysis of log messages in computing platforms based on cognitive sentiment analysis specific to log messages. The embodiments of the present invention cognitively curate types of alerts to improve efficiency in a recovery from a service outage reported in a critical alert in real-time. By use of the cognitive curation of alert types, the embodiments of the present invention also facilitate proactive remedies for any chronic issues with a certain performance degradation as reported in non-critical alerts, which improves efficiency of the computing platform over time. The embodiments of the present invention improve visibility and readability of log messages greatly according to respective priorities and related issues, a task which had been performed manually on a small scale or not performed at all without the cognitive alert system110. FIG.2depicts a flowchart of operations performed by the sentiment analyzer140ofFIG.1, in accordance with one or more embodiments set forth herein. In block210, the sentiment analyzer140creates a log data specific lexicon based on log data samples gathered across a plurality of platforms of a compatible architecture and capacity. The sentiment analyzer140does not use any of the natural language literature often used as the basis of a lexicon, such as generic social network comments, except technical discussion websites, in building the log data specific lexicon. Then, the sentiment analyzer140proceeds with block220.
In block220, the sentiment analyzer140calculates a weighted sentiment score for each word in the log data specific lexicon generated from block210. In this specification, the terms “weighted sentiment score”, “weight”, or “weighted score” are used interchangeably. The log data specific lexicon includes all English words appearing in the log data samples. The sentiment analyzer140assigns each word in the log data specific lexicon with weight scores for both positive and negative polarities. These weights are derived for every word based on meanings and intents of the word in a context of log messages. As noted, the same word may mean something different in the context of the log messages than in natural language contexts. For example, “utilization” is a word that in general has a positive or neutral sentiment in the natural language world, but log messages such as “High utilization of memory leads slow response in the process” and “High disk space utilization on the server abc123” indicate a lack of available resources or performance degradation caused thereby. Accordingly, “utilization” in the context of log messages will have a negative weight. Then, the sentiment analyzer140proceeds with block230. In block230, the sentiment analyzer140obtains a log message to analyze and assigns a sentiment value to the log message as a sum of weighted sentiment scores corresponding to words in the log message, as well as classifies the log message by use of the message classification model150. The sentiment analyzer140assigns a class, also referred to as a label, an issue, or an intent, to the log message that specifies which functionality of the computing platform is being addressed by the log message, by use of the message classification model150. In certain embodiments of the present invention, the sentiment analyzer140adds weights of all negative intent words in the log message and assigns a positive sentiment value if the sum of negative word weights is equal to zero (0), that is, if no negative intent words are present in the log message. Otherwise, the sentiment analyzer140assigns a negative sentiment value to the log message. The sentiment analyzer140determines how critical the log message is based on the magnitude of the sum of negative weights for the log message. If the sentiment analyzer140discovers there are multiple sentences in the log message, the sentiment analyzer140splits the log message, classifies every sentence, and assigns respective sentiment values, which results in finding the exact issue more efficiently. Then, the sentiment analyzer140proceeds with block240. In block240, the sentiment analyzer140produces the log message with the sentiment value as assigned from block230to the cognitive alert curator160. See description ofFIG.1or block250below regarding subsequent workflow on the log message by the cognitive alert system110. Then, the sentiment analyzer140proceeds with block250. In block250, the sentiment analyzer140determines whether or not any user feedback had been received regarding a reduced accuracy of the sentiment value of the log message. Between block240and block250, a type of alert for the log message is determined by the cognitive alert curator160, then an alert corresponding to the determined type is generated by the alert generator170on the log message, and finally presented to the user101as the system alert199.
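The scoring in blocks220and230can be illustrated with a toy lexicon: each word carries a negative-polarity weight in the log-message context, the weights of the words in a message are summed, and the message is marked Positive only when the sum of negative weights is zero, with a larger sum indicating a more critical message; multi-sentence messages are split and scored per sentence. The lexicon entries and weights below are invented for illustration and are not the lexicon built in block210.

```python
# Sketch of blocks 220-230: a log-data-specific lexicon with per-word negative
# weights, a sentiment value derived from the sum of negative weights, and
# per-sentence handling for multi-sentence log messages. Weights are invented.
LOG_LEXICON_NEGATIVE_WEIGHT = {
    "utilization": 0.6,     # negative in log context (lack of available resource)
    "high": 0.3,
    "error": 0.9,
    "failed": 0.9,
    "slow": 0.5,
}

def sentiment_of(sentence: str) -> tuple[str, float]:
    words = sentence.lower().replace(".", " ").split()
    negative_sum = sum(LOG_LEXICON_NEGATIVE_WEIGHT.get(w, 0.0) for w in words)
    value = "Positive" if negative_sum == 0 else "Negative"
    return value, negative_sum            # the sum also indicates criticality

def analyze(log_message: str) -> list[tuple[str, str, float]]:
    """Split a multi-sentence message and score each sentence separately."""
    sentences = [s.strip() for s in log_message.split(".") if s.strip()]
    return [(s, *sentiment_of(s)) for s in sentences]

print(analyze("High utilization of memory leads to slow response. Job completed."))
```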
If the sentiment analyzer140had received some user feedback commenting that the accuracy of the sentiment value of the log message had been reduced, then the sentiment analyzer140loops back to block220to recalculate the sentiment scores to respective words in the log data specific lexicon such that the accuracy of the sentiment value would be improved, and the weighted scores are readjusted across the log data specific lexicon. If the sentiment analyzer140had not received user feedback regarding reduced accuracy at all, then the sentiment analyzer140loops back to block230to process a next log message. FIG.3depicts a flowchart of operations performed by the cognitive alert curator160ofFIG.1, in accordance with one or more embodiments set forth herein. In block310, the cognitive alert curator160obtains the log message with the sentiment value and the class as produced by the sentiment analyzer140at block230inFIG.2. Then, the cognitive alert curator160proceeds with block320. In block320, the cognitive alert curator160determines an alert type of the log message based on instances of priority, class, and sentiment value of the log message from block310. As noted above, in certain embodiments of the present invention, a size of the weighted negative score would indicate how critical an issue is addressed by the log message. In the same embodiment as above where there are three alert types, the cognitive alert curator160determines one of {Real-time, Static window, Dynamic window} as the alert type for the log message. If the cognitive alert curator160determines that the alert type for the log message is “Real-time” as being critical and to report immediately, then the cognitive alert curator160proceeds with block330. If the cognitive alert curator160determines that the alert type for the log message is “Static window” as being less than critical and to observe for a static time window on how many more identical log messages would arrive, then the cognitive alert curator160proceeds with block340. If the cognitive alert curator160determines that the alert type for the log message is “Dynamic window” as being less than critical and to observe for a dynamic time window on how many more identical log messages would arrive, then the cognitive alert curator160proceeds with block350. In block330, the cognitive alert curator160recalculates the priority of the log based on status code as being curated as critical for immediate alert. Then the cognitive alert curator160terminates and the log message proceeds to the alert generator170. In block340, the cognitive alert curator160first observes how many more identical log messages would arrive for the static time window. If the cognitive alert curator160observes a number of the identical log messages greater than or equal to a threshold count for alerting within the static time window, then the cognitive alert curator160forwards the log message to the alert generator170and terminates. If the cognitive alert curator160observes a number of the identical log messages less than the threshold count for alerting within the static time window, then the cognitive alert curator160terminates without forwarding the log message to the alert generator170. In block350, the cognitive alert curator160first observes how many more identical log messages would arrive for the dynamic time window. 
If the cognitive alert curator160observes a number of the identical log messages greater than or equal to a threshold count for alerting within the dynamic time window, then the cognitive alert curator160forwards the log message to the alert generator170and terminates. If the cognitive alert curator160observes a number of the identical log messages less than the threshold count for alerting within the dynamic time window, then the cognitive alert curator160terminates without forwarding the log message to the alert generator170. The alert generator170creates alerts for all log messages that had been qualified from blocks330,340, and350. In certain embodiments of the present invention same as above where the message processor120takes inputs of the indexed log files107from respective Elasticsearch indices or Apache Kafka topics, the alert generator170sends all qualified logs that had been produced as the system alert199to Apache Kafka topic or stores in Elasticsearch index. FIG.4depicts exemplary alerts400curated for real-time reporting via block330ofFIG.3, in accordance with one or more embodiments set forth herein. The alert generator170generates the exemplary alerts400subsequent to block330by the cognitive alert curator160, determining the alert as of the Real-time type from block320based on a preconfigured set of KPIs including columns of Index401, Timestamp402, CleanMessage403, Final_class404, ID405, Model_Type406, Sentiment407, and Priority408. Each row in the exemplary alerts400indicates a log message that is being alerted. Values in column Index401indicate an index of the log message. Values in column Timestamp402indicate a time stamp of the log message, in a converted form for Python API. Values in column CleanMessage403indicate message text without any variables as stored in the distinct message table130. Values in column Final_class404indicate an issue label of the log message as classified by the sentiment analyzer140using the message classification model150. Values in column ID405indicate a unique identifier of a distinct message from the distinct message table130. Values in column Model_Type406indicate a name of a model by which the values in column Final_class is generated. Value “sim model” for the column Model_Type406indicates a similarity model for classification, and value “POS_model” for the column Model_Type406indicates a parts-of-speech model specific to the log data specific lexicon. Values in column Sentiment407indicate respective sentiment values of the log message, which are all “Negative” in the exemplary alerts400as no log message with Positive sentiment value would be alerted to the user101. Values in column Priority408indicate a priority configured/assessed for the log message. FIG.5depicts exemplary log messages500curated for reporting per observed frequency via both blocks340and350ofFIG.3, in accordance with one or more embodiments set forth herein. As noted forFIG.4, the alert generator170generates the system alert199based on some of the exemplary log messages500subsequent to block340or350by the cognitive alert curator160, determining the log message had qualified as the system alert199because the log message had been generated more times within the respective time windows than the preconfigured threshold count for alerting. The second preconfigured set of KPIs noted above inFIG.1includes columns of Index501, year502, month503, day504, hour505, window (m)506, ID507, Sentiment508, Priority509, CleanMessage510, count511and Alert Filter512. 
As inFIG.4, each row in the exemplary log messages500indicates a log message. The exemplary log messages500show all candidate log messages but only the log messages of Index501values “2”, “3”, “6”, and “7” would be sent to the user101as the system alert199as column Alert Filter512values “Qualify” respectively indicate, based on qualifying threshold values and values in column window(m)506and column count511for respective log messages. As noted forFIG.4, values in column Index501indicate an index of the log message, values in column ID507indicate a unique identifier of a distinct message from the distinct message table130, values in column Priority509indicate a priority configured/assessed for the log message, values in column CleanMessage510indicate message text without any variables as stored in the distinct message table130. Certain embodiments of the present invention improve accuracy of sentiment analysis in log messages by use of an adaptive log data specific lexicon and a customized sentiment analysis method by weighing the conventional sentiment score of each word based on the meaning and purpose of the log message. Certain embodiments of the present invention classify log messages more accurately than conventional NLP classification based on a message classification model optimized for identifying issues from clean texts in log messages. Certain embodiments of the present invention provide the cognitive alert curator which determines a type of qualification based on priority and class of the log message. Certain embodiments of the present invention provide the system alert in real-time based on preconfigured class and priority of a log message to thereby facilitate resolution of an issue addressed in the log message. Certain embodiments of the present invention provide a frequency-based system alert by counting a number of occurrences of a same log message within a time window and qualify a certain log message as the system alert when the log message has occurred more than a threshold count during the time window, to thereby facilitate alerting the user on a consistent and frequently reported issue addressed by the log messages. Certain embodiments of the present invention may be implemented by use of a cloud platform/data center/server farm in various types including a Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), Database-as-a-Service (DBaaS), and combinations thereof based on purpose of the cognitive alert system. The cognitive alert system on any hardware platform can be offered for and delivered to any service providers/business entities/vendors of software applications from any location in the world in need of more efficient system administration, discovery of root causes of any problems addressed in the log messages, and resolution of issues that are often overlooked by the user due to low visibility and readability of the log messages.
Embodiments of the present invention present a computer implemented method including, for instance: creating, by one or more processors, a log data specific lexicon based on log data samples, each word in the log data specific lexicon corresponding to a weighted sentiment score with a binary polarity; obtaining, by the one or more processors, a log message from a computing platform; assigning, by the one or more processors, a sentiment value to the log message based on respective weighted sentiment scores by the log data specific lexicon corresponding to words appearing in the log message; classifying, by the one or more processors, the log message for a class indicating an issue the log message addresses; determining, by the one or more processors, an alert type for the log message based on the sentiment value, the class, and a priority of the log message, where the alert type is preconfigured with a set of alert type values of varying risk levels; and producing, by the one or more processors, a system alert to a user according to the alert type for the log message, where the system alert includes a predefined set of key performance indicators corresponding to the alert type to thereby inform the user on the issue addressed by the log message. Embodiments of the present invention present a computer implemented method also including, for instance: processing the log message, prior to the assigning, by separating text of the log message as resulting from removing special characters of the log message and extracting parameters and values of the log messages as respective data entities; allocating an identifier for a unique text of the log message; and storing the unique text of the log message in a distinct message table if the unique text of the log message is not already present. Embodiments of the present invention present a computer implemented method also including, for instance: determining respective sentiment scores of the words appearing in the log message by use of a sentiment lexicon available in natural language processing as being applied to the log data specific lexicon; weighing the respective sentiment scores of the words in the log message based on a meaning of each of the words in log message contexts; and determining a sum of all of the weighted sentiment scores respective to each of the words in the log message as the sentiment value to the log message. Embodiments of the present invention present a computer implemented method also including, for instance: determining, subsequent to the determining the alert type, a set of key performance indicators corresponding to the alert type, wherein the set of alert type values comprises Real time, wherein if the alert type is Real time then the log message is immediately alerted to the user with the key performance indicators comprising a timestamp, a text-only message, the class, an identifier of a distinct message, the sentiment value, and the priority of the log message. 
Embodiments of the present invention present a computer implemented method also including, for instance: determining, subsequent to the determining the alert type, a set of key performance indicators corresponding to the alert type, where the set of alert type values comprises Static time window and Dynamic time window, where if the alert type is Static time window then the log message is alerted only when the log message is repeated more times than a threshold count of the log message within a predefined static time window, and where the log message is alerted to the user with the key performance indicators comprising an arrival time of the log message, a time window for the log message to be repeated, the sentiment, the priority, a text-only message as identified by a distinct message, the sentiment value, and the priority of the log message, a count of the log message indicating how many times the log message has been repeated within the time window, and an alert filter indicating whether or not the count of the log message is greater than a threshold count to qualify as the alert. Embodiments of the present invention present a computer implemented method also including, for instance: performing a part-of-speech analysis on the log message by use of natural language processing tools; collecting nouns of the log message resulting from the part-of-speech analysis and forming noun phrases in the log message with any immediately consecutive nouns in the log message; and determining a noun phrase with the greatest number of characters as the class of the log message. Embodiments of the present invention present a computer implemented method also including, for instance: obtaining a user feedback commenting on a reduced accuracy of the weighted sentiment score for a group of words in the log data specific lexicon; and reassessing the weighted sentiment score with the binary polarity for each word in the group of words in the log data specific lexicon based on a new log data samples. FIGS.6-8depict various aspects of computing, including a cloud computing system, in accordance with one or more aspects set forth herein. It is to be understood that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein are not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models. Characteristics are as Follows: On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider. Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs). 
Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time. Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service. Service Models are as Follows: Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations. Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls). Deployment Models are as Follows: Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises. Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises. Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services. 
Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds). A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure that includes a network of interconnected nodes. Referring now toFIG.6, a schematic of an example of a computer system/cloud computing node is shown. Cloud computing node10is only one example of a suitable cloud computing node and is not intended to suggest any limitation as to the scope of use or functionality of embodiments of the invention described herein. Regardless, cloud computing node10is capable of being implemented and/or performing any of the functionality set forth hereinabove. In cloud computing node10there is a computer system12, which is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with computer system12include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like. Computer system12may be described in the general context of computer system-executable instructions, such as program processes, being executed by a computer system. Generally, program processes may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. Computer system12may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program processes may be located in both local and remote computer system storage media including memory storage devices. As shown inFIG.6, computer system12in cloud computing node10is shown in the form of a general-purpose computing device. The components of computer system12may include, but are not limited to, one or more processors16, a system memory28, and a bus18that couples various system components including system memory28to processor16. Bus18represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus. Computer system12typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system12, and it includes both volatile and non-volatile media, removable and non-removable media.
System memory28can include computer system readable media in the form of volatile memory, such as random access memory (RAM)30and/or cache memory32. Computer system12may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system34can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile memory device (e.g., a “thumb drive”, “external hard drive”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to bus18by one or more data media interfaces. As will be further depicted and described below, memory28may include at least one program product having a set (e.g., at least one) of program processes that are configured to carry out the functions of embodiments of the invention. One or more program40, having a set (at least one) of program processes42, may be stored in memory28by way of example, and not limitation, as well as an operating system, one or more application programs, other program processes, and program data. Each of the operating system, one or more application programs, other program processes, and program data or some combination thereof, may include an implementation of the cognitive alert system110ofFIG.1. Program processes42, as in the cognitive alert system110generally carry out the functions and/or methodologies of embodiments of the invention as described herein. Computer system12may also communicate with one or more external devices14such as a keyboard, a pointing device, a display24, etc.; one or more devices that enable a user to interact with computer system12; and/or any devices (e.g., network card, modem, etc.) that enable computer system12to communicate with one or more other computing devices. Such communication can occur via Input/Output (I/O) interfaces22. Still yet, computer system12can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter20. As depicted, network adapter20communicates with the other components of computer system12via bus18. In addition to or in place of having external devices14and the display24, which can be configured to provide user interface functionality, computing node10in one embodiment can include another display25connected to bus18. In one embodiment, the display25can be configured as a touch screen render and can be configured to provide user interface functionality, e.g. can facilitate virtual keyboard functionality and input of total data. Computer system12in one embodiment can also include one or more sensor device27connected to bus18. One or more sensor device27can alternatively or in addition be connected through I/O interface(s)22. The one or more sensor device27can include a Global Positioning Sensor (GPS) device in one embodiment and can be configured to provide a location of computing node10. In one embodiment, the one or more sensor device27can alternatively or in addition include, e.g., one or more of a camera, a gyroscope, a temperature sensor, a humidity sensor, a pulse sensor, a blood pressure (BP) sensor or an audio input device. 
It should be understood that although not shown, other hardware and/or software components could be used in conjunction with computer system12. Examples, include, but are not limited to: microcode, device drivers, redundant processors, external disk drive arrays, Redundant Array of Independent/Inexpensive Disks (RAID) systems, tape drives, and data archival storage systems, etc. Referring now toFIG.7, illustrative cloud computing environment50is depicted. As shown, cloud computing environment50includes one or more cloud computing nodes10running the cognitive alert system110with which local computing devices used by cloud consumers, such as, for example, personal digital assistant (PDA) or cellular telephone54A, desktop computer54B, laptop computer54C, and/or automobile computer system54N may communicate. Nodes10may communicate with one another. They may be grouped (not shown) physically or virtually, in one or more networks, such as Private, Community, Public, or Hybrid clouds as described hereinabove, or a combination thereof. This allows cloud computing environment50to offer infrastructure, platforms and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. It is understood that the types of computing devices54A-N shown inFIG.7are intended to be illustrative only and that computing nodes10and cloud computing environment50can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser). Referring now toFIG.8, a set of functional abstraction layers provided by cloud computing environment50(FIG.7) is shown. It should be understood in advance that the components, layers, and functions shown inFIG.8are intended to be illustrative only and embodiments of the invention are not limited thereto. As depicted, the following layers and corresponding functions are provided: Hardware and software layer60includes hardware and software components. Examples of hardware components include: mainframes61; RISC (Reduced Instruction Set Computer) architecture based servers62; servers63; blade servers64; storage devices65; and networks and networking components66. In some embodiments, software components include network application server software67and database software68. Virtualization layer70provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers71; virtual storage72; virtual networks73, including virtual private networks; virtual applications and operating systems74; and virtual clients75. In one example, management layer80may provide the functions described below. Resource provisioning81provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and Pricing82provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may include application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal83provides access to the cloud computing environment for consumers and system administrators. Service level management84provides cloud computing resource allocation and management such that required service levels are met. 
Service Level Agreement (SLA) planning and fulfillment85provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA. Workloads layer90provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation91; software development and lifecycle management92; virtual classroom education delivery93; data analytics processing94; transaction processing95; and processing components for the cognitive alert system110including the sentiment analyzer and the cognitive alert curator96, as described herein. The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. 
Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. 
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprise” (and any form of comprise, such as “comprises” and “comprising”), “have” (and any form of have, such as “has” and “having”), “include” (and any form of include, such as “includes” and “including”), and “contain” (and any form of contain, such as “contains” and “containing”) are open-ended linking verbs. As a result, a method or device that “comprises,” “has,” “includes,” or “contains” one or more steps or elements possesses those one or more steps or elements, but is not limited to possessing only those one or more steps or elements. Likewise, a step of a method or an element of a device that “comprises,” “has,” “includes,” or “contains” one or more features possesses those one or more features, but is not limited to possessing only those one or more features. Furthermore, a device or structure that is configured in a certain way is configured in at least that way, but may also be configured in ways that are not listed. The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below, if any, are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description set forth herein has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The embodiment was chosen and described in order to best explain the principles of one or more aspects set forth herein and the practical application, and to enable others of ordinary skill in the art to understand one or more aspects as described herein for various embodiments with various modifications as are suited to the particular use contemplated.
60,811
11860757
DETAILED DESCRIPTION Various embodiments are described herein to various apparatuses, systems, and/or methods. Numerous specific details are set forth to provide a thorough understanding of the overall structure, function, manufacture, and use of the embodiments as described in the specification and illustrated in the accompanying drawings. It will be understood by those skilled in the art, however, that the embodiments may be practiced without such specific details. In other instances, well-known operations, components, and elements have not been described in detail so as not to obscure the embodiments described in the specification. Those of ordinary skill in the art will understand that the embodiments described and illustrated herein are non-limiting examples, and thus it can be appreciated that the specific structural and functional details disclosed herein may be representative and do not necessarily limit the scope of the embodiments, the scope of which is defined solely by the appended claims. Reference throughout the specification to “various embodiments,” “some embodiments,” “one embodiment,” or “an embodiment,” or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases “in various embodiments,” “in some embodiments,” “in one embodiment,” or “in an embodiment,” or the like, in places throughout the specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Thus, the particular features, structures, or characteristics illustrated or described in connection with one embodiment may be combined, in whole or in part, with the features, structures, or characteristics of one or more other embodiments without limitation given that such combination is not illogical or non-functional. Overview. Embodiments of the instant disclosure generally facilitate determining the performance impact of a selected change in one or more managed computer systems, and include three main components: (i) a first component that is configured to track changes to individual managed computer systems and consolidate the resulting change data into a single, centralized master repository—this first component may involve functionality at the managed computer system level in addition to functionality at a master computer system level; (ii) a second component that is configured to aggregate performance measurements (values), for example daily, across the plurality of managed computer systems and to consolidate the respective performance data into the master repository—this second component also may involve functionality at the managed computer system level in addition to functionality at the master computer system level; and (iii) a third component that is configured to generate an output (e.g., before-change versus after-change) to allow a user to determine a performance impact, if any, for a user-selected change to one or more of the managed computer systems. Generally, embodiments in accordance with the instant disclosure may include features such as a basic data collection feature that is the same or similar to that of a workspace analytics system with data collection/analysis functionality but extended in a number of areas as described herein.
For example, such a workspace analytics system may be seen by reference to a commercial product available under the trade designation SYSTRACK from Lakeside Software, LLC, Bloomfield Hills, Michigan USA, and as described above and as set forth in the '012 application, the '189 application, and the '630 application, identified herein and wherein the '012 application, the '189 application, and the '630 application are all hereby incorporated by reference as though fully set forth herein. Referring now to the drawings wherein like reference numerals are used to identify identical components in the various views,FIG.1shows an illustrated embodiment of an apparatus10for generating an output for facilitating assessment of a performance impact of a change in one or more managed computer systems. In an embodiment, apparatus10may be deployed in a distributed computing network, designated generally as a network14. The distributed computing network14may include a plurality of computer systems designated121,122, . . .12nand may be of the type installed in one or more domains, such as Domain A, designated14A, Domain B, designated14B, and Domain C, designated14C.FIG.1also shows a communications network11through and over which one or more of the managed computer systems and the master computer system may communicate or otherwise transmit/transfer data/information. The computer systems121,122, . . .12nmay be, but are not required to be, arranged in an hierarchical relationship wherein computer121may be a top-level master node at a headquarters (HQ) or main location and hereinafter sometimes referred to as a master computer system121. In another embodiment, the top-level master node may be accessible via the internet in a cloud computing model. The computer systems122,123,124, . . .12nmay comprise workstation type computer systems or server type computer systems, or may be another type of computing device. It should be understood however that embodiments consistent with the instant disclosure can be applied to a wide range of managed computer systems such as tablet computers, phones (smart phones), and various other dedicated computing hardware. In other words, managed computer system is contemplated to be a broad term. The managed computer systems122,123,124, . . .12nmay hereinafter sometimes be referred to as local or managed computer systems (also sometimes called a “child” computer system or alternatively a managed device). It should be understood that the number of managed computer systems shown inFIG.1is exemplary only and embodiments consistent with the instant disclosure are not so limited and may be applicable to a very large number of managed computer systems. Moreover, the managed computer systems are not limited to physically different hardware. Further descriptions of exemplary managed computer systems122,123,124, . . .12nmay be found elsewhere in this document. Each of the managed computer systems122,123,124, . . .12nmay include (i) a respective data collection agent16(e.g., shown in exemplary fashion for managed computer system12n), (ii) a respective local (or child) managed data store such as databases182,183,184, . . .18n, (iii) a respective condenser module20configured to operate as a condensing agent20, shown in exemplary fashion for managed computer system12n, (iv) a change recording system24, shown in exemplary fashion for managed computer system12n, and (v) a respective performance monitoring (measurement) module or system26, shown in exemplary fashion for managed computer system12n.
The master computer system121may include an optional console module22, an analysis module28, a user interface30, and an output block designated as output block32, which in an embodiment may take the form of various reports and/or a presentation of one or more multi-device views of relevant before-change versus after-change data, such as for example only as illustrated inFIGS.7-9. Such output32that contains before-change versus after-change performance data as a result of a given change in one or more managed computer system(s) allows the user to evaluate a performance impact of such change. The output32may include both displayed format outputs as well as electronic format file(s) containing the above-referenced information. The data collection agent16is configured to collect original data, which includes at least inventory data (e.g., data or information that is generally static in nature), operating data, and modified data, all designated by reference numeral38(best shown inFIG.2), which data is associated with various components and operation of a managed computer system12i(where i is 2 to n) with which the data collection agent16is associated. The collected original data38may further include, without limitation, performance data, configuration data, hardware and software inventory data, as well as information about the relationships between various entities, as well as time-stamp information as described herein. Entities may include user names, application names, system names, domain names, disk names, network interfaces, and other items relevant within the workspace analytics system. The data collection agent16is also configured to record the captured data into a data store such as the local/managed databases182,183,184, . . .18n. Each local database182,183,184, . . .18nmay be associated with a respective one of the managed computer systems122,123,124, . . .12n. In an embodiment, the local databases18i(where i is 2 to n) may reside (i.e., where the data is saved) directly on the managed computer system12iwith which it is associated, although it should be understood that the local databases may also be stored in shared storage, on a server, in a cloud storage facility, and/or in another place. In a typical embodiment, each of the local databases18imay be dedicated to its respective single managed computer system12i(or a single user thereof), although the sharing of such databases may also be possible. The condenser module20is installed on each managed computer system and is configured to operate as a condensing agent20, including taking the original data/inventory data/modified data/other data stored in the local database18ias an input and performing a variety of operations designed to reduce its overall volume to produce condensed data. The condensing agent20is further configured to transmit the condensed data to an aggregation point, which may be the condensed database181. The condensed database181contains condensed data originating from the condensing agents20running on the managed computer systems12i. The master computer system121is configured (e.g., via software executing thereon) to receive such condensed data from the managed computer systems and may be configured to operate on such received condensed data, and store it in the condensed database181. The condensed database181itself may exist partially on storage systems and partially in the memory on the master computer system121.
In embodiments, the information stored in the condensed database181includes, among other things, change records relating to the changes that have occurred on one or more of the managed computer systems122,123,124, . . .12n, as well as various performance values relating to performance metrics or other performance indicators indicative of performance of a managed computer system, wherein the change records and the performance data (i.e., values) are both time-stamped for purposes to be described herein. The change recording system24that is included on each managed computer system12iis configured to evaluate original data38, including the inventory data, that is stored in the local database by the data collection agent16. Since changes to hardware, software, configuration settings and the like may lead to various problems including performance issues in the operation of the managed computer system, the change recording system24is included as a mechanism to identify such changes. In an embodiment, the change recording system24may make a record (e.g., in the form of change records) of at least (i) what changes have occurred in the managed computer system and (ii) when (i.e., the time) such changes were detected, which may take the form of a time-stamp. The time of a change can be used, in an embodiment, to indicate a reference time for determining before-change versus after-change performance impact, as explained in greater detail herein. In a constructed embodiment, the change recording system24may operate on the managed computer system12i, although it would be possible to perform that work externally in other computing devices, in other embodiments. The performance monitoring system26is configured generally to determine a plurality of performance values for one or more performance indicators/metrics/parameters respectively associated with the managed computer systems12i, for predetermined times (e.g., daily). The determined performance values each have a respective time-stamp associated therewith. The analysis module28in an embodiment is associated with the master computer system121. The analysis module28is configured to analyze performance values, for a given change, to determine a before-versus-after format output32to thereby allow a user to determine the performance impact, if any, of the selected change. The analysis is based on the information contained in the condensed database181as an input (i.e., the change records with time-stamps and the performance values also with time-stamps). The user interface30generally allows the user to interact with the analysis module28to allow, among other things, the user to select a particular recorded change that has occurred in the one or more managed computer systems, in order to conduct a performance impact analysis. FIG.2is a diagrammatic and block diagram view showing, in greater detail, an exemplary managed computer system (e.g., managed computer system122) in which aspects of apparatus10may be implemented. In the illustrated embodiment, managed computer system122includes an electronic processor342and an associated memory362. The processor342may include processing capabilities as well as an input/output (I/O) interface through which processor342may receive a plurality of input and generate a plurality of outputs. Memory362is provided for storage of data and instructions or code (i.e., software) for processor342. 
Memory362may include various forms of non-volatile (i.e., non-transitory) memory including flash memory or read only memory (ROM) including various forms of programmable read only memory (e.g., PROM, EPROM, EEPROM) and/or volatile memory including random access memory (RAM) including static random access memory (SRAM), dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM). Memory362stores executable code which when executed by the processor342performs a number of functions as described herein on the managed computer system in the overall apparatus10, including code to perform the functionality of the data collection agent16(as described herein), code to perform the functionality of the change recording system24(as described herein), and code to perform the functionality of the performance monitoring system26(as described herein). These and other functions described herein may together contribute to the overall performance of the method and apparatus for generating an output for facilitating assessment of the performance impact of a selected change in one or more managed computer systems (e.g., a before-change versus after-change display of a performance metric). The managed computer systems122,123,124, . . .12nwill also include an operating system (not illustrated) where the managed computer system may be a Windows-based computer, macOS-based computer, a Linux-based computer, a mobile phone, a kiosk, a special purpose handheld device such as barcode scanner, an Android-based computer, ChromeOS based computer, and IOS based devices, and/or other computing devices without limitation. The managed computer systems122,123,124, . . .12nmay further include application programs suited in function and operation to the purpose(s) for which it has been developed and deployed. The data collection agent16is configured to monitor and collect many types of information, including without limitation, performance data, configuration data, hardware and software inventory data, system health, connectivity including network connectivity, computing resource availability and usage, information about the relationships between various entities, as well as other information. This information is collectively designated at box38. Most types of data collected by agent16are time-stamped when saving to allow time correlation analysis and for other purposes as described herein. It should be understood that the data collection agent16on each managed computer system12igathers information that may be relevant for many different purposes. In an alternative embodiment, the local data store may be present on external storage or another computing system. The data collection agent16may be configured to use various operating system application programming interfaces (APIs), access various existing collections of data of interest, and perform various direct observations of user, application, system activity, peripheral, hardware, software, and other aspects of operation. The data collection agent16may be configured to operate as a software service on the managed computer system, and may be configured to operate continuously. 
The data collection agent16may be further configured to directly record such captured data/observations (i.e., the original data38) in the local data store (i.e., associated local database18), and/or be configured to perform first-level analytics on the data to make decisions about what to save and/or perform various functions, such as mathematical transformations, on the observed/collected data (modified data). The data collection agent16may also be configured (i) to operate as a system service directly on a managed computer system12i; (ii) to operate substantially continuously; and (iii) to operate autonomously with respect to the master computer system and with respect to other managed (monitored) computer systems, which may have advantages for scale, mobility, cloud, and privacy. In an alternative embodiment, the data collection agent16may be configured to operate remotely from the managed (monitored) computer system itself. In an embodiment, the data collection/analysis functionality that is performed by the data collection agent16may be implemented, at least in-part, by one or more components of a workspace analytics system, which may be a commercial product available under the trade designation SYSTRACK from Lakeside Software, LLC, Bloomfield Hills, Michigan USA, as mentioned above, and as described in the '012 application, the '189 application, and the '630 application mentioned above. The condenser module (agent)20runs periodically on the managed computer system. The condenser module20is configured to take the original/inventory data/other data contained in the local database (e.g., database182) as an input and, in an embodiment, perform a variety of operations designed to reduce the volume of the data and/or create mathematically interesting analysis of that data. The condenser module20is further configured to send the condensed data to the condensed database181, which data may include the herein mentioned time-stamped change records and time-stamped performance values. The condenser module20is configured to run on the managed computer system12iat predetermined times (e.g., periodically) and/or at different time schedules for different types of data so as to provide flexibility to the implementation, and/or may be triggered to run upon demand by the master computer system121and/or signaled by another external system. The condensed database181contains data from many managed computer systems—such data being collectively sent to it via the respective condensers20. The condensed database181itself may exist partially on storage systems and partially in memory on the master system121. Tracking changes across plural managed computer systems. As mentioned above, embodiments consistent with the instant disclosure include three main components, the first of which includes functionality configured to track changes to individual managed computer systems and to store or record such tracked/identified changes in the local data store182(initially), and subsequently, to consolidate the recorded changes from the local data store182to the condensed database181. 
The first tracking changes component may make use of, in part, the functionality of the data collection agent16described herein and as disclosed generally in the '630 application that runs on each system being monitored (i.e., the managed computer system) as well as the functionality of the change recording system24, which is configured to store change records including time-stamps (see box40inFIG.2) reflecting any changes made to the managed computer system12iin its local data store18. In an embodiment, the change recording system24is configured to process the original data38(including the inventory data) obtained by data collection agent16from its associated local database18iand determine whether any changes associated with the managed computer system12ihave occurred (and when). For example, change recording system24, as installed on the first managed computer system122, analyzes information about the first computer system122as well as the original/inventory data38stored in the associated managed database182and then determines whether any changes have occurred to the first managed computer system122. This general operation is performed on each of the managed computer systems12i. The change recording system24may detect changes by comparing the current gathered data with previously gathered information and determine differences to thereby detect changes, for example, as either an “upgrade”, “add”, “change” or a “delete” (e.g., of a hardware or software component). As an additional example, the change recording system24can detect changed values and settings by a straightforward comparison of old and new data values. When the system24detects such changes, a change log (change records) may be created and/or maintained/updated in the local database18isuch that it is possible to know what inventory item has changed, what an old setting/value was and what the new setting/value is and when (i.e., the time) this change occurred. In most cases, the time at which the change was detected is used as the time when the change occurred. The inventory data collected by data collection agent16may be evaluated either on a polling basis, where the time interval for when to conduct the check/comparison is configurable (user configurable), or on an event-driven basis. In another embodiment, the change recording system24may be configured to detect changes in resource utilization patterns, health, performance, and other key indicators by using the same notion of comparing old and new values, and then updating the change log when necessary. The instant teachings allow for the leveraging of many types of change information. Changes in utilization patterns can also be detected as changes herein. As an example of change detection, consider the removal of a connected USB device from the system (such as a mouse) and replacement with a different device. This may be detected as two changes, a device removal and a device addition. These would also be recorded in the change log. The changes that can be monitored/recorded include but are not limited to the following types of changes:(1) Adding, removing or updating software packages;(2) Installing operating system patches;(3) Adding or removing hardware;(4) Installing or updating hardware device drivers;(5) Changing device configuration settings;(6) Changing security policies or user profiles; and(7) Adding or removing browser add-ins.
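The comparison-based change detection described above can be pictured with a brief sketch. The snapshot layout, the field names, and the detect_changes helper below are illustrative assumptions rather than the actual implementation of the change recording system; the sketch simply diffs an old and a new inventory snapshot and emits time-stamped change records classified as Add, Remove, or Update.

```python
from datetime import datetime, timezone

def detect_changes(old_inventory, new_inventory, system_id):
    """Diff two inventory snapshots (dicts of item -> value) and return
    time-stamped change records. Illustrative sketch only; the field names
    are assumptions, not the change recording system's actual schema."""
    now = datetime.now(timezone.utc).isoformat()
    changes = []
    for item, value in new_inventory.items():
        if item not in old_inventory:
            changes.append({"system_id": system_id, "change_time": now,
                            "change_type": "Add", "item": item, "new": value})
        elif old_inventory[item] != value:
            changes.append({"system_id": system_id, "change_time": now,
                            "change_type": "Update", "item": item,
                            "old": old_inventory[item], "new": value})
    for item in old_inventory:
        if item not in new_inventory:
            changes.append({"system_id": system_id, "change_time": now,
                            "change_type": "Remove", "item": item,
                            "old": old_inventory[item]})
    return changes

# Example: a USB mouse is swapped for a different device and a driver is
# updated, yielding Remove, Add, and Update records as in the example above.
old = {"usb:mouse-a": "present", "driver:video": "1.0"}
new = {"usb:mouse-b": "present", "driver:video": "1.1"}
print(detect_changes(old, new, system_id="workstation-122"))
```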
The change recording system24, including any support functionality of the data collection agent16, records (i) the details of each change as it happens (or as nearly as can be determined) on the managed computer system, along with (ii) a respective time stamp indicating when the change occurred (or was determined to have occurred). This change data (i.e.,FIG.2—change records40including time-stamps) is stored on the managed computer system (e.g., on an edge device), in the local database store18associated therewith. In an embodiment, at predetermined times, such as once per day, each managed computer system being monitored is configured to send a filtered list of the changes recorded (i.e., change records40) for that day to the condensed database181(master repository) associated with the master computer system121. As noted above, the list of changes may be filtered so as to exclude changes that would be specific (i.e., unique) to that managed computer system, or unlikely to be useful in finding performance issues among a larger or alternatively an entire population of managed computer systems. The filtering may happen at a configuration level, which can be assigned to one or more managed computer systems, and in this regard, there can be a default list of changes that are designated to be ignored (i.e., filtered and therefore not sent) that all the configurations inherit by default. Alternatively, this default list can be modified on a per configuration basis. Some examples of changes that are ignored by default may include but are not limited to: the last password change date, battery-related measurements, and a last write time for a user profile. With continued reference toFIG.2, each change record40sent to the condensed database181may include at least the following:(1) System id (identification);(2) Change time;(3) Change type—Add, Remove (sometimes described as Delete), Update (sometimes described as Change), Upgrade;(4) Change class—including but not limited to—Software, Operating System Patch, Group Policy, Monitor, Disk Drive, Network Interface, Add Ins, Accounts, System Drivers, etc.;(5) Change description; and(6) For Upgrade type, the previous version and new version if available. Tracking daily performance across managed computer systems. The second main component involves monitoring/tracking and aggregating performance measurements (i.e., values) across the managed computer systems at predetermined times, such as on a daily basis. In this regard, the performance monitoring system26is provided and is configured to output performance values (measurements)42including associated time-stamps, as explained below. This functionality may be implemented in part by the base functionality of the workspace analytics system referred to herein and in the incorporated by reference patent applications, namely, the '012 application, the '189 application, and the '630 application, but as extended in functionality as described herein. The performance monitoring system26is configured to constantly monitor the managed computer system's performance and in an embodiment, may make use of, in part, the functionality of the data collection agent16and/or condensing agent20, as seen by reference to the '630 application. Performance data may be, and in an embodiment is, collected and stored at predetermined time intervals, such as every 15 seconds by the data collection agent16in a local data store18on the managed computer system (for example only—on an edge computer system). 
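Before turning to the performance side, the change record contents and the default ignore-list filtering just described can be pictured with the minimal sketch below. The ChangeRecord fields mirror the six items listed above, while the ignored descriptions and the filter_for_upload helper are hypothetical names used only for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChangeRecord:
    # Fields mirror the change record contents listed above.
    system_id: str
    change_time: str          # time stamp of when the change was detected
    change_type: str          # Add, Remove, Update, Upgrade
    change_class: str         # Software, Operating System Patch, Group Policy, ...
    description: str
    previous_version: Optional[str] = None   # populated for Upgrade records
    new_version: Optional[str] = None

# Hypothetical default list of change descriptions ignored (filtered) before
# the daily upload to the condensed database, per the examples above.
DEFAULT_IGNORED = {"last password change date", "battery measurement",
                   "user profile last write time"}

def filter_for_upload(records):
    """Drop records unlikely to be useful across the population of systems."""
    return [r for r in records if r.description not in DEFAULT_IGNORED]

daily = [
    ChangeRecord("ws-122", "2024-01-05T09:00:00Z", "Upgrade", "Software",
                 "Example App", previous_version="2.1", new_version="2.2"),
    ChangeRecord("ws-122", "2024-01-05T09:00:01Z", "Update", "Accounts",
                 "last password change date"),
]
print(filter_for_upload(daily))  # only the software upgrade would be sent
```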
In an embodiment, the performance monitoring system26is configured to produce a daily performance view, which aggregates this detailed performance data into a daily summary and consolidates it to the condensed database181(master repository) as described above with respect to the change records40. For example, this process may calculate the number of minutes during the day that a user was actively using the managed computer system and in an embodiment, may determine over fifty (50) separate performance metrics, aggregated by day. Performance values for these performance metrics may include but are not limited to the following:(1) Average, maximum and standard deviation of CPU usage while active;(2) Average processor queue length;(3) Average, maximum, minimum and standard deviation of memory usage while active;(4) Average, maximum and standard deviation of network usage in both bytes and percentage;(5) Average input/output (I/O) reads and writes, as well as I/O rates;(6) Average disk utilization and disk queue length;(7) Average CPU usage in user vs. kernel mode;(8) Average application load times; In an embodiment, each day this daily performance data is consolidated from individual managed computer systems in combination with data in the local database18to the condensed database181, where it can be joined with the change data described above. FIG.3is a simplified diagrammatic and block diagram view of the master computer system121on which aspects of the overall apparatus10may be implemented. In the illustrated embodiment, the master computer system121includes an electronic processor341and an associated memory361. The processor341may include processing capabilities as well as an input/output (I/O) interface through which processor341may receive a plurality of inputs and generate a plurality of outputs. Memory361is provided for storage of data and instructions or code (i.e., software) for processor341. Memory361may include various forms of non-volatile (i.e., non-transitory) memory including flash memory or read only memory (ROM) including various forms of programmable read only memory (e.g., PROM, EPROM, EEPROM) and/or volatile memory including random access memory (RAM) including static random access memory (SRAM), dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM). In an embodiment, memory361stores executable code which when executed by the processor341performs a number of functions as described herein in the master computer system121as part of the overall apparatus10, for example, code to perform the functionality of the analysis module28as described herein and code to perform the functionality of the user interface30as described herein and produce the described output(s)32. The master computer system121has associated therewith, additionally, software that receives condensed data (i.e., change records40and performance records42) and that operates on such data and further stores it in the condensed database181. The condensed database181contains data from many managed computer systems12i—such data being collectively sent to it via the respective condensers20. The condensed database181itself may exist partially on storage systems and partially in memory on the master computer system121.
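Returning to the daily performance view described above, the roll-up of frequent samples into per-day summary metrics might look roughly like the following sketch. The sample structure, the "active" flag, and the metric names are assumptions for illustration; an actual implementation would cover the much larger set of metrics listed above.

```python
from statistics import mean, pstdev

def daily_summary(samples, system_id, day):
    """Aggregate raw samples (collected e.g. every 15 seconds) into a daily
    performance record. Only samples taken while a user was active feed the
    CPU statistics, mirroring the 'while active' metrics listed above."""
    active = [s for s in samples if s["active"]]
    cpu = [s["cpu_pct"] for s in active] or [0.0]
    return {
        "system_id": system_id,
        "day": day,                                  # calendar day of the record
        "active_minutes": len(active) * 15 / 60.0,   # 15-second sampling assumed
        "cpu_avg": mean(cpu),
        "cpu_max": max(cpu),
        "cpu_stddev": pstdev(cpu),
    }

samples = [{"active": True, "cpu_pct": 20.0},
           {"active": True, "cpu_pct": 35.0},
           {"active": False, "cpu_pct": 2.0}]
print(daily_summary(samples, "ws-122", "2024-01-05"))
```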
The master computer system121will include (1) an operating system (not illustrated) where the master computer system may be a Windows-based computer, macOS-based computer, a Linux-based computer, Android, ChromeOS, or IOS based devices for example only without limitation and (2) may include application program(s) suited in function and operation to the purpose(s) of the user(s) of the master computer system121. The master computer system121may be embodied in a per-system-collection “on premises” solution, or may be embodied in a cloud-delivered solution. In a constructed embodiment, the master computer system comprises a Windows-based computer. The user interface30is configured to allow the user to interact with the analysis module28to obtain a user's selection and generate an output as will be described below, such as a before-versus-after chart44(e.g., on a display) or alternatively as an electronic file. The analysis module28is associated with the master computer system121. The condensed database181(master repository) is configured to maintain the consolidated change data (i.e., change records40) from the managed computer systems that have reported such data for a programmable (predetermined) number of days (a “retention period”), which may typically be around thirty (30) days. It should be understood that the block diagram format forFIG.3is exemplary and for purposes of description. One of ordinary skill in the art will understand that variations are possible and that in constructed embodiments, functionality may be consolidated. For example, the user interface30and the before/after chart44may present as a single UI box that includes the chart and the functionality to interact with the user—with two way communication with the analysis module28. Additionally, in constructed embodiments, code may be transmitted from the master computer system to a client computer system separate from the master computer system where such code is executed in a browser, for example only. FIG.4is a simplified screenshot view of a table46produced via user interface30that includes consolidated change data in tabular form, which includes a first column48(“Action”), which corresponds to the above-described Change type, a second column50(“Class”), which corresponds to the above-described Change class, a third column52(“Change”), which corresponds to the above-described Change id and Change description, fourth and fifth columns54,56(“New Version” and “Previous Version”), which indicate the new version and previous version of the changed item, and a sixth column58(“System Count”), which corresponds to the total number of managed computer systems that have had the specified change made thereto. The table46further includes search functionality as shown by the drop down menu60and input text box62. This consolidated change data, a portion of which is displayed in table46, can then be further processed (e.g., by the analysis module28) such that consolidated change data can be grouped by change type (“Action” column48), change class (“Class” column50) and change description (“Change” column52) to give an overall count of the number of managed computer systems12ithat had a given change at any point during the above-mentioned retention period (e.g., “System Count” column58). The analysis module28is further configured to be able to determine and output as a further view to a user on which day any individual managed computer system had a specific change applied, for example only.
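The grouping that produces the “System Count” view described above can be sketched as follows; the dictionary layout is an illustrative flattening of the change record fields listed earlier, and distinct systems are counted per (change type, change class, change description) group.

```python
from collections import defaultdict

def system_count_view(change_records):
    """Group consolidated change records by (Action, Class, Change) and count
    the distinct managed computer systems that reported each change."""
    groups = defaultdict(set)
    for r in change_records:
        key = (r["change_type"], r["change_class"], r["description"])
        groups[key].add(r["system_id"])
    return [{"Action": k[0], "Class": k[1], "Change": k[2],
             "System Count": len(systems)} for k, systems in groups.items()]

records = [
    {"system_id": "ws-122", "change_type": "Upgrade", "change_class": "Software",
     "description": "Example App 2.1 -> 2.2"},
    {"system_id": "ws-123", "change_type": "Upgrade", "change_class": "Software",
     "description": "Example App 2.1 -> 2.2"},
]
print(system_count_view(records))  # one row with a System Count of 2
```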
The functionality of tracking, consolidation, and processing of the change data associated with the managed computer system(s) (by the analysis module28) may therefore be distributed in the sense that the data collection and first-level processing occurs at the managed computer system level while further processing in regard to the changes of the population of managed computer systems may occur at the master computer system level. FIG.5is a simplified block diagram designated64, illustrating the overall apparatus10from a data flow perspective, for generating an output for facilitating the assessment of the performance impact of a selected change in a managed computer system (or systems). As shown, each managed computer system122,123,124, . . .12ncollects and stores both change data and performance data (daily). The change and performance data is then transmitted by each managed computer system122,123,124, . . .12nto the master computer system121, where it is consolidated to the condensed database181and where it may be joined together with data from all other managed computer systems. The analysis module28is configured to generate an output designated66(corresponding to output32inFIG.1and output44inFIG.3). The output66constitutes a detailed performance impact report for each selected change. The output66also comprises an exemplary before-change versus after-change chart for a selected change, which displays in exemplary fashion the changes in a performance metric for three managed computers having lines68,70, and72associated therewith all plotted on a common timeline. The change time-stamp associated with the selected change acts as a reference time74(i.e., for distinguishing before-change versus after-change, and hereafter may be referred to as the “Day 0” reference time). Calculating the performance impact for a selected change. The user interface30is configured to present to the user a display and mechanism to allow the user to select a change to investigate from a list of all recorded changes, for example, as shown inFIG.4. After the user selects a particular change for analysis, the analysis module28is configured to generate an output comprising various before-versus-after charts/graphs rendered in various ways, as described in greater detail herein, that allows the user to evaluate the impact, if any, of the selected change on one or more performance metrics/indicators. It should be understood that the output generated by the analysis module28may comprise, in an embodiment, data that is used by the user interface to construct the herein mentioned before versus after tables and/or charts. In an embodiment, the analysis module28is configured to process the daily performance data for the managed computer systems that have the selected change so as to time-shift the performance data so that the day of the selected change is deemed to be or is considered as Day 0. The day after the selected change on each managed computer system—which could be a different calendar day for any given managed computer system—is considered Day 1, the second day after the selected change is Day 2 and so on. The first day before the selected change is considered Day −1, the day before that is considered Day −2, and so on. By doing this alignment procedure, the analysis module28can calculate aggregate performance metrics across multiple managed computer systems that have the selected change occurring on disparate days. 
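A minimal sketch of the Day 0 alignment procedure is shown below, assuming each managed computer system contributes a list of (calendar day, value) pairs together with its own change date; the helper re-indexes every record to a relative day so that disparate change dates line up at Day 0. The function and variable names are illustrative only.

```python
from datetime import date

def align_to_change_day(daily_values, change_day):
    """Re-index (calendar_day, value) pairs so the change day becomes Day 0.
    Days before the change become negative, days after become positive."""
    return {(day - change_day).days: value for day, value in daily_values}

# Two systems with the same selected change applied on different calendar days.
system_a = align_to_change_day(
    [(date(2024, 1, d), v) for d, v in [(1, 40), (2, 42), (3, 55), (4, 57)]],
    change_day=date(2024, 1, 2))
system_b = align_to_change_day(
    [(date(2024, 1, d), v) for d, v in [(4, 38), (5, 41), (6, 52), (7, 54)]],
    change_day=date(2024, 1, 5))
print(system_a)  # {-1: 40, 0: 42, 1: 55, 2: 57}
print(system_b)  # {-1: 38, 0: 41, 1: 52, 2: 54}
```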
FIGS.6A-6Dare timing diagrams illustrating how performance values from two different managed computer systems can be aligned according to the alignment procedure mentioned above so as to allow the analysis module28to calculate aggregate performance metrics as also mentioned above. FIG.6Ashows performance data for a first managed computer system having a selected change occurring at a particular time, such time being indicated by a first change time-stamp CTS1. InFIG.6A, the time of the selected change for the first managed computer system will be deemed “Day 0” as illustrated.FIG.6Afurther shows eight performance values designated PV1, PV2, PV3, PV4, PV5, PV6, PV7, and PV8which are associated with a given performance metric associated with the first managed computer system. These eight performance values PV1, PV2, PV3, PV4, PV5, PV6, PV7, and PV8are respectively associated with Day −4, Day −3, Day −2, Day −1, Day 0, Day 1, Day 2, and Day 3—all timing being relative to the Day 0 reference time—the time that the selected change occurred on the first managed computer system. FIG.6Bsimilarly shows performance data for a second, different managed computer system having the same selected change as the first managed computer system but occurring at a different time, indicated by a second change time-stamp CTS2. InFIG.6B, the time of the selected change for the second managed computer system will be deemed to be “Day 0” as illustrated. While this second change time-stamp CTS2may act as the reference time (“Day 0”) for considering before-versus-after performance impact analysis for that second managed computer system, the difference in time with respect to when the selected change occurred on the two different managed computer systems prevents direct aggregation of the performance values for a multi-system before-versus-after analysis.FIG.6Bshows that the second managed computer system also has eight performance values designated PV9, PV10, PV11, PV12, PV13, PV14, PV15, and PV16which are associated with the given performance metric. These eight performance values PV9, PV10, PV11, PV12, PV13, PV14, PV15, and PV16are respectively associated with Day −4, Day −3, Day −2, Day −1, Day 0, Day 1, Day 2, and Day 3—all timing being relative to the Day 0—the day the selected change occurred on the second managed computer system. FIG.6Cshows the result of the alignment process described above where a difference in time is determined based on the respective change time-stamps CTS1and CTS2—here it is three days. Once this difference is determined, the performance values, for example of the second managed computer system, are time-shifted and aligned using the determined difference in change time-stamps. After the shifting/alignment has been done, the performance values of the first managed computer system (FIG.6A) and the aligned performance values of the second managed computer system (FIG.6C) can be directly processed and compared from a timing perspective for a true before-versus-after aggregation, since the “Day 0” time for the two different systems has now been aligned. FIG.6Dis a timing diagram that shows in exemplary fashion a composite performance value for the first and second managed computer systems. For example, for Day −4, the composite value can be the arithmetic average of the two values AVERAGE(PV1, PV9), the composite value for Day −3 can be the arithmetic average of the two values AVERAGE(PV2, PV10), and so on. 
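The alignment and averaging procedure illustrated by FIGS.6A-6D can be summarized in a short sketch. The following Python snippet is illustrative only and is not taken from the disclosure; the function names, the dictionary-based data layout, and the example values are assumptions chosen for clarity. It time-shifts each managed computer system's daily performance values so that the day of the selected change becomes Day 0, and then computes a composite (average) value for each relative day.

from collections import defaultdict
from datetime import date

def align_to_day_zero(daily_values, change_date):
    # Re-key daily performance values by their offset (in days) from the change date.
    return {(day - change_date).days: value for day, value in daily_values.items()}

def composite_by_relative_day(systems):
    # Average the aligned values of all systems for each relative day (Day -N .. Day +N).
    buckets = defaultdict(list)
    for daily_values, change_date in systems:
        for offset, value in align_to_day_zero(daily_values, change_date).items():
            buckets[offset].append(value)
    return {offset: sum(vals) / len(vals) for offset, vals in sorted(buckets.items())}

# Hypothetical example: two systems with the same selected change applied three days apart.
system_a = ({date(2023, 5, 1): 40.0, date(2023, 5, 2): 41.0, date(2023, 5, 3): 55.0},
            date(2023, 5, 2))  # change time-stamp CTS1, so Day 0 is May 2
system_b = ({date(2023, 5, 4): 38.0, date(2023, 5, 5): 42.0, date(2023, 5, 6): 57.0},
            date(2023, 5, 5))  # change time-stamp CTS2, so Day 0 is May 5

print(composite_by_relative_day([system_a, system_b]))  # {-1: 39.0, 0: 41.5, 1: 56.0}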
The performance impact of a selected change occurring at disparate times across multiple managed computer systems can now be aggregated for an accurate multi-system before-change versus after-change assessment. In an embodiment, the performance value at Day 0 will be a composite value (e.g., such as an average value) on the before versus after chart and moreover, the Day 0 values will be included in both the before and the after overall averages in the grid. The analysis module28may be configured, in an embodiment, to only consider daily performance records for managed computer systems that had at least one active user session during the day. For example, if the managed computer system is unused on a particular day, the analysis module28may be configured to ignore the performance record for that day. The analysis module28may be further configured, in an embodiment, to calculate the average for every metric across every managed computer system for a programmable (i.e., predetermined) number of days. Typically, and in one embodiment, the predetermined number of days would be from Day −6 to Day 6. The analysis module28is further configured to calculate, in an embodiment, a “before” average for each metric on Days −N through Day 0, and an “after” average for each metric on Days 0 through Day +N. This allows a high-level comparison of overall managed computer system performance before versus after the selected change (seeFIGS.7A-7Bbelow). FIGS.7A-7Bshow exemplary displays76,76′ illustrating an overall “Before Change” and an “After Change” presentation generated by the analysis module28for a plurality of performance metrics, given a selected change occurring on the managed computer systems. Displays76,76′ are rendered in a tabular format where the before-change data is distinguishable from the after-change data. In particular, displays76,76′ include a “Before Change” row78and an “After Change” row80, where individual columns represent individual performance metrics, such as without limitation column82corresponding to an “Active CPU Average” performance metric, column84corresponding to an “Active Memory Average” performance metric, column86corresponding to an “Active I/O Read Bytes” performance metric, and column88corresponding to an “Active I/O Written Bytes” performance metric. In displays76,76′, the “before change” data in row78is distinguishable from the “after change” data in row80by virtue of at least the separate organization and presentation/arrangement. In embodiments, the “after change” performance data in row80can be further distinguished by color or other visually distinguishable means into a first group and a second group. The first group—as represented in display76in exemplary fashion in columns86,88as cells being rendered in the color green—may correspond to performance metric(s) that are not adversely affected by the selected change. The second group—as represented in exemplary fashion in columns82,84as cells being rendered in the color red—may correspond to performance metric(s) that are deemed to have been adversely affected by the selected change. Note that in display76, the entire cell is rendered in the desired color while in display76′, alternatively, a green color icon (circle shape) or a red color icon (square shape) is present within the cell, which may additionally feature a color line border around the perimeter of the cell (e.g., a red color line border for a cell having a red color square icon or a green color line border for a cell having a green color circle icon). 
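A minimal sketch of the before-versus-after aggregation just described, assuming the Day-0-aligned layout from the previous example: the Day −N through Day +N window, the exclusion of days without an active user session, and the inclusion of Day 0 in both averages follow the description above, while the record format and function name are assumptions.

def before_after_averages(aligned_records, n_days=6):
    # aligned_records: list of (relative_day, value, had_active_session) tuples.
    # Days without an active user session are ignored rather than treated as zero.
    usable = [(d, v) for d, v, active in aligned_records if active]
    before = [v for d, v in usable if -n_days <= d <= 0]
    after = [v for d, v in usable if 0 <= d <= n_days]
    average = lambda values: sum(values) / len(values) if values else None
    return average(before), average(after)

# Hypothetical records: (relative day, metric value, active session?)
records = [(-2, 40.0, True), (-1, 42.0, True), (0, 41.0, True),
           (1, 55.0, True), (2, 0.0, False),  # unused day, ignored
           (3, 57.0, True)]
print(before_after_averages(records))  # (41.0, 51.0)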
Variations are possible. In an embodiment, each performance metric has a property associated therewith that indicates whether a higher value or a lower value is better. Each performance metric further also has a threshold value, such that if the after-change value is more than the threshold (e.g., 10% but can be configurable) in either direction from the before-change value, the cell is rendered in a green color or a red color, depending on whether the direction the value changed is in a better or worse direction. In addition, for each individual performance metric, the analysis module28via the user interface30is configured to allow the user to select and thereby investigate a daily trend over a covered period (see, e.g.,FIG.8). FIG.8is an exemplary display illustrating a chart90of such a daily trend of a particular performance metric (“Average Network Usage”) across and for the plurality of managed computer systems having a particular, same change. Each daily performance value in chart90is the average of the plurality of performance values of the managed computer systems, for each of the days shown before and after the time when the selected change occurred on the managed computer systems. In particular, chart90shows a first line segment92corresponding to a “before-change” category of performance values for Days −6, −5, −4, −3, −2, and −1, which line segment continues to a Day 0 value. Chart90also shows a second line segment94corresponding to an “after-change” category of performance values starting from the Day 0 time and extending through times at Day 1, 2, 3, 4, 5, and terminating at the time of Day 6. The first line segment92may be distinguishable from the second line segment94, by virtue of line type, line thickness, color, symbols arranged thereon, or in other ways now known or hereafter developed. In an embodiment, the displays ofFIG.4(change data),FIG.7A or7B(performance metrics), andFIG.8(day-by-day graph) may all appear on the same screen display simultaneously. The change is first selected (FIG.4) and as a consequence the summary for all the performance metrics (FIG.7A or7B) will also be displayed. One performance metric can (and will) be chosen to show the day-by-day (before-change versus after-change) chart, like inFIG.8. It should be understood that variations are possible. In another embodiment, as an example, the chart inFIG.8could be generated as an output from analysis module28from the change and performance data for four individual managed computer systems, as shown inFIG.9. FIG.9is an exemplary display illustrating a chart96relating to four managed computer systems. As can be seen inFIG.9, not every managed computer system will have data for every day before and after a selected change. A managed computer system may be off because of work patterns, or a user may not have actively used such managed computer system on a given day for any number of reasons. By time shifting and sanitizing missing data, the analysis module28will be able to give an accurate view of the true performance impact of a selected change across a plurality of managed computer systems (e.g., for an organization). In this regard, the term sanitizing means ignoring data for those days when a managed computer system is not being used—as opposed to treating the subject performance value as a zero value and thereby skewing the calculated averages. The daily average utilizes the number of systems actually in use each day, rather than the total number of managed computer systems with the selected change. 
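One way to realize the color-coding rule described above is sketched below. The 10% default threshold and the per-metric higher-is-better property come from the description; the function name and the returned labels are assumptions.

def classify_metric(before_avg, after_avg, higher_is_better, threshold=0.10):
    # Return 'green', 'red', or 'neutral' for the after-change value relative to the
    # before-change value, given a configurable threshold (10% by default).
    if before_avg == 0:
        return "neutral"  # no meaningful relative change can be computed
    relative_change = (after_avg - before_avg) / abs(before_avg)
    if abs(relative_change) <= threshold:
        return "neutral"  # within the threshold in either direction
    improved = relative_change > 0 if higher_is_better else relative_change < 0
    return "green" if improved else "red"

# Hypothetical examples
print(classify_metric(41.0, 51.0, higher_is_better=False))  # 'red' (metric worsened by ~24%)
print(classify_metric(41.0, 51.0, higher_is_better=True))   # 'green'
print(classify_metric(41.0, 43.0, higher_is_better=False))  # 'neutral' (within 10%)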
In an embodiment, the scope or range of possible changes and/or performance metrics to be monitored may be set, as an initial matter, by a system designer when an embodiment is constructed. However, it should be understood that indirectly, embodiments consistent with the instant disclosure do allow for the dynamic addition of inventory data, which then falls into change management, and also the dynamic addition of performance metrics. In an alternate embodiment, a user interface may be provided to allow a user to control the selection of changes, performance metrics, or both. In an embodiment, the apparatus according to the instant disclosure may be configured to make such change/performance metric selections based on other factors. FIG.10is a simplified flowchart diagram showing a method98in accordance with an embodiment, for generating an output for facilitating the assessment of the performance impact of a selected change in one or more managed computer systems. The method begins in step100. In step100, the method involves determining changes associated with a first managed computer system of the one or more managed computer systems and storing, in a first data store, change records corresponding to the determined changes. Each of the change records includes a respective change time-stamp indicating when each change was determined. The method proceeds to step102. In step102, the method involves determining a plurality of performance values for at least one performance metric associated with the first managed computer system for predetermined times (e.g., daily). The method further involves associating with each of the performance values a respective performance time-stamp. It should be understood that in embodiments, steps100,102may occur substantially continuously and in parallel. In addition, in embodiments, such change records and performance data may be consolidated to the condensed database, as described herein, and in particular, for embodiments involving a plurality of managed computer systems, such consolidated data represents the changes in the plurality of managed computer systems. The method proceeds to step104. In step104, the method involves selecting one of the changes that occurred on the first managed computer system for performance impact assessment. The selected one of the changes has an associated selected change time-stamp. The method proceeds to step106. In step106, the method involves identifying first performance values from the plurality of performance values that have performance time-stamps that are prior in time to the time-stamp of the selected change. The first performance values thereby identified are associated with a before-change category of performance values. The method also involves identifying second performance values from the plurality of performance values that have performance time-stamps that are later in time relative to the time-stamp of the selected change. The second performance values thereby identified are associated with an after-change category of performance values. The method proceeds to step108. In step108, the method involves generating an output that includes the first and second performance values. In the generated output, the first performance values associated with the before-change category are distinguished from the second performance values associated with the after-change category. 
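The identification of before-change and after-change performance values in the method above amounts to partitioning time-stamped records around the selected change time-stamp; a possible sketch follows, with the record layout and names assumed for illustration only.

from datetime import datetime

def partition_by_change(performance_records, change_timestamp):
    # Split (performance_timestamp, value) records into before-change and after-change categories.
    before = [(ts, v) for ts, v in performance_records if ts < change_timestamp]
    after = [(ts, v) for ts, v in performance_records if ts > change_timestamp]
    return before, after

# Hypothetical daily performance records and a selected change time-stamp
records = [(datetime(2023, 5, 1), 40.0), (datetime(2023, 5, 2), 41.0),
           (datetime(2023, 5, 3), 55.0), (datetime(2023, 5, 4), 57.0)]
before, after = partition_by_change(records, datetime(2023, 5, 2, 12, 0))
print(len(before), len(after))  # 2 2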
In an embodiment, the first and second performance values may be arranged in a tabular format or on a common timeline, for example only, to thereby allow the user to determine the performance impact, if any, of the selected change using such output. It should be understood that the foregoing method can be applied to a plurality of managed computer systems where the output comprises a multi-system output reflecting the performance impact across the plurality of managed computer systems, even where the same change occurred at disparate times, all as described herein. In sum, embodiments consistent with the instant disclosure generally facilitate determining the performance impact of a selected change in one or more managed computer systems, and include three main components: (i) a first component that is configured to track changes to individual managed computer systems and consolidate the resulting change data into a single, centralized master repository—this first component involving functionality at the managed computer system level in addition to functionality at a master computer system level; (ii) a second component that is configured to aggregate performance measurements (values), for example daily, across the plurality of managed computer systems and to consolidate the respective performance data into the master repository—this second component also involving functionality at the managed computer system level in addition to functionality at the master computer system level; and (iii) a third component that is configured to analyze the foregoing change and performance data and generate an output (e.g., before-change versus after-change) to allow a user to determine the performance impact, if any, for the user-selected change that occurred to one or more of the managed computer systems. It should be understood that a processor as described herein may include conventional processing apparatus known in the art, capable of executing pre-programmed instructions stored in an associated memory, all performing in accordance with the functionality described herein. To the extent that the methods described herein are embodied in software, the resulting software can be stored in an associated memory and can also constitute the means for performing such methods. Implementation of certain embodiments, where done so in software, would require no more than routine application of programming skills by one of ordinary skill in the art, in view of the foregoing enabling description. Such a processor may further be of the type having both ROM and RAM, a combination of non-volatile and volatile (modifiable) memory, so that any software may be stored and yet allow storage and processing of dynamically produced data and/or signals. It should be further understood that an article of manufacture in accordance with this disclosure includes a computer-readable storage medium having a computer program encoded thereon for implementing the logic for determining the performance impact of changes in a computing system and other functionality described herein. The computer program includes code to perform one or more of the methods disclosed herein. Such embodiments may be configured to execute on one or more processors, multiple processors that are integrated into a single system or are distributed over and connected together through a communications network, and where the network may be wired or wireless. 
The terms “electrically connected” and “in communication” are meant to be construed broadly to encompass both wired and wireless connections and communications. It is intended that all matter contained in the above description or shown in the accompanying drawings shall be interpreted as illustrative only and not limiting. Changes in detail or structure may be made without departing from the invention as defined in the appended claims. Any patent, publication, or other disclosure material, in whole or in part, that is said to be incorporated by reference herein is incorporated herein only to the extent that the incorporated materials do not conflict with existing definitions, statements, or other disclosure material set forth in this disclosure. As such, and to the extent necessary, the disclosure as explicitly set forth herein supersedes any conflicting material incorporated herein by reference. Any material, or portion thereof, that is said to be incorporated by reference herein, but which conflicts with existing definitions, statements, or other disclosure material set forth herein will only be incorporated to the extent that no conflict arises between that incorporated material and the existing disclosure material. While one or more particular embodiments have been shown and described, it will be understood by those of skill in the art that various changes and modifications can be made without departing from the spirit and scope of the present teachings.
11860758
DETAILED DESCRIPTION FIG.1is a conceptual diagram illustrating an example computing system configured to identify performance issues with an application, relative to performance of other applications that execute on similar or corresponding computing platforms, and determine ways to improve performance of the application, in accordance with one or more aspects of the present disclosure. System100includes computing system160in communication, via network130, with computing device110, computing devices116A-116N (collectively “computing devices116”), and computing devices118A-118N (collectively “computing devices118”). Although operations attributed to system100are described primarily as being performed by computing system160and computing devices110,116, and118, in some examples, the operations of system100may be performed by additional or fewer computing devices and systems than what is shown inFIG.1. For example, computing devices110,116, and118, may be a single computing device. Computing system160includes developer service module162and application data store164. Computing device110includes developer client module120and further includes user interface component (“UIC”)112which is configured to output user interface114. Each of computing devices116executes a respective instance of application122A and each of computing devices118executes a respective instance of application122B. Network130represents any public or private communications network, for instance, cellular, Wi-Fi, and/or other types of networks, for transmitting data between computing systems, servers, and computing devices. Network130may include one or more network hubs, network switches, network routers, or any other network equipment, that are operatively inter-coupled thereby providing for the exchange of information between computing system160and computing devices110,116, and118. Computing system160and computing devices110,116, and118may transmit and receive data across network130using any suitable communication techniques. Computing system160and computing devices110,116, and118may each be operatively coupled to network130using respective network links. The links coupling computing system160and computing devices110,116, and118to network130may be Ethernet, ATM or other types of network connections, and such connections may be wireless and/or wired connections. Computing system160represents any combination of one or more computers, mainframes, servers, blades, cloud computing systems, or other types of remote computing systems capable of exchanging information via network130as part of an application performance evaluation service. That is, computing system160may receive application performance data via network130and analyze the performance data to determine performance issues and ways to resolve the performance issues of applications executing at computing devices116,118. Computing system160may output recommendations to fix any identified performance issues uncovered during the analysis of the performance data. In some cases, computing system160outputs the recommendations and other information about its analysis to computing device110for subsequent presentation, e.g., via user interface114, to a developer or other user of computing device110. In some examples, computing system160may interface with computing devices116,118directly to implement fixes for improving performance of individual applications executing at computing devices116,118. 
Developer service module162controls computing system160to perform specific operations for implementing the application performance evaluation service provided by computing system160. Developer service module162may provide an interface between computing system160and client devices, such as computing device110, that access the performance evaluation service provided by computing system160, e.g., to obtain information about an application's performance and ways to improve the application's performance. Developer service module162may perform operations described herein using software, hardware, firmware, or a mixture of hardware, software, and firmware residing in and/or executing at computing system160. Computing system160may execute developer service module162with multiple processors or multiple devices, as virtual machines executing on underlying hardware, as one or more services of an operating system or computing platform, and/or as one or more executable programs at an application layer of a computing platform of computing system160. Developer service module162is configured to collect application performance data from different applications that execute on similar or corresponding computing platforms. Developer service module162analyzes application performance data to determine issues in a particular application's performance, relative to other, different applications that execute in a similar or corresponding execution environment. That is, unlike other debugging or analysis tools that might analyze application performance data for a single application to determine performance issues that might arise as the application executes on different computing architectures, operating systems, or computing platforms, developer service module162instead identifies performance discrepancies between individual applications and other, different applications that execute in similar or corresponding computing environments. Developer service module162collects and stores application performance data collected from network130at application data store164. The performance data (e.g., data about application stability, rendering times, battery usage, permission denials, startup times, wireless radio scans, and other information indicative of an anomaly or performance metric) contained in application data store164may be organized and searchable according to an application identifier (e.g., name, publisher, etc.), application category (e.g., travel, leisure, entertainment, etc.), application genre (e.g., navigation, transportation, game, etc.), or other searchable criteria. For example, application data store164may store first performance data collected from computing devices116during their respective executions of application122A separately from second performance data collected from computing devices118during their respective executions of application122B. In some examples, performance data stored in application data store164may be used as a proxy indicator or indirect indicator for performance. In other words, the data collected and maintained at data store164may be low-level performance details that by themselves do not provide insight into a particular issue but, when aggregated together, are a sign of performance. For example, radio scans and location querying tend to consume a lot of battery power and therefore a computing platform may request that applications minimize performing such functions unless necessary, or perform them only with limited frequency. 
Therefore, an application that performs more radio scans or location queries than other applications may indicate that the application is a poor performer, relative to the other applications, for battery consumption. Developer service module162may make further determinations about things like “rendering performance” or other categories of performance by looking at multiple types of performance data concurrently. When collecting and storing performance data, developer service module162may take precautions to ensure that user privacy is preserved. That is, developer service module162may only collect, store, and analyze performance data if developer service module162receives explicit permission from a user of the computing device from which the performance data originated. For example, in situations discussed below in which developer service module162may collect information about performance of applications executing at computing devices116,118, individual users of computing devices116,118may be provided with an opportunity to provide input to computing devices116,118and/or computing system160to control whether developer service module162can collect and make use of their information. The individual users may further be provided with an opportunity to control what developer service module162can or cannot do with the information. Application performance data may be pre-treated in one or more ways before it is transferred to, stored by, or otherwise used by developer service module162, so that personally-identifiable information is removed. For example, before developer service module162collects performance data associated with application122A while executing at computing device116A, computing device116A may pre-treat the performance data to ensure that any user identifying information or device identifying information embedded in the performance data is removed before being transferred to computing system160. In other examples, developer service module162may pre-treat the performance data upon receipt and before storing the performance data at data store164. In either case, the user may have control over whether the performance data is collected, and how such information, if collected, may be used by computing device116A and computing system160. Computing device110represents any suitable computing device or computing system capable of exchanging information via network130to access the performance evaluation service provided by computing system160for obtaining information about application performance and ways to improve application performance. That is, computing device110may be a software developer or designer workstation configured to access performance data stored at data store164, obtain analysis performed by developer service module162, and obtain performance-improvement recommendations generated by developer service module162. Developer client module120may provide the interface between computing device110and the service provided by computing system160. For example, developer client module120may be a stand-alone application executing at computing device110, or in some examples, developer client module120may be a subroutine or internet application accessed from an internet browser executing at computing device110. In either case, developer client module120is configured to exchange information with computing system160to implement the performance evaluation service. 
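The pre-treatment step described above could, for example, strip identifying fields before performance data leaves a device. The snippet below is a hypothetical sketch; the field names and the idea of a fixed allow-list are assumptions rather than part of the disclosure.

# Fields assumed to carry no personally identifiable information (hypothetical allow-list)
ALLOWED_FIELDS = {"app_id", "app_version", "metric_name", "metric_value", "platform_version"}

def pretreat(performance_record):
    # Return a copy of the record with only non-identifying fields retained.
    return {key: value for key, value in performance_record.items() if key in ALLOWED_FIELDS}

raw = {"app_id": "122A", "metric_name": "cold_start_ms", "metric_value": 5400,
       "device_serial": "ABC123", "account_email": "user@example.com"}  # hypothetical record
print(pretreat(raw))  # {'app_id': '122A', 'metric_name': 'cold_start_ms', 'metric_value': 5400}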
Developer client module120is configured to receive, from computing system160and via network130, results of an analysis of application performance data and recommendations determined by computing system160to improve the results. Module120may perform operations described herein using software, hardware, firmware, or a mixture of hardware, software, and firmware residing in and/or executing at computing device110. Computing device110may execute module120with multiple processors or multiple devices, as virtual machines executing on underlying hardware, as one or more services of an operating system or computing platform, and/or as one or more executable programs at an application layer of a computing platform of computing device110. UIC112of computing device110may function as an input and/or output device for computing device110. UIC112may be implemented using various technologies. For instance, UIC112may function as an input device using presence-sensitive input screens, microphone technologies, infrared sensor technologies, or other input device technology for use in receiving user input. UIC112may function as output device configured to present output to a user using any one or more display devices, speaker technologies, haptic feedback technologies, or other output device technology for use in outputting information to a user. Developer client module120may cause UIC112to output a user interface associated with the service provided by computing system160. For example, as shown inFIG.1, developer client module120may send instructions to UIC112that cause UIC112to display user interface114at a display screen of UIC112. User interface114may present information obtained from developer service module162in visual, audible, or haptic formats so that a developer can better understand how performance of a particular application compares to performance of other applications, executing on similar or corresponding computing platforms. Computing devices116,118each represent any suitable computing device or computing system capable of locally executing applications, such as applications122A and122B, and further capable of exchanging information via network130with computing system160about performance of the locally executing applications. Examples of computing devices116,118include mobile phones, tablet computers, laptop computers, desktop computers, servers, mainframes, blades, wearable devices (e.g., computerized watches etc.), home automation devices, assistant devices, gaming consoles and systems, media players, e-book readers, television platforms, automobile navigation or infotainment systems, or any other type of mobile, non-mobile, wearable, and non-wearable computing device configured to execute applications on a computing platform being analyzed by developer service module162. Each of computing devices116,118executes applications on similar or corresponding computing platforms, such as a common operating system. Applications122A and122B represent two different machine-readable, executables configured to operate at an application layer of similar or identical computing platforms that operate on each of computing devices116and118. Computing devices116may execute instructions associated with a particular computing platform in which application122A executes and computing devices118may execute similar instructions associated with the same computing platform as computing devices116, however, for executing application122B instead of application122A. 
Examples of applications122A and122B are too numerous to list. As some examples, applications122A and122B may include business applications, developer tools, educational applications, entertainment applications, financial applications, game applications, graphic or design applications, health and fitness applications, lifestyle or assistant applications, medical applications, music applications, news applications, photography, video, and other multimedia applications, productivity applications, reference applications, social networking applications, sports applications, travel applications, utility applications, weather applications, communication applications, calendar applications, or any other category or type of application. Applications122A and122B are completely different applications and may be associated with different application developers, producers, versions, and the like. In some examples, applications122A and122B are “peers” being that they are from a same category or genre of application (e.g., travel) and perform similar functions or provide similar features (e.g., both may enable transportation booking). In other examples, applications122A and122B are from a same category or genre of application (e.g., travel) but are not “peers” as the two applications may perform different functions or provide different features (e.g., one may enable transportation booking and another may aid in navigation). In any event, whether applications122A and122B are merely from similar categories or genres, or whether applications122A and122B are peers, applications122A and122B are completely different applications from different application developers, producers, or the like. In operation, a user of computing device110, who in the example ofFIG.1is a software developer or designer of application122A, may wish to understand how performance of application122A compares to performance of other applications, such as application122B, executing on the same or similar computing platforms. The user may provide input at a presence-sensitive screen of UIC112at or near a location of UIC at which user interface114is displayed. UIC112may provide information about the input to developer client module120and in response to the information about the input, developer client module120may access the service provided by developer service module162for obtaining an analysis of the performance of application122A relative to performance of other applications executing on similar or corresponding computing platforms. After having received explicit permission from each of the end users of computing devices116,118to collect and make use of performance data collected during execution of applications122A,122B, developer service module162of computing system160may obtain performance data collected by computing devices116,118while computing devices116,118were executing respective instances of applications122A,122B. For example, when application122A executes at computing device116A, application122A may output performance data indicative of how application122A performs in an execution environment of computing device116A. Computing device116A may send, via network130, the performance data to computing system160where developer service module162may store the performance data at application performance data store164. 
Each of computing devices116,118may perform similar operations as computing device116A to collect and send performance data to computing system160that is indicative of how applications122A,122B perform in respective execution environments of computing devices116,118. Developer service module162may analyze the performance data of each of applications122A and122B to determine one or more performance metrics for each of applications122A and122B. The basis behind the performance metrics is described in greater detail with respect to the additional FIGS. However, in general, a performance metric may be any quantitative and measurable value that can be derived from application performance data to indicate a level of performance associated with some aspect of an application. Some examples of performance metrics include: battery statistics (e.g., consumption rates, etc.), stability measurements (e.g., a crash rate, etc.), rendering metrics (e.g., latency between frame renderings, etc.), timing metrics (e.g., start-up time, transition delay from one mode to another, delay in implementing an intent, delays for retrieving or uploading information, etc.), permission metrics (e.g., frequency of unsuccessful or user-denied requests to make use of device component or user information, etc.). With a set of defined performance metrics, developer service module162may establish one or more benchmarks to use in evaluating the performance metrics of individual applications; specifically, to determine an application's performance on a computing platform, relative to performance of other applications that run on the computing platform. Said differently, developer service module162may determine, based on a performance metric of one or more applications, a performance goal that other applications should operate towards so that a user interacting with applications on the computing platform has an enjoyable, consistent, and frustration-free user experience no matter which application he or she is interacting with. To establish a benchmark, developer service module162may rank the performance metrics of multiple applications to determine a highest performing application, for each particular metric. In some examples, developer service module162may determine a composite metric (e.g., an average value, median value, etc.) based on two or more applications' metrics to use as a benchmark in evaluating performance of other applications. In some examples, developer service module162may determine a composite metric based on two or more highest ranking, two or more lowest ranking, or some other combination of applications' metrics to use as a benchmark in evaluating performance of other applications. Developer service module162may evaluate an application by comparing a performance metric of the application to a corresponding benchmark. For instance, if performance data stored at data store164indicates that application122A has a performance metric that is within a threshold of a benchmark value, developer service module162may determine that application122A does not have a performance issue, as the performance of the application relates to that particular metric. Otherwise, if application122A has a performance metric that is outside the threshold of the benchmark value, developer service module162may determine that application122A does have a performance issue, as the performance of application122A relates to that particular metric. 
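As a rough illustration of establishing a benchmark from other applications' metrics and flagging an application that falls outside a threshold of that benchmark, the following sketch may help; the choice of averaging the top three performers, the 10% threshold, and the example values are assumptions rather than anything stated in the disclosure.

def establish_benchmark(peer_values, top_n=3, lower_is_better=True):
    # Composite benchmark: average of the top-n best-performing applications for one metric.
    ranked = sorted(peer_values, reverse=not lower_is_better)  # best performers first
    top = ranked[:top_n]
    return sum(top) / len(top)

def has_issue(app_value, benchmark, threshold_pct=0.10, lower_is_better=True):
    # Flag a performance issue when the application is worse than the benchmark by more
    # than the threshold (worse means higher when lower is better, and vice versa).
    limit = benchmark * (1 + threshold_pct) if lower_is_better else benchmark * (1 - threshold_pct)
    return app_value > limit if lower_is_better else app_value < limit

# Hypothetical cold-start times (milliseconds) for several peer applications
peer_start_times = [900, 1100, 1300, 2400, 5200]
benchmark = establish_benchmark(peer_start_times)  # (900 + 1100 + 1300) / 3 = 1100.0
print(benchmark, has_issue(5400, benchmark))       # 1100.0 True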
In the example ofFIG.1, assume developer service module162establishes a benchmark for a particular metric (e.g., start-up time) to be based on a corresponding metric associated with application122B. Developer service module162may subsequently evaluate start-up time of application122A by comparing the start-up time of application122A to the benchmark. Developer service module162may determine whether the start-up time metric is within a threshold amount of the benchmark. In some examples, the threshold amount may be a percentage (e.g., ten percent) or a value (e.g., two milliseconds). In some examples, the threshold amount may be zero and satisfying the benchmark may require meeting or exceeding the benchmark performance. In some examples, the threshold amount may be a percentile ranking of an application relative to other applications executing on the computing platform. That is, to satisfy a benchmark, developer service module may require an application's performance to be better than the bottom ‘x’ percentile (e.g., where ‘x’ is any percentile value) of other applications that have been rated against the benchmark. In the example ofFIG.1, developer service module162may determine that the start-up time of application122A does not satisfy the start-up time benchmark as the start-up time of application122A (e.g., on average) exceeds the benchmark by more than a threshold amount of time (e.g., five milliseconds). Responsive to determining that a performance metric is not within a threshold amount of a corresponding benchmark, developer service module may determine a fix to the application to improve its performance. The fix to the application may include one or more modifications to source code or a configuration file associated with the application. In some examples, the fix to the application may include disabling a library, function call, software development kit, application programming interface, or service utilized by the application. In some examples, the fix to the application may include replacing a first library, function call, or service utilized by the application with an alternative library, function call, or service. For example, in addition to tracking performance data associated with applications executing at computing devices116,118, developer service module162may maintain information about ways to improve application performance, relative to each specific metric. In some examples, the information may be based on feedback obtained from other developers of applications that have metrics which form the basis for benchmarks. In some examples, the information may be based on feedback obtained from other developers of applications that have found ways to improve their metrics relative to the benchmarks. In some examples, the information may be based on work-arounds obtained from a developer of the computing platform who may have uncovered the fix after identifying ways that other similarly situated applications executing on the computing platform have improved their metrics relative to the benchmarks. In some examples, the information about ways to improve application performance may be derived automatically by developer service module162(e.g., using a machine learning model, trial and error experimentation, or other ways which are described in more detail with respect to the additional FIGS.). 
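The threshold itself may be expressed as a percentage, an absolute amount, or a percentile ranking, as described above; one hedged way to express the three variants is sketched below (the mode names, parameter names, and example values are illustrative assumptions).

def satisfies_benchmark(app_value, benchmark, mode="percent", amount=0.10,
                        peer_values=None, lower_is_better=True):
    # Check a metric against a benchmark using one of three threshold styles.
    if mode == "percent":      # within a percentage of the benchmark (e.g., ten percent)
        slack = abs(benchmark) * amount
    elif mode == "absolute":   # within an absolute amount (e.g., a few milliseconds)
        slack = amount
    elif mode == "percentile": # better than the bottom fraction 'amount' of peer applications
        if lower_is_better:
            worse = sum(1 for v in peer_values if v > app_value)
        else:
            worse = sum(1 for v in peer_values if v < app_value)
        return worse / len(peer_values) >= amount
    else:
        raise ValueError(mode)
    gap = app_value - benchmark if lower_is_better else benchmark - app_value
    return gap <= slack

# Hypothetical checks against a 1100 ms start-up benchmark
print(satisfies_benchmark(5400, 1100, mode="absolute", amount=5))    # False
print(satisfies_benchmark(1150, 1100, mode="percent", amount=0.10))  # True
print(satisfies_benchmark(1200, None, mode="percentile", amount=0.25,
                          peer_values=[900, 1100, 1300, 2400, 5200]))  # True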
For example, developer service module162may determine which application programming interfaces (APIs) or libraries an underperforming application uses, determine that other better performing applications on the computing platform use different APIs or libraries for the same functionality (e.g., a different third-party library for user authentication), and conclude that the underperforming application may improve its performance by replacing existing APIs or library calls with those used by the better performing applications. In the example ofFIG.1, developer service module162may determine that the start-up time issue identified with application122A could be attributed to a slow-performing application programming interface (API) relied on by application122A during start-up which has been observed by developer service module162to cause slow start-up times in other applications. As some examples, developer service module162may determine a fix to the start-up time issue with application122A is to: use a more responsive API, relocate the API call outside the start-up thread, or adjust a start-up sequence so that the slow API does not get used until other resources of application122A are otherwise up and running. In any case, once developer service module162determines a fix, developer service module162may share information about the fix with developer client module120, for subsequent presentation to the user developer. Developer service module162may output, for presentation at computing device110, an indication of the fix to application122A. For example, developer service module162may send data via network130that is interpreted by developer client module120as an instruction to update user interface114with information about the fix. Developer client module120, in response to receiving instruction to update user interface114with information about the fix, may cause UIC112to output user interface114. That is, developer client module120may cause UIC112to output, for display a graphical indication of any determined performance anomalies or issues and/or any fix identified to resolve the determined performance anomalies or issues. In this way, a computing system that operates in accordance to the described techniques may output an indication of a performance issue and a potential fix which a software developer, designer, or other user of the computing system, can elect to implement or not implement, so as to improve performance of an application, relative to performance of other applications executing in a similar or corresponding computing platform. By establishing benchmarks and then evaluating performance metrics of applications executing on a computing platform against the benchmarks, an example computing system can identify performance issues with an application before a developer even thinks to look for performance improvements. The example computing system may output information about a gap in performance to educate a developer to understand how the performance of his or her application compares to other applications that are available on the platform. A developer need not know what areas need improving, what areas can be improved, or even how to improve these areas. Instead, the example computing system may automatically identify areas for improvement and suggest or automatically implement fixes to improve performance of an application to be more in-line with performance of other applications executing on the computing platform. 
Accordingly, the example computing system may cause applications executing on a particular computing platform to operate more efficiently and perhaps, provide a more consistent user experience as a user interacts with different applications on the computing platform. FIG.2is a block diagram illustrating an example computing system configured to identify performance issues with an application and determine ways to improve performance of the application, relative to performance of other applications that execute on similar or corresponding computing platforms, in accordance with one or more aspects of the present disclosure. Computing system260ofFIG.2is described below as an example of computing system160ofFIG.1.FIG.2illustrates only one particular example of computing system260, and many other examples of computing system260may be used in other instances and may include a subset of the components included in computing system260or may include additional components not shown inFIG.2. As shown in the example ofFIG.2, computing system260includes one or more processors270, one or more communication units272, and one or more storage components276communicatively coupled via communication channel274. Storage components276includes developer service module262and application performance data store264. Developer service module262includes UI module266, analysis module267, and correction module268. Communication channels274may interconnect each of the components266,270,272, and276for inter-component communications (physically, communicatively, and/or operatively). In some examples, communication channels274may include a system bus, a network connection, an inter-process communication data structure, or any other method for communicating data. One or more communication units272of computing system260may communicate with external devices (e.g., computing devices110,116,118ofFIG.1) via one or more wired and/or wireless networks by transmitting and/or receiving network signals on the one or more networks. Examples of communication units272include a network interface card (e.g. such as an Ethernet card), an optical transceiver, a radio frequency transceiver, a GPS receiver, or any other type of device that can send and/or receive information. Other examples of communication units272may include short wave radios, cellular data radios, wireless network radios, as well as universal serial bus (USB) controllers. One or more storage components276within computing system260may store information for processing during operation of computing system260(e.g., computing system260may store application performance data collected and accessed by modules262,266,267, and268, and data store264during execution at computing system260). In some examples, storage component276is a temporary memory, meaning that a primary purpose of storage component276is not long-term storage. Storage components276on computing system260may be configured for short-term storage of information as volatile memory and therefore not retain stored contents if powered off. Examples of volatile memories include random access memories (RAM), dynamic random-access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories known in the art. Storage components276, in some examples, also include one or more computer-readable storage media. Storage components276in some examples include one or more non-transitory computer-readable storage mediums. 
Storage components276may be configured to store larger amounts of information than typically stored by volatile memory. Storage components276may further be configured for long-term storage of information as non-volatile memory space and retain information after power on/off cycles. Examples of non-volatile memories include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. Storage components276may store program instructions and/or information (e.g., data) associated with modules262,266,267, and268, and data store264. Storage components276may include a memory configured to store data or other information associated with modules262,266,267, and268, and data store264. One or more processors270may implement functionality and/or execute instructions associated with computing system260. Examples of processors270include application processors, display controllers, graphics processors, auxiliary processors, one or more sensor hubs, and any other hardware configured to function as a processor, a processing unit, or a processing device. Modules262,266,267, and268may be operable by processors270to perform various actions, operations, or functions of computing system260. For example, processors270of computing system260may retrieve and execute instructions stored by storage components276that cause processors270to perform operations attributed to modules262,266,267, and268. The instructions, when executed by processors270, may cause computing system260to store information within storage components276. Developer service module262may include some or all of the functionality of developer service module162of computing system160ofFIG.1. Developer service module262may additionally include some or all of the functionality of developer client module120of computing device110ofFIG.1. Developer service module262may perform similar operations as modules162and120for providing an application performance evaluation service that identifies performance anomalies in applications, relative to other applications executing on a computing platform, and recommends or implements fixes to improve the applications' performance on the computing platform. UI module280may provide a user interface associated with the service provided by developer service module262. For example, UI module280may host a web interface from which a client, such as computing device110, can access the service provided by developer service module262. For example, a user of computing device110may interact with a web browser or other application (e.g., developer client module120) executing at or accessible from computing device110. UI module280may send information to the web browser or other application that causes the client to display a user interface, such as user interface114ofFIG.1. Analysis module267may analyze application performance data stored at data store264. Analysis module267may determine performance metrics associated with individual applications executing at a computing platform, determine benchmarks for evaluating the performance metrics, and compare the performance metrics to the benchmarks to determine whether an individual application has a potential performance anomaly or issue, relative to other applications executing on the computing platform. As one example of a performance metric, analysis module267may determine a permissions denial rate associated with an application. 
For example, a permission may grant an application access to a user's calendar, a camera, a user's contact list, a user's or device's location, a microphone, a telephone, other sensors, messaging services, and storage. Applications often depend on specific permissions to function properly, yet a developer may be unaware of what percentage of users have actually agreed to grant the application each permission and how user behavior towards granting or not granting each permission compares with that for other applications executing on a computing platform. A computing platform may want to minimize a frequency of permission denials to improve user satisfaction with the computing platform. Analysis module267may define a permissions denial rate as an amount (e.g., a percentage) of daily permission sessions during which users did not grant or otherwise denied an application a permission. As another example of a performance metric, analysis module267may determine a start-up time for an application. The start-up time may indicate an amount of time for an application to launch. If a user has a choice between two peer applications, he or she may more often choose to launch the peer with the shortest start-time. A computing platform may want all applications to launch as quickly as possible so as to provide the best user experience with the computing platform. Analysis module267may quantify a start-time based on a measurement of time (e.g., seconds, milliseconds, etc.). Analysis module267may quantify a start-time as being one of multiple different levels, e.g., slow, moderate, fast. Analysis module267may further quantify a start-time metric in one or more other ways. For example, analysis module267may assign an application a “slow cold start” label based on the percentage of daily sessions during which users of the application experienced at least one cold startup time of more than five seconds. A cold start of an application is when an application launches from scratch (e.g., often displaying a splash screen, etc.). In contrast, analysis module267may assign an application a “slow warm start” label based on the percentage of daily sessions during which users of the application experienced at least one hot startup time of more than one and a half seconds. A warm start of an application occurs when an application is brought into the foreground of a user interface, after previously executing in the background (e.g., often displaying a previously viewed screen that was last in view when the application was last viewed, etc.). Analysis module267may assign other types of labels to a start-up time metric that are specific to a particular computing platform. As another example of a performance metric, analysis module267may determine a failed wireless signal scan metric. For example, some applications may cause a computing device to scan for available Bluetooth®, Wi-Fi®, near-field-communication (NFC), or other wireless signals. Such scanning is power intensive and often results in increased battery drain. Therefore, a computing platform may wish to encourage the prevention of failed wireless signal scans that result in wasted battery consumption. A failed wireless signal scan metric may indicate a percentage of battery sessions (i.e., periods between two full charges of a device) during which users of an application experienced at least one failed wireless signal scan that lasted more than thirty minutes or some other time duration. 
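The session-based rates and labels described above might be computed along the following lines. The five-second and one-and-a-half-second cut-offs come from the description; the data structures, field names, label strings, and the 20% labeling threshold are assumptions made only for this sketch.

def permission_denial_rate(permission_sessions):
    # Fraction of daily permission sessions in which the user denied the application a permission.
    denied = sum(1 for s in permission_sessions if s["permission_denied"])
    return denied / len(permission_sessions) if permission_sessions else 0.0

def startup_labels(daily_sessions, cold_limit_s=5.0, warm_limit_s=1.5, label_threshold=0.20):
    # Assign 'slow cold start' / 'slow warm start' labels based on the percentage of daily
    # sessions with at least one startup slower than the respective limit.
    labels = []
    n = len(daily_sessions) or 1
    cold_slow = sum(1 for s in daily_sessions if s["worst_cold_start_s"] > cold_limit_s)
    warm_slow = sum(1 for s in daily_sessions if s["worst_warm_start_s"] > warm_limit_s)
    if cold_slow / n > label_threshold:
        labels.append("slow cold start")
    if warm_slow / n > label_threshold:
        labels.append("slow warm start")
    return labels

# Hypothetical data
sessions = [{"permission_denied": True}, {"permission_denied": False},
            {"permission_denied": False}, {"permission_denied": True}]
days = [{"worst_cold_start_s": 6.2, "worst_warm_start_s": 0.8},
        {"worst_cold_start_s": 3.1, "worst_warm_start_s": 2.0},
        {"worst_cold_start_s": 7.5, "worst_warm_start_s": 1.0}]
print(permission_denial_rate(sessions))  # 0.5
print(startup_labels(days))              # ['slow cold start', 'slow warm start']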
Other examples of performance metrics generated by analysis module267may include an application non-responsive (ANR) rate that is defined as a percentage of daily sessions during which application users experienced at least one occurrence where an application was non-responsive and required a restart, or a crash rate that indicates a percentage of daily sessions during which users experienced at least one crash of an application. Still other performance metric examples include a stuck wake lock rate that indicates a percentage of battery sessions during which users experienced at least one partial wake lock of more than one hour while the app was in the background. Correction module268may obtain information from analysis module267about a potential issue with an application executing on a computing platform and determine a fix or other action for computing system260to take to address the issue and improve the application's performance, relative to other applications executing on the same or a similar computing platform. For example, correction module268may cause UI module266to output a recommendation for fixing an application so that it provides a detailed explanation for a permission request when the permission denial rate is outside a suitable range for the computing platform. In some cases, correction module268may alert computing system160to cause an underperforming application to be demoted in ratings in an application store. If an application has performance metrics that are far outside a threshold amount of a benchmark, correction module268may alert computing system160to cause the underperforming application to be undiscoverable or unavailable from the application store. Correction module268may maintain information about ways to improve application performance, relative to each specific issue identified by analysis module267, for each performance metric. For example, correction module268may obtain feedback from application developers whose applications have metrics which form the basis for benchmarks or who have otherwise found ways to improve their application's performance and use the feedback to provide a solution or a fix for a specific issue. Correction module268may obtain information from a computing platform developer about work-arounds that he or she may have uncovered after identifying ways that other similarly situated applications executing on the computing platform have improved their metrics relative to the benchmarks. In some examples, correction module268may determine a fix for an issue automatically and without any assistance from a user, by performing automatic bug fixing techniques. For example, correction module268may generate one or more potential software patches that might improve performance of an application and, through trial-and-error, try the different patches to see which, if any, result in a fix. Correction module268may perform search-based program mutation, machine learning, and/or genetic programming techniques to identify one or more suitable fixes. After determining a potential fix, correction module268may validate the fix by testing the fix or delivering the fix to an application developer so that he or she may validate the fix. FIG.3is an example screen shot of a display screen of a computing device accessing an application performance evaluation service provided by an example computing system, in accordance with one or more aspects of the present disclosure.FIG.3shows user interface314.FIG.3is described in the context of system100ofFIG.1. 
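The correction step described above can be summarized, purely as a hypothetical sketch, by a lookup that maps a metric found to be outside a threshold amount of its benchmark to a recommended fix, with an escalation path (such as demotion or removal from an application store) for severe deviations. The fix text, tolerance handling, and escalation rule below are illustrative assumptions rather than the behavior of any particular implementation of correction module268.

# Hypothetical sketch of a correction step: map a metric that is outside the
# threshold amount of its benchmark to a recommended fix, and escalate when the
# deviation is severe. Fix text and escalation rules are illustrative only.
RECOMMENDED_FIXES = {
    "permissions_denial_rate": "Show a detailed explanation before requesting the permission.",
    "slow_cold_start_rate": "Defer heavy initialization until after the first frame is drawn.",
    "anr_rate": "Move long-running work off the main thread.",
}

def evaluate_metric(name, value, benchmark, threshold, severe_multiplier=2.0):
    """Return (has_issue, action) for one metric relative to its benchmark."""
    deviation = value - benchmark
    if deviation <= threshold:
        return False, None                      # within tolerance: no issue
    if deviation > severe_multiplier * threshold:
        return True, "escalate: demote or hide the application in the store"
    return True, RECOMMENDED_FIXES.get(name, "review application behavior for this metric")

# Example: a 22% denial rate against a 15% benchmark with a 5-point tolerance.
print(evaluate_metric("permissions_denial_rate", 22.0, 15.0, 5.0))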
Computing system160may cause computing device110to present user interface314at UIC112when developer client module120requests access to the application performance evaluation service provided by developer service module162. User interface314is just one example of such a user interface; many other examples may exist. The purpose of user interface314is to provide an application developer with insights into specific performance metrics obtained about a target application in order to understand how the target application is performing relative to the top performing applications executing on a computing platform. In some examples, the relative performance of an application may be based on top performing applications in a similar category or genre as the target application. In other examples, the relative performance of an application may be based on top performing applications that are considered peers. In any event, a benefit of user interface314includes causing observable or measurable improvements to an overall computing platform or application ecosystem, not just improvements to an individual application executing in that ecosystem. For example, developers can view user interface314to understand where an application's performance may be lagging relative to peers or other applications and therefore be motivated to improve the application's performance, which results in an overall performance improvement of the computing platform. User interface314is divided into several tabs or sections, with each tab being associated with a different performance metric or group of performance metrics. For example, as shown inFIG.3, the start-up metric tab is in the foreground of user interface314with metrics B-E and a group of vital metrics appearing on tabs that are hidden from view in the background of user interface314. Each tab may include a results section (e.g., shown as a graph, table, chart, or other data format) for providing information about an application's relative performance as compared to the benchmark and an acceptable threshold level within the benchmark. The start-up performance metric of the application in the example ofFIG.3is outside an acceptable threshold amount of the benchmark. In some examples, the results section includes information about where an application's performance stands compared to other applications executing on the computing platform. For example, in the example ofFIG.3, the application's start-up metric is in line with the bottom twenty-five percent of applications executing on the computing platform. User interface314may designate an application's performance using color or other formatting. For example, in cases where a performance metric is outside an acceptable threshold level of a benchmark, user interface314may use a red color font or line color to emphasize that the application may have a potential performance issue. In cases where the performance metric is inside the acceptable threshold level of the benchmark, user interface314may use a green color font or line color to emphasize that the application is performing similarly to other top performing applications on the computing platform. Each tab of user interface314may include a summary and resolution section where potential issues are clearly identified and recommended fixes for the potential issues are displayed. For example, user interface314includes a detailed summary of the start-time issue along with potential fixes that a developer may implement to improve the start-time metric of the application. 
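The threshold-based red/green designation and the quartile statement shown in user interface314could be derived, for example, by logic along the lines of the following sketch. The labels, tolerance handling, and the assumption that a higher metric value is worse are illustrative only.

# Hypothetical sketch of the per-tab classification behind a results section:
# compare the metric to the benchmark plus tolerance and report the peer quartile.
from bisect import bisect_right

def classify_metric(value, benchmark, threshold, peer_values):
    color = "red" if value > benchmark + threshold else "green"
    ranked = sorted(peer_values)                     # peers' values, ascending
    position = bisect_right(ranked, value) / max(len(ranked), 1)
    if position >= 0.75:
        quartile = "bottom twenty-five percent"      # higher is worse for this metric
    elif position >= 0.5:
        quartile = "lower middle"
    else:
        quartile = "upper half"
    return {"color": color, "quartile": quartile}

# Example: a 6.0 s start-up time against a 3.5 s benchmark with a 1.0 s tolerance.
print(classify_metric(6.0, 3.5, 1.0, [2.9, 3.1, 3.4, 3.8, 4.2, 6.5]))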
Also included in user interface314may be selectable elements that cause computing system160to perform various actions to further aid a developer in improving the performance of an application, relative to performance of other applications executing on the computing platform. As one example, a user may select a graphical element that causes computing system160to link to a source file editor for viewing highlighted sections of source code that computing system160suspects may be a cause of the slow start time. As another example, a user may select a graphical element that causes computing system160to automatically modify the suspect sections of source code automatically, without further user input. FIG.4is a flowchart illustrating example operations performed by an example computing system configured to identify performance issues with an application and determine ways to improve performance of the application, relative to performance of other applications that execute on similar or corresponding computing platforms, in accordance with one or more aspects of the present disclosure. Operations400-490may be performed by a computing system, such as computing systems160,260. In some examples, a computing system may perform operations400-490in a different order than that shown inFIG.4. In some examples, a computing system may perform additional or fewer operations than operations400-490. For ease of description,FIG.4is described in the context of computing system260ofFIG.2. As shown inFIG.4, in operation, computing system260may obtain consent from user to make use of application performance data collected during execution of applications on their respective devices (400). For example, before computing system260or any of computing devices116,118stores or transmits application performance data, a user of computing devices116,118will be offered an opportunity to give or not give computing system260permission to collect and make use of such data. Only if a user clearly and unambiguously consents to such data collection will computing system260make use of application performance data from the user's device. Computing system260may obtain first performance data collected during execution of a first application at a first group of computing devices (410) and computing system260may obtain second performance data collected during execution of one or more second applications at a second group of computing devices (420). For example, analysis module267may receive performance data being transmitted via network130from computing devices116,118. Analysis module267may store the performance data at data store264. In some examples, each computing device from the first group of computing devices and the second group of computing devices executes a respective instance of a common computing platform. In other words, each of the first and second groups of computing devices may operate a common operating system or computing platform so that analysis module, when determining performance of a particular application, determines the application performance relative to other applications executing on the same or similar computing platform as the particular application. The first application and each of the one or more second applications may in some examples be associated with a common category or genre of application. For example, the first and second applications may be travel category applications or navigation genre applications within the same travel category. 
In some examples, the first application and each of the one or more second applications are peer applications, and the first application differs from each of the one or more second applications by at least one of: functionality, title, or developer. For example, the first and second applications may be a specific type of game (e.g., first-person shooter, crossword puzzle, etc.) and while the applications may share some overlap in functionality, the applications are not the same and differ in functionality, title, game play, developer, appearance, or other feature. Computing system260may determine, based on the first performance data, at least one metric quantifying performance of the first application (430). For example, analysis module267may compute a permissions denial rate metric based on the performance data for application122A that indicates how often users prevent application122A from accessing a particular permission of computing devices116(e.g., camera, microphone, etc.). Computing system260may determine one or more benchmarks based on the second performance data (440). For example, analysis module267may determine an average permissions denial rate for one or more other applications executing on computing devices116,118(e.g., application122B). Analysis module267may use the average permissions denial rate as a benchmark for determining whether an application has too high of a permissions denial rate relative to other applications executing on the computing platform. Computing system260may compare the at least one metric to a corresponding benchmark derived from the second performance data (450). For example, analysis module267may compare the permissions denial rate for application122A to the benchmark established during operation440. If the metric is within a threshold amount of the benchmark, computing system260may determine that the first application does not have a performance issue with that particular metric (460, YES branch). Conversely, if the metric is not within the threshold amount of the benchmark, computing system260may determine that the first application has a performance issue with that particular metric (460, NO branch). Analysis module267may determine that the permissions denial rate for application122A exceeds the benchmark established during operation440by ten percent or some other amount that exceeds a tolerable threshold. Computing system260may determine a fix to the first application (470). For example, analysis module267may trigger correction module268to determine a way for application122A to improve the permission denial rate metric such that overall performance of application122A on the computing platform is closer to the performance of other top performing applications on the computing platform. Correction module268may inspect source files or other attributes of application122A and determine that when application122A requests permission to use a device location or camera, no explanation or reason is provided to the user for the requests. Correction module268may determine that the permission denial rate may be improved if application122A included such information when making future requests. Computing system260may output an indication of the fix (480). For example, correction module268may send a command to UI module266that causes a user interface, such as user interface114or314, to provide information to a developer user of computing system260that he or she may wish to modify application122A to provide more information when making a permission request. 
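Operations (430) through (460) can be illustrated with a short sketch: the benchmark is derived as the average permissions denial rate of the second applications, and the first application is flagged when its rate is not within a threshold amount of that benchmark. The numbers and the five-point tolerance are hypothetical and follow the example above.

# Hypothetical sketch of operations (430)-(460): derive a benchmark from the
# second applications' data and compare the first application's metric to it.
def benchmark(peer_rates):
    """Benchmark as the average permissions denial rate of the peer applications."""
    return sum(peer_rates) / len(peer_rates)

def has_performance_issue(app_rate, peer_rates, threshold_points=5.0):
    bench = benchmark(peer_rates)
    return app_rate > bench + threshold_points, bench

# Example: application 122A denies at 27%, peers average 15%, tolerance 5 points.
issue, bench = has_performance_issue(27.0, [12.0, 14.0, 19.0])
print(issue, round(bench, 1))  # True 15.0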
Computing system260may implement the fix (490). For example, after outputting an indication of the recommended fix, UI module266may receive, from computing device110or other developer device, an indication of user input that authorizes the fix to be automatically implemented by computing system260. In response to receiving the indication of user input, correction module268may output to computing devices116instructions for automatically implementing the fix to application122A. The instructions may include updated source files, an updated executable, or an updated configuration file generated by correction module268and that are used during execution of application122A to cause the fix. In addition to implementing a fix, or instead of implementing a fix, computing system260may take other actions in response to determining a performance deficiency in an application relative to performance of other application on a computing platform. For example, a cause of a performance issue may not be readily apparent from existing application performance data. Computing system260may send instructions via network130that cause computing devices116to start collecting more detailed performance data when applications122A executes at computing devices116than what is normally collected. For instance, computing system260may cause computing devices'116performance data collection rate to increase or may cause computing devices116to collect additional data beyond the normal data collected. Clause 1. A method comprising: obtaining, by a computing system, first performance data collected during execution of a first application at a first group of computing devices; determining, based on the first performance data, at least one metric for quantifying performance of the first application; comparing, by the computing system, the at least one metric to a corresponding benchmark derived from second performance data collected during execution of one or more second applications at a second group of computing devices, each of the one or more second applications being different than the first application; determining whether the at least one metric is within a threshold amount of the corresponding benchmark; responsive to determining that the at least one metric is not within the threshold amount of the corresponding benchmark, determining, a fix to the first application; and outputting, by the computing system, for presentation at a developer device, an indication of the fix to the first application. Clause 2. The method of clause 1, wherein each computing device from the first group of computing devices and the second group of computing devices executes a respective instance of a common computing platform. Clause 3. The method of any of clauses 1 or 2, wherein the first application and each of the one or more second applications are associated with a common category or genre of application. Clause 4. The method of clause 3, wherein the first application and each of the one or more second applications are peer applications, and the first application differs from each of the one or more second applications by at least one of: functionality, title, or developer. Clause 5. 
The method of any of clauses 1-4, further comprising: receiving, from the developer device, an indication of user input that authorizes the fix to be automatically implemented by the computing system; and in response to receiving the indication of user input, outputting, by the computing system, to one or more computing devices from the first group of computing device, instructions for automatically implementing the fix to the first application. Clause 6. The method of any of clauses 1-5, wherein the fix to the first application comprises one or more modifications to source code or a configuration file associated with the first application. Clause 7. The method of any of clauses 1-6, wherein the fix to the first application comprises disabling a library, function call, or service utilized by the first application. Clause 8. The method of any of clauses 1-7, wherein the fix to the first application comprises replacing a first library, function call, or service utilized by the first application with an alternative library, function call, or service. Clause 9. The method of any of clauses 1-8, wherein the first group of computing devices and the second group of computing devices comprise a single computing device. Clause 10. A computing system comprising at least one processor configured to perform any one of the methods of clauses 1-9. Clause 11. A computing system comprising means for performing any one of the methods of clauses 1-9. Clause 12. A computer-readable storage medium comprising instructions that, when executed, cause at least one processor to perform any one of the methods of clauses 1-9. Clause 13. A method comprising: outputting, by a computing device, to a computing system, first performance data collected during execution of a first application; receiving, by the computing device, from the computing system, instructions to execute a fix of the first application that improves performance of the first application relative to performance of one or more second applications executing at a group of computing devices, each of the one or more second applications being different than the first application; and executing, by the computing device, the instruction to execute the fix of the first application. Clause 14. A method comprising: receiving, by a computing device, from a computing system, a recommendation for a fix of a first application executing at a first group of computing devices that improves performance of the first application relative to performance of one or more second applications executing at a second group of computing devices, each of the one or more second applications being different than the first application; receiving, by the computing device, user input authorizing the fix of the first application; responsive to sending, to the computing system, an indication of the user input authorizing the fix of the first application, receiving, by the computing device, from the computing system, instructions to execute the fix of the first application; and executing, by the computing device, based on the instructions, the fix of the first application. Clause 15. A computing device comprising at least one processor configured to perform any of the methods of clauses 13 or 14. Clause 16. A computing system comprising means for performing any of the methods of clauses 13 or 14. Clause 17. A computer-readable storage medium comprising instructions that, when executed, cause at least one processor to perform any of the methods of clauses 13 or 14. 
By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other storage medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage mediums and media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transient, tangible storage media. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable medium. Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor,” as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules. Also, the techniques could be fully implemented in one or more circuits or logic elements. The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware. Various embodiments have been described. These and other embodiments are within the scope of the following claims.
61,833
11860759
DETAILED DESCRIPTION The NFR testing process may be improved by implementing an augmented decisioning engine employing a combination of artificial intelligence and machine learning algorithms with developer annotations to generate a recommendation of a configuration of a production infrastructure (e.g., the infrastructure on which the NFR testing is being performed). The recommendation is based on a comparison of one or more metrics data values indicating how the production infrastructure is operating, even “in real time”, with expected performance values indicating how the production infrastructure should be operating under a given set of operating conditions or under a given load. To generate the recommendation, the augmented decisioning engine may train itself to improve subsequent generations of configuration of the production infrastructure through a feedback process. The feedback process may include input from a developer annotating the output of the test environment. The feedback process adjusts the augmented decisioning engine based on an indication of whether the configuration of the production infrastructure satisfies a threshold metric data value in response to the production infrastructure running the system operating in a production environment. Utilizing the augmented decisioning engine of the present application, a developer need only indicate whether the configuration of the production infrastructure recommended by the augmented decisioning engine satisfies a threshold metrics data value by accepting or rejecting the recommendation. The remainder of the testing process, resulting in recommended configurations of infrastructure which can be reapportioned and/or reconfigured, can be driven by the training engine described in further detail below. By enabling a configuration of a production infrastructure to be optimized, one or more embodiments improve operation of a computing system. The developer is no longer tasked with conducting tests, analyzing the outcomes of the tests conducted, determining how to configure the production infrastructure based on the outcome of the test, or actually configuring the production infrastructure based on the outcome. FIG.1is a high-level block diagram of a general testing environment100in which systems, methods, and/or media may run (e.g., execute operations) to automatically generate a recommendation. As illustrated, general testing environment100(also referred to as simply “environment100”) includes multiple sub-systems. In some examples, environment100may be a cloud-based computing environment, an on-premises computing environment, or a hybrid computing environment using partially cloud-based and partially on-premises computing environments. Environment100may be an environment in which NFR testing is performed by the sub-systems included therein. A learning engine110, a test infrastructure130, a production environment140implementing/hosting a production infrastructure150, a user device160, and a network180are among the sub-systems included in the environment100. As illustrated, production infrastructure150includes a system152. System152may include various computing resources155which may or may not be communicatively coupled to each other depending on the configuration of production infrastructure150. According to the present disclosure, learning engine110may retrieve data from production infrastructure150and/or any other computing systems (e.g., platforms, servers, mainframes, databases, etc.) 
implemented or hosted in production environment140. Further, learning engine110may receive data from user device160via user input. Moreover, the learning engine110may be trained (with an augmented decisioning engine described inFIG.2below) based on the data received to automatically generate a recommendation. In some examples, the recommendation may be used to configure production infrastructure150, system152, or any other computing systems/infrastructure included in the production environment140. In some embodiments, the learning engine110may configure the production infrastructure based on the generated recommendation(s). In addition to the above, learning engine110and the various sub-components included therein may spin-up (e.g., power-up, launch, or otherwise instantiate) test infrastructure130to perform a test, or series of tests, on the data retrieved/received, the tests performed being based on the type of data. The outcomes of the tests are compared to generate a confidence score, and the confidence score is used as the basis of subsequent recommendations generated by learning engine110. As illustrated, test infrastructure130may include an assortment of computing clusters132, servers134, databases136, and applications138(collectively “computing resources13X”). In accordance with the present disclosure, learning engine110may configure computing resources13X based on the type data retrieved from production infrastructure150and/or the type of data received via user device160. Also, test infrastructure130may perform a test, or series of tests, on the data retrieved/received by learning engine110based on the configuring. Computing resources13X included in test infrastructure130may be used to perform the test(s). Once testing is complete, computing resources13X may be used to transmit an outcome of the tests performed to learning engine110, via network180. Based on the type of data retrieved/received by learning engine110, test infrastructure130may be configured to perform a test, or series of tests, on the data. Further, to maintain “normal” (e.g., expected) operation of any infrastructure included in production environment140(e.g., production infrastructure150) and any systems (e.g., system152) and/or computing resources (e.g., computing resources155) included therein, computing resources13X may be allocated or reapportioned to the infrastructure/systems based on the outcome of the tests performed. Generally, a production environment, such as production environment140, is an environment, or setting, in which software (e.g., applications, programs, components, etc.) are implemented “in the real world,” and are operated by an end-user (e.g., a customer). The software may execute locally on a computing device (e.g., a computer, laptop, and/or server) of the end-user, the software may be hosted by a cloud-based computing environment, or a combination thereof. In some embodiments, production environment140may be a cloud-based computing environment. In others, production environment140may not be cloud-based, or production environment140may be a hybrid environment. Various infrastructure, such as production infrastructure150, front-end platforms, storage arrays, memory arrays, data management, synchronization, and/or long duration data transfers may be included in/hosted by production environment140. 
Production environment140may include multiple instances of a single production infrastructure, single instances of production infrastructures that are all unique from each other, or a combination thereof. And although embodiments and examples described herein are primarily directed to a production environment that is at least partially cloud-based, it is to be understood that any discussion of a production environment above or below extends and applies equally to production environments that are on-premise (e.g., a production environment which is entirely locally implemented or hosted). Production infrastructure150may include a computing system (e.g., system152), each system including a combination of computing resources (e.g., computing resources155). In accordance with the present disclosure, production infrastructure150, system152, computing resources155included therein, and/or any other computing systems/resources included in production infrastructure150may be hosted by production environment140. Further, production infrastructure150or any systems/computing resources included therein may provide computation, software, data access, storage, and/or other services that do not require end-user knowledge of a physical location of configuration of a system and/or a device that delivers the software. In various embodiments, namely those in which production environment140is at least partially cloud-based, computing as a service may be delivered to/distributed throughout environment100, whereby shared resources, services, etc. may be provided to learning engine110, test infrastructure130, production infrastructure150, and/or user device160via network180. Further, user device160may be used to coordinate, orchestrate, or otherwise influence the delivery/distribution of the shared resources. Examples of resources shared throughout environment100may include computing resources13X and/or computing resources155. In accordance with the present disclosure, each of computing resources155may include one or more personal computers, workstations, computers, server devices, or other types of computation and/or communication devices. In some examples, computing resources155may be cloud computing resources that communicate with other cloud computing resources (e.g., other portions of computing resources155) via a wired connection, a wireless connection, or a combination thereof. Computing resources155, which may be substantially similar/identical to the computing resources13X, may include a group of cloud resources, such as one or more applications (“APPs”)155-1, one or more virtual machines (“VMs”)155-2, virtualized storage (“VS”)155-3, and one or more hypervisors (“HYPs”)155-4. Application155-1may include one or more software applications that may be provided to or accessed by user device160. Alternatively, application155-1may eliminate a need to install and execute software applications on user device160. Application155-1may include software associated with production infrastructure150and/or any software configured to be provided across production environment140. Application155-1may transmit information from one or more other applications155-1via a virtual machine155-2. Virtual machines155-2may include a software implementation of a machine (e.g., a computing device) that executes programs like a physical machine. VMs155-2may be a system VM or a process VM, depending upon the use and degree of correspondence to any real machine by VMs155-2. 
A system VM may provide a complete system platform supporting execution of a complete operating system (OS). A process virtual machine may execute a single program and may support a single process. VMs155-2may execute on behalf of a user (e.g., user device160) and/or on behalf of one or more other production infrastructures150. Further, VMs155-2may manage additional infrastructure/functionality included in production environment140. Virtualized Storage155-3may include one or more storage systems and/or one or more storage devices utilizing virtualization techniques within the storage systems or devices of computing resources155. With respect to a storage system, types of virtualizations may include block virtualization and file virtualization. Block virtualization may refer to abstraction (or separation) of logical storage from physical storage so that the storage system may be accessed without regard to physical storage or heterogeneous structure. The separation may permit administrators of a storage system flexibility in how administrators manage storage for end users. File virtualization may reduce/eliminate dependencies between data accessed at a file level and location where files are physically stored. Reduction or elimination of such dependencies may: enable optimization of storage use, permit server consolidation, and/or improve performance of non-disruptive file migrations. Hypervisors155-4may provide hardware virtualization techniques, allowing multiple operations systems (e.g., “guest operating systems”) to execute concurrently on a host computer, such as a computing resource(s)155. Further, HYPs155-4may present a virtual operating platform to the guest operating systems, may manage multiple instances of the execution of a variety of operation systems (execution of the guest operating systems), and may share virtualized hardware resources. Computing resources155may be communicatively coupled to each other based on the configuration of production infrastructure150. Computing resources155may be multiple instances of the same resource or various combinations of any of the computing resources discussed above and below. User device160may include a communication and/or computing device, such as a desktop computer, mobile device, smartphone, tablet, subnotebook, laptop, personal digital assistant (PDA), gaming device, device integrated with a vehicle, a wearable communication device (e.g., a smart wristwatch, smart eyeglasses, and the like), any other suitable communication device, or a combination thereof. User device160may be configurable to communicate with learning engine110, test infrastructure130, production infrastructure150, any other infrastructure/computing system included in production environment140, or any combination thereof, via network180. One or more portions of the network180may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a cellular telephone network, a wireless network, a WiFi network, a WiMax network, a Bluetooth network, any other type of network, or any combination thereof. FIG.2depicts a testing system200that may be implemented, for example, as part of learning system110in the general testing environment ofFIG.1, according to an embodiment of the present disclosure. 
Testing system200includes a developer system210, which produces an output240, an augmented decisioning engine250(which receives output240and a developer annotation245), a testing initiation and execution platform260(or simply “testing platform260”), a database270, and a production environment280. Developer system210may include test environment components220(or simply “test components220”), which are applications, functions within applications, or segments of computer-executable code under development by the developer. Test environment components220are combined with or are intended to adhere to infrastructure metrics222and/or application and business metrics226. According to embodiments, infrastructure metrics222may include parameters such as CPU usage, bandwidth parameters, memory usage, timing requirements, and the like. Application and business metrics226may include parameters related to service level agreements and variables corresponding to the process of conducting business (e.g., the average time to complete and return an online form). The result of executing test environment components220produces application logs224, which contain details regarding the execution of the code (e.g., timestamps, variables, outputs, values, addresses, routing information, etc.). Test components220may be communicated with sub-systems228to enable transmission to core230. Processing by core230results in output, which may be sent as output240or may be communicated directly with testing platform260. The constituent portions of sub-systems228and core230may be substantially similar to those local and/or network-accessible devices described above and below with regard to the general testing environment100. Output240may be communicated to augmented decisioning engine250without developer input. Output240may include a user interface or graphical user interface enabling examination, validation, annotation, or other modification by the developer. In embodiments, output240may result in a developer annotation245, which may be communicated to augmented decisioning engine250as well. As an example, developer annotation245may be an indication by the developer that test components220adhere to desired infrastructure metrics222as expected by a selected configuration of devices. On the other hand, developer annotation245could indicate that test components220are not in compliance with stated performance levels (or stated service levels as a result of performance levels) and that testing with a selected configuration should be re-executed, altered, reconfigured, or terminated. Output240may be communicated to database270for persistence and later computation or reference. Production environment280, which includes production components and production metrics, may also communicate with database270. Database270thus may store near real-time as well as historical data regarding performance of production environment280and development system210, such that a developer may compare realized and expected values for production components and test components220. Stored values in database270may be used by the developer in order to make determinations for developer annotation245. Augmented decisioning engine250may receive output240, developer annotation245, and real-time or historical data from production environment280and developer system210. 
Augmented decisioning engine250may supply one or more configurations for testing platform260as a result of one or more of the potential inputs (i.e., output240, developer annotation245or stored data from database270). As an example, augmented decisioning engine250may take output240, developer annotation245, and historical data from database270, and process each input through a neural network to supply a configuration to testing platform260. Other examples of machine learning and the operation of augmented decisioning engine250are addressed in further detail below. As a further example, test components220may include an application or portion thereof for a payment system which processes credit card payments. In this example, application and business metrics226may include a stated service level agreement (SLA), wherein the application is capable of processing each payment in less than a specified timeframe (e.g., a millisecond) with an acceptable throughput of concurrent transactions (e.g.,1000simultaneous payments). Based on infrastructure metrics222and/or production metrics stored in database270in conjunction with production environment280, it may be expected that certain CPU and/or memory usage values result in the stated SLA values. Continuing with the example, a configuration that results in higher-than-expected CPU usage may cause the developer to terminate the test and provide a developer annotation245that the selected configuration is not to be used. Conversely, a configuration that does execute as intended for the above-mentioned payment system may be output to augmented decisioning engine250, without developer intervention or developer annotation245, and categorized by augmented decisioning engine250as a successful test. Testing system200allows for configurations, metrics and performance to be monitored in real-time, or near real-time, so that tests can be completed to eliminate certain configurations that are not compliant with stated metrics. FIG.3illustrates in greater detail augmented decisioning engine250ofFIG.2. According to the present disclosure, augmented decisioning engine250may be running in an environment substantially similar or identical to environment100or testing system200and may include a power control engine310, a data engine315, a comparison engine320, an environment engine325, a test engine330, a confidence engine335, and a recommendation engine340. In embodiments, power control engine310may spin-up (e.g., power-up, launch, or otherwise instantiate) test infrastructure130or testing platform260so that augmented decisioning engine250may automatically generate a recommendation. Further, power control engine310may shut down test infrastructure130or testing platform260responsive to the recommendation being generated or implemented in a production environment. Data engine315may retrieve/receive data from the various sub-components included in environment100or testing system200. For instance, data engine315may retrieve, from production environment140/280, metrics data indicative of performance of system152, or any other system, operating in production environment140/280. Moreover, data engine315may receive data describing a configuration of production infrastructure150. In various embodiments, production infrastructure150is running system152operating in production environment140. Data engine315may also receive expected performance values of system152and/or of any other computing system/infrastructure included in production environment140. 
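For the payment-system example above, the kind of comparison performed on output240can be illustrated by the following sketch, in which observed latency, throughput, and CPU usage from a test run are checked against the stated SLA and an expected CPU ceiling. The metric names and limits are hypothetical stand-ins for the values that would come from infrastructure metrics222, application and business metrics226, or database270.

# Hypothetical sketch of checking a test run of the payment-system example
# against its SLA (sub-millisecond processing, 1000 concurrent payments) and an
# expected CPU ceiling. Observed values would come from test output and logs.
EXPECTED = {"max_latency_ms": 1.0, "min_throughput": 1000, "max_cpu_pct": 75.0}

def check_against_expected(observed):
    """Return a list of violations comparing observed metrics to expected values."""
    violations = []
    if observed["latency_ms"] > EXPECTED["max_latency_ms"]:
        violations.append("latency above SLA")
    if observed["throughput"] < EXPECTED["min_throughput"]:
        violations.append("throughput below SLA")
    if observed["cpu_pct"] > EXPECTED["max_cpu_pct"]:
        violations.append("CPU usage above expected ceiling")
    return violations

# A run that meets the SLA but exceeds the CPU ceiling would likely be annotated
# as a configuration not to be used, as in the example above.
print(check_against_expected({"latency_ms": 0.8, "throughput": 1200, "cpu_pct": 91.0}))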
According to the present disclosure, augmented decisioning engine250may compare, via comparison engine320, metrics data with expected performance values (i.e., from infrastructure metrics222, application and business metrics226, or metrics stored in database270produced by production environment280). The comparison of the metrics data with the expected performance values may train the augmented decisioning engine250to provide a recommended configuration of production infrastructure150. The recommendation may suggest a configuration of any computing systems (e.g., system152) and/or computing resources (e.g., computing resources155) included in production infrastructure150. Responsive to the comparison of the metrics data and the expected performance values performed by comparison engine320, augmented decisioning engine250may be further trained to improve subsequent recommendations of the configuration of production infrastructure150through a feedback process adjusting augmented decisioning engine250based on an indication of whether the configuration of production infrastructure150meets a specified threshold related to metrics values. For example, augmented decisioning engine250may use one or more of active learning algorithms, supervised learning algorithms, backpropagation algorithms, clustering algorithms, regression algorithms, decision trees, reduction algorithms, and neural network algorithms for training. Environment engine325may configure test infrastructure130based on the type of data retrieved/received by data engine315. In embodiments, environment engine325may configure test infrastructure130to include a specified set of computing resources13X. To configure test infrastructure130, environment engine325may provision computing resources13X in a cloud computing environment. The computing resources13X provisioned may include data repositories, computing services, and/or the like. Provisioning computing resources13X in a cloud computing environment may further include allocating or reapportioning computing resources13X based on the metrics data, the data describing the configuration of the production infrastructure, and/or the expected performance values received by data engine315, and/or the annotation of any generated recommendations. In some examples, environment engine325may use any combination of advanced provisioning, dynamic provisioning, or user self-provisioning to provision computing resources13X. Additionally and/or alternatively, to configure test infrastructure130, environment engine325may transmit instructions to computing resources13X over network180. The instructions may be a script or other computer-executable file for automatically configuring test infrastructure130and/or for managing computing resources13X included therein. Responsive to environment engine325configuring test infrastructure130, test engine330may determine a test (or series of tests) to perform on the data retrieved/received by data engine315. The test may be performed by test engine330itself, or the test may be performed by test infrastructure130. Additionally and/or alternatively, test engine330may coordinate with test infrastructure130, via network180, to perform the test(s). In accordance with the present disclosure, the tests performed or coordinated by test engine330may include NFR testing. In some examples, test engine330may transmit data on which a test is to be performed to test infrastructure130over network180. 
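As a purely illustrative sketch of how environment engine325and test engine330might act on the type of metrics data retrieved, the mapping below selects both a test-infrastructure configuration and an NFR test type from the metric type; a real environment engine could instead provision the corresponding computing resources13X in a cloud computing environment. The mapping itself is an assumption made for illustration.

# Hypothetical sketch: choose a test-infrastructure configuration and an NFR
# test type from the kind of metrics data that was retrieved. The mapping is
# illustrative only.
TEST_PLAN_BY_METRIC = {
    "memory_usage": {"resources": {"servers": 2, "memory_gb": 64}, "test": "soak test"},
    "cpu_usage":    {"resources": {"servers": 4, "memory_gb": 16}, "test": "stress test"},
    "network_rate": {"resources": {"servers": 2, "memory_gb": 8},  "test": "load test"},
}

def plan_test(metric_type):
    """Return the resources to provision and the test to run for a metric type."""
    default = {"resources": {"servers": 1, "memory_gb": 8}, "test": "configuration test"}
    return TEST_PLAN_BY_METRIC.get(metric_type, default)

print(plan_test("memory_usage"))  # a soak test on a memory-heavy configuration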
While the test is being performed by test infrastructure130, test infrastructure130may communicate, via network180, information corresponding to utilization of computing resources13X included therein to test engine330. The information is communicated to enable test engine330to manage, allocate, or otherwise reapportion computing resources13X as the test is being performed. For example, if the test performed by test infrastructure130indicates that system152is running without memory resources capable of performing operations within an allotted timeframe, the test infrastructure may reapportion memory included amongst computing resources13X to production infrastructure150and/or system152to support the continued operation of system152. Once the test is complete, test infrastructure130communicates the outcome of the test performed to test engine330via network180. Confidence engine335may receive an annotation for the recommendation of configuration of production infrastructure150generated based on comparing the metrics data to expected performance values. Based on the annotation, confidence engine335generates a confidence score, the confidence score being used to generate subsequent recommendations for the configuration of production infrastructure150. Recommendation engine340may generate the recommendation based on the comparison of the metrics data to the expected performance values. Further, recommendation engine340may also generate a subsequent recommendation based on the confidence score of previously generated recommendations. In an example embodiment, a system for automatically generating a recommendation, such as augmented decisioning engine250ofFIG.2, may be running (e.g., executing operations) in environment100or testing environment200. Augmented decisioning engine250may include a memory and a processor in communication with the memory (the memory and the processor are not shown inFIG.1). Augmented decisioning engine250may be configured to retrieve from a production environment (such as production environment140) metrics data indicative of performance of a system (such as system152) operating in production environment140. Data describing a configuration of a production infrastructure (such as production infrastructure150) may be received by augmented decisioning engine250via a data engine, such as data engine315. In embodiments, production infrastructure150may include any combination of computing clusters, servers, databases, applications, or other computing resources (e.g., computing resources155). Data engine315may also receive expected performance values for system152. In some examples, the data describing the configuration of the production environment and the expected values are received as one or more pieces of data supplied via a user input from a non-production environment. The user input may be supplied by user device160. For example, a software developer (i.e., a user) may be tasked with designing a production infrastructure included in, or hosted by, a production environment to a particular set of specifications. In embodiments, the production infrastructure can be an enterprise's website (as implemented by any combination of computing clusters, servers, databases or other computing devices configured to provide the website), while a production environment can be the same or similar to production environment140. 
The specifications can include accommodating a specified number of concurrent visitors (e.g., 10,000), handling a maximum number of simultaneous user requests (e.g., up to 1,000 visitors to the website clicking on the same link at the same time), and so on. The developer may also be tasked with meeting certain overhead criteria. As an example, even when under maximum load, the infrastructure hosting the website can be configured not to exceed 75% RAM/CPU usage. Once a desired configuration/specification for the production infrastructure is determined, the developer may design and test the production infrastructure in a non-production environment (e.g., a code-testing or quality assurance environment). The design and testing can use dedicated non-production infrastructure (e.g., test infrastructure130or testing platform260). Via a user device, such as user device160, the developer may be able to modify the number of concurrent visitors and/or the number of simultaneous requests experienced by the website along with the amount of RAM, the amount of storage space and/or the amount of available processing capacity of the CPU provided by the non-production infrastructure hosting the website. Augmented decisioning engine250may be further configured to compare the metrics data to the expected performance values. A comparison engine, such as comparison engine320, included in augmented decisioning engine250may perform the comparison. Based on the results of the comparison, augmented decisioning engine250may be trained to improve subsequent recommendations of configuration of production infrastructure150through a feedback process adjusting augmented decisioning engine250based on an indication of whether the configuration of production infrastructure150performs at the stated threshold level. A developer may supply, via a user device (e.g., user device160), the indication of whether the configuration of production infrastructure150meets the threshold. Augmented decisioning engine250may be further configured to automatically spin up an infrastructure, such as test infrastructure130or testing platform260. Power control engine310included in augmented decisioning engine250may be used to spin up test infrastructure130, and test infrastructure130may include any combination of computing clusters, servers, databases, applications, or other computing resources (e.g., computing resources13X). Environment engine325included in augmented decisioning engine250may be used to configure, automatically, test infrastructure130based on a type of metrics data retrieved by data engine315from production environment140. The type of metrics data retrieved from production environment140may correspond to one or more of CPU usage, memory usage, other system overhead limitations, network downlink rate, network uplink rate, other network bandwidth limitations, application logs, overall speed, responsiveness, or stability of a system in production environment140or production environment280(e.g., production infrastructure150and/or any other computing system/resource that may be included therein). Augmented decisioning engine250may be further configured to perform, based on the configuring, a test on the test components (i.e., software or code under development). In some examples, a test engine, such as test engine330, may be used to perform the test. In other examples, test engine330may select a test, or a series of tests, to perform on the test components based on the configuring. 
In these other examples, test engine330included in augmented decisioning engine250may be configured to manage, allocate, or otherwise reapportion any combination of the computing clusters, servers, databases, applications, or other computing resources included in test infrastructure130to production infrastructure150based on the recommended configuration of production infrastructure150. Network180enables communications between augmented decisioning engine250(and all sub-components thereof, such as test engine330) to communicate with test infrastructure130/testing platform260to execute, orchestrate, and/or otherwise enable the test to be performed on the test components. The outcome of the test is communicated from test infrastructure130/testing platform260to test engine330over network180. Responsive to receiving the outcome of the test performed, comparison engine320included in augmented decisioning engine250may compare the outcome received to an expected outcome. In some examples, the expected outcome may be received by test engine330as one or more pieces of data supplied via a user input. In accordance with the present disclosure, the user input may be supplied by user device160. In other examples, the expected outcome of various tests performed in environment100or testing environment200may be pre-loaded into test engine330. Additionally and/or alternatively, test engine330may learn outcomes to expect for the tests performed throughout training augmented decisioning engine250. Based on the comparison, a recommendation engine340included in augmented decisioning engine250may generate a recommendation based on a comparison of the outcome of the test performed to the expected outcome. In some examples, the recommendation generated may be to configure infrastructure/systems (e.g., production infrastructure150/system152) implemented/hosted in production environment140. Next, an annotation for the recommendation based on the comparison of the outcome of the test performed to the expected outcome is received. The annotation may be received by confidence engine335as one or more pieces of data supplied by a user input in a non-production environment. The user input may be supplied by user device160. Confidence engine335generates a confidence score for the recommendation based on the annotation. The annotation may, in some examples, be indicative of whether the recommendation of configuration of the production infrastructure meets a threshold (e.g., is accepted/rejected by a user of the system a specified percentage of test instances). Indication of the annotation of the recommendation of the configuration of the production infrastructure may be received via a user input as one or more pieces of data supplied by user device160. Recommendation engine340may generate a subsequent recommendation based on the confidence score of previously generated recommendations. For instance, if the confidence score of recommendations previously generated when production infrastructure150was configured/performing in a particular way was above a certain threshold (e.g., an acceptance ratio between 0.7 and 1.0), then, when production infrastructure150is configured in a substantially similar/identical way in the future, recommendation engine340may subsequently generate the same recommendation of configuration of production environment150. 
If the confidence score did not meet the threshold value (e.g., an acceptance ratio less than 0.7), the recommendation engine340may adjust subsequent recommendations for configuration of production infrastructure150. Responsive to recommendation engine340generating the subsequent recommendation, augmented decisioning engine250may be configured to automatically shut down test infrastructure130/testing platform260. In various embodiments, augmented decisioning engine250may be configured to use one or more of active learning algorithms, supervised learning algorithms, backpropagation algorithms, clustering algorithms, regression algorithms, decision trees, reduction algorithms, and neural network algorithms. Further, augmented decisioning engine250may be configured to run, and test infrastructure130may be configured to spin up, in a non-production environment. FIG.4is a flowchart outlining the steps of training an augmented decisioning engine (e.g., learning engine110or augmented decisioning engine250) within an environment that is substantially similar/identical to environment100or testing environment200. Training augmented decisioning engine250may include, at step410, automatically spinning up a test infrastructure (e.g., powering on and making available for use by a computing system, such as test infrastructure130). In accordance with the present disclosure, the augmented decisioning engine may run, and the test infrastructure may spin up, in a non-production environment. Examples of a non-production environment may include, but are not limited to, a code-testing and/or a quality assurance (QA) environment. The infrastructure spun-up by the training engine may include a combination of computing clusters, servers, databases, applications, or other computing resources. The augmented decisioning engine may communicate with the infrastructure and/or other sub-components of the environment via a communications network, such as network180. Training the augmented decisioning engine may further include, at step420, automatically configuring the test infrastructure based on a type of metrics data retrieved from the production environment. For example, if a particular minimum number of simultaneous executions is specified, the test infrastructure can be configured to involve a commensurate number of servers or computing clusters to ensure the ability to deliver that minimum. As noted above, the developer can compare the results of the tests executed on the test infrastructure to the specifications, and a coefficient representing a correlation between the expected values and the tested values (e.g., a confidence score) may be determined. At step430, training includes performing, based on the metrics-based configuring, a test on the test components (i.e., application or code under development). The test infrastructure may be configured by an environment engine that is substantially similar/identical to environment engine325. The test may be performed, orchestrated, or otherwise enabled by a test engine that is substantially similar/identical to the test engine330. In some examples, the test engine may perform the test. In other examples, the test engine may orchestrate/coordinate performance of the test with the test infrastructure. In these other examples, once a test has been performed, the test infrastructure communicates the outcome to the augmented decisioning engine via the network.
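Purely as a non-limiting illustration of comparing the outcome of a test to expected performance values, the following Python sketch checks each measured metric against its expected value within a tolerance and reports the fraction of expectations that are satisfied; the metric names, values and 10% tolerance are assumptions made for this sketch.

# Hypothetical comparison of a test outcome to expected performance values.
def compare_outcome(measured, expected, tolerance=0.10):
    results = {}
    for name, expected_value in expected.items():
        measured_value = measured.get(name)
        within = (measured_value is not None
                  and abs(measured_value - expected_value) <= tolerance * expected_value)
        results[name] = {"expected": expected_value, "measured": measured_value,
                         "within_tolerance": within}
    satisfied = sum(1 for r in results.values() if r["within_tolerance"])
    return results, satisfied / len(results)

measured = {"requests_per_second": 960, "p95_latency_ms": 180, "cpu_utilization": 0.85}
expected = {"requests_per_second": 1000, "p95_latency_ms": 200, "cpu_utilization": 0.75}
details, score = compare_outcome(measured, expected)
print(round(score, 2))   # 0.67: two of the three expectations are met within 10%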
For example, past configurations/iterations of both production and non-production infrastructure and confidence scores may be logged and used, at least in part, to predict the success of future iterations of a web application under development. Other applications and systems with which the confidence scores may be used will be apparent to one of skill in the relevant art. As a non-limiting example, at step420, metrics data corresponding to memory utilization may be retrieved by the augmented decisioning engine from the production environment. Then, at step430, the augmented decisioning engine may automatically configure the infrastructure to perform a test, or series of tests, to assess the memory utilization of the system operating in a production environment. Such a test on memory utilization may, for instance, be performed to determine if there is a memory leak within the system. Other types of metrics data the augmented decisioning engine may retrieve from the production environment include data corresponding to CPU utilization, memory utilization, hard disk utilization, other overhead limitations, network downlink rate, network uplink rate, port configuration, event routing, other network bandwidth limitations, application logs, overall speed, responsiveness, or stability of the system executing operations in the production environment. The application logs may include time stamps corresponding to the application launching/shutting down, an indication of whether a task (e.g., data transfers) executed by the application was successful, and other information corresponding to how a user of the application interacts with the application. Further, based on the type of metrics retrieved, the infrastructure may be automatically configured to perform load testing, stress testing, soak testing, spike testing, breakpoint testing, configuration testing, isolation testing, internet testing, and/or the like, or a combination thereof. Training the augmented decisioning engine further includes, at step440, comparing the outcome of the test (or series of tests) performed to an expected outcome. The comparison may be performed by a comparison engine substantially similar/identical to comparison engine320. A user device substantially similar/identical to user device160may be used to supply the expected outcome via a user input. The outcome of the test will result in one or more metrics data values which can be compared to corresponding expected metrics data values for a given configuration. At step450, the training further includes generating a recommendation for configuring the production infrastructure based on the comparison. A recommendation engine substantially similar/identical to recommendation engine340may be used to generate the recommendation based on the comparison. Next, at step460, training the augmented decisioning engine includes receiving, during the feedback process, an annotation for the recommendation. The annotation may be received as one or more pieces of data supplied by a user input in the non-production environment. The annotation may be received by a confidence engine substantially similar/identical to the confidence engine335, and the user input may be supplied by a user device substantially similar/identical to user device160. The training further includes, at step470, generating a confidence score for the recommendation based on the annotation.
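By way of a non-limiting numerical illustration of such a confidence score, the following Python sketch computes an acceptance ratio over previously generated recommendations and compares it against a threshold to decide whether a prior recommendation should be reused or adjusted; the 0.7 threshold and the function names are assumptions for this sketch.

# Hypothetical confidence score: the fraction of previously generated
# recommendations that a developer accepted for a given configuration.
def confidence_score(accepted, total):
    return accepted / total if total else 0.0

def next_action(score, threshold=0.7):
    # Reuse the prior recommendation when the acceptance ratio meets the
    # threshold; otherwise adjust subsequent recommendations.
    return "reuse_recommendation" if score >= threshold else "adjust_recommendation"

score = confidence_score(accepted=7, total=10)   # seven of ten accepted -> 0.7
print(score, next_action(score))                 # 0.7 reuse_recommendation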
In some embodiments, the confidence score is the indication of whether the configuration of the production infrastructure performs above a certain threshold value. For example, the confidence score may be a ratio indicating how often recommendations generated based on comparing the metrics data with the expected values were accepted, rather than rejected, by a user/developer in the non-production environment. For example, if ten total recommendations were generated, based on the comparison, and seven were accepted, the confidence score generated would be 0.7. At step480, training the augmented decisioning engine further includes generating a subsequent recommendation based on the confidence score of previously generated recommendations. For instance, previously generated recommendations receiving a high confidence score (e.g., 0.8-1.0) may be generated subsequently, provided the metrics data retrieved and expected performance values received while generating the present recommendation are substantially similar to those compared when generating previous recommendations. Responsive to generating the subsequent recommendation, the training concludes, at step490, by shutting down the infrastructure. This includes shutting down the combination of computing clusters, servers, databases, applications, or other computing resources included in the test infrastructure. In various embodiments, training the augmented decisioning engine may use one or more of active learning algorithms, supervised learning algorithms, backpropagation algorithms, clustering algorithms, regression algorithms, decision trees, reduction algorithms, and/or neural network algorithms. The augmented decisioning engine may be configurable to reapportion any combination of the computing clusters, servers, databases, applications, or other computing resources included in the test infrastructure to the production infrastructure based on the recommended configuration of the production infrastructure. Computing resources13X and155are examples of computing resources which the augmented decisioning engine may be configured to reapportion. In other exemplary embodiments of the present disclosure, an environment substantially similar/identical to environment100or testing environment200, including sub-systems substantially similar/identical to those included in environment100/testing environment200, further includes a non-transitory computer-readable medium storing instructions that, when executed by one or more processors of a device operating in the environment, cause the one or more processors to perform a method of automatically generating a recommendation. The method executed by the non-transitory computer-readable medium may be substantially similar/identical to method300. For instance, the method executed by the non-transitory computer-readable storage medium may include the steps of retrieving, from a production environment, metrics data indicative of performance of a system operating in the production environment, receiving data describing a configuration of a production infrastructure, the production infrastructure running the system operating in the production environment, and receiving expected performance values of the system.
The method executed by the non-transitory computer-readable medium may further include comparing, by an augmented decisioning engine, the metrics data to expected performance values; and training, based on the comparing, the augmented decisioning engine to improve subsequent recommendations through a feedback process adjusting the augmented decisioning engine based on an indication of whether the configuration of the production infrastructure meets a threshold related to one or more metrics values. Training the augmented decisioning engine via the non-transitory computer-readable medium may include spinning up, automatically, a test infrastructure and configuring, automatically, the infrastructure based on a type of metrics data retrieved from the production environment. The training may further include performing, based on the configuring, a test on the test components, comparing the outcome of the test performed to an expected outcome, and generating a recommendation for configuring the production infrastructure based on the comparing. The training further includes receiving, during the feedback process, an annotation for the recommendation, the annotation being the indication of whether the configuration of the production infrastructure performed above a certain threshold value. The training further includes generating, based on the annotation, a confidence score for the recommendation. According to the present disclosure, the training process executed by the non-transitory computer-readable medium concludes by generating a subsequent recommendation based on the confidence score of previously generated recommendations and shutting down, automatically, the infrastructure responsive to generating the subsequent recommendation. The non-transitory computer-readable medium may configure the augmented decisioning engine, via training, to reapportion any combination of the computing clusters, servers, databases, applications, or other computing resources included in the test infrastructure to the production infrastructure based on the recommended configuration of the production infrastructure. In some examples, the data describing the configuration of the production infrastructure, the expected performance values, and the annotation may be received as one or more pieces of data supplied via a user input in the non-production environment. FIG.5depicts an example computer system useful for implementing various embodiments. Various embodiments may be implemented, for example, using one or more well-known computer systems, such as computer system500shown inFIG.5. One or more computer systems500may be used, for example, to implement any of the embodiments discussed herein, as well as combinations and sub-combinations thereof. Computer system500may include one or more processors (also called central processing units, or CPUs), such as a processor504. Processor504may be connected to a communication infrastructure or bus506. Computer system500may also include user input/output device(s)503, such as monitors, keyboards, pointing devices, etc., which may communicate with communication infrastructure506through user input/output interface(s)502. One or more of processors504may be a graphics processing unit (GPU). In an embodiment, a GPU may be a processor that is a specialized electronic circuit designed to process mathematically intensive applications.
The GPU may have a parallel structure that is efficient for parallel processing of large blocks of data, such as mathematically intensive data common to computer graphics applications, images, videos, etc. Computer system500may also include a main or primary memory508, such as random access memory (RAM). Main memory508may include one or more levels of cache. Main memory508may have stored therein control logic (i.e., computer software) and/or data. Computer system500may also include one or more secondary storage devices or memory510. Secondary memory510may include, for example, a hard disk drive512and/or a removable storage device or drive514. Removable storage drive514may be a floppy disk drive, a magnetic tape drive, a compact disk drive, an optical storage device, a tape backup device, and/or any other storage device/drive. Removable storage drive514may interact with a removable storage unit518. Removable storage unit518may include a computer usable or readable storage device having stored thereon computer software (control logic) and/or data. Removable storage unit518may be a floppy disk, magnetic tape, compact disk, DVD, optical storage disk, and/or any other computer data storage device. Removable storage drive514may read from and/or write to removable storage unit518. Secondary memory510may include other means, devices, components, instrumentalities or other approaches for allowing computer programs and/or other instructions and/or data to be accessed by computer system500. Such means, devices, components, instrumentalities or other approaches may include, for example, a removable storage unit522and an interface520. Examples of the removable storage unit522and the interface520may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM or PROM) and associated socket, a memory stick and USB port, a memory card and associated memory card slot, and/or any other removable storage unit and associated interface. Computer system500may further include a communication or network interface524. Communication interface524may enable computer system500to communicate and interact with any combination of external devices, external networks, external entities, etc. (individually and collectively referenced by reference number528). For example, communication interface524may allow computer system500to communicate with external or remote devices528over communications path526, which may be wired and/or wireless (or a combination thereof), and which may include any combination of LANs, WANs, the Internet, etc. Control logic and/or data may be transmitted to and from computer system500via communication path526. Computer system500may also be any of a personal digital assistant (PDA), desktop workstation, laptop or notebook computer, netbook, tablet, smart phone, smart watch or other wearable, appliance, part of the Internet-of-Things, and/or embedded system, to name a few non-limiting examples, or any combination thereof.
Computer system500may be a client or server, accessing or hosting any applications and/or data through any delivery paradigm, including but not limited to remote or distributed cloud computing solutions; local or on-premises software (“on-premise” cloud-based solutions); “as a service” models (e.g., content as a service (CaaS), digital content as a service (DCaaS), software as a service (SaaS), managed software as a service (MSaaS), platform as a service (PaaS), desktop as a service (DaaS), framework as a service (FaaS), backend as a service (BaaS), mobile backend as a service (MBaaS), infrastructure as a service (IaaS), etc.); and/or a hybrid model including any combination of the foregoing examples or other services or delivery paradigms. Any applicable data structures, file formats, and schemas in computer system500may be derived from standards including but not limited to JavaScript Object Notation (JSON), Extensible Markup Language (XML), Yet Another Markup Language (YAML), Extensible Hypertext Markup Language (XHTML), Wireless Markup Language (WML), MessagePack, XML User Interface Language (XUL), or any other functionally similar representations alone or in combination. Alternatively, proprietary data structures, formats or schemas may be used, either exclusively or in combination with known or open standards. In some embodiments, a tangible, non-transitory apparatus or article of manufacture comprising a tangible, non-transitory computer useable or readable medium having control logic (software) stored thereon may also be referred to herein as a computer program product or program storage device. This includes, but is not limited to, computer system500, main memory508, secondary memory510, and removable storage units518and522, as well as tangible articles of manufacture embodying any combination of the foregoing. Such control logic, when executed by one or more data processing devices (such as computer system500), may cause such data processing devices to operate as described herein. Based on the teachings contained in this disclosure, it will be apparent to persons skilled in the relevant art(s) how to make and use embodiments of this disclosure using data processing devices, computer systems and/or computer architectures other than that shown inFIG.5. In particular, embodiments can operate with software, hardware, and/or operating system implementations other than those described herein. It is to be appreciated that the Detailed Description section, and not the Abstract section, is intended to be used to interpret the claims. The Abstract section may set forth one or more but not all exemplary embodiments of the present application as contemplated by the inventor(s), and thus, is not intended to limit the present application and the appended claims in any way. The present application has been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.
The foregoing description of the specific embodiments will so fully reveal the general nature of the application that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the present disclosure. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance. The breadth and scope of the present application should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
DETAILED DESCRIPTION Implementations are described herein according to the following outline:
1.0 Terms
2.0 General Overview
3.0 Data Collection
3.1 Logs, Traces and Metrics
4.0 Multiple Modalities for Performing Application Performance Monitoring (APM)
4.1 Metric Time Series
4.1.1 Generating Metric Data Streams Using Span Identities
4.1.2 Real-Time Monitoring Using Metric Time Series Data
4.2 Metric Events
4.2.1 Metric Events Data Generation and Persistence
4.3 High-Fidelity Data
5.0 Multiple Modalities for Performing Real User Monitoring (RUM)
5.1 End-to-End Visibility of a Real User Session
5.1.1 Aggregating Metrics for Workflows Associated with a Real User Session
1.0 Terms The term “trace” as used herein generally refers to a record of the manner in which a single user request, also referred to as a transaction, propagates from one microservice (hereinafter interchangeably referred to as “service”) to the next in a distributed application. A transaction is generally described as an end-to-end request-response flow, from the making of the user's initial request to receiving the final response. A transaction often involves the interaction of multiple services. A trace is a record of a transaction and each trace may be identified using a unique trace identifier (“Trace ID”). The trace follows the course of a request or transaction from its source to its ultimate destination in a distributed system. In one implementation, a trace may be conceptualized as a highly dimensional structured log that captures the full graph of user-generated and background request execution within an application, and includes valuable information about interactions as well as causality. The term “span” as used herein generally refers to the primary building block of a trace, representing an individual unit of work done in a distributed system. A trace is composed of one or more spans where a span represents a call within the request. It is appreciated that a call may be to a separate microservice or a function within a microservice. The trace represents the work done by each microservice which is captured as a collection of linked spans sharing the same unique Trace ID. Each component of the distributed system may contribute a span, i.e., a named, timed operation representing a piece of the workflow. A span may also include a unique span ID, a service name (e.g., “analytics”), an operation name (e.g., “start”), duration (latency), start and end timestamps and additional annotations and attributes (e.g., tags such as key:value pairs). The annotations and attributes can describe and contextualize the work being done under a span. For example, each span may be annotated with one or more tags that provide context about the execution, such as the client instrumenting the software, a document involved in the request, an infrastructure element used in servicing a request, etc. The term “tags” as used herein generally refers to key:value pairs that provide further context regarding the execution environment and enable client-defined annotation of spans in order to query, filter and comprehend trace data. Tag information (including the ‘key’ and corresponding ‘value’) is typically included with each span and there may be different levels of tag information included in a span. “Global tags” generally represent properties of a user-request (e.g., tenant name, tenant level, user location, environment type, etc.)
and may be extracted from any span of the trace based on configured rules. A global tag for a particular span in a trace may be attributed to the other spans in a trace, because each span within a single trace may comprise the same global attributes. For example, if one span within a trace comprises a tag relating it to a request from a “gold” level “tenant,” it may be inferred that other spans in the same trace are associated with the same request and, accordingly, from the same “gold” level “tenant.” Consequently, the “tenant:gold” key-value pair or tag may be attributed to the other spans in the same trace. “Span-level tags” comprise attributes that are specific to a particular span. The term “root span” as used herein generally refers to the first span in a trace. A span without a parent is called a root span. The term “child span” as used herein generally refers to a span that follows a root span, including a child of a child. The term “parent span” as used herein generally refers to a span that executes a call (to a different service or a function within the same service) that generates another span, wherein the span executing the call is the “parent span” and the span generated in response to the call is the “child span.” Each span may typically comprise information identifying its parent span, which along with the Trace ID, may be used to consolidate spans associated with the same user-request into a trace. A “metric” as used herein generally refers to a single quantifiable measurement at a specific point in time. Combining the measurement with a timestamp and one or more dimensions results in a metric data point. A single metric data point may include multiple measurements and multiple dimensions. Metrics are used to track and assess the status of one or more processes. A metric typically comprises a numeric value that is stored as a timeseries. A timeseries is a series of numeric data points of some particular metric over time. Each time series comprises a metric plus one or more tags associated with the metric. A metric is any particular piece of data that a client wishes to track over time. 2.0 General Overview One of the fundamental shifts in modern day computing has been the shift from monolithic applications to microservices-based architectures. As previously mentioned, this is the shift from an application being hosted together (e.g., on a single system) to each piece of an application being hosted separately (e.g., distributed).FIG.1Aillustrates an exemplary monolithic multi-layer architecture. A monolithic application is traditionally built as a single unit. The monolithic application consists of a single self-contained unit in which code exists in a single codebase100and in which modules are interconnected. At deployment time, the entire codebase is deployed and scaling is achieved by adding additional nodes. FIG.1Billustrates an exemplary microservices architecture. A microservices architecture involves the building of modules (e.g., modules104,106and108) that address a specific task or business objective. As a result, these modules tend to exhibit low coupling and high cohesion. A microservices architecture is often achieved by decoupling a monolithic application into independent modules that each include the components necessary to execute a single business function. These services typically communicate with each other using language agnostic Application Programming Interfaces (“APIs”) such as Representational State Transfer (REST). 
Microservices were created in order to overcome the issues and constraints of monolithic applications. Monolithic applications have a tendency to grow in size over time. As applications become larger and larger, the tight coupling between components results in slower and more challenging deployments. Because of the tight coupling, the potential for a failure of the entire application due to a recently deployed feature is high. In some cases, deployments may take several months to a year, greatly reducing the number of features that may be rolled out to users. This tight coupling also makes it difficult to reuse and replace components because of the effect they may have on other components throughout the application. Microservices address these issues by being small in scope and modular in design. Modular design results in components being loosely coupled, which offers enormous benefits from the standpoint of being both fault tolerant and independently deployable. This results in functionality that may be frequently deployed and continuously delivered. The attribute of loosely coupled modules without a central orchestrator in a microservices architecture, however, leads to considerable challenges in terms of monitoring, troubleshooting and tracking errors. These challenges have led to the rise of observability, a new generation of monitoring, the foundation for which is built, in part, on distributed tracing. Distributed tracing, also called distributed request tracing, is an application performance monitoring (APM) method used to profile and monitor applications, especially those built using a microservices architecture. Distributed tracing helps pinpoint where failures occur and what causes poor performance. Distributed tracing, as the name implies, involves tracing user requests through applications that are distributed. A trace represents a single user request, also referred to as a transaction, and represents the entire lifecycle of a request as it traverses across the various services or components of a distributed system. While distinct from the methodologies employed for APM, real user monitoring (RUM) is considered one of the critical strategies employed for performance monitoring by focusing on the manner in which end users' experiences might inform application optimization strategies. RUM surfaces meaningful diagnostic information on frontend performance so developers can optimize frontend code and deliver the best possible user experience. APM, meanwhile, typically monitors the performance of server-side code and offers detailed insight on improving it to reduce infrastructure costs and create faster applications for users. RUM utilizes data related to the end users' experiences to help developers track and improve a website or application's performance. RUM focuses on measuring the experience of real users of a website or an application. It does this by tracking and reporting on several metrics including time-to-first-byte, full page load time, load time of specific elements, DNS timing, transaction paths, JavaScript errors, etc. With RUM, real user data can be tracked across browser versions, operating systems and end-user configurations. Tracking real users allows RUM to provide critical real-world measurements and helps developers identify whether certain user engagements or activities are triggering a lag in performance or causing errors.
RUM, therefore, contributes to successful performance monitoring by analyzing how the end users' experiences might inform application-optimization strategies. RUM-based and APM-based methods together monitor the speed at which both frontend and backend transactions are performed both by end-users and by the systems and network infrastructure that support a software application, providing an overview of potential bottlenecks and service interruptions. This typically involves the use of a suite of software tools—or a single integrated SaaS or on-premises tool—to view and diagnose an application's speed, reliability, and other performance metrics to maintain an optimal level of service. Computing operations of instrumented software may be described by spans and traces. The spans and traces are produced by various instrumented services in an architecture and are communicated to an analysis system that analyzes the traces and spans to enable a software developer to monitor and troubleshoot the services within their software. FIG.2Aillustrates an exemplary trace tree. The first span in the trace tree, Span A202, is known as the root span. A trace tree typically comprises a root span, which is a span that does not have a parent. It may be followed by one or more child spans. Child spans may also be nested as deep as the call stack goes. Span B206and Span E204are child spans of the parent span, Span A. Further, Span C208and Span D210are child spans of the parent Span B206. FIG.2Billustrates an alternate view of the trace fromFIG.2Aadjusted for timeline. The trace starts with the Span A202, the root span, where the request starts. When the trace starts, a Trace ID is generated (e.g., Trace ID:1as shown inFIG.2B), which follows the request as it propagates through the distributed system. A new span is generated for each logical chunk of work in the request, where the new span includes the same Trace ID, a new Span ID and a Parent Span ID, which points to the span ID of the new span's logical parent. The Parent Span ID creates a parent-child relationship between spans. A given request typically comprises one span (e.g., the root Span A202) for the overall request and a child span for each outbound call made to another service, database, or a function within the same microservice etc. as part of that request. For example, in the example ofFIG.2B, the Span A202is the root span for the overall request and generates several child spans to service the request. The Span A202makes a call to the Span B206, which in turn makes a call to the Span C208, which is a child span of the Span B206. The Span B206also makes a call to the Span D210, which is also a child span of the Span B206. The Span A202subsequently calls the Span E204, which is a child span of the Span A202. Note that the spans in a given trace comprise the same Trace ID. The Trace ID along with the Parent Span ID may be used to consolidate the spans together into a trace. 3.0 Data Collection Distributed tracing data is generated through the instrumentation of browsers, microservices-based applications, libraries and frameworks. Software may be instrumented to emit spans and traces. The spans and traces may be generated according to an industry standard, such as the OpenTracing standard. Other common open source instrumentation specifications include OPENTELEMETRY and OpenCensus.
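Purely as a non-limiting illustration of software instrumented to emit spans, the following Python sketch times a unit of work and records a span carrying a Trace ID, Span ID, Parent Span ID, service name, operation name, duration and tags; real instrumentation libraries (e.g., OpenTelemetry) provide far richer APIs, and the field and helper names here are assumptions made for this sketch.

# Hypothetical, minimal instrumentation helper that emits a span for a timed
# unit of work; nested spans reuse the parent's Trace ID.
import time, uuid
from contextlib import contextmanager

EMITTED_SPANS = []   # stand-in for an exporter or agent

@contextmanager
def start_span(service, operation, trace_id=None, parent_span_id=None):
    span = {"trace_id": trace_id or str(uuid.uuid4()), "span_id": str(uuid.uuid4()),
            "parent_span_id": parent_span_id, "service": service,
            "operation": operation, "start": time.time(), "tags": {}}
    try:
        yield span
    finally:
        span["duration_ms"] = int((time.time() - span["start"]) * 1000)
        EMITTED_SPANS.append(span)

with start_span("analytics", "start") as parent:
    with start_span("analytics", "load-data", parent["trace_id"], parent["span_id"]):
        time.sleep(0.01)    # simulated work
print(len(EMITTED_SPANS), EMITTED_SPANS[0]["trace_id"] == EMITTED_SPANS[1]["trace_id"])
# 2 True: both spans belong to the same trace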
Each span may be annotated with one or more tags that provide context about the execution, such as the client instrumenting the software, a document involved in the request, an infrastructure element used in servicing a request, etc. The instrumentation handles the creation of unique session IDs, trace and span IDs, tracking duration, adding metadata and handling context data. Handling context data, also known as context propagation, is critical and is responsible for passing context (e.g., Trace ID) between function/microservice calls, thereby enabling an observer to view the entire transaction at each step along the way. Context propagation may, for example, be based on REST. REST is header-based and requires a transaction to pass headers between service-to-service calls. In order to work properly, services within a request use the same context propagation format. Once the code has been instrumented and context propagation has been implemented using a standard format, the trace data generated by the services may be collected and analyzed to monitor and troubleshoot the microservices-based applications generating the trace data. FIG.3is a flow diagram that illustrates the manner in which trace data may be collected and ingested for further analysis within a computer system, in accordance with an implementation of the monitoring service disclosed herein. Tasks301represent client applications that execute within a client data center for Client A. Similarly, tasks302represent client applications that execute within a client data center for Client B. The tasks301or302may comprise services or applications within a client's on-premises (“on-prem”) software. Alternatively, they may comprise services or applications running in the cloud computing environment, e.g., in an AMAZON WEB SERVICES (AWS) Virtual Private Cloud (VPC). The tasks301and302may be instrumented using open source or common commercial tracing libraries, from tracing applications (e.g., Jaeger or Zipkin), in-house formats, or auto-instrumentation. Each task may be configured to generate spans that describe the processing of a portion of a request as the request traverses through the various tasks (or services) on the client-side. It should be noted that while the tasks301and302may comprise instrumented application software, the techniques disclosed herein are not limited to application software but are applicable to other kinds of software, for example, server software, software executing on customer devices, websites and so on. Furthermore, a client device (e.g., a device at a data center for Client A or Client B) may include any computing system that is configured to execute instrumented software, whether or not it is used for development of improved software. For example, the client device may be a computing system used for testing purposes, staging purposes, or any production system executing in an enterprise. An agent303is typically configured at the client-side host or service for receiving spans collected from the various tasks on the client-side and transmitting the spans to a collector304. An agent may receive generated spans locally using, for example, User Datagram Protocol (UDP). The tasks302may comprise instrumented tasks that are not using an agent and may be configured to send spans directly to the collector304. The tasks may include various front-end tasks such as those performed by a web browser running on a client's computer.
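As described above, context propagation may be header-based; purely as a non-limiting illustration, the following Python sketch passes a Trace ID and Parent Span ID between two services in request headers so that the callee's span joins the caller's trace. The header names are assumptions made for this sketch and do not reflect any particular propagation standard.

# Hypothetical header-based context propagation between a caller and a callee.
import uuid

def inject_context(headers, trace_id, parent_span_id):
    headers = dict(headers)
    headers["x-trace-id"] = trace_id
    headers["x-parent-span-id"] = parent_span_id
    return headers

def extract_context(headers):
    return headers.get("x-trace-id"), headers.get("x-parent-span-id")

# Caller side: start a trace and pass its context in the outbound call.
trace_id, span_id = str(uuid.uuid4()), str(uuid.uuid4())
outbound = inject_context({"content-type": "application/json"}, trace_id, span_id)

# Callee side: continue the same trace by reusing the extracted Trace ID.
incoming_trace_id, incoming_parent = extract_context(outbound)
child_span = {"trace_id": incoming_trace_id, "span_id": str(uuid.uuid4()),
              "parent_span_id": incoming_parent}
print(child_span["trace_id"] == trace_id)   # True: both spans share one trace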
While spans may be collected from the client-side tasks without configuring an agent (e.g., in the case of Client B), using an agent may provide benefits including batching, buffering and updating trace libraries. Batches of span data collected by the agent303are periodically received at the collector304. The collector may be implemented within a client's on-prem software or in the cloud computing environment (e.g., in an AWS VPC). The collector304may also, for example, be implemented in a cloud computing environment by the same entity as the one implementing monitoring service306. Traces often generate duplicative data that is not relevant for monitoring or troubleshooting. The collector304may avoid redundancies by sampling the data before processing and storing it. The collector304runs the span data through a processing pipeline and may store it in a specified storage or analytics backend such as the monitoring service306. It should be noted that the collector304may interact with the monitoring service306through a network (not shown). In an implementation, the collector304may consolidate data from several client devices and combine the data to send to the monitoring service306(e.g., without sampling). For example, the collector304may comprise a server that receives data streams internally from different client devices and, periodically, sends the combined data (in batch form) to the monitoring service306. The data streams may comprise trace-related or metrics information. This improves efficiency of external communications from the enterprise. In one implementation, the collector304may comprise a beacon module388configured to collect all data associated with RUM sessions, e.g., users' browsing sessions, users' interactions with an application or data generated by users' web browsers, etc. The beacon module388may, for example, be configured to collect all the spans generated by browser instrumentation configured on a client's device or a client's web-browser. The beacon may, among other functions, enrich the spans generated at the frontend (e.g., by a browser) with additional information (e.g., with the HTTP client's IP address) before forwarding the information to be ingested by the monitoring service306. Note that the beacon module388may not necessarily be a component within the collector304but may also be implemented as a standalone module. Further note that similar to the collector304, the beacon module388may be implemented within a client's on-prem software or in the cloud computing environment (e.g., in the same environment in which monitoring service306is implemented). In an implementation, the monitoring service306receives and analyzes the span data for monitoring and troubleshooting purposes. It should be noted that, in addition to monitoring service306, span and tracing data might also be simultaneously transmitted to other types of storage and monitoring back-end services, e.g., a data ingestion and query system326. In one implementation, the monitoring service306may be a Software as a Service (SaaS) based service offering. Alternatively, in another implementation, it may also be implemented as an on-prem application. The monitoring service306receives the observability data collected by the collector304and provides critical insights into the collected trace data to a client of the monitoring service, who may be an application owner or developer.
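Purely as a non-limiting illustration of the enrichment performed by a beacon such as the beacon module388, the following Python sketch annotates spans received from a browser with the HTTP client's IP address before they are forwarded for ingestion; the field names and data are assumptions made for this sketch.

# Hypothetical beacon-style enrichment of frontend spans prior to ingestion.
def enrich_frontend_spans(spans, client_ip):
    enriched = []
    for span in spans:
        tags = dict(span.get("tags", {}))
        tags.setdefault("http.client_ip", client_ip)   # add the client's IP if absent
        enriched.append({**span, "tags": tags})
    return enriched

browser_spans = [{"span_id": "s1", "tags": {"page": "/cart"}},
                 {"span_id": "s2", "tags": {}}]
print(enrich_frontend_spans(browser_spans, "203.0.113.7"))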
In an implementation, the monitoring service306may be hosted on a computing system that includes one or more processors, memory, secondary storage and input/output controller. The computing system used for hosting the monitoring service306is typically a server class system that uses powerful processors, large memory resources and fast input/output systems. The monitoring service306may comprise an instrumentation analysis system322(also referred to herein as an “analytics engine”) and a query engine and reporting system324. The instrumentation analysis system322receives data comprising, for example, trace information, span information and/or values of metrics sent by different clients. As noted previously, a task or software program may be instrumented to generate spans with a common field in their data structures to designate spans that are part of a common trace. For example, the spans may include a trace identifier such that spans with the same trace identifier are a part of the same trace. The tasks (or software) executing on the client device are configured to send information generated as a result of instrumenting the software to the instrumentation analysis system322of the monitoring service306. For example, the tasks may send span information collected from the various services at the client end to the instrumentation analysis system322. Alternatively, traces may be sampled to generate metric values, and the tasks may send values corresponding to various metrics as they are generated to the instrumentation analysis system322. The tasks may send group values of metrics periodically to the instrumentation analysis system322. Different tasks may send the same metric or different metrics at different rates. The same task may send different metrics at different rates. In an implementation, the tasks (e.g., tasks301and302) and the collector304may send data to the monitoring service306by invoking an API supported by the monitoring service306and the instrumentation analysis system322. In one implementation, a customer name may be specified for the instrumented software. The instrumented software includes the customer name when it identifies a data stream associated with that particular customer. The ability to associate a data stream with a customer allows the instrumentation analysis system322to perform customer specific analysis, for example, report on usages of systems for each customer, identify customers reporting more than a threshold number of errors and so on. In one implementation, an application owner or developer may submit queries to the query engine and reporting system324to gain further insight into the spans and traces (or metrics) received and analyzed by the instrumentation analysis system322. For example, the query engine and reporting system324within the monitoring service306may be configured to generate reports, render graphical user interfaces (GUIs) and/or other graphical visualizations to represent the trace and span information received from the various clients. The query engine and reporting system324may, for example, interact with the instrumentation analysis system322to generate a visualization, e.g., a histogram or an application topology graph (referred to interchangeably as a “service graph” herein) to represent information regarding the traces and spans received from a client. 
Alternatively, the query engine and reporting system324may be configured to respond to specific statistical queries submitted by a developer regarding one or more services within a client's application. 3.1 Logs, Traces and Metrics As mentioned above, the shift from monolithic applications to microservices-based architectures has increased the usefulness of analyzing traces in a distributed system. In one or more implementations, the tracing data may be coupled with log data and/or metrics data, in order to provide clients with a more complete picture of the system. For example, the trace data may be coupled with log or other data from the data ingestion and query system326. In one implementation the data ingestion and query system326may be comprised within the monitoring service306. One example of a data ingestion and query system326is the event-based data intake and query SPLUNK® ENTERPRISE system developed by Splunk Inc. of San Francisco, California. The SPLUNK® ENTERPRISE system is the leading platform for providing real-time operational intelligence that enables organizations to collect, index and search machine-generated data from various data sources328, for example, websites, applications, servers, networks and mobile devices that power their businesses. In one implementation the other data sources328may be associated with the same clients (e.g., Client A and Client B) that generate the trace data received by the monitoring service306. The SPLUNK® ENTERPRISE system is particularly useful for analyzing data which is commonly found in system log files, network data and other data input sources. In another example, the data ingestion and query system326may be an on-premises application or based on a distributed or cloud-based service. In one implementation, the trace data may be ingested into the data ingestion and query system326, or may be coupled with outputs from the data ingestion and query system326e.g., from searches that may be based on trace data and run on the data ingestion and query system326. In some implementations, the data ingestion and query system326described above may be integrated with or into the monitoring service306that analyzes trace data, e.g., the monitoring service306. The monitoring service306may, accordingly, comprise a full suite of services including, for example, analyzing spans generated by users' browsing sessions and other frontend activities, analyzing trace data, generating metrics data from the trace data, ingesting and analyzing log data, ingesting metrics data and providing insights generated from the metrics data, including aggregating and/or correlating trace data, log data and metrics data, in order to gain insights into a computing platform. As described above, the span, trace and other data received from the collector304may be sent to systems configured to ingest and search data, such as the data ingestion and query systems326described above. In some implementations data ingestion and query system326may be configured to generate metrics data from the trace data received from the collector304. Additionally, other implementations may use a stream processor that may perform transformations and other operations on incoming data prior to, concurrently with, and/or as an alternative to, ingestion of the data. In some implementations, the system may also be configured to ingest metrics data and may be optimized to ingest, query and generate insights from metrics data. 
In other implementations, metrics may be generated by instrumentation (e.g., from instrumenting client software and tasks, e.g., tasks301,302etc. as described above) and sent to a SaaS-based processing system, e.g., the monitoring service306. For example, software may be instrumented to send metrics to a gateway or to an instrumentation analysis engine, where metrics may be aggregated, queried and alerted. As above, the trace data may be paired with data from the data ingestion and query system326, metrics generated by instrumentation, and other data sources, and correlated in various ways to provide insights. For example, as a broad-based correlation example, the metrics data may be used in a thresholding comparison to determine that there is an issue that needs attention, the trace data may be used to determine which component or microservice requires attention, and log data from the data ingestion and query system326may be used to determine exactly why the component or microservice needs attention. Other correlations and uses for the combination of metrics data, log data and event data are also contemplated herein. As noted above, the various features and services may be provided within an integrated monitoring platform (e.g., the monitoring service306), wherein the platform comprises, among other things, an instrumentation analysis system (e.g., the instrumentation analysis system322), a query engine and reporting system (e.g., the query engine and reporting system324) and a data ingestion and query system (e.g., the data ingestion and query system326). 4.0 Multiple Modalities for Performing Application Performance Monitoring (APM) As noted previously, APM methods such as distributed tracing are used to profile and monitor applications, especially those built using a microservices architecture, at the backend of a website or application. Historically, there have been several challenges associated with implementing an analytics tool such as the monitoring service306within a heterogeneous distributed system. One of the challenges associated with APM, for example, is efficiently ingesting and aggregating significant amounts of span and trace data generated by various services in an architecture. Conventional tracing and monitoring systems are typically unable to ingest the vast amounts of span and tracing data generated by clients' applications and have to resort to sampling the data intelligently to reduce the volume of stored trace data. Using sampling exclusively, however, results in loss of data and, as a result, conventional monitoring tools do not allow clients access to all the traces generated by their application. Furthermore, conventional monitoring tools may calculate metrics (e.g., requests, errors, latency, etc.) based on the sampled set of data and, accordingly, the calculations may be approximate at best and inaccurate at worst. Advantageously, implementations of the monitoring service (e.g., monitoring service306) disclosed herein allow clients of the monitoring service the ability to ingest up to 100% of the spans and create streams of metric data using the ingested spans prior to consolidating the spans into traces (through a sessionization process). The metric time series provide valuable real-time information pertaining to services or endpoints within an application and also allow alerts to be configured to manage anomalous behavior on the endpoints.
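Purely as a non-limiting illustration of deriving metric data streams from ingested spans prior to sessionization, the following Python sketch aggregates request, error and duration counts per time bucket for a span identity assumed here to be the tuple (service name, operation name), and evaluates a simple alert condition; the identity, bucket size and threshold are assumptions made for this sketch.

# Hypothetical metric data streams keyed by an assumed span identity.
from collections import defaultdict

series = defaultdict(lambda: defaultdict(lambda: {"requests": 0, "errors": 0, "total_ms": 0}))

def record_span(span, bucket_seconds=10):
    identity = (span["service"], span["operation"])
    bucket = span["start_s"] // bucket_seconds * bucket_seconds
    point = series[identity][bucket]
    point["requests"] += 1
    point["errors"] += 1 if span["error"] else 0
    point["total_ms"] += span["duration_ms"]
    return identity, bucket

def error_rate_alert(identity, bucket, threshold=0.1):
    point = series[identity][bucket]
    return point["errors"] / point["requests"] > threshold

identity, bucket = record_span({"service": "checkout", "operation": "POST /pay",
                                "start_s": 3, "duration_ms": 40, "error": False})
record_span({"service": "checkout", "operation": "POST /pay",
             "start_s": 7, "duration_ms": 90, "error": True})
print(error_rate_alert(identity, bucket))   # True: 1 error out of 2 requests exceeds 10%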
Implementations of the monitoring service disclosed herein also sessionize and store up to 100% of the spans received from the client in real time. Implementations of the monitoring service disclosed herein comprise an ingestion streaming pipeline that is able to ingest and consolidate the incoming spans into traces, and is further able to use advanced compression methods to store the traces. Additionally, because incoming trace and span information may be efficiently ingested and aggregated in real time, a monitoring platform is able to advantageously convey meaningful and accurate information regarding throughput, latency and error rate (without the need for sampling) for the services on the backend in the microservices-based application. High-cardinality metrics pertaining to throughput, latency and error rate may be calculated with a high degree of accuracy because all incoming data is accounted for and there is no data loss as a result of sampling. It should be noted that the monitoring service disclosed herein is able to ingest and store up to 100% of the spans for both APM and RUM. Implementations of the monitoring service disclosed herein further allow a client to store and analyze the span and trace data using multiple modalities of analysis for both APM and RUM. The manner in which multiple modalities of analysis are supported for RUM will be discussed further below in connection withFIG.17. In one implementation, a first modality comprises converting incoming spans from one or more clients into a plurality of metric data streams (also referred to as metric time series) prior to sessionizing the spans. Each metric time series is associated with a single span identity, where a base span identity comprises a tuple of information corresponding to an associated type of span. Each metric time series in this modality (referred to herein as “metric time series modality”) represents a plurality of tuples, each tuple representing a data point. Key performance indicators (KPIs) can be extracted directly from the metric time series in real-time and reported to a client. Because the metric time series are created without paying a time penalty associated with sessionization, they can be used to perform real-time monitoring with sub-second resolution and generate alerts within two to three seconds if a condition is violated. In one or more implementations, a second modality of analysis sessionizes the incoming spans and supports deriving higher-cardinality metrics (as compared with metric time series data) for a selected set of indexed tags, e.g., user-selected tags, global tags of the trace, etc. over selected time durations (referred to herein as the “metric events modality”). This modality is particularly useful for clients that need accurate SLI information for a larger set of high-value indexed tags. The metric events modality enables developers to aggregate metrics that have been pre-generated using the sessionized trace data to efficiently respond to queries submitted by a client. The aggregated metrics provide a client visibility into the performance of services within a microservices-based application. The metric events modality may deprioritize speed as compared to the metric time series to provide a client resolution into a larger set of indexed tags. As such, responses provided by the metric events modality are typically slightly slower as compared with the sub-second response rates of the metric time series.
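Purely as a non-limiting illustration of the metric events modality, the following Python sketch rolls up metrics pre-generated from sessionized trace data by the value of a selected indexed tag; the tag name, field names and data are assumptions made for this sketch.

# Hypothetical roll-up of pre-generated, per-trace metrics by an indexed tag.
records = [
    # one record per sessionized trace (synthetic data)
    {"tenant": "gold",   "requests": 1, "errors": 0, "duration_ms": 120},
    {"tenant": "gold",   "requests": 1, "errors": 1, "duration_ms": 480},
    {"tenant": "silver", "requests": 1, "errors": 0, "duration_ms": 95},
]

def aggregate_by_tag(records, tag):
    summary = {}
    for r in records:
        entry = summary.setdefault(r[tag], {"requests": 0, "errors": 0, "duration_ms": 0})
        for key in ("requests", "errors", "duration_ms"):
            entry[key] += r[key]
    return summary

print(aggregate_by_tag(records, "tenant"))
# {'gold': {'requests': 2, 'errors': 1, 'duration_ms': 600},
#  'silver': {'requests': 1, 'errors': 0, 'duration_ms': 95}}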
In one or more implementations, the metric events modality may also keep track of exemplary traces associated with a pre-configured set of indexed tags. The tags to be indexed may be pre-selected by the client or the monitoring platform. The Trace IDs may be used to retrieve the associated traces and analysis on the actual traces may be performed to generate more particularized information, e.g., span duration, span count, span workload percentage, etc. for each span in a given trace. In one implementation, once the traces are retrieved, an analysis may be run on an arbitrary set of tags (in addition to the pre-configured indexed tags). Additionally, in one or more implementations, a third modality of analysis may comprise a “full-fidelity” modality where a full-fidelity analysis may be conducted on any dimension or attribute of data to gauge the performance of services in the microservices-based application. The full-fidelity modality allows clients to search most or all of the incoming trace data (including all the tag data) that was ingested by the monitoring platform without relying on sampling. The full-fidelity modality may sacrifice speed for accuracy, and may be used by clients that need a more thorough analysis of the services across every dimension or attribute. In an implementation, the three modalities may be supported by the monitoring platform simultaneously by storing ingested trace data using three different formats, wherein each format corresponds to one of the three available modalities of analysis. Note that implementations of the monitoring service disclosed herein are not restricted to three discrete data sets. The data sets for the different modalities may overlap or may be saved as part of a single data set. When a client submits a query, the monitoring platform may determine which of the data sets is most suitable for addressing the query. Thereafter, the monitoring platform executes the query against the selected data set to deliver results to the client. By comparison, conventional monitoring systems typically focus on a single modality and do not provide clients the ability to seamlessly navigate between different modalities. Conventional monitoring systems also do not provide the ability to automatically select the most appropriate modality based on the content, structure, syntax or other specifics pertaining to an incoming query. FIG.4illustrates the backend components of an exemplary microservice application for an online retailer that are monitored using APM. A user needing to conduct a transaction may visit the website of the online retailer which would initiate a call to the retailer's Front-end service404on a server. The call to the Front-end service404may subsequently trigger a chain of calls on the retailer's back-end that would not be transparent to the client. For example, if the user proceeds to complete the transaction by checking out, several calls may be made to the back-end to services such as a CheckOutService406, a PaymentService408, an EmailService410, a ShippingService412, a CurrencyService428and a CartService414that may be involved in processing and completing the user's transactions. Note, that a given request submitted by a user to the website would involve a subset of the services available and, typically, a single request would not result in a call to each of the services illustrated inFIG.4. As mentioned above, a request that the user initiates would generate an associated trace at the backend. 
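Purely as a non-limiting illustration of how spans emitted while servicing such a request might be consolidated into a single trace at the backend, the following Python sketch groups synthetic spans by their shared Trace ID and links them through their Parent Span IDs; the identifiers are assumptions for this sketch, while the service names simply echo the example ofFIG.4.

# Hypothetical consolidation of spans into traces using Trace IDs and Parent Span IDs.
from collections import defaultdict

spans = [
    {"trace_id": "t1", "span_id": "s1", "parent": None, "service": "Frontend"},
    {"trace_id": "t1", "span_id": "s2", "parent": "s1", "service": "CheckoutService"},
    {"trace_id": "t1", "span_id": "s3", "parent": "s2", "service": "PaymentService"},
    {"trace_id": "t1", "span_id": "s4", "parent": "s2", "service": "EmailService"},
    {"trace_id": "t2", "span_id": "s5", "parent": None, "service": "Frontend"},
]

def consolidate(spans):
    traces = defaultdict(lambda: {"root": None, "children": defaultdict(list)})
    for s in spans:
        trace = traces[s["trace_id"]]
        if s["parent"] is None:
            trace["root"] = s["span_id"]                 # the root span has no parent
        else:
            trace["children"][s["parent"]].append(s["span_id"])
    return traces

traces = consolidate(spans)
print(traces["t1"]["root"], dict(traces["t1"]["children"]))
# s1 {'s1': ['s2'], 's2': ['s3', 's4']}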
It is appreciated that each user request will be assigned its own Trace ID, which will then propagate to the various spans that are generated during the servicing of that request. Each service may process a portion of the request and generate one or more spans depending on the manner in which instrumentation is configured for a respective service. The Trace ID may then be used by the server to group the spans together into a trace with that Trace ID. So, for example, the user's checkout transaction may generate a call at the Front-end service404, which may in turn generate calls to various microservices including the CheckoutService406. The CheckoutService406may, in turn, generate calls to other services such as the PaymentService408, the EmailService410and the ShippingService412. Each of these calls passes the Trace ID to the respective service being called, wherein each service in the call path could potentially generate several child spans. It should be noted that a service does not necessarily need to make calls to other services—for instance, a service may also generate calls to itself (or, more specifically, to different operations and sub-functions within the same service), which would also generate spans with the same Trace ID. Through context propagation, each of the spans generated (either by a service making a call to another service or a service making a call to various operations and sub-functions within itself) is passed the Trace ID associated with the request. Eventually, the spans generated from a single user request would be consolidated (e.g., by the collector304or the monitoring service306ofFIG.3) together using the Trace ID (and the Parent Span IDs) to form a single trace associated with the request. As noted above, conventional distributed tracing tools are not equipped to ingest the significant amounts of span and tracing data generated by clients' applications and have to resort to sampling the data intelligently to reduce the volume of stored trace data. Further, conventional distributed tracing tools do not provide application owners multiple modalities of storing and querying trace data with the flexibility of switching between the different modalities depending on the level of detail required to respond to a client's query. ReferencingFIG.4again, an owner of the application400may, for example, need varying degrees of detail regarding the services in the application. For example, the application owner may need to monitor certain metrics (e.g., RED metrics associated with Request, Errors, Durations) in real-time associated with a particular service, e.g., CheckoutService406. Assuming there are errors generated by a call made from the Frontend service404to the CheckoutService406, the owner may require further information pertaining to additional tags (indexed or non-indexed) associated with CheckoutService406. The application owner may also need to access each of the spans or full trace(s) associated with the request from the Frontend service404to the CheckoutService406to perform a more detailed analysis. Each of the requests requires a different degree of detail extracted from the span and trace information. In one implementation, the metric time series modality allows the client to monitor RED metrics associated with a given service, e.g., CheckoutService406in the online retailer's application in real-time. 
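By way of illustration only, the following minimal Python sketch shows a grouping step of the kind described above, assuming each ingested span is represented as a dictionary with hypothetical "trace_id" and "parent_span_id" fields; it is not the actual implementation of the collector304or the monitoring service306.

    from collections import defaultdict

    def group_spans_into_traces(spans):
        """Consolidate spans that share a Trace ID into a single trace."""
        traces = defaultdict(list)
        for span in spans:
            # Context propagation ensures every span generated while servicing
            # a single user request carries the same Trace ID.
            traces[span["trace_id"]].append(span)
        # Within each trace, the Parent Span IDs retained on the spans can
        # subsequently be used to arrange the spans into a call hierarchy.
        return dict(traces)
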
In one implementation, the metric time series modality can also be configured to deliver real-time alerts to a client based on each of the RED metrics, e.g., anomalies related to the request rate, error rate, or latency (duration). If the client needs Service Level Indicators (SLIs) pertaining to certain indexed tags related to the call between Frontend service404and CheckoutService406for a given time duration, the metric event modality may enable the client to perform aggregations of metrics data computed from the indexed tags associated with the spans generated by the call between the Frontend service404and the CheckoutService406. The metrics aggregation may be a numeric summation, for example, and may be performed relatively quickly. The metric event modality, in accordance with implementations of the monitoring service disclosed herein, associates the selected tags indexed from the incoming span data (e.g., the same indexed tags used for performing metrics extraction) with Trace IDs (or Span IDs in the case of RUM data as will be discussed in connection withFIG.17below) for exemplary traces. The Trace IDs (or Span IDs in the case of RUM data) may be used to retrieve the exemplary traces (or spans) associated with indexed tags. Thereafter, the monitoring platform may analyze the exemplary spans or traces to generate more particularized information, e.g., span duration, span count, span workload percentage, etc. for each span in a given trace. For the example ofFIG.4, if the client requires a performance summary for the spans generated by the call made from the Frontend service404to the CheckoutService406, the associated query submitted by the client may access the data set associated with the metric event modality. Using the Trace IDs (or Span IDs for RUM) corresponding to the indexed tags, the monitoring platform may then perform the computations necessary on the corresponding exemplary traces (or spans) to provide the client further information regarding the span performances. In an implementation, the client may also be able to extract meaningful information from the unindexed tags associated with the spans generated by the call using the exemplary traces. If the client wants to search all the incoming trace data associated with the call between the Frontend service404and the CheckoutService406, implementations of the monitoring service disclosed herein provide a third modality of analysis. In the full-fidelity modality, a full-fidelity analysis may be conducted on any dimension, tag or attribute of the span or trace data. For example, the client may be able to search previously indexed or unindexed tags across each of the traces associated with the call between the Frontend service404and the CheckoutService406. The full-fidelity modality allows an analysis to be performed across any relevant span or trace. Conventional tracing systems are unable to provide that level of flexibility and detail for developers or application owners needing to investigate performance issues with their applications. Note that this modality of analysis may be more time-consuming because trace data may be detailed and require significant storage space. Implementations of the monitoring service disclosed herein ingest and aggregate the span information from the online retailer's application (for APM) and from a user's interactions with a browser or other interface on the frontend (for RUM). 
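As a purely illustrative sketch of the kind of search supported by the full-fidelity modality described above, the following assumes that raw traces are retained as lists of span dictionaries whose tags are stored under a hypothetical "tags" key; an actual implementation would operate over a compressed, persisted store rather than in-memory lists.

    def full_fidelity_search(traces, predicate):
        """Scan every retained trace and return those containing a span whose
        tags satisfy an arbitrary predicate (indexed or unindexed)."""
        matches = []
        for trace in traces:
            if any(predicate(span.get("tags", {})) for span in trace):
                matches.append(trace)
        return matches

    # Example usage (hypothetical tag names): find traces touching the
    # CheckoutService where an arbitrary, possibly unindexed, tag has a value.
    # results = full_fidelity_search(all_traces,
    #     lambda tags: tags.get("service") == "CheckoutService"
    #                  and tags.get("tenant-level") == "gold")
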
Further, implementations of the monitoring service disclosed herein extract information from the incoming span data and store the information using multiple formats to support multiple modalities of data analysis for a client. Each modality is configured to allow the clients access to a different format in which incoming trace information may be represented and stored, where each format conveys a different degree of resolution regarding the ingested traces to a client and, accordingly, may occupy a different amount of storage space. FIG.5is a flow diagram that illustrates an exemplary method of ingesting and aggregating span information to support multiple modalities of analysis for APM, in accordance with implementations of the monitoring service disclosed herein. As mentioned in connection withFIG.3, span information is received at the monitoring service306from the collector (e.g., the collector504inFIG.5). As noted previously, in one implementation, incoming spans from one or more clients are converted into a plurality of metric data streams prior to consolidating the spans into traces through a sessionization process. The incoming spans are received and the metric data streams are generated by module520prior to the spans being sessionized. Because the metric time series are created without paying a time penalty associated with sessionization, they can be used to perform real-time monitoring and alerting. The incoming spans for APM (e.g., monitoring microservices at the backend of an application) are also sessionized where the span information is combined into traces in a process called sessionization. The APM sessionization module506is responsible for stitching together or combining the traces508using, among other things, the Trace IDs associated with each user-request (and typically also the Parent Span IDs of each span). Note that, in one implementation, the sessionized traces may also be inputted to the module520to create metric time series to track traces (separately from the time series created to track spans). The spans associated with RUM (ingested, for example, from the beacon567), are, in one implementation, ingested and analyzed separately from the spans associated with APM. In one implementation, RUM-related spans may need to be treated differently from APM-related spans. For example, the spans related to RUM may need to be ingested and sharded by a session identifier (session ID) (and, optionally, an organization identifier) instead of using the Trace ID. A session ID is an identifier that connects a series of traces. RUM data is typically organized into page views (which show details of a page visit) and sessions (which group all the page views by a user in a single visit). A session ID is typically used to filter for all the views in a specific session. For RUM, a developer is typically more interested in the behavior of a user over the course of a session, e.g., a user session interacting with a particular website or application. Accordingly, spans associated with RUM are usually sharded and tracked using a session identifier (or session ID). Spans associated with RUM that are received from the collector504are, therefore, ingested using a separate RUM ingestion module588(details of which will be covered inFIG.17). In an implementation, information extracted from the traces508may also be transmitted to the RUM ingest module588in order to facilitate a connection between the frontend RUM traces and the backend APM traces. 
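The following is a minimal, hypothetical sketch of the sharding decision described above, assuming incoming spans expose "source", "session_id", "org_id" and "trace_id" fields; the actual RUM ingestion module588may key and partition the data differently.

    def ingestion_shard_key(span):
        """Choose the key used to shard an incoming span for ingestion."""
        if span.get("source") == "RUM":
            # RUM spans are grouped by user session (optionally org-scoped),
            # since the behavior of interest is a user's whole session.
            return ("rum", span.get("org_id", ""), span["session_id"])
        # APM spans are grouped by the Trace ID of the backend request.
        return ("apm", span["trace_id"])
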
In such an implementation, a RUM span on the RUM frontend may comprise the associated Trace ID/Span ID of an APM span, so the RUM frontend would initiate the retrieval of the connection information from the APM backend. In addition to a Trace ID, each trace also comprises a time-stamp; using the time-stamps and the Trace IDs, the APM sessionization module506, which is associated with APM-related spans, creates traces508from the incoming spans in real time and sessionizes them into discrete time windows. For example, the sessionization process may consolidate traces (from spans) within a first time window (associated with time window Y580) before transmitting the traces to modules520,522or524. Each of the modules520,522and524supports a different modality of analysis for APM. Thereafter, the sessionization process may consolidate traces within the subsequent time window (associated with time window "Y+M"585) before transmitting those traces to the modules520,522, or524. It should be noted that the time windows associated with each of the modules520,522, and524may be different. For example, the metric time series data may be collected over short time windows of 10 seconds each. By comparison, traces for the metric events modality (associated with the module522) may be collected over 10 minute time windows. In some implementations of the monitoring service disclosed herein, the sessionization module is able to ingest, process and store all or most of the spans received from the collector504in real time. By comparison, conventional monitoring systems do not accept all of the incoming spans or traces; instead, they sample incoming spans (or traces) to calculate SLIs at the root level of a trace before discarding the spans. Implementations of the monitoring service disclosed herein, by comparison, comprise an ingestion streaming pipeline that is able to ingest and consolidate all the incoming spans into traces in real time, and is further able to use advanced compression methods to store the traces. Further, implementations of the monitoring service disclosed herein are able to generate metric time series from the span data (prior to sessionizing the spans) to provide real-time monitoring and alerting of certain KPIs. As noted above, the APM sessionization module506has the ability to collect all the traces within a first time window Y580using the time-stamps for the traces. Subsequently, the sessionized traces are fed to the modules522and524, for the respective modalities (metric events and full-fidelity) for extraction and persistence. In one implementation, subsequent to consolidation, the trace data is indexed by an optional tag indexing module507, which indexes one or more tags in the trace data. The tags may be client-selected tags or tags that the monitoring platform is configured to index by default. In a different implementation, tag indexing may be performed as part of data aggregation, e.g., by module522associated with metric events. In an implementation, data sets associated with each of the modalities may be persisted in one or more databases555. As noted previously, the data sets for the respective modalities may be separate data sets, overlapping data sets or a single data set that supports all the modalities. Note that the databases555may be a single database that stores data sets corresponding to all three modalities. Alternatively, the databases555may represent different respective databases for each of the three modalities. 
Furthermore, the databases555may also represent distributed databases across which relevant information for each of the three modalities is stored. In one implementation, data associated with each of the three modalities is generated at the time of ingestion and stored separately from each other. The structure, content, type or syntax of query submitted by a client will typically dictate which of the three modalities and corresponding data set will be selected. In one implementation, an interface through which the query is submitted may also determine which of the three modalities and corresponding data set is selected. In an implementation, there may be some commonality in the data for the three modalities in which case the storage for the data may overlap. An alternative implementation may also comprise one or two of the three modalities (instead of all three) described above. A client may send in a request to retrieve information pertaining to an application through query interface582. The underlying querying engine (e.g., the query engine and reporting system324fromFIG.3) will analyze the structure, content, type and/or syntax of the query, and also the interface through which the query is submitted, to determine which of the three modalities and respective data set to access to service the query. In an implementation, the three data sets corresponding to the three modalities are structured in a way that allows the querying engine to navigate between them fluidly. For example, a client may submit a query through the query interface582, which may potentially result in the query engine accessing and returning data associated with the metric events modality. Thereafter, if the client requires more in-depth information, the querying engine may seamlessly navigate to data associated with a different modality (e.g., full-fidelity) to provide the client with further details. Conventional monitoring systems, by comparison, do not provide more than a single modality or the ability to navigate between multiple modalities of data analysis. 4.1 Metric Time Series Implementations of the monitoring service disclosed herein allow trace data associated with APM to be stored and analyzed using multiple modalities of analysis. In one implementation, incoming spans from one or more clients are converted into a plurality of metric data streams (also referred to as metric time series) and transmitted to the analytics engine (e.g., the instrumentation analysis system322) for further analysis. Most of the metric data streams are created directly from the incoming spans prior to the sessionization process to generate metric time series related to spans. Each metric time series is associated with a single “span identity,” where a base span identity comprises a tuple of information corresponding to an associated type of span. Each metric time series in the metric time series modality represents a plurality of tuples with each tuple representing a data point. KPIs can be extracted in real-time directly from the metric time series and reported to a client. Because the metric time series are created without paying a time penalty associated with sessionization, they can be used to perform real-time monitoring with sub-second resolution and generate alerts within two to three seconds if some condition is violated. 
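By way of illustration of the query routing described above, the following minimal sketch shows one way a querying engine might select among the three data sets; the predicates and tag names are hypothetical, and an actual engine would also weigh the structure, syntax and submitting interface of the query, as noted above.

    INDEXED_TAGS = {"service", "operation", "workflow", "tenant-level"}  # hypothetical configuration

    def select_modality(query):
        """Pick the data set best suited to answer a client query."""
        requested_tags = set(query.get("tags", []))
        if query.get("realtime") and not requested_tags:
            # Sub-second monitoring and alerting on pre-computed KPIs.
            return "metric_time_series"
        if requested_tags and requested_tags <= INDEXED_TAGS:
            # Aggregations over tags that were indexed at ingestion time.
            return "metric_events"
        # Arbitrary dimensions or tag values: slower, but complete.
        return "full_fidelity"
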
4.1.1 Generating Metric Data Streams Using Span Identities A client application associated with, for example, an online retailer's website may potentially generate millions of spans from which a monitoring platform may need to extract meaningful and structured information. To organize the significant amounts of incoming span data, in an implementation, incoming spans may be automatically grouped by mapping each span to a base “span identity,” wherein a base span identity comprises some key attributes that summarize a type of span. An exemplary span identity may be represented as the following exemplary tuple: {operation, service, kind, isError, httpMethod, isServiceMesh}, where the operation field represents the name of the specific operation within a service that made the call, the service field represents the logical name of the service on which the operation took place, the kind field details relationships between spans and may either be a “server” or “client,” the isError field is a “TRUE/FALSE” flag that indicates whether a span is an error span, the httpMethod field relates to the HTTP method of the request for the associated span and the isServiceMesh field is a flag that indicates whether the span is part of a service mesh. A service mesh is a dedicated infrastructure layer that controls service-to-service communication over a network. Typically, if software has been instrumented to send data from a service mesh, the trace data transmitted therefrom may generate duplicative spans that may need to be filtered out during monitoring. Accordingly, the ‘isServiceMesh’ flag allows the analytics engine to filter out any duplicative spans to ensure the accuracy of the metrics computations. In some implementations, the tuple used to represent the span identity may include other identifying dimensions as well. For example, if a client needs visibility into metadata tags from the spans in addition to the dimensions extracted for a base span identity by default (e.g., service, operation, kind, etc.), an extended identity may be created. An extended identity supports custom dimensionalization by a client, where dimensionalization refers to the ability to extract information pertaining to additional tags or metadata in a span. An extended identity provides a customer the ability to dimensionalize the span using pre-selected dimensions. Conventional methods of monitoring by comparison did not offer customers the flexibility to add custom dimensions to streams of metric data. An extended identity comprises the span's base identity and additionally a map of the span's tag key:value pairs that matched a client's configuration settings. An exemplary extended identity may be represented as the following exemplary tuple: {operation, service, kind, isError, httpMethod, isServiceMesh, keyValueMap . . . }, where the keyValueMap field represents one or more additional tags or dimensions configured by the client to be extracted as part of the span's identity, e.g., customer name, member ID, etc. By extracting information related to additional tags, higher cardinality metrics may be computed using the metric time series modality. Further, a client is able to configure alerts on the custom dimensions as well, wherein the alerts inform a client if a particular dimension has crossed some critical threshold. In alternate implementations of the monitoring service disclosed herein, the tuple used to represent a span's base or extended identity may contain fewer elements. 
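A minimal sketch of the base and extended span identities described above is shown below, assuming span fields with the indicated names; the field names and the NamedTuple representation are illustrative only and do not limit the tuple described above.

    from typing import NamedTuple, Tuple

    class SpanIdentity(NamedTuple):
        operation: str        # name of the operation that made the call
        service: str          # logical name of the service
        kind: str             # "server" or "client"
        isError: bool         # whether the span is an error span
        httpMethod: str       # HTTP method of the associated request
        isServiceMesh: bool   # whether the span came from a service mesh

    def base_identity(span) -> SpanIdentity:
        return SpanIdentity(span["operation"], span["service"], span["kind"],
                            span["is_error"], span.get("http_method", ""),
                            span.get("is_service_mesh", False))

    def extended_identity(span, configured_keys) -> Tuple[SpanIdentity, tuple]:
        """Base identity plus the client-configured tag key:value pairs."""
        key_value_map = tuple(sorted((k, span["tags"][k])
                                     for k in configured_keys if k in span.get("tags", {})))
        return (base_identity(span), key_value_map)
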
If the tuple of information of an incoming span happens to be the same as that of another span, both spans relate to the same identity. In an implementation, spans with the same base identity may be grouped together. A fixed size bin histogram is generated for each span identity to track metrics associated with the span identity. In this way, the same types of spans are organized together and the client can track one or more metrics associated with each group of spans sharing a common identity. In an implementation, a fixed size bin histogram is generated for each unique span identity. The fixed size bin histogram may be a data structure, for example, that is preserved in memory. As noted above, each span identity may be tracked with a respective histogram. The histograms associated with the corresponding span identities, in one implementation, are generated and updated in fixed time duration windows. For example, histogram data may be generated for the incoming spans in memory every 10 seconds. At the end of each fixed duration, metrics associated with the histograms are emitted and the histogram is reset for the next time window. By emitting metrics for each time duration, data streams of metrics may be generated from the histogram data. The streams of metric data associated with each span identity, in one implementation, may be aggregated by a monitoring platform to provide a client of the monitoring platform meaningful information regarding the application being monitored. FIG.6illustrates the manner in which span metrics and trace metrics are automatically generated, in accordance with implementations of the monitoring service disclosed herein.FIG.6illustrates 5 unique spans (A-E) including a root span (an initiating span) A. In an implementation, each group of spans identified by the same span identity is associated with one or more span metrics650. For example, a minimum span duration630, a median span duration631, a maximum span duration632, a p90 latency value633, a p99 latency value634and a span count (how many times a particular identity was counted) may be tracked for each span identity. A histogram corresponding to the span identity may track these metrics over fixed size durations, e.g., 10 seconds. For example, over a 10 second window, the histogram may comprise fixed size bins that track a minimum span duration, a median span duration, a maximum span duration, a p90 value, a p99 value and a count of all spans received corresponding to a given identity. At the end of each duration, the metrics are emitted and the histogram is reset. The emitted metrics are used to generate streams of metrics data corresponding to each span identity. Each data point on a metric data stream comprises the span identity dimensions or the extended identity dimensions if the client has configured additional metadata to be extracted from the spans. As shown inFIG.6, in an implementation, the initiating span A comprises a trace identity that is used to emit trace metrics640. The initiating span A helps define an identity for a trace which allows the monitoring platform to logically group together all traces that represent the same flow through an endpoint of the application. The duration of the identity for a trace is calculated as the end time of the latest span in the trace minus the start time of its initiating span. 
An exemplary trace identity may be represented as the following exemplary tuple: {operation, service, isError, httpMethod, isServiceMesh}, where the operation field represents the name of the specific operation within a service that made the call, the service field represents the logical name of the service on which the operation took place, the isError field is a "TRUE/FALSE" flag that indicates whether the trace is associated with an error, the httpMethod field relates to the HTTP method of the request for the associated trace and the isServiceMesh field is a flag that indicates whether the trace is part of a service mesh. The trace metrics640are computed after the spans have been consolidated into a trace following a sessionization process. The trace metrics are also turned into streams of metric data similar to the metric time series associated with the spans. FIG.7is a flow diagram that illustrates an exemplary computer implemented method of generating metric time series from ingested spans, in accordance with implementations of the monitoring service disclosed herein. As mentioned previously, incoming spans are received at a monitoring service from a collector704. Prior to being sessionized, span identities are generated for the spans and the spans with identical base identities are grouped together by module740. In one implementation, a histogram generation module722generates a histogram respective to each span identity. The histogram may represent a distribution of durations for a set of spans. Information from each incoming span (e.g., span duration information) corresponding to a given span identity is added to the fixed size bins of the respective histogram for the identity. The histogram is maintained for a fixed size time window Y780(e.g., 10 seconds) after which the histogram generation module722emits the aggregated metrics and resets all the counters in the histogram for the next segment. Subsequently, the histogram generation module722generates metrics for the next duration of time Y+M785, and emits metrics corresponding to that time window. In this way, the histogram generation module periodically emits one or more metrics (e.g., six span metrics as seen inFIG.6), including client-configured custom metrics, corresponding to each type of span to the analytics engine. In one implementation, the span information is also combined into traces708using a sessionization module707as discussed in connection withFIG.5. The sessionization process may consolidate traces (from spans) within a first minute window (associated with time window Y780). Thereafter, the sessionization process may consolidate traces within the subsequent window (associated with time window "Y+M"785). Trace identities are determined for the sessionized traces708using module742after which the trace metrics (as discussed in connection withFIG.6) are determined using the histogram generation module722in a process similar to the manner in which span metrics are generated. In an implementation, an aggregation module724may aggregate the periodic metric data from the histogram generation module722and create metric time series from the data for each span identity. In some implementations, the aggregation module724may generate quantized data streams from the metric data received from the histogram generation module722. The quantized data stream has data values occurring periodically at fixed time intervals. 
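The following simplified, illustrative sketch shows how an accumulator in the spirit of the histogram generation module722described above might collect span durations per identity over a fixed window and emit the metrics ofFIG.6before resetting; a production implementation would use fixed size bins and approximate percentiles rather than retaining raw durations, and the class and field names here are hypothetical.

    from collections import defaultdict
    from statistics import median

    class WindowedSpanMetrics:
        """Accumulate durations per span identity over one fixed time window."""
        def __init__(self):
            self.durations = defaultdict(list)

        def record(self, identity, duration_ms):
            self.durations[identity].append(duration_ms)

        def emit_and_reset(self):
            """Emit min/median/max/p90/p99/count per identity, then reset."""
            emitted = {}
            for identity, values in self.durations.items():
                values.sort()
                n = len(values)
                emitted[identity] = {
                    "min": values[0],
                    "median": median(values),
                    "max": values[-1],
                    "p90": values[min(n - 1, int(0.90 * n))],
                    "p99": values[min(n - 1, int(0.99 * n))],
                    "count": n,
                }
            self.durations.clear()   # start fresh for the next window
            return emitted           # one data point per identity on its metric time series
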
In one implementation, the aggregation module724may identify a function for aggregating the metric for which values are provided by one or more input data streams. The aggregation module724generates the quantized data streams by determining an aggregate value for each input data stream for each fixed time interval by applying the identified function over data values of the input data stream received within the fixed time interval. The aggregation module724may further receive a request to evaluate an expression based on the data values from the input data streams. The system periodically evaluates the expression using the data values of the quantized data streams. In one implementation, the aggregation module724may, for example, perform aggregations on the various metric time series to provide real-time monitoring of certain higher priority endpoints in the application. For example, aggregations may be performed to determine request, error and latency metrics for certain designated services. In order to do that, the aggregation module724may, for example, aggregate values across all span identities that are associated with the designated service. Further, in some implementations, alerting module782may monitor one or more metric time series from the aggregation module724and may be configured to generate alerts if certain metrics being monitored exhibit anomalous behavior. For example, if a maximum span duration associated with a given span identity crosses over a certain threshold, an alert configured using the alerting module782may be triggered. The alert may, for example, be responsive to a metric time series associated with span metric632fromFIG.6, wherein the alert is triggered if the maximum span duration exceeds a given threshold. In one implementation, the histograms generated by the histogram generation module722may be stored in database777. In an implementation, the histogram data may be stored as parquet-formatted files. 4.1.2 Real-Time Monitoring Using Metric Time Series Data FIG.8illustrates an exemplary on-screen GUI for APM illustrating a monitoring mode for an application displaying metric values aggregated from metric time series data, in accordance with implementations of the monitoring service disclosed herein. In one implementation, the GUI ofFIG.8displays a monitoring mode indication when a corresponding monitoring mode option802is selected. The monitoring mode displays a panel888listing services804comprised within the application being monitored. Each service is displayed alongside metrics pertaining to requests/second806, error rate812and P90 latency values810. The metrics data displayed in the panel888is computed in real-time and is aggregated using the metric time series data. In an implementation, an aggregation module similar to the aggregation module724discussed in connection withFIG.7performs the necessary aggregations from the various metric time series to display metrics associated with each of the services. The service level KPIs may be computed through the real-time aggregation pipeline discussed in connection withFIG.7before the histogram metadata is stored in the backend of the analytics engine. The monitoring mode also comprises an application topology graph830. 
An application topology graph (or service graph) typically decomposes an application into all its component services and draws the observed dependencies between the services so a client can identify potential bottlenecks and get a better understanding of the manner in which data flows through the software architecture. The service graph830also facilitates visualizing cross-service relationships between services comprised within the application and external to the application (as will be discussed further in connection with the metric events modality). In an implementation, the service graph may be created using information gleaned from the metric time series data aggregated by the aggregation module724discussed in connection withFIG.7. By ingesting up to 100% of the incoming spans from the client software and implementing monitoring service306as a Software as a Service (SaaS) based service offering, implementations of the monitoring service disclosed herein advantageously retain valuable information pertaining to the spans that is further analyzed in the SaaS backend. Span identities and histogram information (e.g., various counts and metrics data) associated with the incoming spans that are stored may be used to conduct further analysis. For example, metadata may be analyzed to identify certain offending services or operations, and data regarding those services or operations may be surfaced for further analysis. Conventional monitoring systems typically expunged the span data after extracting the relevant metrics from them. By comparison, implementations of the monitoring service disclosed herein retain high-fidelity information related to all the incoming spans for deeper analysis. The metadata retained provides a client the ability to filter based on certain dimensions and services that would not have been possible using conventional monitoring systems. Further, the metadata retained may be used in conjunction with data sets for other modalities such as metric events and full-fidelity to allow a client to conduct a thorough investigation of an alert. In one implementation, using, for example, the "service," "operation," and "kind" fields in the tuple, the aggregation module724(fromFIG.7) may be able to determine span identities associated with cross-service calls. Spans associated with inter-service calls are of interest to a client because they provide the client information regarding the manner in which two services within an application are interacting. Implementations of the monitoring service disclosed herein are able to advantageously use the metadata saved for the metric time series to perform post-processing and determine services associated with inter-service calls. For example, the value of the "kind" field related to a span identity may be either "client" or "server" where the analytics engine may be able to use that information in post-processing to determine if the span is related to a cross-service call. If it is determined that a particular span is related to a cross-service call, those spans could be processed through the analytics engine to discover further information regarding the dependencies. For example, in one implementation, if a client identifies a span identity associated with a cross-service call or a span identity associated with a high value operation, the client may create an extended identity for the corresponding span identities and supplement those identities with additional custom dimensions to be monitored. 
For example, the client may want to monitor a customer name associated with such spans. The client may simply reconfigure the analytics engine to extract the additional customer name dimension as part of the spans' extended identity. Retaining span information associated with incoming spans provides a client additional metadata to perform intelligent processing. In an implementation, the client may collect data pertaining only to select operations. In other words, the client may filter out data pertaining to operations that are of less interest to the client. The number of unique span identities typically correlates roughly with the number of unique operation names present on the spans. In an implementation, the client is able to turn off or filter out span identities associated with certain operations if they are not particularly useful. In other words, the monitoring platform can be configured to turn off metric generation related to selected span identities. This advantageously reduces loads on the metrics analytics engine because it does not need to track and store metric time series for spans that are of little interest to a client. For example, spans associated with calls that a service makes to operations internal to the service may not convey meaningful information and can be filtered out. Accordingly, additional resources can be directed towards processing spans associated with services and operations that are of greater interest to a client. Conventional monitoring systems, by comparison, would not have the flexibility to selectively focus on spans associated with high value services or operations by filtering out the less valuable spans. 4.2 Metric Event Modality The metric event modality generates and stores aggregated rows of metrics values for selected indexed tags from the incoming trace data for given time durations. The selected tags may, for example, be indexed from the incoming spans when the spans are ingested. Metrics data may, for example, comprise, but is not limited to, number of requests (e.g., between two services), number of errors and latency. The aggregated rows of metrics data are stored efficiently for fast aggregation. The metric events data may be rapidly vectorized and aggregated in response to queries from a client. Implementations of the monitoring service disclosed herein use the aggregated rows of metrics data created in association with the metric events modality to generate a full-context application topology graph using the metric events data (e.g., by module522inFIG.5). As noted above, an application topology graph (or service graph) typically decomposes an application into all its component services and draws the observed dependencies between the services so a client can identify potential bottlenecks and get a better understanding of the manner in which data flows through the software architecture.FIG.9illustrates an exemplary on-screen GUI comprising an interactive topology graph for an application created from the aggregated metric events data, in accordance with implementations of the monitoring service disclosed herein. The service graph facilitates visualizing cross-service relationships between services comprised within the application and external to the application. The exemplary GUI ofFIG.9also enables customers to track the causal chain of operations resulting in an error. 
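Referring back to the aggregated rows of metrics data described above for the metric events modality, the following is a simplified, hypothetical sketch of how such rows might be built, with each row keyed by a time window and a combination of indexed tag values and accumulating request, error and latency data; the row layout, field names and per-span accounting are illustrative assumptions only.

    from collections import defaultdict

    def aggregate_metric_events(traces, indexed_tags, window_start):
        """Build aggregated rows of metrics keyed by indexed tag values."""
        rows = defaultdict(lambda: {"requests": 0, "errors": 0, "latency_ms": []})
        for trace in traces:
            for span in trace:
                tags = span.get("tags", {})
                key = (window_start,) + tuple(tags.get(t) for t in indexed_tags)
                row = rows[key]
                row["requests"] += 1
                row["errors"] += 1 if span.get("is_error") else 0
                row["latency_ms"].append(span["duration_ms"])
        # Exemplary Trace IDs may also be attached to each row so that the raw
        # traces behind an aggregate can later be retrieved for deeper analysis.
        return rows
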
It should be noted that the service graph may also be generated using the metric time series data as noted earlier; however, the storage required for the metric events data set may be significantly less because it does not need to store as much metadata as the metric time series data. Accordingly, generating the service graph using metric events data is more efficient from a storage standpoint. FIG.9illustrates an on-screen GUI comprising an interactive full-context service graph900, which is constructed for an exemplary microservices-based application using the metrics data generated in connection with the metric events modality. Each circular node (e.g., nodes associated with services902,904and906ofFIG.9) represents a single microservice. Alternatively, in an implementation, a circular node may also represent a group of multiple microservices, where the GUI for the monitoring platform (associated with, for example, the monitoring service306) provides a client the ability to expand the node into its sub-components. In an implementation, services that are part of the client's application may be represented differently from services that are external to the client's application. For example, circular nodes (e.g., nodes associated with services902,904and906) of the exemplary application represented by service graph900are associated with services comprised within the client's application. By contrast, squarish nodes (e.g., nodes associated with databases dynamodb915, Cassandra920, ad-redis912) are associated with services or databases that are external to the client's application. A user may submit a request at the front-end service902; the user's request at the front-end service902may set off a chain of subsequent calls. For example, a request entered by the user at the front end of the platform may generate a call from the front-end service902to the recommendation service904, which in turn may generate a further call to the product catalog service906. As noted previously, a chain of calls to service a request may also comprise calls that a service makes to internal sub-functions or operations within the same service. Each edge in the service graph900(e.g., the edges922,924and926) represents a cross-service dependency (or a cross-service call). The front-end service902depends on the recommendation service904because it calls the recommendation service904. Similarly, the recommendation service904depends on the product catalog service906because it makes a call to the product catalog service906. The directionality of the edge represents a dependency of a calling node on the node that is being called. Each of the calls passes the Trace ID for the request to the respective service being called. Further, each service called in the course of serving the request could potentially generate several spans (associated with calls to itself or other services). Each of the spans generated will then carry the Trace ID associated with the request, thereby propagating the context for the trace. Spans with the same Trace ID are, thereafter, grouped together to compose a trace. In some implementations, the GUI comprising service graph900may be configured so that the nodes themselves provide a visual indication regarding the number of errors that originated at a particular node versus errors that propagated through the particular node but originated elsewhere. 
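A minimal, illustrative sketch of one way originated versus propagated errors could be tallied per service for rendering the nodes is shown below, assuming each error span records its own service and, where available, the downstream service that was the root cause of the error; the field names are hypothetical and the attribution rule is only one possible approach.

    from collections import Counter

    def tally_node_errors(error_spans):
        """Count, per service, errors it originated vs. errors that merely
        propagated through it from a downstream service."""
        originated, propagated = Counter(), Counter()
        for span in error_spans:
            service = span["service"]
            if span.get("root_cause_service", service) == service:
                originated[service] += 1    # this service is the error originator
            else:
                propagated[service] += 1    # error surfaced here but began downstream
        return originated, propagated
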
In an implementation, the high-cardinality metrics data aggregated in association with the metric events modality may be used to compute the number of errors that are used to render the nodes of the service graph. For example, as shown in the service graph ofFIG.9, the front-end service902makes calls to the recommendation service904. Errors may be generated at the recommendation service904not only in response to calls from the front-end service902, but also in response to calls that the recommendation service904makes to itself (e.g., in response to sub-functions or operations that are part of recommendation service). For such errors, the recommendation service904would be considered the "originator" for the error. The recommendation service904also makes calls to the product catalog service906and these calls may result in their own set of errors for which the product catalog service906would be considered the error originator. The errors originating at the product catalog service906may propagate upstream to the front-end service902through the recommendation service904; these errors would be observed at the recommendation service904even though the recommendation service904is not the originator of those errors. It is appreciated that conventional monitoring technologies would not provide adequate means for a client to distinguish between errors that originated at the recommendation service904versus errors that propagated through the recommendation service904but originated elsewhere. By performing computations using the metrics data associated with the metric events modality, implementations of the monitoring service disclosed herein are able to render a service graph that visually indicates critical information regarding the services in an architecture, e.g., number of requests between services, the number of errors generated by a service, number of errors for which the service was the root cause, etc. The service graph900allows clients the ability to visually distinguish between errors that originated at the recommendation service904as compared with errors that simply propagated through the recommendation service904. As shown inFIG.9, the node associated with the recommendation service904comprises a solid-filled circular region966and a partially-filled region962, where the region966represents errors that originated at the recommendation service904while the region962represents errors that propagated through the recommendation service904but originated elsewhere (e.g., at the product catalog service906). Similarly, solid-filled region960within the node associated with the product catalog service906represents the errors that originated at the product catalog service. Note that the errors returned by the product catalog service906originated at the product catalog service. In other words, the product catalog service906does not have errors from another downstream service propagating through it because it does not make calls to another service that is further downstream in the execution pipeline. Conversely, the front-end service902comprises a partially-filled region964because the errors observed at the front-end service902propagated to it from other downstream services (e.g., the recommendation service904, the currency service930, the product catalog service906, etc.). The front-end service902was not the originator of errors in the example shown inFIG.9. Note that in other implementations solid-filled regions (e.g., region966) and partially-filled regions (e.g., region964) may be represented differently. 
For example, different shades, patterns, or colors may be used to distinguish these regions from each other. Implementations of the monitoring service disclosed herein use the aggregated rows of metrics data created for the metric events modality to determine full-fidelity SLIs associated with the services in an application (e.g., by the module522inFIG.5). An SLI is a service level indicator—a defined quantitative measure of some aspect of the level of service that is provided. The SLIs are aggregated and extracted for the various services in a microservices architecture so that the behavior of applications may be understood. Most clients consider request latency—how long it takes to return a response to a request—as a key SLI. Other common SLIs include the error rate (often expressed as a fraction of all requests received) and system throughput, typically measured in requests per second. The measurements are often aggregated over a measurement window using the metrics data associated with the metric events modality and then turned into a rate, average, or percentile. In one implementation, the GUI comprising service graph900is interactive, thereby, allowing a developer to access the SLIs associated with the various nodes and edges within the application by interacting with respective portions of the service graph. Referring toFIG.9, in an implementation, a client may be able to hover their cursor over various regions of the on-screen displayed service graph900, including but not limited to the nodes (e.g., the nodes associated with services904,906etc.) and edges (e.g., the edges922,926, etc.), to receive SLI-related information for the associated microservices through a pop-up window or other interface. FIG.10illustrates an exemplary on-screen displayed GUI showing the manner in which a client may access SLIs pertaining to a service within an interactive topology graph, in accordance with implementations of the monitoring service disclosed herein. As shown inFIG.10, when a client hovers the cursor over the node associated with, for example, the recommendation service1006, a pop-up window1008is overlaid on the service graph1000comprising SLIs pertaining to the recommendation service1006. Specifically, SLIs pertaining to Requests1010, Errors1012and Latency percentiles1014are provided. Furthermore, in an implementation, information pertaining to Root Cause1016is also provided to the client. For example, the SLIs related to Requests1010comprise information regarding the rate of requests and number of requests serviced by the recommendation service1006during a specific time duration. The time duration over which the SLIs are calculated may be adjusted using drop-down menu1022. The time duration over which SLIs are calculated may vary, for example, from 1 minute to 3 days. As indicated by the time axis on hover chart1028, for this example, a time window of 30 minutes (from 9:09 to 9:39 a.m.) is selected. In an implementation, the pop-up window1008also provides the client information pertaining to SLIs related to Errors1012. In the example ofFIG.10, the pop-up window1008provides information regarding the error rate and the total number of errors that occurred during the specified time duration. The client is also provided information regarding what percentage of the total number of requests resulted in errors. In an implementation, the pop-up window1008also provides the client information pertaining to Latency Percentiles1014and a graphical representation1020of the same. 
For example, SLI p95 indicates that for 95% of the users, the latency for servicing the requests was less than 467 ms. Latency-related SLIs also include information regarding p90 and p50 percentiles. The graphical representation1020, in the example ofFIG.10, shows the latency information regarding the p95 percentile graphically. In one implementation of the monitoring service disclosed herein, the pop-up window1008also displays information pertaining to errors for which the selected service was the root-cause. The Root Cause information1016includes the number of errors for which the selected service (e.g., the recommendation service1006in the example ofFIG.10) was the originator, the associated error rate and the percentage of the total number of requests that represents. In this way, implementations of the monitoring service disclosed herein, in addition to providing clients visual cues for identifying root cause error originators, are also able to provide meaningful and accurate quantitative information to help clients distinguish between root cause-related errors and errors associated with downstream causes. Note that the SLIs displayed in the pop-up window1008are computed accurately using the metrics data gathered for the metric events modality. Because implementations of the monitoring service disclosed herein are able to ingest up to 100% of the incoming span data (without sampling), the SLIs are computed factoring in all the incoming data, which results in accurate measurements. For the example ofFIG.10, there were a total of 2.6 million requests served by the recommendation service1006at a rate of 1445.23 requests/second ("sec"). Of these, 1.2 million of the requests resulted in errors at a rate of 714.83/sec, which represents approximately 49% of the total number of requests. In this way, implementations of the monitoring service disclosed herein provide a modality of analysis that enables a client to gather critical SLIs pertaining to the recommendation service1006including an indication of how many of the errors originated at the recommendation service1006. FIG.11illustrates an exemplary on-screen GUI showing the manner in which a client may access SLIs pertaining to an edge within an interactive topology graph, in accordance with implementations of the monitoring service disclosed herein. The SLIs pertaining to edges are also computed using the metrics data associated with the metric events modality. As shown inFIG.11, if a client hovers over or selects a particular edge, e.g., the edge924(as shown inFIG.9) (which represents the cross-service dependency of the front-end service902on the product catalog service906), a pop-up dialog box1108opens up on-screen that reports SLIs specific to the dependency. The "From" field1112represents the service that executes the call and the "To" field1114represents the service that is called (the service that the calling service depends on). As shown in the dialog box1108, SLIs pertaining to the number of requests (or calls) that were made, the number of those that resulted in errors, and the latency associated with servicing the requests are provided. It should be noted that a latency value1120of 49 ms shown inFIG.11for this particular dependency may be annotated directly on the edge of the service graph. 
For example, edge924of the service graph900inFIG.9indicates the latency value970(e.g., 49 ms) directly on the edge in the service graph, allowing a client to efficiently gather information regarding latency associated with the dependency. In an implementation, as shown inFIG.9, the edges within the application topology graph are annotated with their corresponding latency values. In this way, implementations of the monitoring service disclosed herein efficiently compute SLI data from the metrics information aggregated for this modality and advantageously enable developers to gather meaningful and accurate information regarding cross-service dependencies directly from the service graph900. In one implementation, the metrics data associated with the metric events modality are used to compute accurate SLIs across multiple dimensions. Further, implementations of the monitoring service disclosed herein support high dimensionality and high cardinality tags for the metric events modality. In one implementation, the GUI ofFIG.10may display one or more attribute (or tag) categories that comprise dimensions that may be varied across the service graph1000. In other words, the metrics data and the service graph may both be scoped by one of the various dimensions and also a time-range, which is helpful for keeping track of an architecture that is evolving. For example, attribute categories (e.g., Workflow1030, environment1032, incident1034and tenant-level1036) may be depicted within the GUI, each of which may correspond to attributes that may be varied to compute SLIs and error-related information for different combinations of the attributes. The categories of dimensions across which the SLIs may be computed include, but are not limited to, workflow1030, environment1032, incident1034and tenant-level1036. Each of the categories comprises a drop-down menu with options for the different dimensions. Using the drop-downs to select a different scope may result in a re-drawing of the service graph or a re-calculation of the metrics data to correspond with the selected scope. The metric events data allows clients to easily and rapidly compute measurements across various cross-combinations of tags or attributes. In an implementation, the GUI may include a panel1050that may display SLIs across the various workflows. Further, the GUI allows clients the ability to break down the workflows across multiple different attributes using drop down menu1051. The computations for each of the break-downs may be efficiently determined using the metrics data aggregated for the metric events modality. FIG.12illustrates on-screen displays that represent exemplary categories of dimensions across which SLIs may be computed, in accordance with implementations of the monitoring service disclosed herein. The exemplary categories of dimensions correspond to the categories associated with drop-down menus (e.g.,1030,1032,1034and1036) discussed in connection withFIG.10. The metrics data aggregated using the metric event modality allows clients to easily and rapidly compute measurements across various cross-combinations of attributes. As noted above, using the drop-downs to select a different scope may result in a re-drawing of the service graph or a re-calculation of the metrics data to correspond with the selected scope. Drop-down on-screen menu1230, for example, corresponding to workflow, illustrates different workflows specific to the application discussed in connection withFIG.9. 
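The following minimal sketch, which is illustrative only, shows how aggregated metric events rows of the kind sketched earlier might be scoped by a combination of dimensions (e.g., workflow, incident and tenant-level) to compute request and error SLIs for that combination; the row format, dimension names and scoping rule are assumptions rather than the disclosed implementation.

    def scoped_sli(rows, indexed_tags, scope):
        """Aggregate request and error counts over aggregated rows whose
        indexed tag values match the requested scope, e.g.
        scope = {"workflow": "frontend:/cart", "tenant-level": "gold"}."""
        requests = errors = 0
        for key, row in rows.items():
            tag_values = dict(zip(indexed_tags, key[1:]))  # key[0] is the window start
            if all(tag_values.get(k) == v for k, v in scope.items()):
                requests += row["requests"]
                errors += row["errors"]
        error_rate = errors / requests if requests else 0.0
        return {"requests": requests, "errors": errors, "error_rate": error_rate}
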
A "workflow" is a type of category of dimension of the request that was processed. A workflow may be conceptualized as a type of "global tag" that is attributed to each span in a given trace. A workflow may, for example, be associated with a type of client process, e.g., "checkout," that is generated on the back-end in response to a request. Similarly, drop down on-screen menus1234,1236and1232, relating to incident, tenant-level and environment respectively, provide further categories of dimensions across which SLIs may be computed. Each of the drop down on-screen menus1230,1232,1234and1236comprises various dimensions (associated with the respective categories) across which aggregations may be scoped. For example, the client may submit a query asking for the number of requests in a trace where "Workflow=frontend:/cart" and "incident=instance errors" and "tenant-level=gold." By aggregating metrics data associated with the indexed tags, the metric events modality is able to respond to the client's query rapidly and efficiently. Note that SLIs may be computed for each attribute of the categories inFIG.12and also for each combination of attributes associated with the categories. In an implementation, for each combination of attributes selected using one or more of the drop-down menus, the client may be able to determine the computed SLIs (e.g., by hovering a cursor over the various nodes and edges of the graph after the dimensions have been selected using, for example, the drop-down menus shown inFIG.10). In this way, implementations of the monitoring service disclosed herein enable a client to use the metric events modality to slice the application topology graph across several different attributes. It should be noted that clients might have different attributes or dimensions that may be of interest for their respective application. In an implementation, the monitoring platform may be configured to provide insight into client-specific dimensions. Consequently, the specific attributes or dimensions available in each of the drop-down menus may vary by client. For APM, in certain instances, a user may want to monitor the interactions between services in a microservices architecture on the backend that are related to a particular user-interaction or to a particular client process. An example of a client process may be a checkout process on a website of an online retailer. Conventional tracing and monitoring tools do not provide users the ability to effectively isolate and monitor a group of services on the backend of a microservices architecture that is associated with a particular client process, e.g., pertaining to a checkout process on an online retailer's website. Implementations of the monitoring platform disclosed herein (e.g., monitoring service306) allow users to monitor a path or sequence of events that occur on the backend of a distributed application in response to a particular user-interaction or a client process. For example, a user may need to monitor a chain of calls and associated services that are invoked on the backend of the application in response to a user electing to conduct a checkout transaction on a website for an online retailer. Based on metadata extracted from the tags of one or more spans ingested into the monitoring platform, implementations of the monitoring service disclosed herein extract a unique workflow dimension from spans and traces associated with a particular user-interaction or a client process. 
Extracting a workflow name from traces associated with a particular user-interaction or a client process advantageously allows implementations of the monitoring service disclosed herein to track metrics associated with the respective user-interaction or process more efficiently. Further, implementations of the monitoring service disclosed herein are also able to construct a topology graph that facilitates visualizing cross-service relationships between services associated with the workflow. Implementations of the monitoring service disclosed herein also extract a unique workflow identifier for traces within a workflow that are associated with a discrete user-interaction over time. For user-interactions that span multiple traces, implementations of the monitoring service disclosed herein are able to use the workflow identifier to reconstruct a client-side view for a user of the monitoring platform based upon the shared workflow identifier between the traces. As noted earlier, software developed by a client (e.g., clients A and B inFIG.3) may be instrumented in order to monitor different aspects of the software. Instrumentation results in each span being annotated with one or more tags that provide context about the execution, e.g., a client process related to the span. The workflow dimension is one of the attributes (or tags) that a span may be annotated with through instrumentation to provide contextual information regarding the client process to which the span relates. For example, if a user clicks the checkout button on the website of an online retailer, the user-interaction may generate spans that are tagged with the workflow attribute where the attribute comprises a value of “checkout.” Any metrics extracted for the online retailer's application can then be filtered to compute metrics associated specifically with the checkout workflow (as also discussed in connection withFIGS.10and12). A workflow for APM is, therefore, a dimension that may be extracted from spans or traces associated with a user-interaction or a client process. In an implementation, the workflow dimension may be included within a set of tags (e.g., global tags) attributed to each span in a given trace. In an implementation, the workflow tag is included in each span within a trace associated with a user-interaction or process. A workflow may, for example, be associated with a type of client process, e.g., “checkout,” “update cart,” “add to cart,” etc. that originates in response to a user request. Attributing the workflow dimension to traces (e.g., as part of the instrumentation process) allows the monitoring platform to create logical groupings of services involved in a particular client process. The workflow dimension therefore allows ingested traces to be grouped based on the respective value of the workflow dimension and allows for metrics to be calculated for each type of workflow (e.g., each value associated with the workflow dimension). Typically, for APM, each trace is associated with a single workflow. As noted previously, a trace generally refers to a record of the manner in which a single user request, also referred to as a transaction, propagates from one service to the next in a distributed application. The spans resulting from one user request may be consolidated into a trace in the backend of the monitoring platform where the trace correlates with a single workflow. 
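For illustration only, the following sketch shows one way a workflow value could behave as a global tag, i.e., a workflow tag found on one span is attributed to every span in the same trace; the span representation and field names are assumptions for this example rather than the platform's actual schema.

```python
# Illustrative sketch: propagate a "workflow" tag to all spans of a trace (global tag).
from typing import Dict, List, Optional

def attribute_workflow(trace: List[Dict]) -> List[Dict]:
    workflow: Optional[str] = next(
        (span["tags"]["workflow"] for span in trace if "workflow" in span.get("tags", {})),
        None,
    )
    if workflow is not None:
        for span in trace:
            span.setdefault("tags", {})["workflow"] = workflow
    return trace

trace = [
    {"span_id": "a", "tags": {"workflow": "checkout"}},
    {"span_id": "b", "tags": {"service": "paymentservice"}},
]
attribute_workflow(trace)
assert trace[1]["tags"]["workflow"] == "checkout"
```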
In an implementation, the workflow dimension may be instrumented as a global tag, where a workflow dimension associated with one span in a trace is attributed to all the other spans in the trace. For example, if one span in a trace is associated with a value of “checkout” for the workflow attribute, other spans in the trace would also be assigned the same value. In an implementation, for APM, the monitoring service306(referenced inFIG.3) will extract the tags related to the workflow dimension and their respective values from the ingested spans and will analyze them to determine the paths associated with each respective workflow. The path for a workflow comprises the various services and cross-service dependencies in the backend of the application that are invoked by a particular user-interaction or client process. In one or more implementations, the monitoring service306may be programmed to execute a complex set of rules to extract information pertaining to the workflows and their associated values from the tags of ingested spans. Further, pattern matching may also be employed to extract workflow information from ingested spans. In different implementations, the workflow names or values are explicitly written into the instrumentation by the clients and can be extracted from the spans after they are ingested into the monitoring service306. FIG.12Billustrates an exemplary on-screen GUI showing the manner in which a client may filter a service graph for an application and associated metrics by the workflow dimension, in accordance with implementations of the monitoring service disclosed herein. As shown inFIG.12B, the GUI may display a workflow dimension1242in addition to other dimensions1244that may be varied across the service graph1200. For example, a workflow may be selected from menu1240in order to display services and dependencies associated with a selected workflow. The workflow dimension1242and other dimensions1244may be varied to compute metrics, SLIs and error-related information for different combinations of the attributes over a selected time duration1249. Further, as shown inFIG.12B, the GUI may include a side-panel1240that may display metrics across the various workflows (or any other selected dimension). For example, side-panel1240may display Request, Error, Latency (RED metrics) related metrics for each of the workflows aggregated over the selected time duration1249. Further, the GUI allows users the ability to break down the workflows across multiple different attributes using drop down menu1245. The computations for each of the break-downs may be efficiently determined using the metrics data aggregated for the metric events modality. 4.2.1 Metric Events Data Generation and Persistence FIG.13is a flow diagram that illustrates an exemplary method of aggregating metrics data from ingested traces for the metric events modality, in implementations according to the present disclosure. As mentioned previously, span information is received at a monitoring service from a collector1304. The span information is then combined into traces1308in real time using module1306in a process called sessionization as discussed in connection withFIG.5. The sessionization process may consolidate traces (from spans) within a first time window (associated with time window Y1380) before transmitting the traces to the collection module1320. Thereafter, the sessionization process may consolidate traces within the subsequent window (associated with time window “Y+M”1385). 
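As a minimal, non-limiting sketch of the sessionization step described above, the code below groups spans that arrive within a given time window into traces keyed by their Trace ID; the span layout (trace_id, start_time) and the windowing scheme are assumptions made purely for illustration.

```python
# Illustrative sketch: consolidate spans into traces, per Trace ID, within a time window.
from collections import defaultdict
from typing import Dict, List

def sessionize(spans: List[Dict], window_start: float, window_end: float) -> Dict[str, List[Dict]]:
    traces: Dict[str, List[Dict]] = defaultdict(list)
    for span in spans:
        if window_start <= span["start_time"] < window_end:
            traces[span["trace_id"]].append(span)
    return traces

spans = [
    {"trace_id": "t1", "start_time": 3.0, "name": "frontend:/cart"},
    {"trace_id": "t1", "start_time": 3.2, "name": "cartservice:GetCart"},
    {"trace_id": "t2", "start_time": 12.5, "name": "frontend:/checkout"},
]
print(len(sessionize(spans, 0.0, 10.0)))   # traces consolidated in window Y   -> 1
print(len(sessionize(spans, 10.0, 20.0)))  # traces consolidated in window Y+M -> 1
```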
Subsequent to consolidation, the trace data is indexed by tag indexing module1307, which indexes one or more tags in the trace data. The tags may be client-selected tags or tags that the monitoring platform is configured to index by default. In one implementation, the metric events modality indexes a subset of tags associated with the spans of a trace, but indexes that set of tags with perfect accuracy because the metrics calculated take into account all the ingested spans. In one or more implementations, collection module1320receives one or more traces1308generated within a predetermined time window Y1380, and traverses the traces to identify and collect cross-service span pairs that represent cross-service calls. To collect the cross-service span pairs, the collection module1320identifies parent-child span pairs in a given trace where the service name for the parent and the child are different. Stated differently, the collection module1320will collect each pair of spans that has a parent-child relationship and where each of the two spans in the pair are associated with a different service. The service name of a span may be identified in a span-level tag included with each span. Alternatively, there may be other conventions for identifying a service name associated with a span, e.g., a special field within the span for the service name. Identifying and collecting the cross-service span pairs from the incoming spans are advantageous because they enable the monitoring platform to track information that will be most relevant to a client, e.g., to render the service graph and display the SLIs associated with the various dependencies between services. Spans associated with calls to internal operations that a service might make may not be of interest to an application owner and may, therefore, be ignored by the collection module1320when determining the cross-service span pairs. It should be noted that, in one implementation, once the cross-service span pair is identified, indexed tags may be extracted for the cross-service span pair by determining a service tier for the respective parent and child spans of the span pair. A service tier is a subset of spans in a trace that logically identifies a single request to a service. Accordingly, both a parent span and a child span in the cross-service span pair are associated with a respective subset of related spans known as a service tier. Indexed tags are extracted by the collection module1320from service tiers associated with a cross-service span pair. In a different implementation, however, the tags may be extracted directly from the parent span and child span in a cross-service span pair rather than the respective service tier associated with the parent span or child span. In one or more implementations, once the cross-service span pairs are collected and the indexed tags extracted from the respective service tiers, the collection module1320maps one or more selected tags for each service in the cross-service span pair to tag attributes, e.g., selected tags in a parent span (associated with the originating service) are mapped to a “FROM” tag attribute and selected tags in a child span (associated with the target service) are mapped to a “TO” tag attribute. This enables directionality information for the cross-service calls to be preserved. 
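A simplified, non-limiting sketch of the pair-collection step is shown below: a cross-service span pair is any parent-child pair whose service names differ, while spans representing internal calls within the same service are skipped. The dict-based span model is an assumption for the example only.

```python
# Illustrative sketch: collect parent-child span pairs whose services differ.
from typing import Dict, List, Tuple

def collect_cross_service_pairs(trace: List[Dict]) -> List[Tuple[Dict, Dict]]:
    by_id = {span["span_id"]: span for span in trace}
    pairs = []
    for child in trace:
        parent = by_id.get(child.get("parent_id"))
        if parent is not None and parent["service"] != child["service"]:
            pairs.append((parent, child))
    return pairs

trace = [
    {"span_id": "1", "parent_id": None, "service": "frontend"},
    {"span_id": "2", "parent_id": "1", "service": "frontend"},                 # internal call, ignored
    {"span_id": "3", "parent_id": "2", "service": "productcatalogservice"},    # cross-service pair
]
print(len(collect_cross_service_pairs(trace)))  # -> 1
```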
It will be appreciated that while the discussion herein focuses on “FROM” and “TO” tag attributes to indicate the direction of the dependency between services in a cross-service call, there may be several different ways to record dependency information between the two services. In one implementation, the aggregation module1322of the monitoring platform aggregates across the cross-service span pairs by maintaining a count for each unique set of “FROM” tag attributes (and their corresponding values) to “TO” tag attributes (and their corresponding values) for a cross-service pair. It should be appreciated that in this implementation, counts are maintained at the tag level for the cross-service span pair (rather than at the service level). Accordingly, a separate count is maintained for each set of parent span tags (mapped to a “FROM” tag attribute) and child span tags (mapped to a “TO” tag attribute) for a cross-service pair. The count is increased each time the aggregation module encounters the same unique set of “FROM” tag attributes (associated with tags of a parent span) and “TO” tag attributes (associated with tags of a child span) for the same cross-service span pair in one or more traces. In a different implementation, the count may be maintained at the service level. Accordingly, the count may be increased each time the same cross-service span pair is encountered within the trace information ingested from the client. The aggregation module1322advantageously prevents duplication by storing a single instance of each unique set of “FROM” tag attributes and “TO” tag attributes for a given cross-service span pair with an associated count in the storage module1366. The information in the storage module1366may be accessed by querying module1382where the querying module1382determines that the query is associated with the metric events modality. The querying module1382may, for example, be associated with the query engine and reporting system324discussed inFIG.3. The aggregated cross-service “FROM” and “TO” tag attribute sets and associated count values stored in the storage module1366may be used by the querying module1382to respond to queries in accordance with the metric events modality. Note that the collection and aggregation process is repeated for subsequent time windows (including window Y+M1385) after time window Y1380. In this way, the aggregation process is performed over time. This allows the metric events modality to deliver query results over varying time durations (as discussed, for example, in connection with the drop-down menu1022inFIG.10). FIG.14is a table illustrating an exemplary manner in which selected tags for each service in a cross-service span pair may be mapped to tag attributes and stored as part of a memory-resident data object associated with an edge in the service graph, in implementations according to the present disclosure. As noted above, in one or more implementations, once the cross-service span pairs are collected, the monitoring platform maps selected tags associated with each service in the cross-service span pair to tag attributes, e.g., selected tags in a parent span are mapped to a “FROM” tag attribute and selected tags in a child span are mapped to a “TO” tag attribute. The mapping is performed to allow directionality information for the cross-service calls to be preserved. 
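The following minimal sketch illustrates (under simplifying assumptions about the tag layout) how a separate count could be maintained for each unique combination of “FROM” and “TO” tag attributes of a cross-service pair; it is offered only as one possible illustration of the counting described above.

```python
# Illustrative sketch: one count per unique (FROM tag set, TO tag set) combination.
from collections import Counter
from typing import Dict

edge_counts: Counter = Counter()

def record_pair(parent_tags: Dict[str, str], child_tags: Dict[str, str]) -> None:
    key = (tuple(sorted(parent_tags.items())),   # "FROM" tag attributes (parent span)
           tuple(sorted(child_tags.items())))    # "TO" tag attributes (child span)
    edge_counts[key] += 1

record_pair({"service": "A", "region": "us-west"}, {"service": "B", "region": "us-east"})
record_pair({"service": "A", "region": "us-west"}, {"service": "B", "region": "us-east"})
record_pair({"service": "A", "region": "us-west"}, {"service": "B", "region": "us-west"})
print(sorted(edge_counts.values()))  # -> [1, 2], one count per unique FROM/TO set
```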
For example, a data object for an “edge” (corresponding to an edge or dependency in the topology graph) may be created that comprises both the FROM-type of tag attributes and the TO-type of tag attributes. In one implementation, one or more edge data objects similar to the one shown inFIG.14are used to persist the data for the metric events modality (in addition to node data objects which will be discussed in connection withFIG.15B). The table ofFIG.14illustrates an exemplary manner of storing a data object associated with an edge in the service graph. The table comprises two services, Service A and Service B, in an application. Both Service A and Service B comprise indexed tags “span.kind” and “region.” Tag “span.kind” may have two possible values, “client” and “server.” Similarly, tag “region” may have two possible values, “us-west” and “us-east.” If all possible combinations exist in Service A, there may be 4 unique tag combinations associated with the “FROM” tag attribute, e.g., {(span.kind=client, region=us-west), (span.kind=client, region=us-east), (span.kind=server, region=us-west), (span.kind=server, region=us-east)}. Similarly, if all possible combinations exist in Service B, there may also be 4 unique tag combinations associated with the “TO” tag attribute. Assuming there is a complete interaction between Service A and Service B, there may be 16 (4×4) different edges between the two services based on the unique set of “FROM” and “TO” type tag attributes. Note that the example inFIG.14illustrates information for two unique sets of “FROM” and “TO” tag attributes. Edge1490is associated with a TO-type attribute of “region=us-east” while edge1492is associated with a TO-type attribute of “region=us-west.” Because the two sets of “FROM” and “TO” attributes are not identical, a separate count is maintained for each. The edge1490has an associated count of 2, while the edge1492has an associated count of 1. To determine the total number of requests or total count associated with the cross-service call from Service A to Service B, the number of counts for each set of “FROM” and “TO” tag attributes for an associated cross-service call may be summed up. In the example ofFIG.14then, a total of 3 requests is computed to occur between Service A and Service B. In one implementation, data sets for the metric events modality are stored as rows of metrics extracted from the indexed tags in the service tiers, where each row is associated with either an edge or a node in the service graph. In an implementation, the edges on the service graph (e.g., the edges922and926ofFIG.9) are rendered using both the “FROM” and “TO” tag attribute sets because rendering the edges requires information regarding directionality. The counts for the “FROM” and “TO” tag attribute sets for a given cross-service span pair are summed up to yield the number of requests made between the two services associated with the span pair. In other words, edges are rendered in the service graph by grouping “FROM” and “TO” tag attribute sets associated with a cross-service call and summing up the request counts associated with the cross-service call. In an implementation, this grouping may be performed using “group by” statements in a query language, e.g., SQL. In one implementation, the value of the number of requests between two services may be used to determine the thickness of the edge between the two services in the service graph. 
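As a minimal, non-limiting sketch of the rendering-time grouping described above, the code below sums the per-tag-set counts by the calling and called services to obtain the total request count (and hence a possible edge thickness) between two services; the row layout mirrors the example values discussed in connection withFIG.14 and is assumed for illustration only.

```python
# Illustrative sketch: group per-tag-set rows by (from_service, to_service) and sum counts.
from collections import defaultdict
from typing import Dict, List, Tuple

rows = [  # one row per unique FROM/TO tag-attribute set
    {"from_service": "A", "to_service": "B", "to_region": "us-east", "count": 2},
    {"from_service": "A", "to_service": "B", "to_region": "us-west", "count": 1},
]

def edge_request_counts(rows: List[Dict]) -> Dict[Tuple[str, str], int]:
    totals: Dict[Tuple[str, str], int] = defaultdict(int)
    for row in rows:
        totals[(row["from_service"], row["to_service"])] += row["count"]
    return totals

print(edge_request_counts(rows))  # -> {('A', 'B'): 3}, i.e., 3 requests from Service A to B
```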
In one implementation, the nodes (e.g., nodes associated with services902,904,906) on the service graph are also rendered using the aggregated cross-service “FROM” and “TO” tag attribute sets. However, rendering the nodes does not require directionality information and, therefore, the nodes may be rendered by collecting and extracting information from the “TO” type tag attributes. Stated differently, the nodes are rendered by grouping the “TO” tag attributes associated with a given service and summing up the request counts associated with the service. In an implementation, this grouping may be performed using “group by” statements in a query language, e.g., SQL. The “TO” tag attributes represent new services being called within the microservices architecture. Accordingly, the counts associated with “TO” tag attributes for a given service may be summed up to determine the total number of requests made to the service. In one implementation, the value of the number of requests may also be used to determine the size of the node when rendering the service graph. In an implementation, the “TO” type tag attributes for rendering the nodes may be aggregated separately from the “FROM” and “TO” tag attribute sets aggregated for rendering the edges. In the exemplary table ofFIG.14, information for Service B may be determined, for example, by analyzing the “TO” type tag attributes in the table. FIG.15Aillustrates an exemplary on-screen GUI showing a visual representation of a portion of an exemplary trace illustrating a cross-service call, in implementations according to the present disclosure. As shown inFIG.15A, front-end service1539makes a call to product catalog service1538. Accordingly, the front-end service1539and the product catalog service1538comprise a cross-service span pair. Note that spans1540,1546and1547may be part of the service tier for front-end service1539. Accordingly, even though the call is made by the span1547(‘frontend: request/GetProduct’) to span1545(‘productcatalogservice: /GetProducts), indexed tags associated with the front-end service1539may also be extracted from the spans that are part of the service tier for the front-end service1539. In one implementation, the first matching tag within a service tier is extracted. For example, indexed tag “environment=prod”1550may be extracted from the span1540, even though it is repeated in the spans1546and1547because the span1540comprises the first matching instance of the tag1550. Assuming tags “environment” (referred to herein as “env”), “http.status_code” (referred to herein as “code”) and “k8s.io/pod/name” (referred to herein as “pod”) are indexed, then tags1550,1551and1552are extracted from the front-end service1539while tags1560,1561and1562are extracted from the product catalog service1538. In an implementation, the extracted indexed tags are mapped to tag attributes. The extracted tags1550,1551and1552in the parent span (associated with the front-end service1539) may be mapped to a “FROM” tag attribute while the extracted tags1560,1561and1562in the child span may be mapped to a “TO” tag attribute. In one implementation, the mapped tags may be used to create node and edge data objects that are used to persist data for the metric events modality as shown inFIG.15B. 
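For illustration only, the sketch below shows one way the first matching instance of each indexed tag could be extracted from the spans of a service tier and then mapped to “FROM” or “TO” tag attributes depending on whether the tier belongs to the calling or the called side of the cross-service pair; the tag names, span ordering and attribute naming are assumptions for this example.

```python
# Illustrative sketch: extract first matching indexed tags from a tier, map to FROM/TO attributes.
from typing import Dict, List

INDEXED_TAGS = ("env", "code", "pod")

def extract_first_matching(tier_spans: List[Dict]) -> Dict[str, str]:
    extracted: Dict[str, str] = {}
    for span in tier_spans:
        for tag in INDEXED_TAGS:
            if tag not in extracted and tag in span.get("tags", {}):
                extracted[tag] = span["tags"][tag]
    return extracted

def map_to_attributes(tags: Dict[str, str], direction: str) -> Dict[str, str]:
    return {f"{direction} {k}": v for k, v in tags.items()}   # e.g., "FROM env", "TO pod"

frontend_tier = [{"tags": {"env": "prod"}}, {"tags": {"env": "prod", "pod": "frontend-1"}}]
print(map_to_attributes(extract_first_matching(frontend_tier), "FROM"))
# -> {'FROM env': 'prod', 'FROM pod': 'frontend-1'}
```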
FIG.15Billustrates the manner in which data in the metric events modality is persisted using an edge data object comprising a memory-resident table of tag attributes with associated counts and using a node data object comprising a memory-resident table of tags with associated counts, in implementations according to the present disclosure. In one implementation of the monitoring service disclosed herein, a memory-resident table1501titled “Edge Health” may be maintained to keep track of the various dependencies in the application. The table1501may be stored, for example, in the storage module1366(inFIG.13). A memory-resident table1500titled “Node Health” may be maintained to keep track of the various service nodes in the application. Both tables comprise aggregated rows comprising metrics values. In one implementation, these rows are stored efficiently for fast aggregation. For example, the table1501may comprise one or more exemplary rows related to the cross-service span pair discussed in connection withFIG.15A. Row1506is one exemplary row that may be generated for the cross-service span pair ofFIG.15A. Note that for simplicity, only tag attributes “from pod” and “to pod” are illustrated in row1506inFIG.15B, but row1506would typically also comprise information for tag attributes associated with indexed tags “code” and “env.” As discussed above, each row for the cross-service span pair ofFIG.15Awill comprise a unique set of “FROM” and “TO” tag attributes. For example, if the front-end service (e.g., front-end service1539inFIG.15A) makes multiple calls to the product catalog service (e.g., product catalog service1538ofFIG.15A), but any of the calls are associated with different values for the “pod” tag from the values shown in row1506, the information would be recorded in a new row. In other words, each row records a single unique combination of tag attributes and service names. If the value of either the “from pod” or “to pod” tag attribute changes, a new row is created to record the information. Accordingly, there may be multiple rows in the table1501for the cross-service call discussed in connection withFIG.15A, where each row would comprise a unique combination of “FROM” and “TO” tag attributes for a given cross-service span pair. Each row in the table1501comprises a count value for number of requests1504, errors1505and latency1511. The requests1504are incremented each time the same cross-service call with the same unique set of attributes for a respective row is observed on a trace. The errors1505are incremented each time a request associated with a respective row is observed on a trace that has an error. The latency1511metric relates to a histogram of the duration that a respective request took. Further, each row comprises a timestamp1503to record the time of the cross-service call. Using the metrics associated with the requests1504, errors1505and latency1511and the timestamp1503, aggregations on the rows may be performed quickly and efficiently to determine SLIs for varying ranges of time. In response to a client query then, the numeric rows in the tables1500and1501may be summed into either timeseries buckets or into a single number depending on the query. In one implementation, the metric events modality may maintain a separate memory-resident table1500titled “Node Health” in system memory associated with the service nodes in the application. Each row in the memory-resident table1500comprises a unique combination of service names and associated tags. 
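The following minimal sketch illustrates, under simplifying assumptions, an “Edge Health”-style row keyed by a unique combination of tag attributes and updated on every observation of the corresponding cross-service call; the latency field is kept as a plain list standing in for a histogram, and all field names are illustrative rather than the actual schema.

```python
# Illustrative sketch: per-row request/error counters, latency samples and a timestamp.
import time
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class EdgeHealthRow:
    requests: int = 0
    errors: int = 0
    latency_ms: List[float] = field(default_factory=list)
    timestamp: float = 0.0

edge_health: Dict[Tuple, EdgeHealthRow] = {}

def observe_call(key: Tuple, duration_ms: float, is_error: bool) -> None:
    row = edge_health.setdefault(key, EdgeHealthRow())
    row.requests += 1
    row.errors += int(is_error)
    row.latency_ms.append(duration_ms)
    row.timestamp = time.time()

key = (("from pod", "frontend-1"), ("to pod", "catalog-2"))
observe_call(key, 12.5, is_error=False)
observe_call(key, 48.0, is_error=True)
print(edge_health[key].requests, edge_health[key].errors)  # -> 2 1
```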
For example, row1508is associated with the front-end service (e.g., service1539inFIG.15A) and comprises corresponding tag values for “env,” “pod” and “code.” Similarly, row1507is associated with the product catalog service (e.g., product catalog service1538ofFIG.15A) and comprises corresponding tag values for “env,” “pod” and “code.” Each unique combination of service name and corresponding tag values is associated with metrics that are maintained in the memory-resident table1500, e.g., request, error and latency (as discussed in connection with table1501). These metrics may be used to perform fast and efficient aggregations. For example, if the client queried the number of times “env=prod” in the application, assuming the two exemplary services illustrated in table1500are the only ones where “env=prod,” the request counts in each row would be aggregated to provide a result of 2. Note that the memory-resident table1500may also comprise a “root cause” metric1509which tracks the number of times the corresponding service was the root cause of an error. For example, the “root cause” metric may be aggregated using the memory-resident table1500across multiple rows to determine the number of times each given service in an application was the root cause for an error. In one implementation, a software tool may be employed to perform faster aggregations across the rows of tables1500and1501. For example, Apache Druid, which is an open-source data store designed for sub-second queries on real-time and historical data, may be used to perform the aggregations rapidly and efficiently. In different implementations, other tools may also be used to perform aggregations. In one implementation, the information in the memory-resident tables1500and1501may be used in the metric events modality to perform the metrics aggregations for rendering the service graph (e.g., graph900ofFIG.9) and computing the associated SLIs. In one implementation, the metric events modality may also store Trace IDs associated with each unique combination of cross-service span pairs and corresponding indexed tags. In one implementation, the aggregation module1322(ofFIG.13) of the monitoring platform aggregates across the cross-service span pairs by maintaining one or more exemplary Trace IDs for each unique set of “FROM” tag attributes (and their corresponding values) to “TO” tag attributes (and their corresponding values) for a cross-service pair. Accordingly, exemplary Trace IDs may be maintained for each unique cross-service call. The exemplary Trace IDs stored with each unique set of “FROM” and “TO” tag attributes for a cross-service span pair may be used by the querying module1382to respond to queries requesting more particularized information pertaining to non-indexed tags associated with the spans. For example, if a client needs particularized information regarding span performance or span duration, the querying module1382may be able to use the aggregated rows of information stored in a database associated with the storage module1366to access one or more exemplary Trace IDs associated with the call. Using the Trace IDs then, the querying module may be able to access the sessionized traces1308and perform analytics on the retrieved exemplary traces to deliver the requisite span performance and span duration information. In one implementation, the full trace information may be accessed from a storage set associated with the full-fidelity modality, which stores the entire traces as ingested following sessionization. 
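As a minimal, non-limiting sketch of the exemplar mechanism described above, the code below keeps a small number of exemplary Trace IDs per unique “FROM”/“TO” tag-attribute set and later uses those IDs to fetch the corresponding sessionized traces for deeper analysis; the cap of three exemplars and the data layout are assumptions for illustration only.

```python
# Illustrative sketch: keep a few exemplary Trace IDs per unique FROM/TO combination.
from collections import defaultdict
from typing import Dict, List, Tuple

MAX_EXEMPLARS = 3
exemplars: Dict[Tuple, List[str]] = defaultdict(list)

def record_exemplar(from_to_key: Tuple, trace_id: str) -> None:
    ids = exemplars[from_to_key]
    if len(ids) < MAX_EXEMPLARS:
        ids.append(trace_id)

def fetch_traces(from_to_key: Tuple, sessionized: Dict[str, Dict]) -> List[Dict]:
    return [sessionized[tid] for tid in exemplars[from_to_key] if tid in sessionized]

key = (("FROM pod", "frontend-1"), ("TO pod", "catalog-2"))
record_exemplar(key, "trace-123")
print(fetch_traces(key, {"trace-123": {"spans": 7}}))  # -> [{'spans': 7}]
```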
In a different implementation, however, the metric events modality may save full trace information for traces associated with the exemplary Trace IDs in a separate storage from the data set associated with the full-fidelity modality. In one implementation, because the metric events modality allows clients to retrieve raw trace data, it also allows clients to run an analysis on the retrieved data for an arbitrary set of tags (instead of being limited to the tags pre-indexed by indexing module1307). The metric events modality is particularly advantageous in circumstances where the client has identified a problem from the information provided by the metric time series. Having identified a problem either by manual monitoring of RED metrics or through an automatically generated alert, the client may be able to traverse deeper using the metric events data set and access relevant traces to receive more specific information regarding the problem. Also, the metric events modality allows the client to run an arbitrary analysis on the traces, e.g., on a set of tags that has not previously been indexed, which provides the client with specific information that may be used to diagnose and resolve the problem. FIG.15Cillustrates the manner in which data in the metric events modality is persisted using an edge data object comprising a memory-resident table of extracted indexed tag attributes with associated Trace IDs and using a node data object comprising a memory-resident table of extracted tags with associated Trace IDs, in implementations according to the present disclosure. In one implementation of the monitoring service disclosed herein, a memory-resident table1531is created to persist data associated with the various dependencies in the application. Also, a memory-resident table1530is created to persist data for the metric events modality associated with the various service nodes in the application. Note that table1531is created in a similar way to table1501inFIG.15Band that table1530is created in a similar way to table1500ofFIG.15B. Instead of tracking RED metrics, however, the tables inFIG.15Ccomprise a column for Trace IDs1590and Exemplar Type1591. It should be noted that, in one implementation, memory-resident table1531may be maintained in combination with memory-resident table1501and that memory-resident table1530may be maintained in combination with memory-resident table1500. Row1597in table1531is one exemplary row that may be generated for the cross-service span pair ofFIG.15A. Note that for simplicity, only tag attributes “from pod” and “to pod” are illustrated in row1597inFIG.15C, but row1597would typically also comprise information for tag attributes associated with indexed tags “code” and “env.” As discussed previously, each row for the cross-service span pair ofFIG.15Awill comprise a unique set of “FROM” and “TO” tag attributes. Accordingly, there may be multiple rows in table1531for the cross-service call discussed in connection withFIG.15A, where each row would comprise a unique combination of “FROM” and “TO” tag attributes for a given cross-service span pair. Each row in table1531comprises a Trace ID1590, which keeps track of one or more Trace IDs associated with the unique combination of service names (and operation names) and tag attributes for the given row. In other words, the combination of service names (and operation names) and tag attributes in each row may comprise an index to access the associated Trace IDs. 
In one implementation, the Exemplar Type column1591tracks the type of exemplary trace associated with the Trace ID. Types of exemplars may be request, error, root cause errors or some latency bucket identifier. The Trace IDs in each row may be accessed to identify and retrieve the full trace associated with the ID for further analysis, e.g., an analysis on an arbitrary set of tags associated with the trace. In one implementation, the monitoring system may maintain a separate table1530associated with the service nodes in the application. Rows1595and1596in table1530are two exemplary rows that may be generated for the cross-service span pair ofFIG.15A. Each row in table1530comprises a unique combination of service and associated tags. For example, row1595is associated with the front-end service (e.g., service1539inFIG.15A) and comprises corresponding tag values for “env,” “pod” and “code.” Similarly, row1596is associated with the product catalog service (e.g., product catalog service1538ofFIG.15A) and comprises corresponding tag values for “env,” “pod” and “code.” Each unique combination of service name and corresponding tag values is associated with a Trace ID and Exemplar type that is maintained in table1530. As noted above, in one implementation, metric events data may be persisted in tables that consolidate the data shown inFIG.15BandFIG.15C. For example, table1501may comprise an additional column to track Trace IDs and similarly table1500may comprise an additional column to track Trace IDs. The Trace IDs may be used in the metric events modality to retrieve full traces for more detailed analysis. In one implementation, full traces associated with the exemplary Trace IDs may be maintained in a dedicated storage associated with the metric events. In a different implementation, the full traces may be accessed from a data set associated with the full-fidelity modality. It should be noted that the metric events modality can comprise higher-cardinality metrics information because a higher number of tags may be indexed for the metric events data set as compared to the dimensions associated with the metric time series. However, the metric time series modality may provide higher-fidelity information because it retains metadata associated with incoming spans (e.g., service name, operation name, count values, etc.) that are not collected in the metric events modality. Further, the metric time series modality also allows clients to configure alerts against one or more time series to monitor incoming data in real-time. Because metric events are generated from post-sessionized traces, the metrics data associated with metric events may not be computed as rapidly as compared with the metric time series modality. 4.3 Full-Fidelity Modality In one implementation, the full-fidelity module524ofFIG.5stores all the incoming trace data from the sessionization process in real time. Unlike the prior two modalities, the full-fidelity modality stores the trace data in its raw form. In one implementation, the data is stored in parquet-formatted batches of full traces in an unstructured format (e.g., blob storage) along with some metadata. The metadata may comprise the tags associated with the trace (both indexed and unindexed) and other properties such as service name and operation for more efficient querying. In one implementation, the format of the metadata may comprise a map of a service name to a map of tag names, wherein each tag name may be mapped to a list of tag values. 
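For illustration only, the sketch below builds the metadata layout just described, i.e., a map of service name to a map of tag names, where each tag name maps to a list of tag values; the construction from a list of spans is an assumption made purely for this example.

```python
# Illustrative sketch: metadata as service name -> tag name -> list of tag values.
from collections import defaultdict
from typing import Dict, List

def build_metadata(spans: List[Dict]) -> Dict[str, Dict[str, List[str]]]:
    metadata: Dict[str, Dict[str, List[str]]] = defaultdict(lambda: defaultdict(list))
    for span in spans:
        for tag, value in span.get("tags", {}).items():
            values = metadata[span["service"]][tag]
            if value not in values:
                values.append(value)
    return metadata

spans = [
    {"service": "adservice", "tags": {"env": "prod", "ad_size_bytes": "4092"}},
    {"service": "adservice", "tags": {"env": "dev"}},
]
print(build_metadata(spans)["adservice"]["env"])  # -> ['prod', 'dev']
```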
The batches of full traces in unstructured format and the metadata are queried in the full-fidelity modality using a robust data engine to search for any tag across the traces. For example, PRESTO is an open source distributed SQL query engine that may execute queries against data sources of varying sizes. FIG.16is an exemplary on-screen GUI showing the manner in which a client may submit a query to be executed against the full set of traces stored in connection with the full-fidelity modality, in accordance with implementations of the monitoring service disclosed herein. The full-fidelity modality, in one implementation, allows a client to execute a query against arbitrary tags to receive a set of traces that matches the query. For example, in the GUI ofFIG.16, the client enters a query1604for traces where “Request Type=user,” “Service=adservice” and the tag “ad_size_bytes=4092.” In response, the platform returns a list1602of the traces matching the client-entered filters and, further, provides information about the traces, e.g., the Trace ID, duration, start time, root operation, root cause error status code and associated spans. As mentioned previously, the traces retrieved in response to a query may be analyzed to determine performance summaries for the spans comprised therein. Alternatively, the span performance analysis can be computed on all the traces stored as part of the full-fidelity data set. In one implementation, the monitoring platform has the ability to run a full trace search (as shown inFIG.16), and feed the traces collected into other modalities of analysis to get more detailed information about an arbitrary set of traces and an arbitrary set of attributes associated with the set of traces. 5.0 Multiple Modalities for Performing Real User Monitoring (RUM) RUM is the practice of using data from an application or website's real-life users to monitor and understand application performance. RUM tracks metrics such as DNS timing, time-to-first-byte, full page load time, JavaScript errors and the time it takes to load specific elements. These metrics are collected by monitoring actual user sessions. By monitoring real-user data across a variety of end-user configurations, browser versions, operating systems, feature flags, user status, locations, etc., software delivery teams can identify problems that undercut the user's digital experience and user satisfaction. RUM is a specific type of application monitoring that relies on the passive collection of data produced by real users to identify application availability or performance issues. RUM provides insights that are difficult to achieve through other performance monitoring techniques because it synthesizes and reports on data collected from actual human users. While APM is used to monitor backend services and the interaction between them, RUM may be used to monitor activity and provide visibility all the way from the browser through the network down to the backend services. There are several challenges associated with implementing an observability platform (e.g., monitoring service306ofFIG.3) that can perform both APM and RUM-related computations within a heterogeneous distributed system. One of the challenges associated with computing metrics for both RUM and APM, for example, is efficiently ingesting and aggregating significant amounts of span and trace data generated by a website or application. 
Conventional tracing and monitoring systems are simply unable to ingest vast amounts of span and tracing data and, therefore, have to resort to sampling the data intelligently to reduce the volume of stored trace data. Using sampling exclusively, however, results in data loss and, as a result, conventional monitoring tools do not allow clients access to all the spans and traces generated by real user interactions with a website or application. Furthermore, conventional monitoring tools may calculate real-user metrics based on the sampled set of data and, accordingly, the calculations may be approximate at best and inaccurate at worst. Implementations of the monitoring service (e.g. monitoring service306) disclosed herein advantageously allow clients of the monitoring service the ability to ingest up to 100% of both RUM and APM-related spans and to generate metric data using the ingested spans. For RUM-related spans, for example, streams of metric time series data may provide clients with valuable real-time information pertaining to webpages (e.g. metrics related to accessing a particular endpoint provider) and also allow alerts to be configured to manage anomalous behavior associated with the webpages. Ingesting up to 100% of all span data also allows clients of the monitoring platform to retrieve responses to queries requiring a high degree of resolution, e.g., specific queries pertaining to specific user interactions with an application or browser over specific periods of time. Conventional monitoring tools that sample data simply are not able to provide accurate responses to queries requiring a high degree of resolution because they do not save all the generated span or trace data. Note that as used herein, “users” refers to real-life users of an application or website, whereas “client” refers to a frontend developer of the application or website or site reliability engineer (SRE) (associated with the application or website) using a monitoring platform (e.g. monitoring service306) to monitor the interactions of the real-life users with the application or website. In addition to ingesting and storing up to 100% of the APM-related spans, implementations of the monitoring service disclosed herein also sessionize and store up to 100% of the RUM-related spans (e.g., spans generated as a result of a real user interacting with a website or application) received from the client in real time. Implementations of the monitoring service comprise an ingestion streaming pipeline that can ingest and consolidate the incoming spans into traces, and is further able to use advanced compression methods to store the traces. Within the RUM instrumentation, implementations of the monitoring service may use traces to organize units of activity and may also extract all necessary metrics based on the trace data. In one or more implementations, the monitoring service may use spans to organize units of activity and may also extract all necessary metrics based on the span data. In one implementation, the monitoring platform may also be able to compute certain metrics associated with the entirety of a user session. Because incoming trace and span information may be efficiently ingested and aggregated in real time, the monitoring platform is able to advantageously convey meaningful and accurate information (without the need for sampling) regarding the frontend interactions of a user with a website or an application, e.g., page load times, HTTP requests, time-to-first-byte, etc. 
High-cardinality metrics may be calculated with a high degree of accuracy because all incoming data is accounted for and there is no data loss as a result of sampling. Implementations of the monitoring service disclosed herein further allow a client to store and analyze the RUM data using multiple modalities of analysis (similar to the modalities for APM data discussed in connection withFIG.5). In one implementation, a first modality comprises converting incoming RUM-related spans (or traces) from one or more clients into a plurality of metric data streams (also referred to as metric time series) prior to sessionizing the spans. The plurality of metric data streams computed for RUM data are similar to the metric data streams created for APM data discussed above. Each metric time series is associated with a single span identity, where a base span identity comprises a tuple of information corresponding to an associated type of span. Each metric time series in this modality (referred to herein as “metric time series modality”) represents a plurality of tuples, with each tuple representing a data point. Key performance indicators (KPIs) can be extracted directly from the metric time series in real-time and reported. Because the metric time series are created without paying a time penalty associated with sessionization, they can be used to perform real-time monitoring with sub-second resolution and to generate alerts within two to three seconds if a condition is violated. In one or more implementations, a second modality of analysis sessionizes the incoming RUM-related spans and supports deriving higher-cardinality metrics (as compared with metric time series data) for a selected set of indexed tags, e.g., client-selected tags, global span tags, etc., over selected time durations (referred to herein as the “metric events modality”). The metric events computed from RUM data are similar to the metric events computed for APM data discussed above. In one implementation, however, the higher-cardinality metrics for the metric events modality for RUM data are generated directly from span data without needing to wait for the traces to be fully sessionized. In other words, generating metric events and extracting higher-cardinality metrics from spans for RUM may be more efficient than for APM because the monitoring service does not need to wait to store and consolidate the incoming spans into traces in order to generate the metric events or compute the higher-cardinality metrics. Because RUM data typically comprises spans generated by a user interacting with a browser or application, the metrics can be computed directly from the span data, as compared with APM data where the spans need to be consolidated into the traces to provide a complete picture of the manner in which the spans traverse through the various services in an application. Accordingly, generating metric events for RUM data is typically faster than generating metric events for APM data. This modality is particularly useful for clients that need accurate SLI information for a larger set of high-value indexed tags. The metric events modality enables developers to aggregate metrics that have been pre-generated using the RUM span data to efficiently respond to queries submitted by a client. The aggregated real-user metrics help a client monitor end-user experience by providing visibility into the performance of a website or an application. 
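The sketch below is a simplified, non-limiting illustration of the metric time series modality described above: each RUM-related span contributes a data point to a stream keyed by a span identity tuple, before any sessionization takes place. The particular tuple fields and span layout are assumptions for this example and not the platform's actual identity definition.

```python
# Illustrative sketch: key RUM spans by a base span identity tuple and emit data points.
from collections import defaultdict
from typing import Dict, List, Tuple

def to_metric_time_series(spans: List[Dict]) -> Dict[Tuple, List[Tuple[float, float]]]:
    series: Dict[Tuple, List[Tuple[float, float]]] = defaultdict(list)
    for span in spans:
        identity = (span["service"], span["operation"], span["http.status_code"])
        series[identity].append((span["start_time"], span["duration_ms"]))  # (timestamp, value)
    return series

rum_spans = [
    {"service": "browser", "operation": "GET /cart", "http.status_code": 200,
     "start_time": 1.0, "duration_ms": 310.0},
    {"service": "browser", "operation": "GET /cart", "http.status_code": 200,
     "start_time": 2.0, "duration_ms": 295.0},
]
print(len(to_metric_time_series(rum_spans)))  # -> 1 time series for this span identity
```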
Note that the metric events modality may not provide real-time streaming metrics and data retention that the metric time series modality provides. In one or more implementations, the metric events modality tracks exemplary RUM-related spans associated with a pre-configured set of indexed tags (similar to the manner discussed inFIG.15Cfor APM-related traces). The tags to be indexed may be pre-selected by the client or the monitoring platform. The Span IDs or the Trace IDs may be used to retrieve the associated spans or traces and analysis on the spans or traces may be performed to generate more particularized information regarding an end-user experience of a website or application. In one implementation, once the spans or traces are retrieved, an analysis may be run on an arbitrary set of tags (in addition to the pre-configured indexed tags). Additionally, in one or more implementations, a third modality of analysis may comprise a “full-fidelity” modality where a full-fidelity analysis may be conducted on any dimension or attribute of RUM data to gauge the performance of services in the microservices-based application (similar to the manner discussed in connection withFIG.16). The full-fidelity modality allows clients to search most or all of the incoming span data (including all the tag data) that was ingested by the monitoring platform without relying on sampling. The full-fidelity modality may sacrifice speed for accuracy, and may be used by clients that need a more thorough analysis of the services across every dimension or attribute. In an implementation, the three modalities associated with analyzing RUM-related data may be simultaneously supported by the monitoring platform by storing ingested trace data using three different formats, where each format corresponds to one of the three available modalities of analysis. Note that implementations of the monitoring service disclosed herein are not restricted to three discrete data sets. The data sets for the different modalities may overlap or may be saved as part of a single data set. When a client submits a query, the monitoring platform may determine which of the data sets is most suitable for addressing the query. Thereafter, the monitoring platform executes the query against the selected data set to deliver results to the client of the monitoring platform. By comparison, conventional monitoring systems typically focus on a single modality and do not provide clients the ability to seamlessly navigate between different modalities. Conventional monitoring systems also do not provide the ability to automatically select the most appropriate modality based on the content, structure, syntax or other specifics pertaining to an incoming query. FIG.17is a flow diagram that illustrates an exemplary computer implemented method of ingesting and aggregating span information to support multiple modalities of analysis for RUM, in accordance with implementations of the monitoring service disclosed herein.FIG.17illustrates the manner in which RUM ingest module588(discussed inFIG.5) ingests and aggregates spans associated with RUM data. The RUM ingestion engine is similar to the APM ingestion engine, but may use a separate deployment from the APM ingestion engine. In one implementation, RUM-related spans are received at the monitoring service306ofFIG.3from the beacon1767(which performs substantially the same functions as beacon567ofFIG.5). The ingested spans may be sharded by session ID and organization ID. 
The spans received from the beacon1767are directed to an ingest routing module1729which may comprise different components, e.g., gateway services, load balancer, etc. In an implementation, ingest routing module1729may comprise a queue (not shown inFIG.17) in which spans are stored prior to being sessionized by the RUM sessionization module1706. In one implementation, the incoming spans are converted into a plurality of metric data streams prior to consolidating the spans into traces. The metric data streams are generated by module1720prior to the spans being sessionized. Because the metric time series are created without paying a time penalty associated with sessionization, they can be advantageously used to perform real-time monitoring and alerting. The incoming spans for RUM may also be sessionized where the span information is combined into traces in a process called sessionization. The RUM sessionization module1706is responsible for stitching together or combining the traces1708. The traces associated with RUM may be used to organize units of activity and the necessary metrics may be extracted based on the trace data. Note that, in one implementation, the sessionized traces may also be input to the module1720to create metric time series to track traces (separately from the time series created to track spans). In addition to a Trace ID, each trace also comprises a time-stamp; using the time-stamps and the Trace IDs, the RUM sessionization module1706creates traces1708from the incoming spans in real time and sessionizes them into discrete time windows. For example, the sessionization process may consolidate traces (from spans) within a first time window (associated with time window Y1780) before transmitting the traces to modules1720,1722or1724. Note that in one implementation, the ingested RUM-related spans may be transmitted to modules1720,1722and1724without consolidating them into traces, wherein metrics and other information may be extracted directly from the span data. Subsequent to consolidating traces for the first time window, the sessionization process may consolidate traces within the subsequent time window (associated with time window “Y+M”1785) before transmitting those traces to the modules1720,1722, or1724. It should be noted that the time windows associated with each of the modules1720,1722, and1724may be different. In other words, the metric time series data may be collected over short time windows of 10 seconds each. By comparison, traces for the metric events modality (associated with the module1722) may be collected over longer time duration windows. Note that while in certain implementations, each modality can persist at different resolutions, in other implementations, the time window durations and resolutions for all modalities may be the same. In some implementations of the monitoring service disclosed herein, the RUM sessionization module1706for RUM-related spans is able to ingest, process and store all or most of the spans received from the beacon1767in real time. By comparison, conventional monitoring systems do not accept all of the incoming spans or traces; instead, they sample incoming spans (or traces) to calculate SLIs at the root level of a trace before discarding the spans. Implementations of the monitoring service disclosed herein, by comparison, comprise an ingestion streaming pipeline that is able to ingest all the incoming spans into traces in real time. 
Further, in one implementation, the monitoring service is also able to consolidate the spans into traces in real time, and is further able to use advanced compression methods to store the data. Further, implementations of the monitoring service disclosed herein are able to generate metric time series from the span data (prior to sessionizing the spans) to provide real-time monitoring and alerting of certain KPIs. As noted above, the RUM sessionization module1706can collect all the traces within a first time window Y1780using the time-stamps for the traces. Subsequently, the sessionized traces are fed to the modules1722and1724, for the respective modalities (metric events and full-fidelity) for extraction and persistence. Note that the tags analyzed for all three modalities in connection with RUM may be different than the tags analyzed for APM. In other words, each of the modules1720,1722and1724may be configured to perform a RUM-focused tag analysis. In one implementation, the incoming span or trace data is indexed by an optional tag indexing module1707, which indexes one or more tags in the data. The tags may be client-selected tags or tags that the monitoring platform is configured to index by default. In a different implementation, tag indexing may be performed as part of data aggregation, e.g., by the modules1720,1722and1724. Note that in the implementation ofFIG.17, the tag indexing module1707will be configured to index tags that are needed to perform a RUM-focused tag analysis. In an implementation, data sets associated with each of the modalities may be persisted in one or more databases1717. It should be noted that while modules1720,1722and1724perform substantially the same computations on RUM-related spans, as the corresponding modules520,522and524perform on APM-related spans, the schema for persisting the RUM data in database1717may be different from the schema for persisting APM data in one or more databases555ofFIG.5. For example, RUM data may be aggregated at the session level (rather than at the trace level). Accordingly, even though incoming RUM-related spans may be sessionized into traces and trace data may be ingested for the full-fidelity modality (e.g., into module1724), the data is consolidated and persisted based on session IDs. As noted previously, the spans associated with RUM (ingested, for example, from the beacon1767) may need to be treated differently from APM-related spans. For example, the spans related to RUM may need to be ingested and sharded by a session identifier (session ID) (and, optionally, an organization identifier) instead of using the Trace ID. A session ID is an identifier that connects a series of spans or traces. RUM data is typically organized into page views (which show details of a page visit) and sessions (which group all the page views by a user in a single visit). A session ID is typically used to filter for all the views in a specific session. For RUM, a developer is typically more interested in the behavior of a user over the course of a session, e.g., a user session interacting with a particular website or application. Spans associated with RUM are usually sharded and tracked using a session identifier (or session ID). Accordingly, the way full-fidelity data for RUM data is persisted in database1717may be different from the way full-fidelity data is persisted for APM data. 
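As a minimal, non-limiting sketch of the session-based sharding described above, the code below groups ingested RUM spans by organization ID and session ID into chunks that each cover a window of time within the user session; the field names and the fixed window length are assumptions for illustration only.

```python
# Illustrative sketch: shard RUM spans by (org ID, session ID, time window) rather than by Trace ID.
from collections import defaultdict
from typing import Dict, List, Tuple

WINDOW_SECONDS = 60.0

def shard_rum_spans(spans: List[Dict]) -> Dict[Tuple[str, str, int], List[Dict]]:
    chunks: Dict[Tuple[str, str, int], List[Dict]] = defaultdict(list)
    for span in spans:
        window = int(span["start_time"] // WINDOW_SECONDS)
        chunks[(span["org_id"], span["session_id"], window)].append(span)
    return chunks

spans = [
    {"org_id": "org1", "session_id": "s-42", "start_time": 10.0, "name": "page_load"},
    {"org_id": "org1", "session_id": "s-42", "start_time": 95.0, "name": "xhr /cart"},
]
print(len(shard_rum_spans(spans)))  # -> 2 chunks: same session, two time windows
```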
For RUM, data is persisted in shards (or chunks) based on time and session ID rather than by traces, where each shard or chunk is associated with a window of time in a user session. The full-fidelity modality for RUM data stores the chunks of data in raw form, e.g., in parquet formatted batches of spans in an unstructured format with some metadata. The metadata may comprise the tags associated with the trace (both indexed and unindexed) and other properties such as service name and operation for more efficient querying. In one implementation, the format of the metadata may comprise a map of a service name to a map of tag names, wherein each tag name may be mapped to a list of tag values. The batches of spans in unstructured format and the metadata are queried in the full-fidelity modality using a robust data engine to search for any tag across the traces. For example, PRESTO is an open source distributed SQL query engine that may execute queries against data sources of varying sizes. At query time, data is retrieved in chunks from the memory to be analyzed to generate a response to the query. In one implementation, indexes are created within the full-fidelity data set in database1717to be able to efficiently find relevant data within the data set. Referring back toFIG.17, the data sets for the respective modalities may be separate data sets, overlapping data sets or a single data set that supports all the modalities. Note that the databases1717may be a single database that stores data sets corresponding to all three modalities. Alternatively, the databases1717may represent different respective databases for each of the three modalities. Furthermore, the databases1717may also represent distributed databases across which relevant information for each of the three modalities is stored. In one implementation, a RUM analysis engine1795retrieves information regarding backend traces from APM module1796. APM module1796may extract APM trace information received from, for example, traces508inFIG.5. The RUM analysis engine1795receives APM trace information and forms connections between the frontend RUM traces1708and the backend APM traces508. This allows a client to monitor the manner in which errors or problems arising at the backend propagate to the frontend and vice versa. By connecting the frontend and backend traces, the monitoring platform is able to provide complete visibility into any transaction all the way from a user browser, through the network, and to any backend service. In one implementation, data associated with each of the three modalities is generated at the time of ingestion and stored separately from each other. The structure, content, type or syntax of a query submitted by a client will typically dictate which of the three modalities and corresponding data set will be selected. In one implementation, an interface through which the query is submitted may also determine which of the three modalities and corresponding data set is selected. In an implementation, there may be some commonality in the data for the three modalities in which case the storage for the data may overlap. An alternative implementation may also comprise one or two of the three modalities (instead of all three) described above. A client may send in a request to retrieve information pertaining to a website or application through query interface1782. Note that query interface1782may, in one implementation, be a common interface for querying both APM and RUM data. 
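The sketch below is a speculative, simplified illustration of the two ideas just described: selecting a modality from the content of a query, and joining frontend RUM records with backend APM records over a shared linking identifier at query time. Both the routing rules and the "link_id" field are hypothetical and introduced solely for this example; they are not the actual query engine behavior.

```python
# Illustrative sketch: content-based modality selection and a query-time RUM/APM join.
from typing import Dict, List

def select_modality(query: Dict) -> str:
    if query.get("needs_raw_traces"):
        return "full-fidelity"
    if query.get("arbitrary_tags"):
        return "metric events"
    return "metric time series"

def join_rum_apm(rum_rows: List[Dict], apm_rows: List[Dict]) -> List[Dict]:
    apm_by_link = {row["link_id"]: row for row in apm_rows}
    return [{**rum, "backend": apm_by_link[rum["link_id"]]}
            for rum in rum_rows if rum["link_id"] in apm_by_link]

rum_rows = [{"link_id": "t-7", "page": "/cart", "page_load_ms": 420}]
apm_rows = [{"link_id": "t-7", "service": "cartservice", "error": False}]
print(select_modality({"arbitrary_tags": True}))                   # -> 'metric events'
print(join_rum_apm(rum_rows, apm_rows)[0]["backend"]["service"])   # -> 'cartservice'
```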
The underlying querying engine (e.g., the query engine and reporting system324fromFIG.3) will analyze the structure, content, type and/or syntax of the query, and also the interface through which the query is submitted, to determine to which of the modalities and respective data sets to route the query for servicing. In one implementation, the monitoring service may be able to consolidate both RUM and APM data at query time in order to respond to a user query. The query interface1782may be able to access modalities for both the APM and RUM-related data and provide a client with an appropriate response based on the query. In other words, in one implementation, the query can apply constraints to both the frontend RUM metadata and backend APM metadata, which allows a client to target both RUM and APM data using a single unified query. As noted above, the RUM analysis engine1795may form connections between the frontend and backend traces (as will be further explained below), which allow the query interface1782to target both the RUM and APM metadata with a single unified query. For example, at query time when a user submits a query through the query interface1782, the monitoring service may retrieve RUM-related data from the database1717and further retrieve APM data from the APM module1796(via the RUM analysis engine1795). This way, the frontend RUM span data may be aggregated with the corresponding APM backend traces in order to provide a response to the user query. Implementations of the monitoring service disclosed herein are, therefore, able to create a query-time join of the RUM and APM data in response to a user submitting a query through the query interface1782. In an implementation, the data sets corresponding to the modalities are structured in a way that allows the querying engine to navigate between them fluidly. For example, a client may submit a query through the query interface1782, which may potentially result in the query engine accessing and returning data associated with the metric events modality for RUM data. Thereafter, if the client requires more in-depth information, the querying engine may seamlessly navigate to data associated with a different modality (e.g., full-fidelity) to provide the client with further details. By way of further example, a client may submit a query through the query interface1782, which may potentially result in the query engine accessing and returning metric events associated with both RUM and APM data (using the linkage information from the RUM analysis engine1795). Conventional monitoring systems, by comparison, do not provide more than a single modality or the ability to navigate between multiple modalities of data analysis. Further, conventional monitoring systems do not provide developers the ability to query both APM and RUM data using the same interface, or provide an aggregate analysis of the manner in which the APM and RUM data are connected, for example, by rendering a service graph that shows both frontend browser data and backend microservice data. 5.1 End-to-End Visibility of a Real User Session In certain instances, a frontend developer or site reliability engineer may need an overview of an entire session (e.g., the entire time duration that a user spends interacting with a particular website or application) of a real user interacting with a website, application or web interface to gain insights into the end-user experience.
A session typically groups all successive page views by a single user in a single visit and can be upwards of an hour. The developer may also need end-to-end visibility of a user session for troubleshooting purposes, e.g., to determine where the user experienced lags or had problems with navigation. As noted previously, conventional monitoring tools are unable to provide developers with end-to-end visibility into a user's session nor do they offer the ability to intelligently and more thoroughly explore areas of interest within the session. As noted previously, implementations of the monitoring platform disclosed herein are able to ingest, store and analyze significant amounts of data (e.g., span data, trace data, etc.) generated by both the frontend (e.g., a website) or the backend (e.g., a service on the backend) of an application owner's architecture. Further, implementations of the monitoring platform disclosed herein use the data efficiently to provide the developer with insights into the performance of a website or application or to detect problematic conditions associated with, for example, browser performance, network performance, erroneous processes, failing services, etc. In particular, implementations of the present monitoring platform are able to construct an end-to-end representation of an entire user session by ingesting up to 100% of the incoming spans from the client (e.g., clients' browser, etc.) into the SaaS backend of the monitoring platform and analyzing them. FIG.18illustrates an exemplary on-screen GUI comprising a service graph illustrating an access of one or more endpoints by a page with which a user is interacting, in accordance with implementations of the monitoring service disclosed herein. The service graph1800may comprise one or more nodes (e.g. node1810) that correspond to a page or a view (e.g., associated with a website URL or application) with which a user is interacting. As shown in service graph1800, the page represented by node1810makes calls to several endpoints (e.g., endpoints associated with nodes1820,1830,1840, etc.). The endpoints correspond to resources that the page (represented by node1810) is attempting to access. Several different types of endpoints may be displayed in service graph1800, e.g., endpoints associated with static resources, endpoints associated with third party providers, etc. This allows a client to gain insight into the manner in which different endpoints and endpoint providers (e.g., third party providers) are influencing the end user experience. In one implementation, the size of a node associated with either a page or an endpoint provider conveys the amount of traffic related to the node (as was discussed in connection withFIG.9). In one implementation, an application name and/or activity name for a node may be displayed alongside the node. For example, the URL for the page associated with node1810(http://robotshop.k8s.int.lab0.signalfx.com) may be displayed alongside the node1810in the GUI. As will be explained further below, in one implementation, the URL displayed alongside the node may be normalized. The application name or activity name may be extracted from span tags of spans associated with the node. Note that in one implementation, the connections (e.g., edge1808) shown in service graph1800may comprise metric information regarding the respective access as was discussed in detail in connection withFIG.9. 
For example, the access from node1810to the endpoint provider associated with node1820takes 223 ms as shown on the edge1808. In one implementation, the connection may also comprise information pertaining to an error rate. The resources or endpoints may either be internal or external with respect to a client of the monitoring platform. In one implementation, the endpoints may relate to external resources, e.g., an external service such as a payment processor, a content delivery network (CDN), etc. Alternatively, in one implementation of the monitoring platform, the resources may be part of a backend owned by the client. More specifically, the client may own existing backend infrastructure that supports one or more of the endpoints and can, therefore, exercise control over those endpoints. For example, nodes1820and1840may correspond to endpoints that a client's backend infrastructure supports. Because nodes1820and1840correspond to endpoints that a client controls, the client may be able to glean additional information regarding the behavior of those endpoints from its own backend, where the additional information may provide a client further insight into the performance of the endpoints. Note that in one implementation, the service graph1800may be scoped by several constraints as discussed in connection withFIG.10. Scoping a service graph or similar visualization entails filtering and displaying the service graph or visualization across one or more constraints. In one implementation, the GUI ofFIG.18may include drop-down menus (not shown) that allow service graph1800to be filtered in accordance with different constraints, e.g., tags associated with environment, incident, etc. Further, the service graph1800may be filtered to display a particular type of endpoint or a specific view (or page). In one implementation, the monitoring platform also provides information regarding spans that may be of particular interest to a client. For example, the GUI ofFIG.18provides a list of exemplar spans (and the associated sessions in which they originate) associated with service graph1800in an adjacent panel1801. The list of spans, in one implementation, may provide information regarding a session ID1850, the span ID1855of the exemplary span, a timestamp1860for the respective span and any errors1870associated with the edge, page or endpoint with which the respective span is associated (e.g., HTTP errors, JavaScript errors, etc.). As explained previously, the session ID1850is associated with a specific session in which a user is actively participating on a platform provided by a client. Note that a span ID1855is displayed in the panel1801as opposed to a Trace ID because for RUM (as compared with APM), the spans provide a higher level of resolution and convey more meaningful information to a client. One of the differences between browser-emitted spans (associated with RUM) and backend spans (associated with APM) is that each browser span has all the metadata needed to analyze it, e.g., sessionId, location.href, activity, all tags, etc. As explained above, in order to analyze events for RUM, a client need not wait for a sessionized browser trace to be able to analyze individual spans or any propagation of metadata. The client can extract necessary information directly from the spans. Accordingly, for example, metric events may be generated for RUM faster than they are generated for APM data because the monitoring platform does not need to wait for the spans to be sessionized into traces.
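As a non-limiting illustration of this point, a metric data point might be derived directly from a single browser-emitted span, without waiting for sessionization, roughly as sketched below. The indexed tag names, span fields and helper function are assumptions for illustration only and do not reflect the platform's actual data model.

from typing import Dict, Tuple

# Hypothetical indexed dimensions used to identify a RUM metric time series.
INDEXED_TAGS = ("location.href", "activity", "browser")

def metric_point_from_rum_span(span: Dict) -> Tuple[Tuple[str, ...], Dict[str, float]]:
    """Derive a metric data point from one browser-emitted span.

    Unlike APM, no sessionized trace is needed: the span itself already carries
    sessionId, location.href, activity and all other tags.
    """
    tags = span["tags"]
    identity = tuple(tags.get(name, "unknown") for name in INDEXED_TAGS)
    duration_ms = span["end_time_ms"] - span["start_time_ms"]
    is_error = 1.0 if tags.get("error") == "true" else 0.0
    return identity, {"count": 1.0, "error_count": is_error, "duration_ms": duration_ms}

# Example span as it might arrive from the browser agent (illustrative fields).
span = {
    "span_id": "a1b2", "session_id": "sess-42",
    "start_time_ms": 1000, "end_time_ms": 1223,
    "tags": {"location.href": "http://robotshop.k8s.int.lab0.signalfx.com",
             "activity": "/cart", "browser": "Chrome", "error": "false"},
}
print(metric_point_from_rum_span(span))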
Note that the spans displayed in the panel1801may be exemplars that capture representative spans associated with the service graph1800. Alternatively, the panel1801may list all the spans associated with the service graph1800. Because the monitoring platform ingests all the spans associated with user activity, the monitoring platform has the option of either displaying up to 100% of the spans associated with the service graph1800or exemplar representative spans that provide an overview of the activity in the service graph1800(as shown inFIG.18). In one implementation, the panel1801may list one or more exemplar spans (for each session ID1850) that capture activity in which a client may be interested, e.g., spans associated with a certain threshold number of errors, spans associated with different types of errors, spans associated with specific client selected endpoints or pages, spans associated with a key metric, etc. The monitoring platform may, therefore, be configured to either display one or more representative spans for a session that would be of particular interest to a client. The list in the panel1801may comprise spans that are generated as a result of the calls made by one or more pages (or views) during a user session to various endpoints or resources. Each of the spans in panel1801may be associated with an access to a particular resource or endpoint as depicted in service graph1800. Each span may be the result of an activity that is either user-generated or was generated for the user by a browser (or application) with which the user is interacting. Note that not all the spans shown in the list are the result of direct user-interactions with a browser. Some of the calls depicted in the service graph1800or spans shown in the panel1801may, for example, be generated by background threads or by other processes that support the browsing session. In one implementation, the spans in the panel1801may be categorized to provide a user better insight into the types of resources with which the spans are associated. For example, spans may be categorized, for example, based on whether the resources being accessed are internal or external to a client of the monitoring platform. There may be other criteria for organizing the spans in the panel1801as well, e.g., spans related to CDN traffic, spans categorized by type of endpoint or resource accessed, spans relevant to a key metric, etc. Categorizing the spans advantageously allows a client of the monitoring platform to better understand traffic patterns and user behavior. In one implementation, the service graph1800may also be categorized so a client can visually distinguish between the various types of pages, endpoints and connections. FIG.19illustrates an exemplary on-screen GUI comprising a service graph illustrating an access of an endpoint by multiple pages, in accordance with implementations of the monitoring service disclosed herein. The service graph1900depicts certain pages (e.g., pages associated with nodes1910,1920, etc.) in an application or website accessing a particular endpoint provider (e.g., the/cart endpoint1940). As shown inFIG.19, the service graph1900can be filtered by a particular endpoint, e.g., /cart endpoint1940using drop-down menu1922. WhileFIG.18illustrates a page accessing several different endpoint providers,FIG.19illustrates that the service graph1900may also show several pages accessing a single endpoint provider. 
In one implementation, similar to the service graph discussed in connection withFIG.9, the service map may indicate the health of the various nodes. For example, certain endpoint nodes such as the/cart endpoint1940may comprise a solid-filled circular region1930indicating errors associated with the particular endpoint, where the size of the region1930indicates an approximate percentage of errors returned when the node is accessed. For example, the endpoint1940returns errors approximately 50% of the time when called. A page node, e.g., node1920with a solid-filled circular region, e.g., region1923indicates that the associated view had an error, e.g., a console error. In one implementation selecting a particular node (e.g., endpoint1940) may highlight corresponding spans associated with the node in an adjacent panel1901. The client may then be able to further examine errors (e.g., HTTP errors, JavaScript errors, etc.) associated with various calls made from or to the node. In a different implementation, the panel1901may list exemplary spans of interest associated with the service graph1900(similar to the panel1801ofFIG.18) regardless of node or edge selection by a client. In one implementation, aggregated metrics associated with a selected node (e.g. the /cart endpoint1940) may be displayed in a side-panel1902. For example, requests and errors1931and latency1933associated with the/cart endpoint1940may be displayed in the side-panel1902. It should be noted that metrics shown in the side-panel1902may be aggregated and computed using the metric events modality (e.g., using aggregated metrics from the metric event aggregation module1722). In a different implementation, real-time metrics may also be computed using the metric time series modality (e.g., using metrics aggregated from the metric time series module1720). In one implementation, the side-panel1902may also provide a tag breakdown1932with values of all the indexed tags. The tag breakdown and analysis surfaces problems that a client may be interested in and prevents the client from going through all the data manually. For example, the tag value1957associated with the browser tag informs a client that most of the tags associated with errors for the/cart endpoint1940correspond to the Chrome browser. In other words, most of the errors for the/cart endpoint1940resulted from an access through a Chrome browser. In one implementation, clicking the full analysis option1989provides the client a full tag analysis. The full tag analysis allows a client to access a breakdown of the errors by the various tags. FIG.20Aillustrates an exemplary on-screen GUI comprising an end-to-end view of a user session, in accordance with implementations of the monitoring service disclosed herein. As noted earlier, implementations of the present monitoring platform construct an end-to-end representation of an entire user session by ingesting up to 100% of the incoming spans from the client (e.g., clients' browser, etc.) into the SaaS backend of the monitoring platform and analyzing them. Further, implementations of the monitoring platform disclosed herein also provide end-to-end visibility of a single user session (e.g. a user session interacting with a particular website or application) with the ability to perform more in-depth investigation of specific chunks of time associated with a user session. 
The chunks of time may either be client-selected or automatically selected by the monitoring platform based on a determination of the types of behavior a client may be interested in examining. In one implementation, the service graphs illustrated inFIGS.18and19are constructed using spans collected and analyzed during a single user session. The GUI ofFIG.20A, in one implementation, conveys high-level metrics and information about the session including the start time2010, the session duration2020, the session ID2031, the agent2032and the location2035. As noted above, the session duration may be upwards of an hour depending on how long the user is interacting with an application or browser. The session illustrated inFIG.20A, for example, is 6.7 hours. In one implementation, the field for the agent2032may comprise information about the browser and operating system (OS) used by the user. Agent information may enable a client to identify browsers, devices and platforms most used by users which can be beneficial in making informed optimizations to an application or website. In one implementation, the location2035may comprise information about the location of the browser, user, etc. As noted above, a session ID is an identifier that connects a series of spans or traces. The graphic visualization2000is meant to capture the entire user session at an aggregate level along a time axis. In particular, the visualization graphically displays aggregated events and metrics computed for the user session. In one implementation, for example, the visualization graphically displays events2041(e.g., page load events), errors2042, JavaScript errors2043and requests2044associated with the session. Note that the errors2042are aggregated separately from the JavaScript errors2043to provide the client some insight into where the errors occurred, e.g., to distinguish between a frontend JavaScript error and an error that may have surfaced from the backend. The aggregated metrics may comprise metrics aggregated for the duration of the entire session or a selected portion thereof. The graphic visualization2000provides a client with an efficient overview regarding the most active segments of time within the session. In one implementation, the graphic visualization provides visual or other types of indicators to direct the client's attention to the portions of the visualization in which the client would be most interested, e.g., page transitions, errors, etc., that a user experienced during the session. In one implementation, a client of the monitoring platform may zoom into select parts of the user session. For example, a client may select region2005based on some activity of interest during the selected region. The client may be interested in the region2005based on a visual indication of a high number of events, errors, JavaScript errors or requests. For example, a client may select the region2005based on the several spikes in the number of events occurring during that region. In one implementation, the graphic visualization2000may also display an aggregate number of events, errors, or requests associated with a selected region. For example, a pop-up box2045indicates the aggregate number of events associated with the region2005. In one implementation the GUI may provide additional event activity metrics2046pertaining to the transactions in the selected region2005, e.g., the number of document loads, route changes, console errors, document resource loads, clicks, etc. 
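By way of a non-limiting illustration, aggregating activity counts for a client-selected slice of the session timeline might be sketched roughly as follows; the field names and event type values are illustrative assumptions rather than the platform's actual data model.

from collections import Counter
from typing import Dict, Iterable

def region_activity(spans: Iterable[Dict], start_ms: int, end_ms: int) -> Counter:
    """Aggregate activity counts for a selected region of one user session.

    Each span is assumed to carry a start timestamp and an event type tag
    (e.g., document_load, route_change, console_error, request, click).
    """
    counts: Counter = Counter()
    for span in spans:
        if start_ms <= span["start_time_ms"] < end_ms:
            counts[span["tags"].get("event.type", "other")] += 1
            if span["tags"].get("error") == "true":
                counts["errors"] += 1
    return counts

# Example: counts for a selected region of the session timeline.
session_spans = [
    {"start_time_ms": 5_000, "tags": {"event.type": "document_load"}},
    {"start_time_ms": 6_200, "tags": {"event.type": "request", "error": "true"}},
    {"start_time_ms": 9_900, "tags": {"event.type": "click"}},
]
print(region_activity(session_spans, start_ms=5_000, end_ms=8_000))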
Note that in one implementation, the aggregated metrics related to events, errors and requests may be computed using the full-fidelity module1724fromFIG.17. Because visualization2000represents a single user session, the events2041, errors2042, JavaScript errors2043and requests2044may, in one implementation, all be computed using the set of traces associated with the single user session. This set of traces may be available in the full-fidelity data set corresponding to module1724. The set of traces associated with a single user session may also be persisted in module1724for efficient queries thereafter. Note, however, that as discussed above, the full-fidelity modality persists RUM data in shards based on a session ID (rather than a Trace ID). In one implementation, a client may select a region of interest based on time values. For example, a client may be interested in user activity between the second and third hour of the user session. A client can then select the desired time bounds to define the region of interest and collect aggregated metrics associated with events, requests and errors for that region. In one implementation, the GUI may provide a waterfall view of spans associated with all the events in the user session. In this implementation, all the events in the user session would be viewable in the panel2050. In a different implementation, the waterfall view displayed in the panel2050is scoped to the client selected region2005. In other words, the spans shown in the panel2050are associated with all the events encapsulated within the client selected region2005. In one implementation, only exemplar spans associated with the client selected region2005are displayed. In a different implementation, all the spans associated with the client selected region2005are listed in the panel2050. The spans displayed may be root-level or parent spans (e.g., span2052) that can be expanded out to reveal other child spans (e.g., spans2053,2054, etc.). For example, the document load event associated with parent span2052may be a combination of other sub-events, e.g., a document fetch associated with the child span2053and other different resource accesses such as the one associated with child span2054. Although not shown inFIG.20A, in one implementation, each of the spans shown in the waterfall view of the panel2050is displayed adjacent to an icon indicating whether one or more errors are associated with the span. Displaying an error status of a span enables a client to visually identify whether a particular span needs to be explored further. Note that each of the spans (including both the parent and children spans) may be expanded out to get further information regarding the various attributes associated with the spans, as shown in connection withFIG.20B. FIG.20Billustrates an exemplary on-screen GUI illustrating the manner in which a span may be expanded within the waterfall view to get further information regarding the attributes of the span, in accordance with implementations of the monitoring service disclosed herein. As mentioned above, any of the spans shown in the waterfall view of the panel2050may be further expanded to view the span attributes. For example, span2064shown inFIG.20Bmay be expanded to display its various associated tags. The expanded view for the span2064shows the Span ID2063and the Parent ID2062for the span2064. Also, the expanded view shows all the related tags2061.
For example, the related tags may include the HTTP URL2088associated with the page that generated the span2064, the session ID2089, and links to a backend span and trace associated with the span2064(e.g., links2067and2068). In one implementation, additional information extracted from the attributes for one or more spans may be aggregated and conveyed to a client in the GUI ofFIG.20A. As mentioned previously, some of the resources accessed by a page may be part of a backend owned by the client. In such cases, the frontend spans, e.g., spans2053,2054, etc. displayed in the GUI ofFIG.20Amay be linked to backend APM traces to allow the client further insight into the performance of related endpoint providers. Where a frontend span associated with the user session can be linked to a backend APM trace, an APM icon2051is provided next to the frontend span in the waterfall view to indicate that linkage information exists for the respective span. The APM icon2051may also comprise a hyperlink to the linked backend trace. The linked trace is a backend representation of a user request for the resource associated with the respective span. A client may then be able to expand the span in the GUI to retrieve the link to the backend trace, where the link is provided as one of the attributes of the respective span. As shown inFIG.20B, the attributes of the span2064comprise a backend trace link2068and a backend span link2067. In one implementation, when the links are accessed, the monitoring platform directs the client to a GUI that provides further information regarding the backend spans or traces. In one implementation where the span2064is associated with an error, the backend trace link2068and the backend span link2067may link directly to the trace or span respectively in the backend where the error originated. In one implementation, the RUM analysis engine1795ofFIG.17performs the analysis necessary to link the frontend span (e.g., the span2064) with the backend trace. For example, for a given span comprising a link to a backend trace, the RUM analysis engine1795may first check the corresponding APM deployment to determine if a backend trace exists that has the corresponding Trace ID included in the span attributes. Once the RUM analysis engine1795determines that the Trace ID exists in the backend deployment and that it is accessible, the monitoring platform can add a hyperlink to the backend trace next to the span in the panel2050. FIG.20Cillustrates an exemplary on-screen GUI illustrating the manner in which hovering a cursor over a backend trace link provided for a span in the waterfall view conveys further information regarding a backend trace in a pop-up window, in accordance with implementations of the monitoring service disclosed herein. In one implementation, a client may hover a cursor over the APM icon2051ofFIG.20A(or even over one of the links2067and2068inFIG.20B) to receive further information regarding a backend trace (or span). For example, hovering over an APM icon2072shown inFIG.20Cmay result in a pop-up window2071on-screen and displaying summary information regarding the backend trace that connects to the associated frontend span2073. The pop-up window may further convey meaningful information regarding the linked backend trace, e.g., a performance summary2078, associated services2079, a Trace ID2077and a service/operation name2075related to the trace. 
The pop-up window2071provides a client a preview of the trace so the client knows what to expect if the backend trace is accessed and loaded. The pop-up window2071also provides a hyperlinked Trace ID2077to be able to fast-track a client to the actual APM trace on the backend. Because the monitoring platform has access to full-fidelity data, the client can click the hyperlink to directly access the backend trace, which may, for example, be stored in a data set associated with the full-fidelity modality discussed in connection withFIG.17. In one implementation, the client may also access the Workflow link2099to access the aggregate behavior in the system for the transaction related to the span2073. Accessing the Workflow link2099may direct the client to a service graph associated with the backend services monitored by APM, which allows the client to perform an in-depth investigation. Implementations of the monitoring platform therefore, advantageously provide a client with an end-to-end view of an entire user session graphically while indicating regions of interest in which a user may be interested. Implementations of the monitoring platform, for example, may provide a graphical representation of the entire user session which tracks the number of events, errors, and requests in the session. Conventional monitoring methods, for example, did not provide the ability to provide clients with an overview of an entire session while also providing the ability to gain further resolution into specific client-selected portions of the session. In one or more implementations, the monitoring platform allows a client to more intelligently navigate around the user session to discover portions of the session in which a client may be interested. In one implementation, for example, the visualization2000is automatically segmented into various chunks with certain vital statistics provided in the GUI for each chunk of time so a client can assess which regions are of particular interest. Thereafter, a client may be able to select a pre-segmented chunk or region of interest to receive information regarding the spans associated with the respective selected chunk in the panel2050. For example, the region2005may be associated with a chunk of time that the monitoring platform automatically flags as being of interest to a client based on the spikes in event activity during that time period. A client is then able to review the metrics (e.g., event activity metrics2046) associated with the pre-segmented chunk of time to determine if further exploration is warranted and also review the spans associated with the chunk in the panel2050. In one implementation, the monitoring platform is able to intelligently pre-select a region of interest for a client based on spans in which the client is interested. For example, a client may want to investigate the edge1808inFIG.18associated with a call to the endpoint or node1820. Alternatively, a client may want to investigate a node (e.g., a page node or an endpoint node) and explore exemplar spans associated with calls from or to the respective node. In either case, a client may start by selecting an exemplar span associated with either the relevant edge or node from the panel1801. Upon selecting the relevant span, the client, in one implementation, may be directed to the graphic visualization2000ofFIG.20Awith a region of interest (e.g., region2005) pre-selected for the client, where the region of interest would include information regarding the selected endpoint or edge. 
In this way, a client needing to explore a particular endpoint, page or edge in the GUIs ofFIGS.18and19may be directed to the GUI ofFIG.20Awith the relevant portion of the graphic visualization2000scoped to the portion in which the client is most interested. This allows the client to not only investigate the segment of interest but also advantageously provides the client with an overview of the relative location of the region of the interest within the overall session. Thereafter, the client can inspect the segment of interest more closely while having an overall idea of other proximate events that were taking place during the user session. By way of further example, a client may need to explore errors generated by a page associated with node1920. The client may identify exemplar spans within the panel1901associated with node1920and select one of those spans. Upon selection of the relevant span, the monitoring platform would automatically bring up the GUI ofFIG.20Awith a portion of the graph pre-selected where the selected portion comprises information regarding the span selected through the GUI ofFIG.19. The monitoring platform therefore automatically takes a client to a specific segment on graphical visualization2000associated with the user selection inFIG.19. The client can then visually analyze where the error occurs within the session in relation to other events in the session. For example, the client is able to determine whether the user continued to navigate around the page after experiencing the error or if the user chose to leave the page and transition to a different page. In one implementation, upon selection of an exemplar span in the panel1801ofFIG.18or the panel1901ofFIG.19, in addition to pre-selecting a region of interest within the graphic visualization2000, the monitoring platform may also present the specific span of interest to the client within the waterfall view of the panel2050of the GUI. In other words, the monitoring platform can direct the client to the actual instance of the span that exhibited the behavior in which the client expressed an interest. The client then has the option of expanding the span and analyzing the relevant attributes of interest associated with the span. In one implementation, the span within the waterfall view of the panel2050may either be highlighted or already expanded to allow a client to easily review the span attributes. For example, the span of interest may be expanded similar to the manner shown inFIG.20Bso the client can visually inspect the various attributes associated with the span. In one implementation, the GUI ofFIG.20Amay be configured to visually indicate to the client that a span displayed in the panel2050is related to the specific edge, endpoint or span selected in the interfaces ofFIGS.18and19. Note that a client may also be able to select (or interact in some way with) a node or an edge in the service graph1800ofFIG.18(or service graph1900ofFIG.19) and be directed to the graphical visualization2000ofFIG.20Awith a region of interest pre-selected and a specific instance of a span of interest highlighted (or expanded) in the waterfall view of the panel2050. In this way, a client is provided not only specific information regarding an error span of interest in the waterfall view but also an overview of where in relation to other events the error occurs on the session timeline. 
An exemplary manner in which a client may approach troubleshooting latency problems experienced by one or more users may start with the client investigating a user's interactions with a page or application at the service graph level. Viewing the service graph ofFIG.19, for example, the client may observe that the/cart endpoint1940is returning errors when invoked in response to a call. The client may select the endpoint1940and view information regarding an associated error span in the panel1901. Subsequently, the client may select or double-click on the entry corresponding to the error span in the panel1901and be directed to the session view ofFIG.20A, where a region (e.g., region2005) associated with the selected error span may be pre-selected for a client. Event activity metrics2046for the pre-selected segment may also be displayed to provide a client with some aggregated metrics associated within the segment. The client's attention may also be directed to the actual error span of interest within the panel2050, for example, by having the error span expanded to display its attributes. If the client controls the backend services to which the/cart endpoint1940maps, the error span will typically contain a link to the backend trace associated with the error span. The client can then access the link to be directed to the backend trace. As noted earlier, implementations of the monitoring platform are, therefore, able to map endpoint provider (or page provider) nodes on the frontend to the backend microservices comprised within a software architecture. In one implementation of the monitoring platform, the spans within the waterfall view of the panel2050may be organized by page access. Instead of laying out the spans in a chronological order, in this implementation, the spans may be organized and grouped by page view. This allows clients to have clearer insight into the manner in which a user navigated around a particular website or application. Any time a user transitions to a new page, either through a document load or a route change, the waterfall is reset and any activity subsequent to the transition is grouped separately from the prior activity. By way of example, selecting the region2005ofFIG.20Aallows a user to view a list of all the spans associated with the region2005in the panel2050grouped by page view. This allows the client to easily determine if the event spikes within the region2005are all associated with a single page view or multiple different page views. The client is also able to easily receive insight into a user's journey, in particular, the manner in which the user navigated from one page to the next. Grouping the spans by page view also allows the client to conveniently determine which of the pages had the most associated errors. For example, if the number of errors within the region2005were high, grouping the activity by page view allows a client to easily determine which of the page views is the most problematic. As noted previously, each of the spans are displayed adjacent to an icon indicating whether the span comprises an error. Accordingly, grouping the spans by page view may also allow a client to visually determine which of the pages were associated with the most error spans. FIG.21illustrates an exemplary on-screen GUI displaying aggregate metrics for a specific page, in accordance with implementations of the monitoring service disclosed herein. 
In one implementation, the monitoring platform may be able to provide the client with aggregated metrics for a specific page or even an endpoint selected, for example, from the service graphs displayed inFIGS.18and19.FIG.21illustrates aggregated metrics associated with a particular page or endpoint for a select time period. The aggregated metrics can, in one implementation, be calculated using the metric events modality for RUM-based spans discussed in connection withFIG.17. Alternatively, in a different implementation, more real-time metrics may be calculated using the metric time series modality also discussed inFIG.17. In another implementation, the full-fidelity modality may also be used to compute the metrics. The aggregated metrics, among other things, include end user response time2152and a graph2153conveying page load times over a selected time period. Other metrics2154, e.g., network time, response time, server time, requests, load time, etc. may also be displayed for the convenience of a client in the GUI ofFIG.21. In one implementation, the client may be provided additional details regarding how much of the latency associated with a page load was due to network latency versus latency attributable to a client's backend. In one implementation, the metrics (e.g., network time, server time, etc.) may be used to provide a client a way to visualize network time and/or server time across the service graphs shown in the GUIs ofFIGS.18and19. Providing metrics for specific pages and/or endpoints through an interface such as the one shown inFIG.21allows the monitoring platform to advantageously provide a client with more targeted information, especially, in instances where the service graph view (inFIGS.18and19) or the session view (inFIG.20A) may be particularly crowded with information. Session exemplars associated with the page or endpoint associated with the GUI may be displayed in a panel2155. The session exemplars may comprise details regarding the session ID, a timestamp associated with the session, the duration of the session, the agent (e.g., a browser, platform, OS) used during the session, a location of the user and a number of errors encountered during the session. In one implementation, where aggregated metrics for a specific page are displayed (as shown inFIG.21), information extracted from the location field2167associated with each session may be used to construct a geo-map allowing a client to visualize page views broken down by location. Location information, among other uses, helps a client understand regional performance of a website. In one implementation, the location information may also be used to visualize page load times broken down by location (e.g., by city, by country, by zip code, etc.). For example, an exemplary GUI may provide a client a map of the world allowing a client to hover over any country (or region) to see average load times or average page views for that country. In one implementation, page views and other metrics may also be filtered based on URL, browser type and version, operating system, user id, etc. The GUI ofFIG.21may provide visualizations that enable a client to see a breakdown of the page views based on URL, browser type and version, user id or operating system. In one implementation, for each page, a geo-map may be constructed to analyze the endpoint traffic for the respective page, e.g., the map may be able to visually indicate to a client the physical location of the various endpoints. 
This allows a client to obtain a better understanding of the proximity of the various resources being accessed by any particular page. In one implementation, where aggregated metrics for a particular endpoint are displayed, information from the location field2167may be used to construct a geo-map of pages accessing the particular endpoint. This allows a client to obtain a better understanding of where the pages or related users accessing a particular endpoint or resource are located. FIG.22illustrates an exemplary on-screen GUI displaying a geo-map associated with a particular website or application, in accordance with implementations of the monitoring service disclosed herein. The geo-map2205provides, among other things, a bird's eye view of where the traffic for a particular site or application is coming from. In addition to understanding the regional performance of a site, the geo-map can be used to visualize page load times broken down by location, to analyze resource access for a respective page by location, etc. The GUI ofFIG.22may also provide other relevant information to a client, e.g., high-level metrics2210for an application that are easily accessible, errors by view2215, top endpoints with errors2220, or visit counts by browser2225. 5.1.1 Aggregating Metrics for Workflows Associated with a Real User Session There are several challenges associated with implementing an observability platform (e.g., monitoring service306ofFIG.3) that can perform RUM-related computations within a heterogeneous distributed system. One of the challenges is associated with providing clients meaningful information regarding where work related to a particular user interaction with a website (or a client process) is occurring and, also, regarding application-specific use cases that cannot be easily captured with basic instrumentation techniques. In addition to enabling workflows for APM (as discussed in connection withFIGS.12A and12Bpreviously), implementations of the monitoring platform disclosed herein (e.g., monitoring service306) allow workflows (also known as “custom events”) to be used in connection with RUM to provide clients with meaningful information regarding work performed in relation to a frontend user interaction with a website (or application) or a client process. An example of a user interaction may be a user selecting an “Add to Cart” option on a website of an online retailer. Conventional tracing and monitoring tools do not have the ability to track and consolidate related frontend spans (associated with a workflow) generated in response to a user interaction or a client process during a real user session. Further, conventional tracing tools do not provide clients the ability to extract and aggregate metrics from the related frontend spans associated with the workflow. Implementations of the monitoring platform disclosed herein advantageously allow users to identify and track one or more related RUM spans associated with a workflow that are generated during a real user session in response to a particular user-interaction or a client process. Further, implementations of the monitoring platform are able to aggregate metrics from the one or more related RUM spans associated with the workflow. For example, a user may need to monitor a cumulative duration or error count associated with a particular user interaction, e.g., selecting an “Add to Cart” option on a website of an online retailer. 
Using metadata associated with a workflow attribute (and corresponding value) extracted from the tags of one or more related spans generated in response to the selection of the “Add to Cart” option, implementations of the monitoring service can identify other spans related to the user action. Subsequently, implementations of the monitoring service are able to compute the error or duration metrics associated with the selection of the “Add to Cart” option using the entire set of related spans. Extracting a workflow attribute from one or more spans associated with a particular user-interaction or a client process on the frontend advantageously allows implementations of the monitoring service to efficiently identify other spans associated with the respective user-interaction or client process. As a result, implementations of the monitoring service are able to identify and track spans ingested during a RUM session that are not explicitly tagged with a workflow attribute but may be related to a particular workflow (and to the spans which are explicitly tagged with a workflow attribute associated with the particular workflow). By associating other spans that may not be explicitly tagged with a workflow attribute to a particular workflow, implementations of the monitoring service disclosed herein are able to more accurately compute metrics for the particular workflow and provide clients with meaningful information about the entire set of spans that may be generated from a given user interaction or a client process. As noted earlier, software developed by a client (e.g., clients A and B inFIG.3) may be instrumented in order to monitor different aspects of the software. Instrumentation results in each span being annotated with one or more tags that provide context about the execution, e.g., a client process related to the span. The workflow dimension is one of the attributes (or tags) with which a span may be annotated through instrumentation to provide contextual information regarding the user interaction or client process to which the span relates. In an implementation, instrumentation for RUM may include the workflow dimension within a set of tags (e.g., global tags) attributed to a span known to relate to a particular user interaction or client process. Referring back toFIG.20B, as shown in the waterfall view therein, a span may be tagged with a workflow attribute2021(“Workflow Name”) and a corresponding workflow value2022(“Chartload”). The workflow attribute and its respective value indicates that the span shown inFIG.20Bmay be associated with loading a chart on a particular website. In RUM, the user may be viewed as a single service, one which interacts with other external services and performs some internal-only work. At this level, clients are typically interested in aggregate activities that might include many different external resource calls working in collaboration. This can make it difficult to narrow down at an aggregate level what the state of the system is when an issue or problem occurs and specifically where in the overall flow an error or performance degradation occurs. By using workflows in connection with RUM, implementations of the monitoring service use workflows to provide clients meaningful information regarding where in an overall flow a particular error or performance degradation occurs. 
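As a non-limiting illustration of such instrumentation, the workflow dimension might be added to the global tag set applied to spans emitted for a given interaction, roughly as sketched below; the helper name and tag key mirror the "Workflow Name"/"Chartload" example above but are assumptions for illustration only, not the platform's actual SDK.

def with_workflow(global_tags: dict, workflow_name: str) -> dict:
    # Return a copy of the global tag set with the workflow dimension added;
    # every span emitted for the interaction would carry these tags.
    tags = dict(global_tags)
    tags["Workflow Name"] = workflow_name
    return tags

# Example: tags attached to spans emitted while a chart is being loaded.
span_tags = with_workflow({"session_id": "sess-42", "browser": "Chrome"}, "Chartload")
print(span_tags)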
One of the differences between browser-emitted spans (associated with RUM) and backend spans (associated with APM) is that in order to analyze events for RUM, a client need not wait for a sessionized browser trace to be able to analyze individual spans or any propagation of metadata. Further, instrumentation for RUM makes tracking and associating spans with a single causal source challenging. Because of this challenge, the RUM spans are not easily organized into causality-related traces. Implementations of the monitoring service disclosed herein address this challenge by identifying and organizing associated spans for secondary analysis into workflows. This aggregation may be performed at ingest time or at query time. In order to extract meaningful information for a client using workflows, however, implementations of the monitoring service first need to identify the entire set of spans related to a particular workflow. In contrast to APM, where an application can be instrumented so that each span in a trace associated with a particular workflow is tagged with the corresponding workflow attribute, not all frontend (or browser-related) spans generated for RUM that are associated with a particular user interaction or client process may be tagged with the appropriate workflow attribute. For example, while instrumentation for RUM may allow one or more spans generated in response to a user attempting to load a chart on a website to be tagged with the workflow attribute2021(and the associated workflow value2022as shown in the example ofFIG.20B), it may not be possible or convenient for the instrumenter to identify all the spans generated in response to the user-initiated activity based on instrumentation alone. FIG.23presents a flowchart2300illustrating a computer-implemented process for identifying the entire set of RUM-related spans that are associated with a particular workflow, in accordance with implementations of the present monitoring service. At block2302, a tagged span is identified, wherein the tagged span is tagged with an attribute associated with the particular workflow. At block2304, any unassociated span is filtered out. Unassociated spans may be filtered out, for example, by filtering out any span that does not have the same session ID as the tagged span. In other implementations, any spans that do not have the same instance identifier (instance ID) may also be filtered out. Filtering out spans that do not have the same instance ID filters out spans that may not be part of the same window instance (within a given session) as the tagged span. At block2306, the remaining spans are ordered by their start time. At block2308, any spans that have a start time occurring between the start time and the end time of the tagged span are designated as associated with the particular workflow. At block2310, from the remaining spans, any span that has a start time occurring after the end time of the tagged span is considered associated with the particular workflow provided that the start time of the respective span occurs prior to a predetermined duration of span inactivity. In other words, a span would not be associated with the particular workflow if a predetermined duration of time with no span activity has elapsed prior to the start time of the span. In one implementation, the predetermined duration of time is configurable by a client of the monitoring platform. In a different implementation, the predetermined duration may be a static constant programmed into the monitoring platform.
At block2312, a determination is made as to whether the predetermined duration has been exceeded. If the predetermined duration has been exceeded, then the process ends at block2314. If not, the process repeats for each of the remaining spans that were ordered at block2306. FIG.24illustrates an exemplary manner in which spans related to a particular workflow may be identified and grouped, in accordance with implementations of the monitoring service disclosed herein. As noted above, to start identifying spans that may be related to the particular workflow, implementations of the monitoring service will first filter out any unassociated or unrelated spans. Accordingly, spans that are unrelated to the tagged span (Span A2402tagged with a workflow attribute value of "Chartload" inFIG.24) will first be filtered out, e.g., because they do not share the same session ID or instance ID (or both) as the tagged span. Thereafter, any span in the example ofFIG.24with a start time between the start time of the tagged span (e.g., Span A2402) and the end time of the tagged span is designated as related to the "Chartload" workflow. Accordingly, Span B2404, Span C2406and Span D2408are designated as related to Span A2402and to the "Chartload" workflow. Subsequently, any spans that have a start time occurring after the end time of Span A2402are considered associated with the "Chartload" workflow provided that the start time of the respective span occurs prior to a predetermined duration of span inactivity. In the example ofFIG.24, the predetermined duration of span inactivity is 100 ms. Accordingly, Span E2410is designated as part of the "Chartload" workflow because the start time of Span E occurs prior to a 100 ms period of inactivity. Because Span E2410starts after only a 25 ms period of inactivity following Span D2408, Span E is still considered to be associated with the "Chartload" workflow. Span F2412, however, starts after more than 100 ms of inactivity following Span E2410. As a result, Span F2412is not included in the group of spans associated with the "Chartload" workflow. In one implementation, other heuristics besides the time-based approach illustrated inFIG.24may also be used to associate spans related to a particular workflow. For example, other attributes or tags of a span (besides the workflow attribute) may be compared to determine whether a particular span is related to or associated with a particular workflow. Implementations of the monitoring service disclosed herein allow a client to define and instrument a "workflow" dimension and to tag one or more spans of interest with the defined workflow attribute. In an implementation, the monitoring service306(referenced inFIG.3) will extract the tags related to the workflow dimension and their respective values from the ingested tagged spans and will analyze the tagged spans (and other associated spans) to determine the paths associated with each respective workflow. The path for a workflow in RUM comprises the various endpoint dependencies that are invoked by a particular user-interaction or client process during a real user session. By allowing a client to define the workflow dimension and track particular spans of interest, implementations of the monitoring service provide a client with more control over conducting dimensional analysis. In other words, clients can slice (or scope) the workflow dimension across other attributes (as discussed in connection withFIGS.12A and12B).
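By way of a non-limiting illustration, the time-based grouping described above, together with a cumulative metric roll-up of the kind discussed herein (count, error count, duration), might be sketched roughly as follows. The Span fields, helper names and example values are illustrative assumptions and not the platform's actual implementation.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Span:
    span_id: str
    session_id: str
    instance_id: str
    start_ms: int
    end_ms: int
    workflow: Optional[str] = None   # e.g., "Chartload" when explicitly tagged
    error: bool = False

def spans_for_workflow(tagged: Span, candidates: List[Span], max_gap_ms: int = 100) -> List[Span]:
    """Group candidate spans with the tagged span using the time-based heuristic:
    keep spans from the same session and window instance that start within the
    tagged span, then keep later spans until a period of inactivity longer than
    max_gap_ms elapses."""
    related = [tagged]
    pool = sorted(
        (s for s in candidates
         if s.span_id != tagged.span_id
         and s.session_id == tagged.session_id
         and s.instance_id == tagged.instance_id
         and s.start_ms >= tagged.start_ms),
        key=lambda s: s.start_ms,
    )
    last_activity_end = tagged.end_ms
    for s in pool:
        if s.start_ms <= tagged.end_ms or s.start_ms - last_activity_end <= max_gap_ms:
            related.append(s)
            last_activity_end = max(last_activity_end, s.end_ms)
        else:
            break   # a gap of inactivity longer than max_gap_ms ends the workflow
    return related

def workflow_metrics(spans: List[Span]) -> dict:
    # Cumulative count, error count and duration for the grouped workflow spans.
    return {
        "count": len(spans),
        "errors": sum(1 for s in spans if s.error),
        "duration_ms": max(s.end_ms for s in spans) - min(s.start_ms for s in spans),
    }

# Example loosely mirroring the grouping above: B, C and D start within A;
# E starts 25 ms after activity ends; F starts more than 100 ms later and is excluded.
a = Span("A", "sess-1", "w-1", 0, 300, workflow="Chartload")
b = Span("B", "sess-1", "w-1", 50, 120)
c = Span("C", "sess-1", "w-1", 130, 200)
d = Span("D", "sess-1", "w-1", 210, 310)
e = Span("E", "sess-1", "w-1", 335, 400, error=True)
f = Span("F", "sess-1", "w-1", 520, 600)
grouped = spans_for_workflow(a, [b, c, d, e, f])
print([s.span_id for s in grouped])   # ['A', 'B', 'C', 'D', 'E']
print(workflow_metrics(grouped))      # {'count': 5, 'errors': 1, 'duration_ms': 400}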
Further, implementations of the monitoring service allow clients to identify spans related to the tagged spans and to compute metrics for all related spans. In one implementation, metrics, e.g., a count value, error count, duration, etc. can be aggregated for a particular workflow. The metrics may be computed based on the entire set of spans associated with a particular workflow. Alternatively, the metrics may be computed based on the spans that are explicitly tagged with a particular workflow attribute. Further, any metrics extracted for a real user session can then be filtered to compute metrics associated specifically with a particular workflow (as also discussed in connection withFIGS.10,12A and12B). In one or more implementations, the monitoring service306may be programmed to execute a complex set of rules to extract further information pertaining to the workflows and their associated values from the tags of ingested spans (and their associated spans). Further, pattern matching may also be employed to extract workflow information from ingested spans. FIG.25illustrates an exemplary on-screen RUM monitoring GUI showing an aggregate view of instrumented workflows, in accordance with implementations of the monitoring service disclosed herein. As shown inFIG.25, the GUI may present an aggregate view over many different signals (associated with user interactions). For example, the GUI may provide information about document load latencies2525and endpoint latencies2526. The GUI may also include information about instrumented workflows. For example, the GUI may display the top workflows2510(sorted by duration or error) indicating which workflows need the most attention. Another section in the GUI may display the various workflow categories2529. Further, other sections of the GUI may also illustrate workflow errors2512and workflow performance2515. In one implementation, each of the workflows shown in the GUI would be displayed next to a count, error and latency (or duration) metric aggregated over a selected time period (e.g., over the course of a single session). Further, the GUI may allow users the ability to break down the workflows across multiple different attributes using a drop down menu (similar to the manner discussed inFIGS.10,12A and12B). The computations for each of the break-downs may be efficiently determined using the metrics data aggregated for the metric events mode. In one implementation, each of the workflow names in the various sections shown inFIG.25may be linked to another GUI illustrating further details about a selected workflow.FIG.26illustrates an exemplary on-screen GUI showing a detailed view of a selected workflow, in accordance with implementations of the monitoring service disclosed herein. Because “workflow” is a tag that can be scoped or filtered on (similar to a page or resource/endpoint), the monitoring platform can support the selecting of a workflow and displaying the associated error and latency metrics for the given workflow. As shown inFIG.26, panel2610illustrates a latency graph for a selected workflow. Further, as mentioned previously, the GUI may allow users the ability to break down the workflows across multiple different attributes (or dimensions). In other words, the aggregated workflow metrics may be used by the monitoring platform for dimensional analysis. For example, panel2620allows clients to break down the behavior of the workflow across several different dimensions, e.g., application, component, operation, name, type, browser, etc. 
The workflow metrics may also be scoped across other dimensions, e.g., current page, geographical location, operating system, etc. FIG.27illustrates an exemplary on-screen GUI showing an aggregated waterfall view of an instrumented workflow, in accordance with implementations of the monitoring service disclosed herein. While panel2710displays the latency graph for the selected workflow (similar toFIG.26), selecting the waterfall view in panel2720displays exemplary spans associated with the workflow. Note that the metric events modality discussed in connection withFIG.17allows for exemplary RUM spans and traces to be stored. Displaying exemplary spans in the panel2720allows clients to better understand, at an aggregate level, the general type of work performed within the workflow. In one implementation, the workflow analysis is performed on RUM spans, but where specific connections can be established to backend traces, the panel2720may contain links to backend APM traces so that a client can perform deeper end-to-end analysis. As mentioned in connection withFIG.20A, where a frontend span associated with the user session can be linked to a backend APM trace, an APM icon (e.g., similar to the APM icon2051) can be provided next to the frontend span in the waterfall view to indicate that linkage information exists for the respective span. The linked trace is a backend representation of a user request for the resource associated with the respective span. A client may then be able to expand the span in the GUI to retrieve the link to the backend trace, where the link is provided as one of the attributes of the respective span. In one implementation, the panel2720would allow the client to view the exemplary APM trace for any backend calls performed during the selected workflow (metrics for which are shown in the panel2710). In one implementation, the linkage may also be workflow-specific. In other words, RUM workflows on the frontend may be linked to APM workflows on the backend. FIG.28illustrates an exemplary on-screen GUI of the manner in which associated spans from an individual session may be grouped into workflows, in accordance with implementations of the monitoring service disclosed herein. The events from a user session may be displayed in a panel adjacent to a graphical view of the user session, e.g., the panel2050shown inFIG.20A. In one implementation, when viewing spans associated with a particular user session, the monitoring service groups associated spans by workflow so that it is easier to identify associations and patterns in the span data. As shown inFIG.28, spans2810associated with a given workflow may be highlighted for better visual analysis. FIG.29presents a flowchart2900illustrating a computer-implemented process for aggregating metrics associated with a user interaction during a real user session, in accordance with implementations of the present monitoring service. At block2902, a tagged span associated with a given workflow is identified from a plurality of spans ingested during the real user session. The tagged span, in one implementation, may be generated as a result of selective instrumentation by a client to include a workflow attribute associated with the given workflow in the span. At block2904, the monitoring platform identifies one or more other spans generated during the real user session that are associated with the tagged span. One exemplary method of identifying associated spans was discussed in connection withFIG.23.
At block2906, the monitoring platform groups together the tagged span and the one or more other spans as relating to the given workflow. At block2908, metrics are aggregated for the given workflow. In one implementation, the metrics may be aggregated using metadata extracted from the tagged span over a given duration of time. In another implementation, metrics can be aggregated from both the tagged spans and the one or more spans identified as associated spans. The disclosed system advantageously addresses a problem in traditional data analysis of instrumented software tied to computer technology, namely, the technical problem of identifying spans and aggregating metrics that are associated with a given user interaction or client process generated during a real user session. The disclosed system advantageously solves this technical problem by providing a solution also rooted in computer technology, namely, by associating a span tagged with the workflow attribute with other spans related to the user interaction (or client process) that may not be tagged with the particular workflow attribute. Further, the disclosed system advantageously allows metrics to be computed from the entire set of spans associated with a particular workflow. The disclosed subject technology further provides improvements to the functioning aspects of the computer itself by identifying and organizing associated RUM-related spans for secondary analysis into workflows at query time. While the principles of the invention have been described above in connection with specific apparatus, it is to be clearly understood that this description is made only by way of example and not as a limitation on the scope of the invention. Further, the foregoing description, for purpose of explanation, has been described with reference to specific implementations. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The implementations were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various implementations with various modifications as may be suited to the particular use contemplated.
11860761
DESCRIPTION OF EXAMPLE EMBODIMENTS Overview According to one or more embodiments of the disclosure, a device obtains page load information corresponding to a loaded web application. The device detects, based on the page load information, an anomalous feature of the loaded web application. The device identifies a type of the anomalous feature based on a number of resource anomalies within the loaded web application, wherein the type of the anomalous feature is selected from a group consisting of: a page anomaly; a resource anomaly; and a domain anomaly. The device performs one or more mitigation actions according to the type of the anomalous feature. In one embodiment, the type of the anomalous feature is: the page anomaly when the number of resource anomalies is zero, the resource anomaly when the number of resource anomalies is one, and the domain anomaly when the number of resource anomalies is greater than one. In one embodiment, the domain anomaly is: a single domain anomaly when resource anomalies within the loaded web application belong to a particular domain, and a multi-domain anomaly when the resource anomalies belong to a plurality of domains. Other embodiments are described below, and this overview is not meant to limit the scope of the present disclosure. Description A computer network is a geographically distributed collection of nodes interconnected by communication links and segments for transporting data between end nodes, such as personal computers and workstations, or other devices, such as sensors, etc. Many types of networks are available, ranging from local area networks (LANs) to wide area networks (WANs). LANs typically connect the nodes over dedicated private communications links located in the same general physical location, such as a building or campus. WANs, on the other hand, typically connect geographically dispersed nodes over long-distance communications links, such as common carrier telephone lines, optical lightpaths, synchronous optical networks (SONET), synchronous digital hierarchy (SDH) links, and others. The Internet is an example of a WAN that connects disparate networks throughout the world, providing global communication between nodes on various networks. Other types of networks, such as field area networks (FANs), neighborhood area networks (NANs), personal area networks (PANs), enterprise networks, etc. may also make up the components of any given computer network. In addition, a Mobile AdHoc Network (MANET) is a kind of wireless ad-hoc network, which is generally considered a self-configuring network of mobile routers (and associated hosts) connected by wireless links, the union of which forms an arbitrary topology. FIG.1is a schematic block diagram of an example simplified computing system100illustratively comprising any number of client devices102(e.g., a first through nth client device), one or more servers104, and one or more databases106, where the devices may be in communication with one another via any number of networks110. The one or more networks110may include, as would be appreciated, any number of specialized networking devices such as routers, switches, access points, etc., interconnected via wired and/or wireless connections. For example, devices102-104and/or the intermediary devices in network(s)110may communicate wirelessly via links based on WiFi, cellular, infrared, radio, near-field communication, satellite, or the like. Other such connections may use hardwired links, e.g., Ethernet, fiber optic, etc. 
The is nodes/devices typically communicate over the network by exchanging discrete frames or packets of data (packets140) according to predefined protocols, such as the Transmission Control Protocol/Internet Protocol (TCP/IP) other suitable data structures, protocols, and/or signals. In this context, a protocol consists of a set of rules defining how the nodes interact with each other. Client devices102may include any number of user devices or end point devices configured to interface with the techniques herein. For example, client devices102may include, but are not limited to, desktop computers, laptop computers, tablet devices, smart phones, wearable devices (e.g., heads up devices, smart watches, etc.), set-top devices, smart televisions, Internet of Things (IoT) devices, autonomous devices, or any other form of computing device capable of participating with other devices via network(s)110. Notably, in some embodiments, servers104and/or databases106, including any number of other suitable devices (e.g., firewalls, gateways, and so on) may be part of a cloud-based service. In such cases, the servers and/or databases106may represent the cloud-based device(s) that provide certain services described herein, and may be distributed, localized (e.g., on the premise of an enterprise, or “on prem”), or any combination of suitable configurations, as will be understood in the art. Those skilled in the art will also understand that any number of nodes, devices, links, etc. may be used in computing system100, and that the view shown herein is for simplicity. Also, those skilled in the art will further understand that while the network is shown in a certain orientation, the system100is merely an example illustration that is not meant to limit the disclosure. Notably, web services can be used to provide communications between electronic and/or computing devices over a network, such as the Internet. A web site is an example of a type of web service. A web site is typically a set of related web pages that can be served from a web domain. A web site can be hosted on a web server. A publicly accessible web site can generally be accessed via a network, such as the Internet. The publicly accessible collection of web sites is generally referred to as the World Wide Web (WWW). Also, cloud computing generally refers to the use of computing resources (e.g., hardware and software) that are delivered as a service over a network (e.g., typically, the Internet). Cloud computing includes using remote services to provide a user's data, software, and computation. Moreover, distributed applications can generally be delivered using cloud computing techniques. For example, distributed applications can be provided using a cloud computing model, in which users are provided access to application software and databases over a network. The cloud providers generally manage the infrastructure and platforms (e.g., servers/appliances) on which the applications are executed. Various types of distributed applications can be provided as a cloud service or as a Software as a Service (SaaS) over a network, such as the Internet. FIG.2is a schematic block diagram of an example node/device200that may be used with one or more embodiments described herein, e.g., as any of the devices102-106shown inFIG.1above. Device200may comprise one or more network interfaces210(e.g., wired, wireless, etc.), at least one processor220, and a memory240interconnected by a system bus250, as well as a power supply260(e.g., battery, plug-in, etc.). 
The network interface(s)210contain the mechanical, electrical, and signaling circuitry for communicating data over links coupled to the network(s)110. The network interfaces may be configured to transmit and/or receive data using a variety of different communication protocols. Note, further, that device200may have multiple types of network connections via interfaces210, e.g., wireless and wired/physical connections, and that the view herein is merely for illustration. Depending on the type of device, other interfaces, such as input/output (I/O) interfaces230, user interfaces (UIs), and so on, may also be present on the device. Input devices, in particular, may include an alpha-numeric keypad (e.g., a keyboard) for inputting alpha-numeric and other information, a pointing device (e.g., a mouse, a trackball, stylus, or cursor direction keys), a touchscreen, a microphone, a camera, and so on. Additionally, output devices may include speakers, printers, particular network interfaces, monitors, etc. The memory240comprises a plurality of storage locations that are addressable by the processor220and the network interfaces210for storing software programs and data structures associated with the embodiments described herein. The processor220may comprise hardware elements or hardware logic adapted to execute the software programs and manipulate the data structures245. An operating system242, portions of which are typically resident in memory240and executed by the processor, functionally organizes the device by, among other things, invoking operations in support of software processes and/or services executing on the device. These software processes and/or services may comprise a one or more functional processes246, and on certain devices, an illustrative page load monitoring process248, as described herein. Notably, functional processes246, when executed by processor(s)220, cause each particular device200to perform the various functions corresponding to the particular device's purpose and general configuration. For example, a router would be configured to operate as a router, a server would be configured to operate as a server, an access point (or gateway) would be configured to operate as an access point (or gateway), a client device would be configured to operate as a client device, and so on. It will be apparent to those skilled in the art that other processor and memory types, including various computer-readable media, may be used to store and execute program instructions pertaining to the techniques described herein. Also, while the description illustrates various processes, it is expressly contemplated that various processes may be embodied as modules configured to operate in accordance with the techniques herein (e.g., according to the functionality of a similar process). Further, while the processes have been shown separately, those skilled in the art will appreciate that processes may be routines or modules within other processes. —Observability Intelligence Platform— As noted above, distributed applications can generally be delivered using cloud computing techniques. For example, distributed applications can be provided using a cloud computing model, in which users are provided access to application software and is databases over a network. The cloud providers generally manage the infrastructure and platforms (e.g., servers/appliances) on which the applications are executed. 
Various types of distributed applications can be provided as a cloud service or as a software as a service (SaaS) over a network, such as the Internet. As an example, a distributed application can be implemented as a SaaS-based web service available via a web site that can be accessed via the Internet. As another example, a distributed application can be implemented using a cloud provider to deliver a cloud-based service. Users typically access cloud-based/web-based services (e.g., distributed applications accessible via the Internet) through a web browser, a light-weight desktop, and/or a mobile application (e.g., mobile app) while the enterprise software and user's data are typically stored on servers at a remote location. For example, using cloud-based/web-based services can allow enterprises to get their applications up and running faster, with improved manageability and less maintenance, and can enable enterprise IT to more rapidly adjust resources to meet fluctuating and unpredictable business demand. Thus, using cloud-based/web-based services can allow a business to reduce Information Technology (IT) operational costs by outsourcing hardware and software maintenance and support to the cloud provider. However, a significant drawback of cloud-based/web-based services (e.g., distributed applications and SaaS-based solutions available as web services via web sites and/or using other cloud-based implementations of distributed applications) is that troubleshooting performance problems can be very challenging and time consuming. For example, determining whether performance problems are the result of the cloud-based/web-based service provider, the customer's own internal IT network (e.g., the customer's enterprise IT network), a user's client device, and/or intermediate network providers between the user's client device/internal IT network and the cloud-based/web-based service provider of a distributed application and/or web site (e.g., in the Internet) can present significant technical challenges for detection of such networking related performance problems and determining the locations and/or root causes of such networking related performance problems. Additionally, determining whether performance problems are caused by the network or an application itself, or portions of an application, or particular services associated with an application, and so on, further complicate the troubleshooting efforts. Certain aspects of one or more embodiments herein may thus be based on (or otherwise relate to or utilize) an observability intelligence platform for network and/or application performance management. For instance, solutions are available that allow customers to monitor networks and applications, whether the customers control such networks and applications, or merely use them, where visibility into such resources may generally be based on a suite of “agents” or pieces of software that are installed in different locations in different networks (e.g., around the world). Specifically, as discussed with respect to illustrativeFIG.3below, performance within any networking environment may be monitored, specifically by monitoring applications and entities (e.g., transactions, tiers, nodes, and machines) in the networking environment using agents installed at individual machines at the entities. 
As an example, applications may be configured to run on one or more machines (e.g., a customer will typically run one or more nodes on a machine, where an application consists of one or more tiers, and a tier consists of one or more nodes). The agents collect data associated with the applications of interest and associated nodes and machines where the applications are being operated. Examples of the collected data may include performance data (e.g., metrics, metadata, etc.) and topology data (e.g., indicating relationship information), among other configured information. The agent-collected data may then be provided to one or more servers or controllers to analyze the data. Examples of different agents (in terms of location) may comprise cloud agents (e.g., deployed and maintained by the observability intelligence platform provider), enterprise agents (e.g., installed and operated in a customer's network), and endpoint agents, which may be a different version of the previous agents that is installed on actual users' (e.g., employees') devices (e.g., on their web browsers or otherwise). Other agents may specifically be based on categorical configurations of different agent operations, is such as language agents (e.g., Java agents, .Net agents, PHP agents, and others), machine agents (e.g., infrastructure agents residing on the host and collecting information regarding the machine which implements the host such as processor usage, memory usage, and other hardware information), and network agents (e.g., to capture network information, such as data collected from a socket, etc.). Each of the agents may then instrument (e.g., passively monitor activities) and/or run tests (e.g., actively create events to monitor) from their respective devices, allowing a customer to customize from a suite of tests against different networks and applications or any resource that they're interested in having visibility into, whether it's visibility into that end point resource or anything in between, e.g., how a device is specifically connected through a network to an end resource (e.g., full visibility at various layers), how a website is loading, how an application is performing, how a particular business transaction (or a particular type of business transaction) is being effected, and so on, whether for individual devices, a category of devices (e.g., type, location, capabilities, etc.), or any other suitable embodiment of categorical classification. FIG.3is a block diagram of an example observability intelligence platform300that can implement one or more aspects of the techniques herein. The observability intelligence platform is a system that monitors and collects metrics of performance data for a network and/or application environment being monitored. At the simplest structure, the observability intelligence platform includes one or more agents310and one or more servers/controllers320. Agents may be installed on network browsers, devices, servers, etc., and may be executed to monitor the associated device and/or application, the operating system of a client, and any other application, API, or another component of the associated device and/or application, and to communicate with (e.g., report data and/or metrics to) the controller(s)320as directed. 
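For illustration only, the following sketch shows the general shape of an agent that runs a simple timed HTTP test and reports the measurement to a controller; the controller endpoint and payload format are assumptions, not the platform's actual ingestion API:

```python
import json
import time
import urllib.request

def run_http_test(target_url: str) -> dict:
    """Tiny stand-in for an agent's HTTP test: fetch the main document
    served at the target and record how long the request took."""
    start = time.monotonic()
    with urllib.request.urlopen(target_url, timeout=10) as resp:
        body = resp.read()
        status = resp.status
    elapsed_ms = (time.monotonic() - start) * 1000.0
    return {"target": target_url, "status": status,
            "bytes": len(body), "response_time_ms": elapsed_ms}

def report_to_controller(controller_url: str, result: dict) -> None:
    """Post the measurement to a controller. The URL and JSON shape are
    hypothetical; a real platform defines its own reporting interface."""
    data = json.dumps(result).encode("utf-8")
    req = urllib.request.Request(controller_url, data=data,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=10)

# e.g., report_to_controller("https://controller.example.com/metrics",
#                            run_http_test("https://www.example.com/"))
```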
Note that whileFIG.3shows four agents (e.g., Agent 1 through Agent 4) communicatively linked to a single controller, the total number of agents and controllers can vary based on a number of factors including the number of networks and/or applications monitored, how distributed the network and/or application environment is, the level of monitoring desired, the type of monitoring desired, the level of user experience desired, and so on. For example, instrumenting an application with agents may allow a controller to monitor performance of the application to determine such things as device metrics (e.g., type, configuration, resource utilization, etc.), network browser navigation timing metrics, browser cookies, application calls and associated pathways and delays, other aspects of code execution, etc. Moreover, if a customer uses agents to run tests, probe packets may be configured to be sent from agents to travel through the Internet, go through many different networks, and so on, such that the monitoring solution gathers all of the associated data (e.g., from returned packets, responses, and so on, or, particularly, a lack thereof). Illustratively, different “active” tests may comprise HTTP tests (e.g., using curl to connect to a server and load the main document served at the target), Page Load tests (e.g., using a browser to load a full page—i.e., the main document along with all other components that are included in the page), or Transaction tests (e.g., same as a Page Load, but also performing multiple tasks/steps within the page—e.g., load a shopping website, log in, search for an item, add it to the shopping cart, etc.). The controller320is the central processing and administration server for the observability intelligence platform. The controller320may serve a browser-based user interface (UI)330that is the primary interface for monitoring, analyzing, and troubleshooting the monitored environment. Specifically, the controller320can receive data from agents310(and/or other coordinator devices), associate portions of data (e.g., topology, business transaction end-to-end paths and/or metrics, etc.), communicate with agents to configure collection of the data (e.g., the instrumentation/tests to execute), and provide performance data and reporting through the interface330. The interface330may be viewed as a web-based interface viewable by a client device340. In some implementations, a client device340can directly communicate with controller320to view an interface for monitoring data. The controller320can include a visualization system350for displaying the reports and dashboards related to the disclosed technology. In some implementations, the visualization system350can be implemented in a separate machine (e.g., a server) different from the one hosting the controller320. Notably, in an illustrative Software as a Service (SaaS) implementation, a controller instance320may be hosted remotely by a provider of the observability intelligence platform300. In an illustrative on-premises (On-Prem) implementation, a controller instance320may be installed locally and self-administered. The controllers320receive data from different agents310(e.g., Agents 1-4) deployed to monitor networks, applications, databases and database servers, servers, and end user clients for the monitored environment. Any of the agents310can be implemented as different types of agents with specific monitoring duties. For example, application agents may be installed on each server that hosts applications to be monitored. 
Instrumenting an agent adds an application agent into the runtime process of the application. Database agents, for example, may be software (e.g., a Java program) installed on a machine that has network access to the monitored databases and the controller. Standalone machine agents, on the other hand, may be standalone programs (e.g., standalone Java programs) that collect hardware-related performance statistics from the servers (or other suitable devices) in the monitored environment. The standalone machine agents can be deployed on machines that host application servers, database servers, messaging servers, Web servers, etc. Furthermore, end user monitoring (EUM) may be performed using browser agents and mobile agents to provide performance information from the point of view of the client, such as a web browser or a mobile native application. Through EUM, web use, mobile use, or combinations thereof (e.g., by real users or synthetic agents) can be monitored based on the monitoring needs. Note that monitoring through browser agents and mobile agents are generally unlike monitoring through application agents, database agents, and standalone machine agents that are on the server. In particular, browser agents may generally be embodied as small files using web-based technologies, such as JavaScript agents injected into each instrumented web page (e.g., as close to the top as possible) as the web page is served, and are configured to collect data. Once the web page has completed loading, the collected data may be bundled into a beacon and sent to an EUM process/cloud for processing and made ready for retrieval by the controller. Browser real user monitoring (Browser RUM) provides insights into the performance of a web application from the point of view of a real or synthetic end user. For example, Browser RUM can determine how specific Ajax or iframe calls are slowing down page load time and how server performance impact end user experience in aggregate or in individual cases. A mobile agent, on the other hand, may be a small piece of highly performant code that gets added to the source of the mobile application. Mobile RUM provides information on the native mobile application (e.g., iOS or Android applications) as the end users actually use the mobile application. Mobile RUM provides visibility into the functioning of the mobile application itself and the mobile application's interaction with the network used and any server-side applications with which the mobile application communicates. Note further that in certain embodiments, in the application intelligence model, a business transaction represents a particular service provided by the monitored environment. For example, in an e-commerce application, particular real-world services can include a user logging in, searching for items, or adding items to the cart. In a content portal, particular real-world services can include user requests for content such as sports, business, or entertainment news. In a stock trading application, particular real-world services can include operations such as receiving a stock quote, buying, or selling stocks. A business transaction, in particular, is a representation of the particular service provided by the monitored environment that provides a view on performance data in the context of the various tiers that participate in processing a particular request. 
That is, a business transaction, which may be identified by a unique business transaction identification (ID), represents the end-to-end processing path used to fulfill a service request in the monitored environment (e.g., adding items to a shopping cart, storing information in a database, purchasing an item online, etc.). Thus, a business transaction is a type of user-initiated action in the monitored environment defined by an entry point and a processing path across application servers, databases, and potentially many other infrastructure components. Each instance of a business transaction is an execution of that transaction in response to a particular user request (e.g., a socket call, illustratively associated with the TCP layer). A business transaction can be created by detecting incoming requests at an entry point and tracking the activity associated with request at the originating tier and across distributed components in the application environment (e.g., associating the business transaction with a 4-tuple of a source IP address, source port, destination IP address, and destination port). A flow map can be generated for a business transaction that shows the touch points for the business transaction in the application environment. In one embodiment, a specific tag may be added to packets by application specific agents for identifying business transactions (e.g., a custom header field attached to a hypertext transfer protocol (HTTP) payload by an application agent, or by a network agent when an application makes a remote socket call), such that packets can be examined by network agents to identify the business transaction identifier (ID) (e.g., a Globally Unique Identifier (GUID) or Universally Unique Identifier (UUID)). Performance monitoring can be oriented by business transaction to focus on the performance of the services in the application environment from the perspective of end users. Performance monitoring based on business transactions can provide information on whether a service is available (e.g., users can log in, check out, or view their data), response times for users, and the cause of problems when the problems occur. In accordance with certain embodiments, the observability intelligence platform may use both self-learned baselines and configurable thresholds to help identify network and/or application issues. A complex distributed application, for example, has a large number of performance metrics and each metric is important in one or more contexts. In such environments, it is difficult to determine the values or ranges that are normal for a particular metric; set meaningful thresholds on which to base and receive relevant alerts; and determine what is a “normal” metric when the application or infrastructure undergoes change. For these reasons, the disclosed observability intelligence platform can perform anomaly detection based on dynamic baselines or thresholds, such as through various machine learning techniques, as may be appreciated by those skilled in the art. For example, the illustrative observability intelligence platform herein may automatically is calculate dynamic baselines for the monitored metrics, defining what is “normal” for each metric based on actual usage. The observability intelligence platform may then use these baselines to identify subsequent metrics whose values fall out of this normal range. 
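A toy sketch of the dynamic-baseline idea described above, assuming a simple mean-plus-k-standard-deviations range learned from recent samples; real baselining in such platforms is considerably more sophisticated (e.g., seasonal, per-context), so treat this only as an illustration:

```python
from statistics import mean, stdev
from typing import List, Optional, Tuple

def dynamic_baseline(history: List[float], k: float = 3.0) -> Optional[Tuple[float, float]]:
    """Learn a 'normal' range for one metric from its recent samples."""
    if len(history) < 2:
        return None                      # not enough data to learn a baseline
    mu, sigma = mean(history), stdev(history)
    return (mu - k * sigma, mu + k * sigma)

def falls_out_of_range(value: float, history: List[float], k: float = 3.0) -> bool:
    baseline = dynamic_baseline(history, k)
    if baseline is None:
        return False
    low, high = baseline
    return value < low or value > high

# e.g., response-time samples (ms) for one metric over a sliding window
samples = [210.0, 195.0, 205.0, 220.0, 199.0, 208.0]
print(falls_out_of_range(900.0, samples))  # True: outside the learned range
print(falls_out_of_range(212.0, samples))  # False
```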
In general, data/metrics collected relate to the topology and/or overall performance of the network and/or application (or business transaction) or associated infrastructure, such as, e.g., load, average response time, error rate, percentage CPU busy, percentage of memory used, etc. The controller UI can thus be used to view all of the data/metrics that the agents report to the controller, as topologies, heatmaps, graphs, lists, and so on. Illustratively, data/metrics can be accessed programmatically using a Representational State Transfer (REST) API (e.g., that returns either the JavaScript Object Notation (JSON) or the eXtensible Markup Language (XML) format). Also, the REST API can be used to query and manipulate the overall observability environment. Those skilled in the art will appreciate that other configurations of observability intelligence may be used in accordance with certain aspects of the techniques herein, and that other types of agents, instrumentations, tests, controllers, and so on may be used to collect data and/or metrics of the network(s) and/or application(s) herein. Also, while the description illustrates certain configurations, communication links, network devices, and so on, it is expressly contemplated that various processes may be embodied across multiple devices, on different devices, utilizing additional devices, and so on, and the views shown herein are merely simplified examples that are not meant to be limiting to the scope of the present disclosure. —Detecting and Identifying Anomalies for Single Page Applications— As noted above, even with the proliferation of web services, access to these web services oftentimes is still done through end user utilization of a web application (or webpage) that is connected to or in communication with the web services. Increasingly, single page applications (SPAs) may be loaded at web browsers of end user devices (e.g., desktop computers, laptops, mobile phones, etc.) for accessing the web services. An SPA is a web browser application that interacts with an end user by dynamically rewriting a current web page rather than loading entire new pages from a server. In an SPA's first load, all necessary code to construct the web application is retrieved in an initial single page load, then additional code, data, and resources can be loaded by “XMLHttpRequest” requests (XHRs). After that, page transitions will simply be content changes through XHR requests or memory state changes. Because these are not full page loads, they are generally referred to as “virtual pages” or “virtual pageviews”. Performance of such SPAs may be influenced by a multitude of architectural components (e.g., frontend software, backend servers, compression engines, etc.). Performance, though, can mainly be indicated or measured by resources on the SPAs, where a resource can be considered the most granular unit in the loading of an SPA. However, as also noted above, traditional performance metric monitoring and root cause analysis techniques (based on metrics gathered regarding resource data) are inadequate in that they have not kept up with the newer loading behaviors of SPAs, particularly with respect to analyzing load times, completion, etc. of resources. The techniques herein, therefore, provide for a heuristic approach to auto-detect and pinpoint root cause(s) that impact the performance of web applications based on information about one or more resources found on an SPA.
Such detection may be done through correlation page load speed with corresponding resource(s) on a page (e.g., an SPA). Particularly, various types of agents and their ability to measure performance (e.g., via a Page Load test or a Transaction test, as described herein above) may be leveraged to monitor page load performance, including resource download behavior. From the monitored information, a hierarchy of a web application may be constructed, for example, a hierarchy that includes pages and resources (of each page) of the web application that delineates where both pages and resources are located. Additionally, anomaly detection algorithms may be performed on the monitored information to determine or identify anomalous features regarding particular pages or resources (e.g., a slow loading page or resource(s)) of the web application). It is contemplated that synthetic data, in addition to the monitored data (from the tests performed by agents), may be used to aid in identifying anomalies. The identified anomalous features may be correlated with one of four types or categories (e.g., a page anomaly, a resource anomaly, a single domain anomaly, or a multi-domain anomaly), and one or more actionable insights that correspond to a type/category may be provided. With more particularity regarding the anomalous feature types, a page anomaly type indicates that resources on a single page application are performing as expected according to identified trends, behaviors, etc. of the SPA (i.e., normalized behavior of the SPA). For this type of anomaly, further analysis may be performed to determine whether a frontend component (e.g., more end user facing software, hardware, etc.) is a cause or bottleneck for a page anomaly. A resource anomaly type indicates that a particular resource is not performing as expected and is a root cause of slowness in the performance of the SPA. For this type of anomaly, the particular resource may be classified into either a static or dynamic resource and further analyzed to identify pattern(s) of resource content to determine: a) whether a configuration of resource(s) has been changed (indicative of compression and/or encoding issues) or b) if one or more backend components is a cause of slowness of the particular resource. A single domain anomaly type is indicative of a particular domain (associated with a plurality of resource anomalies) not performing as expected, and, for this type of anomaly, further analyses may be performed to determine whether a content delivery network (CDN), associated with the particular domain, is misconfigured or slow. Lastly, a multi-domain anomaly type is indicative of a plurality of resources not performing as expected, and, in this case, each resource in the plurality of resources may be analyzed as a resource anomaly (as previously described). Specifically, according to one or more embodiments described herein, a device obtains page load information corresponding to a loaded web application. The device detects, based on the page load information, an anomalous feature of the loaded web application. The device identifies a type of the anomalous feature based on a number of resource anomalies within the loaded web application, wherein the type of the anomalous feature is selected from a group consisting of: a page anomaly; a resource anomaly; and a domain anomaly. The device performs one or more mitigation actions according to the type of the anomalous feature. 
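A minimal sketch of the type identification summarized above (and spelled out in the next paragraph), assuming that each resource anomaly on the page is represented simply by the domain the resource was loaded from:

```python
from typing import List

def classify_anomaly(resource_anomaly_domains: List[str]) -> str:
    """Classify an anomalous page load by its resource anomalies.

    `resource_anomaly_domains` holds the domain of each resource on the
    page that was itself flagged as anomalous (and may be empty).
    """
    n = len(resource_anomaly_domains)
    if n == 0:
        return "page anomaly"            # page is slow, resources look normal
    if n == 1:
        return "resource anomaly"        # a single resource is the likely root cause
    if len(set(resource_anomaly_domains)) == 1:
        return "single domain anomaly"   # several anomalies, all from one domain
    return "multi-domain anomaly"        # anomalies spread across domains

print(classify_anomaly([]))                                    # page anomaly
print(classify_anomaly(["cdn.example.com"]))                   # resource anomaly
print(classify_anomaly(["a.example.com", "a.example.com"]))    # single domain anomaly
print(classify_anomaly(["a.example.com", "b.example.com"]))    # multi-domain anomaly
```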
In one embodiment, the type of the anomalous feature is: the page anomaly when the number of resource anomalies is zero, the resource anomaly when the number of resource anomalies is one, and the domain anomaly when the number of resource anomalies is greater than one. In another embodiment, the domain anomaly is: a single domain anomaly when resource anomalies within the loaded web application belong to a particular domain, and a multi-domain anomaly when the resource anomalies belong to a plurality of domains. Notably, as previously mentioned, the techniques herein may employ any number of machine learning techniques, such as to classify the collected data (e.g., test results of Page Load tests and/or Transaction tests) and to cluster the data as described herein. In general, machine learning is concerned with the design and the development of techniques that receive empirical data as input (e.g., collected metric/event data from agents, sensors, etc.) and recognize complex patterns in the input data. For example, some machine learning techniques use an underlying model M, whose parameters are optimized for minimizing the cost function associated to M, given the input data. For instance, in the context of classification, the model M may be a straight line that separates the data into two classes (e.g., labels) such that M=a*x+b*y+c and the cost function is a function of the number of misclassified points. The learning process then operates by adjusting the parameters a, b, c such that the number of misclassified points is minimal. After this optimization/learning phase, the techniques herein can use the model M to classify new data points. Often, M is a statistical model, and the cost function is inversely proportional to the likelihood of M, given the input data. One class of machine learning techniques that is of particular use herein is clustering. Generally speaking, clustering is a family of techniques that seek to group data according to some typically predefined or otherwise determined notion of similarity. Also, the performance of a machine learning model can be evaluated in a number of ways based on the number of true positives, false positives, true negatives, and/or false negatives of the model. In various embodiments, such techniques may employ one or more supervised, unsupervised, or semi-supervised machine learning models. Generally, supervised learning entails the use of a training set of data, as noted above, that is used to train the model to apply labels to the input data. On the other end of the spectrum are unsupervised techniques that do not require a training set of labels. Notably, while a supervised learning model may look for previously seen patterns that have been labeled as such, an unsupervised model may attempt to analyze the data without applying a label to it. Semi-supervised learning models take a middle ground approach that uses a greatly reduced set of labeled training data. 
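As a toy illustration of the linear model M=a*x+b*y+c discussed above, the following sketch uses a perceptron-style update to reduce the number of misclassified points; the data, learning rate, and update rule are illustrative assumptions rather than the specific techniques employed herein:

```python
def fit_linear_separator(points, labels, epochs=50, lr=0.1):
    """Adjust a, b, c so that sign(a*x + b*y + c) matches the labels (+1/-1)."""
    a = b = c = 0.0
    for _ in range(epochs):
        for (x, y), label in zip(points, labels):
            if label * (a * x + b * y + c) <= 0:   # misclassified point
                a += lr * label * x
                b += lr * label * y
                c += lr * label
    return a, b, c

points = [(1.0, 1.0), (2.0, 1.5), (-1.0, -1.0), (-2.0, -0.5)]
labels = [1, 1, -1, -1]
a, b, c = fit_linear_separator(points, labels)

# Classify a new data point by the sign of M = a*x + b*y + c
x, y = 1.5, 1.0
print("class:", 1 if a * x + b * y + c > 0 else -1)   # class: 1
```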
Example machine learning techniques that the techniques herein can employ may include, but are not limited to, nearest neighbor (NN) techniques (e.g., k-NN models, replicator NN models, etc.), statistical techniques (e.g., Bayesian networks, etc.), clustering techniques (e.g., k-means, mean-shift, etc.), neural networks (e.g., reservoir networks, artificial neural networks, etc.), support vector machines (SVMs), logistic or other regression, Markov models or chains, principal component analysis (PCA) (e.g., for linear models), multi-layer perceptron (MLP) artificial neural networks (ANNs) (e.g., for non-linear models), replicating reservoir networks (e.g., for non-linear models, typically for time series), random forest classification, or the like. Operationally,FIG.4illustrates an example architecture400for a page load monitoring service, according to various embodiments. At the core of architecture400is page load monitoring process248, which may be executed by a device that provides a page load monitoring service in a network, or another device in communication therewith. As shown, page load monitoring process248may include any or all of the following components: collector402, application metadata service404, normalization engine406, anomaly system408, and heuristic system410. As would be appreciated, the functionalities of these components may be combined or omitted, as desired. In addition, these components may be implemented on a singular device, for example, controller320, or in a distributed manner, in which case the combination of executing devices can be is viewed as their own singular device for purposes of executing page load monitoring process248. During execution, collector402may operate to aggregate test results from a plurality of agents, for example, agents310described with respect toFIG.3. As described herein above, such agents may be configured to perform a Page Load test that causes the agents to gather data or information indicative of one or more domains, resources, etc. that are included in a particular loading of a web application (e.g., a webpage). Additionally, the agents may perform multi-step transaction tests (e.g., a Transaction test), which, when performed, gathers similar information as that gathered from Page Load tests but from a different perspective (i.e., based on a transaction within a webpage, rather than merely the loading of a webpage). The tests performed by the agents may be performed by different types of agents that are geographically located all over the world as well as being located at different parts of internet infrastructure. For example, the agents may include endpoint agents, cloud agents, enterprise agents, etc., all of which are described in greater detail above. Collector402may be configured to aggregate the test results from a plurality of agents, where the tests are directed to a plurality of web applications or webpages. Collector402may also be configured to store the test results over a period of a time and to make the test results available to other components of page load monitoring process248. Collector402may operate to be in communication with plurality of browsers412at end user devices, where plurality of browsers412download resources (on SPAs) from different domains, for example, while accessing a web application. 
Using one or more resource timing application programming interfaces (APIs) and browser instrumentation that may be installed on the plurality of browsers412, resource location, timing, and size information/metrics may be gathered (e.g., start time, domain name system (DNS) time, connection time, load time, encoded size, transfer size, decoded size, etc.). Collector402may be configured to categorize or correlate a resource to a particular page of an SPA because collector402, based on the gathered information, may determine start and end times of the particular page. As will be understood in the art, collector402may also determine visually complete times (VCTs) of particular resources on a page (or SPA). Using the determined start and end times as well as VCTs of a page, collector402may generate page “snapshots” for visits or loadings of an SPA. It is to be understood that both real and synthetic agents (i.e., agents that mimic the behavior of end users) may contribute to generating data used for snapshots. Additionally, to determine a more accurate measurement of SPA end times, and thus load times (e.g., using the start times calculated above), tests performed by agents may track all of the resources loaded and rendered on an SPA page, and wait for a detected period of network inactivity. Some direct resources that may be considered include: images, scripts, and stylesheets. Other resources may also be considered, such as fonts or future types of resources, and those listed here are primary examples only. Note further that images, in particular, may also be rendered on a page after loading, and as such, the techniques herein may specifically consider both load and render time of images. “Hooks” (e.g., event loggers using a JSAgent in DOM) may be added to track any image, script, css (stylesheet), etc. added as a direct resource anytime on a page. The hooks then attach a load listener to each of these resources, such that when a load listener gets invoked, the load timestamp of that resource is noted (i.e., the time at which the resource completed loading or rendering). Accordingly, the resource having the latest/max load timestamp is the last resource to be loaded/rendered on the page. Application metadata service404of architecture400, shown inFIG.4, may be configured to obtain the information gathered by collector402and generate one or more hierarchies of a web application. As will be described in greater detail herein below, a hierarchy of the web application may include pages and resources (of each page) of the web application that visually indicates and delineates where both pages and resources are located. Application metadata service404may additionally be configured to monitor changes to any of the hierarchies that it generates, based on new information regarding the pages and resources (e.g., that is gathered by collector402). Normalization engine406of architecture400may be configured to normalize snapshots and/or data gathered by agents from performing tests, as described in greater detail above with respect to collector402. In particular, normalization engine406may normalize by applying, for example, one or more median statistical functions to the information gathered by collector402for pages of a web application (e.g., Page Pa, Page Pb, etc.) to generate normalized “baseline” snapshots of each page of a web application.
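A simplified sketch of one way baseline snapshots of the kind described above could be computed with median statistical functions, assuming each per-visit snapshot is a plain dictionary with hypothetical field names:

```python
from collections import defaultdict
from statistics import median
from typing import Dict, List

def build_normalized_snapshot(snapshots: List[dict]) -> Dict[str, dict]:
    """Collapse many per-visit snapshots into per-page baseline values.

    Expected (illustrative) input shape:
    {"page": "Pa", "vct_ms": 900.0, "load_ms": 1200.0,
     "resources": {"Pa_Ra": {"load_ms": 300.0}, ...}}
    """
    per_page: Dict[str, List[dict]] = defaultdict(list)
    for snap in snapshots:
        per_page[snap["page"]].append(snap)

    baseline: Dict[str, dict] = {}
    for page, snaps in per_page.items():
        resource_times: Dict[str, List[float]] = defaultdict(list)
        for snap in snaps:
            for name, res in snap["resources"].items():
                resource_times[name].append(res["load_ms"])
        baseline[page] = {
            "vct_ms": median(s["vct_ms"] for s in snaps),
            "load_ms": median(s["load_ms"] for s in snaps),
            "resources": {name: {"load_ms": median(times)}
                          for name, times in resource_times.items()},
        }
    return baseline
```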
Anomaly system408of architecture400is configured to compare normalized snapshots (or data) from normalization engine406with individual load times of web applications (for example, as loaded by agents at end user devices) to determine whether anomalous features are present for the individual load times. Further, anomaly system408may be configured to analyze trends of subsequent page VCTs and/or load times. In particular, normalized data may be fed into a pattern anomaly detection algorithm, for example, a median-absolute-deviation algorithm, to analyze the trends of page visual complete time and resource load time individually. Such analysis may lead to identification of page and/or resource anomalies. It is contemplated that a plurality of univariate pattern anomaly detection approaches, as understood in the art, may be implemented to detect anomalies in load times (of pages or resources). It is contemplated that normalization engine406may be configured to identify snapshots and/or test results for page loads of visits by agents to a particular webpage (or web application) over a particular number of days (e.g., 30 days) or any time period and to normalize the snapshots and/or test results. Normalization of these test results leads to determinations of expected load times, both for each SPA of the web application as well as for resources on each of the SPAs. It is contemplated that machine learning techniques may be used by normalization engine406to implement the normalization (or even standardization) of the test results. In other words, the machine learning techniques may generate a model of expected load times for SPAs and/or resources that are loaded during visits of a particular web application, where the model may be used to compare subsequent loads of the web application to determine whether any anomalous load times are present. Heuristic system410of architecture400may be configured to obtain one or more page or resource anomalies identified by anomaly system408and correlate the anomalies between pages and resources. Notably, heuristic system410may do so using information gathered by collector402and application metadata service404. In particular, after anomaly system408identifies that a page or resource has an anomalous feature, heuristic system410may categorize it into one of the following types: 1. a “zero resource anomaly,” which indicates that a page has an anomaly but that none of the resources on the page have anomalies; 2. a “one/single resource anomaly,” which indicates that the page has an anomaly and that only one resource on the page has an anomaly; 3. an “N resource anomaly belonging to the same domain” (e.g., a single domain anomaly), which indicates that the page has an anomaly and that multiple resources have anomalies, where the multiple resources are loaded from the same domain; or 4. an “N resource anomaly belonging to different domains” (e.g., a multi-domain anomaly), which indicates that the page has an anomaly and that multiple resources have anomalies, where the multiple resources are loaded from different domains. Post categorization (i.e., identification of a type for identified anomalous features), diverse heuristics may be applied to extract actionable insights414from each of the aforementioned types. In particular, for a zero resource anomaly, actionable insights414may include an analysis of one or more trends of resource counts per page for the page that exhibits the anomaly, performed to detect whether the page anomaly is because of a change in the page itself or front-end code slowness.
This analysis may be based on, for example, information gathered by collector402and application metadata service404. For a one resource anomaly type, actionable insights414may preliminarily include determining whether the resource that exhibits the anomaly is a static resource (e.g., an image file, a cascading style sheet (CSS), etc.) or a dynamic resource (e.g., an XHR request). In the case of the resource anomaly being a static resource, an analysis of one or more trends regarding a size of the resource over time may be performed to determine whether the resource has changed on a server (that hosts the file) or the anomaly is due to problems, issues, etc. with a compression/decompression system or a resource encoding system associated with the resource. It is contemplated that an accurate conclusion may be reached based on a change in an encoding size of the resource, even though the decoding size remains unchanged. In the case of the resource anomaly being a dynamic resource, an analysis may be performed, using synthetic user data, that determines whether the resource anomaly is due to backend slowness, issues, etc. based on observations of one or more trends of the resource anomaly's encoded size, decoded size, and load-time. For a single domain anomaly type, actionable insights414may include an analysis of one or more DNS time trends of resources belonging to the impacted domain, where such analysis determines whether there is a degradation in a CDN or a backend server of the single domain. For a multi-domain anomaly type, actionable insights414may include rules for categorizing the resources based on the individual domains (or resources), such that the multi-domain anomaly may be sub-divided into, for example, a one resource anomaly type or a single domain anomaly type. Afterwards, analysis of these sub-divisions may be performed as described above herein. In an embodiment, in addition to anomaly system408being configured to relay detected anomalous features to heuristic system410, anomaly system408may be configured to react to and/or mitigate any anomalous features, such as to automatically flag and report anomalous features via a notification (e.g., an alert) to operators of web applications or end-user devices that have visited a particular web application. In an embodiment, anomaly system408may be configured to automatically block, remove, or revert anomalous features from a loading of a web application. In another embodiment, anomaly system408may be configured to feed identified anomalous features into firewalls of associated devices (e.g., in networks and/or sub-networks where agents310are installed), where doing so results in termination of active TLS connections with the anomalous features, for example. Anomaly system408may also be configured to generate a variety of graphical user interface-based (GUI-based) images and/or maps that indicate locations of anomalous features. For example, geographic locations may be correlated with anomalous features and may be displayed in a heat map (e.g., a global view) that indicates physical locations of the anomalous features. Additional information that may be displayed in the heat map includes a number of anomalous features, where an excess amount, as understood in the art, may indicate an internet-based attack. Turning now toFIG.5, example test results500of a load test performed by an agent are shown.
Turning now toFIG.5, example test results500of a load test performed by an agent are shown. In particular, an agent may perform a load test (as shown, TestID: 7890) that is configured to test a variety of parameters for a visit by the agent to a particular web application or SPA, which is shown as "Domain A" inFIG.5. Such parameters may include throughput and response time, which are indicated by corresponding throughput pane502and total response time pane504. Throughput pane502includes throughputs to Domain A, Domain B, and Domain C, which were all loaded during the visit by the agent to Domain A. Total response time pane504includes the response times for each of Domain A, Domain B, and Domain C during the visit by the agent to Domain A. Test results500may also include a waterfall view pane506, which in addition to displaying each of the domains visited during a loading (e.g., page load) of a visit to Domain A, also displays each of the services that may be at sub-levels of those domains, loaded in temporal order during the agent's visit to Domain A. Notably, waterfall view pane506indicates that the agent loaded particular resources/services508of Domain A, then particular resources/services510of Domain B, then particular resources/services512of Domain C. For each of these services508-512, waterfall view pane506may include a corresponding identifier, file size, and specific response time. While the test results500are shown indicating that various levels of domains are being tracked (e.g., a first level and a second level), it is contemplated that tests performed by the agents may be configured to track only higher (or even lower) levels. In other words, various limitations may be placed on the level of domains being tracked, meaning second level domains, third level domains, and so on. FIG.6illustrates example snapshots600that are indicative of loadings of single page applications of a web application. Snapshots600may be collected by collector402from browsers412as described above with respect toFIG.4. In the example snapshots600shown, "Sample 1"602and "Sample 2"604are snapshots generated for a page of a web application, Page Pa, at a same time, T1. "Sample 1"602and "Sample 2"604, however, may be assumed to have data indicative of visits by different users. Further, "Sample 3"606may be a snapshot generated for a different page of the web application, Page Pb, at time, T2. Each of the samples602-606may have corresponding page details608(regarding load times and VCTs of a page) and resource details610(regarding DNS times, connection times, and load times of resources on the page). For instance, for "Sample 1"602, data may be gathered by collector402which indicates that it has a VCT of Pa1_VCT and a load time of Pa1_LT in a corresponding page detail field612. Collector402may have also gathered data that indicates that Page Pa includes fields for Resource Pa_Ra614, Resource Pa_Rb616, and Resource Pa_Rc616. Each of the fields for the resources may indicate a DNS time, connection time, and a load time for a corresponding resource.
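The snapshots ofFIG.6may be represented, purely for illustration, with data structures along the following lines; the field names (vct_ms, load_ms, dns_ms, etc.) are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ResourceDetail:
    name: str          # e.g., "Pa_Ra"
    domain: str        # domain the resource was loaded from
    dns_ms: float      # DNS time
    connect_ms: float  # connection time
    load_ms: float     # resource load time

@dataclass
class PageSnapshot:
    page: str                      # e.g., "Pa"
    timestamp: float               # e.g., T1
    vct_ms: float                  # visual complete time (VCT) of the page
    load_ms: float                 # page load time
    resources: List[ResourceDetail] = field(default_factory=list)

# "Sample 1" of FIG. 6 expressed in this form:
sample1 = PageSnapshot(
    page="Pa", timestamp=1.0, vct_ms=900.0, load_ms=1200.0,
    resources=[ResourceDetail("Pa_Ra", "a.example", 12.0, 35.0, 180.0),
               ResourceDetail("Pa_Rb", "a.example", 10.0, 30.0, 150.0),
               ResourceDetail("Pa_Rc", "b.example", 15.0, 40.0, 220.0)])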
An example hierarchy700is shown inFIG.7for a particular web application702. As described herein above, application metadata service404may be configured to generate hierarchy700, for example, based on the information indicated in snapshots600. InFIG.7, particular web application702("1") may have indications for Page Pa704and Page Pb706. Further, Page Pa704may have resources indicated in the hierarchy700: Resource Pa_Ra708, Resource Pa_Rb710, and Resource Pa_Rc712. Page Pb706may have resources indicated in the hierarchy700: Resource Pb_Rd714, Resource Pb_Re716, and Resource Pb_Rf718. Application metadata service404may additionally be configured to monitor changes to hierarchy700and to periodically update it, accordingly. FIG.8illustrates an example normalized snapshot800of a page of a web application. Normalized snapshot800may be generated by normalization engine406, in part, based on snapshots600. In the example, "Normalized Snapshot1"802may include a page detail field, which includes normalized values (e.g., medians) for each of the pages available on the web application, shown as Pages Pa-Pb804. Additionally, a resource field804may include normalized values for resources available on each of Pages Pa-Pb804. In particular, there may be normalized values for Resource Pa_Ra806, Resource Pa_Rb808, Resource Pa_Rc810, all the way to a Resource Pa_R[n]812. Anomaly system408may utilize normalized snapshot800to compare subsequent loadings of a web application, SPA(s) of the web application, or resource(s) in the SPA(s) to determine whether the subsequent loadings include anomalous features (e.g., in load times). For example, as shown inFIG.8, different pages (e.g., Page Pa, Page Pb, etc.) or particular resources (e.g., Pa_Ra, Pa_Rb, etc.) may be compared by anomaly system408. In another example, the Pa_nVCT time is fed to anomaly system408to detect anomalous features by comparing it against its historical data, and Ra_nLT may be fed to anomaly system408to detect anomalies in the load time of resource Pa_Ra. With reference toFIGS.9A-9D, examples of types of anomalous features are shown. InFIG.9A, example hierarchy900is shown, where shading902for Page Pa704indicates that there is a detected page anomaly present with Page Pa704. Such a page anomaly is indicative of there being zero resource anomalies, and this anomaly may be treated as a zero resource anomaly type according to actionable insights414. InFIG.9B, example hierarchy900is shown, where shading904for Resource Pa_Ra708indicates that there is a detected resource anomaly present within Page Pa704. Because this anomaly is only present with one resource, it may be treated as a one resource anomaly according to actionable insights414. InFIG.9C, example hierarchy900is shown, where shading906indicates that there are detected resource anomalies for Resource Pa_Ra708and Resource Pa_Rb710. Since shading906is of the same kind, it indicates that the resource anomalies detected for Resource Pa_Ra708and Resource Pa_Rb710are from the same domain. The resource anomalies indicated inFIG.9C, then, may be treated as a single domain anomaly according to actionable insights414. InFIG.9D, example hierarchy900is shown, where shading908and shading910indicate that there are detected resource anomalies for Resource Pa_Ra708and Resource Pa_Rc712. The difference between shading908and shading910indicates that the resource anomalies detected for Resource Pa_Ra708and Resource Pa_Rc712are from different domains. The resource anomalies indicated inFIG.9D, then, may be treated as a multi-domain anomaly according to actionable insights414.
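As a further illustration, the hierarchy ofFIG.7and the median-based normalized snapshot ofFIG.8may be derived from collected snapshots roughly as follows, reusing the PageSnapshot structure sketched above; all names remain hypothetical.

from statistics import median

def build_hierarchy(snapshots):
    """Web application hierarchy: page -> set of resource names (FIG. 7)."""
    hierarchy = {}
    for s in snapshots:
        hierarchy.setdefault(s.page, set()).update(r.name for r in s.resources)
    return hierarchy

def normalize(snapshots):
    """Median VCT per page (nVCT) and median load time per resource (nLT),
    forming the normalized snapshot of FIG. 8."""
    page_vcts, resource_lts = {}, {}
    for s in snapshots:
        page_vcts.setdefault(s.page, []).append(s.vct_ms)
        for r in s.resources:
            resource_lts.setdefault((s.page, r.name), []).append(r.load_ms)
    return ({page: median(v) for page, v in page_vcts.items()},
            {key: median(v) for key, v in resource_lts.items()})

# Usage (with snapshots such as `sample1` above, plus further samples):
# hierarchy = build_hierarchy(samples)
# n_vct, n_lt = normalize(samples)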
Additionally, heuristic system410may cause an example anomaly detection notification1000, such as the example shown inFIG.10, to be generated and displayed at an end-user device in response to the detection of the anomalous pages or resources. Anomaly detection notification1000may include an identifier of the particular test that was performed (as shown, TestID: 1234) as well as indications of the nature of the anomalous pages and/or resources (e.g., the address of the domain, type of loaded object from the domain, and a categorization of the anomalous domain). In closing,FIG.11illustrates an example simplified procedure for detecting and identifying anomalies for single page applications in accordance with one or more embodiments described herein, particularly from the perspective of either an edge device or a controller. For example, a non-generic, specifically configured device (e.g., device200, particularly a monitoring device) may perform procedure1100by executing stored instructions (e.g., page load monitoring process248). The procedure1100may start at step1105and continue to step1110, where, as described in greater detail above, a device may obtain page load information corresponding to a loaded web application. At step1115, the device may detect, based on the page load information, an anomalous feature of the loaded web application (e.g., based on one or more baselines, expectations, and so on, as described above). In an embodiment, the page load information may comprise a page load time of a page of the loaded web application. In an embodiment, the page load information may comprise a resource load time of a resource from a page of the loaded web application. At step1120, the device may identify a type of the anomalous feature based on a number of resource anomalies within the loaded web application, wherein the type of the anomalous feature is selected from a group consisting of: a page anomaly; a resource anomaly; and a domain anomaly. In an embodiment, the type of the anomalous feature may be: the page anomaly when the number of resource anomalies is zero, the resource anomaly when the number of resource anomalies is one, and the domain anomaly when the number of resource anomalies is greater than one. In another embodiment, the domain anomaly may be: a single domain anomaly when resource anomalies within the loaded web application belong to a particular domain, and a multi-domain anomaly when the resource anomalies belong to a plurality of domains. At step1125, the device may perform one or more mitigation actions according to the type of the anomalous feature. In an embodiment, the one or more mitigation actions may comprise causing a graphical user interface to display an indication of the type of the anomalous feature at an end-user device. The simplified procedure1100may then end in step1130, notably with the ability to continue ingesting and processing page load data for further anomalies. Other steps may also be included generally within procedure1100. For example, such steps (or, more generally, such additions to steps already specifically illustrated above) may include: aggregating, by the device, page load information corresponding to the loaded web application; and determining, by the device, one or more baselines of expected page load times for the loaded web application, wherein detecting, by the device and based on the page load information, the anomalous feature of the loaded web application is further based on the one or more baselines of expected page load times; generating, by the device and based on the page load information, a hierarchy of the loaded web application; and so on.
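Tying the illustrative helpers above together, procedure1100(steps1110-1125) might be sketched as follows; the dictionary layout and all names are assumptions rather than the actual page load monitoring process248.

def procedure_1100(page_load_info, history):
    """`page_load_info` holds the current load ('page', 'vct_ms', and a list
    of resource dicts with 'name', 'domain', 'static', 'load_ms'); `history`
    holds baseline lists keyed by page and by (page, resource)."""
    page = page_load_info["page"]
    resources = page_load_info["resources"]

    # Steps 1110/1115: obtain page load information and detect an anomalous
    # feature against the baseline history.
    if not mad_anomaly(history["page_vct"][page], page_load_info["vct_ms"]):
        return None  # no page-level anomaly detected

    anomalous = [r for r in resources
                 if mad_anomaly(history["resource_lt"][(page, r["name"])],
                                r["load_ms"])]

    # Step 1120: identify the type of the anomalous feature.
    anomaly_type = categorize_anomaly(anomalous)

    # Step 1125: perform one or more mitigation actions, e.g., surface the
    # type and insight in a notification such as notification 1000.
    return {"type": anomaly_type,
            "insight": actionable_insight(anomaly_type, anomalous)}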
It should be noted that while certain steps within procedure1100may be optional as described above, the steps shown inFIG.11are merely examples for illustration, and certain other steps may be included or excluded as desired. Further, while a particular order of the steps is shown, this ordering is merely illustrative, and any suitable arrangement of the steps may be utilized without departing from the scope of the embodiments herein. The techniques described herein, therefore, provide for detecting and identifying anomalies for single page applications. In particular, the techniques herein provide the ability to categorize or group page anomalies and/or resource anomalies of a web application into types of anomalies: a page anomaly; a resource anomaly; and a domain anomaly (single domain or multi-domain). Based on the type of anomaly, one or more actionable insights may be identified, using the heuristics described herein, to address various kinds of root causes of anomalies that may occur with respect to SPAs (i.e., to identify one or more mitigation actions). Notably, bottlenecks like poorly performing compression engines, CDNs (or domains), frontend code issues, backend server issues, etc. may be identified, based on the analysis of page load performance as described herein. Illustratively, the techniques described herein may be performed by hardware, software, and/or firmware, such as in accordance with the illustrative page load monitoring process248, which may include computer executable instructions executed by the processor220to perform functions relating to the techniques described herein, e.g., in conjunction with corresponding processes of other devices in the computer network as described herein (e.g., on network agents, controllers, computing devices, servers, etc.). In addition, the components herein may be implemented on a singular device or in a distributed manner, in which case the combination of executing devices can be viewed as their own singular "device" for purposes of executing the page load monitoring process248. According to the embodiments herein, an illustrative method herein may comprise: obtaining, at a device, page load information corresponding to a loaded web application; detecting, by the device and based on the page load information, an anomalous feature of the loaded web application; identifying, by the device, a type of the anomalous feature based on a number of resource anomalies within the loaded web application, wherein the type of the anomalous feature is selected from a group consisting of: a page anomaly; a resource anomaly; and a domain anomaly; and performing, by the device, one or more mitigation actions according to the type of the anomalous feature. In one embodiment, the type of the anomalous feature is: the page anomaly when the number of resource anomalies is zero, the resource anomaly when the number of resource anomalies is one, and the domain anomaly when the number of resource anomalies is greater than one. In one embodiment, the domain anomaly is: a single domain anomaly when resource anomalies within the loaded web application belong to a particular domain, and a multi-domain anomaly when the resource anomalies belong to a plurality of domains. In one embodiment, the method may further comprise: aggregating, by the device, page load information corresponding to the loaded web application; and determining, by the device, one or more baselines of expected page load times for the loaded web application.
In one embodiment, detecting, by the device and based on the page load information, the anomalous feature of the loaded web application is further based on the one or more baselines of expected page load times. In one embodiment, the method may further comprise: generating, by the device and based on the page load information, a hierarchy of the loaded web application. In one embodiment, the page load information comprises a page load time of a page of the loaded web application. In one embodiment, the page load information comprises a resource load time of a resource from a page of the loaded web application. In one embodiment, the one or more mitigation actions comprises causing a graphical user interface to display an indication of the type of the anomalous feature at an end-user device. In one embodiment, the page load information is obtained from an agent selected from a group consisting of: a cloud agent, an enterprise agent, and an endpoint agent. According to the embodiments herein, an illustrative tangible, non-transitory, computer-readable medium herein may have computer-executable instructions stored thereon that, when executed by a processor on a computer, may cause the computer to perform a method comprising: obtaining page load information corresponding to a loaded web application; detecting, based on the page load information, an anomalous feature of the loaded web application; identifying a type of the anomalous feature based on a number of resource anomalies within the loaded web application, wherein the type of the anomalous feature is selected from a group consisting of: a page anomaly; a resource anomaly; and a domain anomaly; and performing one or more mitigation actions according to the type of the anomalous feature. Further, according to the embodiments herein, an illustrative apparatus herein may comprise: one or more network interfaces to communicate with a network; a processor coupled to the network interfaces and configured to execute one or more processes; and a memory configured to store a process that is executable by the processor, the process, when executed, configured to: obtain page load information corresponding to a loaded web application; detect, based on the page load information, an anomalous feature of the loaded web application; identify a type of the anomalous feature based on a number of resource anomalies within the loaded web application, wherein the type of the anomalous feature is selected from a group consisting of: a page anomaly; a resource anomaly; and a domain anomaly; and perform one or more mitigation actions according to the type of the anomalous feature. While there have been shown and described illustrative embodiments above, it is to be understood that various other adaptations and modifications may be made within the scope of the embodiments herein. For example, while certain embodiments are described herein with respect to certain types of networks in particular, the techniques are not limited as such and may be used with any computer network, generally, in other embodiments. Moreover, while specific technologies, protocols, and associated devices have been shown, such as Java, TCP, IP, and so on, other suitable technologies, protocols, and associated devices may be used in accordance with the techniques described above. In addition, while certain devices are shown, and with certain functionality being performed on certain devices, other suitable devices and process locations may be used, accordingly.
That is, the embodiments have been shown and described herein with relation to specific network configurations (orientations, topologies, protocols, terminology, processing locations, etc.). However, the embodiments in their broader sense are not as limited, and may, in fact, be used with other types of networks, protocols, and configurations. Moreover, while the present disclosure contains many other specifics, these should not be construed as limitations on the scope of any embodiment or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments. Certain features that are described in this document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Further, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination. For instance, while certain aspects of the present disclosure are described in terms of being performed "by a server" or "by a controller" or "by a collection engine", those skilled in the art will appreciate that agents of the observability intelligence platform (e.g., application agents, network agents, language agents, etc.) may be considered to be extensions of the server (or controller/engine) operation, and as such, any process step performed "by a server" need not be limited to local processing on a specific server device, unless otherwise specifically noted as such. Furthermore, while certain aspects are described as being performed "by an agent" or by particular types of agents (e.g., application agents, network agents, endpoint agents, enterprise agents, cloud agents, etc.), the techniques may be generally applied to any suitable software/hardware configuration (libraries, modules, etc.) as part of an apparatus, application, or otherwise. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in the present disclosure should not be understood as requiring such separation in all embodiments. The foregoing description has been directed to specific embodiments. It will be apparent, however, that other variations and modifications may be made to the described embodiments, with the attainment of some or all of their advantages. For instance, it is expressly contemplated that the components and/or elements described herein can be implemented as software being stored on a tangible (non-transitory) computer-readable medium (e.g., disks/CDs/RAM/EEPROM/etc.) having program instructions executing on a computer, hardware, firmware, or a combination thereof. Accordingly, this description is to be taken only by way of example and not to otherwise limit the scope of the embodiments herein.
Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true intent and scope of the embodiments herein.
72,034
11860762
DESCRIPTION OF EMBODIMENTS Hereinafter, with reference to the drawings, example embodiments of the present disclosure will be described in detail. Throughout the drawings, the same or corresponding elements are denoted by the same symbols and overlapping descriptions will be omitted as necessary for the sake of clarification of the description. First Example Embodiment FIG.1is a block diagram showing a configuration of a semiconductor device100according to a first example embodiment. The semiconductor device100is a control apparatus or an information processing apparatus such as a processor that controls execution of a predetermined program. Further, the semiconductor device100may be mounted on an electronic device such as a computer or an information processing apparatus, or an Internet of Thing (IoT) device or an embedded device. The semiconductor device100includes a first storage unit110, a second storage unit120, and a prediction unit130. The first storage unit110is a storage apparatus storing a plurality of pieces of execution order inspection information111to11n(n is a natural number equal to or larger than two) in advance. The execution order inspection information111and so on are information used for inspection of an execution order of a plurality of code blocks in a predetermined program. In other words, the execution order inspection information111and so on are information for specifying an execution order of a plurality of code blocks in a predetermined program. For example, the execution order inspection information111and so on are, but not limited to, numerical information indicating an execution order of a plurality of code blocks in a predetermined program, a hash value calculated based on an execution order, a code block or the like. The second storage unit120is a storage apparatus that operates faster than the first storage unit110and serves as a cache for the first storage unit110. Therefore, at least a part of the information in the first storage unit110is prefetched to the second storage unit120. Further, when, for example, the first storage unit110is a hard disk, the second storage unit120is a memory or a cache memory. Further, when the first storage unit110is a memory, the second storage unit120is a cache memory. Note that the examples of the first storage unit110and the second storage unit120are not limited to them. The prediction unit130predicts a storage area of the execution order inspection information111and so on to be prefetched from the first storage unit110to the second storage unit120based on prediction auxiliary information and a control flow graph of the aforementioned program. Alternatively, it can also be said that the prediction unit130determines whether or not the execution order inspection information111and so on are to be prefetched. The prediction auxiliary information, which is information in a first code block among the plurality of code blocks, is information for assisting prediction by the prediction unit130. For example, the prediction auxiliary information includes at least one of input values to the first code block of the plurality of code blocks, internal state variable values when the first code block is executed, and the priority of execution of paths that may be executed after being branched off from the first code block. However, the prediction auxiliary information is not limited to them. 
Further, it is assumed that the prediction unit130predicts the storage area of the execution order inspection information regarding the execution order that corresponds to the first code block or code blocks that may be executed after the first code block based on the control flow graph as a prefetch target. FIG.2is a flowchart showing a flow of prediction processing of a control flow inspection method according to the first example embodiment. First, the prediction unit130acquires the prediction auxiliary information in the first code block among the plurality of code blocks in a predetermined program and the control flow graph of this program (S11). Note that Step S11may be performed when a predetermined program is executed or before the program execution starts. Next, the prediction unit130predicts the storage area of the execution order inspection information to be prefetched from the first storage unit110to the second storage unit120based on the prediction auxiliary information and the control flow graph (S12). Therefore, according to this example embodiment, the storage area of the prefetch target that has been predicted may be prefetched from the first storage unit110to the second storage unit120. As a result, the execution order inspection information that corresponds to the code block that is currently being executed or code blocks that may be executed after this code block is prefetched to the second storage unit120. Then, when inspection of control flow integrity of a predetermined code block is executed later, an access is made to the second storage unit120, resulting in a higher probability that execution order inspection information that corresponds to the current execution order can be acquired. Further, in the case of cache hit (when the execution order inspection information has been successfully acquired), it is possible to acquire the execution order inspection information faster than in a case in which an access is made to the first storage unit110. Therefore, the processing speed for checking control flow integrity of a predetermined code block (processing of comparing the execution order inspection information etc.) is increased as well. Accordingly, with this example embodiment, it is possible to reduce processing overhead while maintaining device security. Note that the semiconductor device100includes, as components that are not shown, a processor, a memory, and another storage apparatus. The other storage apparatus stores a computer program in which the prediction processing of the control flow inspection method according to this example embodiment is implemented. Then, this processor loads a computer program into the memory from the storage apparatus and executes the loaded computer program. Accordingly, the processor implements the function of the prediction unit130. Alternatively, the prediction unit130may be implemented by dedicated hardware. Further, some or all of the components of the prediction unit130may be implemented by general-purpose or dedicated circuitry, processor, or a combination of them. They may be configured using a single chip, or a plurality of chips connected through a bus. Some or all of the components of each apparatus may be implemented by a combination of the above-described circuitry, etc. and a program. Further, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a field-programmable gate array (FPGA), an ARM (registered trademark) architecture and so on may be used as a processor. 
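By way of a non-limiting illustration, the prediction and prefetch of the first example embodiment (FIG.2, Steps S11and S12) may be sketched in Python as follows; the class and method names, the control flow graph encoding, and the selection policy are assumptions rather than the actual implementation of the semiconductor device100.

class PredictionUnit:
    """Predicts which execution order inspection information to prefetch from
    the first storage unit (slower) into the second storage unit (the cache)."""

    def __init__(self, first_storage, second_storage, cfg):
        self.first_storage = first_storage    # dict: code block id -> inspection info
        self.second_storage = second_storage  # cache: code block id -> inspection info
        self.cfg = cfg                        # dict: code block id -> successor block ids

    def predict_prefetch_targets(self, current_block, aux_info):
        # Step S11: the current block and its prediction auxiliary information
        # (e.g., input values, internal state, per-path priority) are acquired.
        successors = self.cfg.get(current_block, [])
        # Step S12: a trivial policy -- prefer the successor hinted at by the
        # auxiliary information, otherwise consider all successors.
        hinted = aux_info.get("likely_successor")
        return [hinted] if hinted in successors else successors

    def prefetch(self, current_block, aux_info):
        for block in self.predict_prefetch_targets(current_block, aux_info):
            if block in self.first_storage:
                self.second_storage[block] = self.first_storage[block]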
Second Example Embodiment A second example embodiment is a specific example of the aforementioned first example embodiment. FIG.3is a block diagram showing a configuration of a device200according to the second example embodiment. The device200is one example of the aforementioned semiconductor device100. The device200at least includes a program execution unit210, an inspection unit220, a prediction unit230, a control flow graph240, a cache250, and a memory260. The program execution unit210is a control apparatus that executes an execution target code block211in a predetermined program. It can be further said that the program execution unit210is a main process for executing a predetermined program. Alternatively, the program execution unit210may be, for example, but not limited to, a processor core. The predetermined program is formed of one or more modules and one or more code blocks are implemented in each module. Then, each code block is, for example, a set of program codes of units such as functions. It is further assumed that an inspection instruction212of control flow integrity is set in a part of each code block. The inspection instruction212, which is an instruction for calling the inspection unit220that will be described later, is, for example, a function. A plurality of inspection instructions212may be set in one code block. It is assumed that the inspection instruction212may be set in any position in the code block, such as at the beginning, in the middle, or at the end of the code block. The inspection instruction212can be set between code blocks as well. This case is assumed to be equal to a case in which the inspection instruction is set in the end of the code block immediately before the inspection instruction212or a case in which the inspection instruction is set at the top of the code block immediately after the inspection instruction212. The inspection instruction212may be implemented on a source code when the program is developed or may be inserted into a binary after compilation. The inspection unit220is implementation of inspection processing that corresponds to the inspection instruction. The inspection unit220performs inspection of the control flow integrity regarding the code block that is currently being executed in the program execution unit210or a code block that may be executed immediately after the above code block in accordance with the call of the inspection instruction from the program execution unit210. The inspection unit220outputs, when it has been determined in the inspection that there is no problem, information indicating that execution of the subsequent processing of this code block will be allowed to the program execution unit210and outputs, when it has been determined in the inspection that there is a problem, information indicating that execution of the subsequent processing of this code block will be blocked to the program execution unit210. Note that the inspection unit220may be implemented as a software process executed on the processor core in the device200, like the program execution unit210, or may be implemented as a hardware circuit. The control flow graph (CFG)240is graph information that defines the result of the analysis of the control structure of the program as the flow of control between code blocks. The control flow graph240, which is stored in the memory260or another storage apparatus, functions as a database. 
The control flow graph240is information that defines the position and the execution order of each code block in a program, and defines a link from the code block executed first to the code block to be executed next, a link to a branch destination etc. FIG.4is a diagram for describing the concept of the CFG and the CFI inspection according to the second example embodiment. A module31, which is a set of code blocks, is, for example, a program file or the like. A code block32, which is a set of one or more program codes, is, for example, a function or the like. An inspection instruction33, which is an inspection instruction set in a code block, conceptually indicates that the inspection unit220has been called. Referring once again toFIG.3, the explanation will be continued. The memory260, which is one example of the aforementioned first storage unit110, is, for example, a Random Access Memory (RAM). The memory260stores all the pieces of execution order inspection information271to27nthat correspond to the respective code blocks in the control flow graph240. In the execution order inspection information271, an execution order2711and a hash value2712are associated with each other. The hash value2712is a value calculated by a predetermined hash function from a numerical value indicating the execution order2711in a specific code block. Likewise, in the execution order inspection information27n, an execution order27n1and a hash value27n2are associated with each other. Note that the execution order inspection information may include any one of a hash value calculated based on a path of the control flow regarding two or more of a plurality of code blocks, the execution order itself, a set of execution orders and the like. For example, the hash value2712may be a value calculated by a predetermined hash function from a set of the path of the control flow of two or more code blocks and the number of the execution order. Alternatively, the execution order inspection information271may not use the hash value2712and may be the execution order2711itself. The cache250is one example of the aforementioned second storage unit120. The cache250is, for example, a cache memory that operates faster than the memory260. The cache250stores the execution order inspection information270and the like. It is further assumed that the cache250stores at least one of the pieces of execution order inspection information271to27nin the memory260. The prediction unit230is one example of the aforementioned prediction unit130. The prediction unit230is implementation of prediction processing of a part of inspection processing performed by the inspection unit220. Therefore, the prediction unit230may be implemented as a software process executed on the processor core in the device200or may be implemented as a hardware circuit. The prediction unit230specifies the second code block that may be executed after the first code block based on the prediction auxiliary information and the control flow graph240, and specifies the path of the control flow from the first code block to the second code block. Then, the prediction unit230predicts the storage area of the execution order inspection information that corresponds to the specified path as the prefetch target. Accordingly, it is possible to prefetch the execution order inspection information that corresponds to each of a plurality of code blocks included in the path and the cache hit rate of the execution order inspection information may be improved. 
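For illustration only, the execution order inspection information held in the memory260and mirrored in the cache250might look as follows; the hash function and entry layout are assumptions.

import hashlib

def order_hash(execution_order, path=()):
    """Hash value derived from an execution order and, optionally, from the
    control-flow path of two or more code blocks, as described above."""
    data = ",".join(map(str, path)) + "|" + str(execution_order)
    return hashlib.sha256(data.encode()).hexdigest()

# Memory 260: every entry, keyed by execution order (271 ... 27n).
memory_store = {order: {"order": order, "hash": order_hash(order)}
                for order in range(1, 11)}

# Cache 250: only the prefetched subset, e.g., the orders on a predicted path.
cache_store = {order: memory_store[order] for order in (3, 4, 5)}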
It is assumed here that the prediction auxiliary information includes at least one of the input values to the first code block, the internal state variable values when the first code block is executed, and the priority of execution of the paths that may be executed after being branched off from the first code block. Further, the prediction unit230may specify a first position of the first code block in the program based on the control flow graph240and specify a second position included in the control flow from the first position as the second code block based on the result of the analysis of the prediction auxiliary information and the control flow graph. It is therefore possible to specify the path more appropriately. Further, the prediction unit230may predict, when the execution frequency of the specified path is higher than those of other paths that may be executed by being branched off from the first code block, the storage area of the execution order inspection information that corresponds to the specified path as the prefetch target. Accordingly, the cache hit rate of the execution order inspection information may be improved. Further, the prediction unit230may predict, when the number of code blocks included in the specified path is equal to or larger than a predetermined value, the storage area of the execution order inspection information that corresponds to the specified path as a prefetch target. According to this procedure, the execution order inspection information on the path that is highly likely to be executed later is prefetched, and the cache hit rate of the execution order inspection information may be improved. Further, the prediction unit230may specify the path so as to include three or more code blocks. By pre-reading the execution order inspection information of a multiple steps ahead, the efficiency of reading out the execution order inspection information from the cache may be improved. Further, the prediction unit230predicts the storage area as a prefetch target in accordance with the execution of the inspection instruction of the control flow integrity configured in the first code block and prefetches the predicted storage area from the memory260to the cache250. Then, the prediction unit230determines, when the inspection instruction of the control flow integrity configured in the third code block executed after the first code block is executed, one of the cache250and the memory260as the access destination based on the prediction auxiliary information in the third code block. Next, the prediction unit230acquires the first execution order inspection information that corresponds to the current execution order of the third code block from the determined access destination. At this time, the inspection unit220calculates the second execution order inspection information that corresponds to the current execution order of the third code block. Then, the inspection unit220inspects whether it is possible to execute a code block executed after the third code block in accordance with the result of comparing the first execution order inspection information acquired by the prediction unit230with the calculated second execution order inspection information. FIG.5is a flowchart showing a flow of preliminary processing according to the second example embodiment. The preliminary processing is processing of generating a hash value and a CFG from the program to be executed and storing the generated hash value and CFG. 
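A minimal sketch of such preliminary processing, under the assumption that the program has already been analyzed into code blocks with successors and execution orders, and reusing the hypothetical order_hash helper above:

def preliminary_processing(analyzed_blocks):
    """`analyzed_blocks` maps a code block id to its successors and execution
    order, e.g. {"main": {"next": ["parse"], "order": 1}, ...}."""
    # Generate the control flow graph (corresponding to control flow graph 240).
    cfg = {block: info["next"] for block, info in analyzed_blocks.items()}
    # Calculate a hash value per execution order and associate the two
    # (corresponding to the execution order inspection information in memory 260).
    inspection_info = {info["order"]: {"order": info["order"],
                                       "hash": order_hash(info["order"])}
                       for info in analyzed_blocks.values()}
    return cfg, inspection_info

cfg_240, inspection_db = preliminary_processing({
    "main":  {"next": ["parse"], "order": 1},
    "parse": {"next": ["run"],   "order": 2},
    "run":   {"next": [],        "order": 3},
})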
The preliminary processing can be implemented by the device200or a desired information processing apparatus. It is assumed in the following description that the preliminary processing is executed by a desired information processing apparatus. First, the information processing apparatus analyzes the program that will be executed in the device200(S21). Then, the information processing apparatus generates a control flow graph (CFG) of this program based on the result of the analysis (S22). Then, the information processing apparatus inserts an inspection instruction into each code block of the program based on the CFG. For example, the inspection instruction is inserted as shown inFIG.4described above. When the inspection instruction (processing of calling the inspection unit220) has already been implemented in the program to be analyzed, Step S23may be omitted. Next, the information processing apparatus calculates the hash value of the execution order of each code block based on the CFG (S24). For example, as described above, the information processing apparatus gives an execution order to a predetermined hash function and calculates the hash value. After that, the information processing apparatus associates the calculated hash value with the execution order and stores the associated information as the execution order inspection information in the memory260in the device200(S25). Further, the information processing apparatus stores the generated CFG in a storage apparatus (not shown) in the device200as the control flow graph240. FIG.6is a flowchart showing a flow of control flow inspection processing during execution of a program according to the second example embodiment. In this example, a case in which control flow inspection processing is performed when a program that has created the CFG and the hash value in the above preliminary processing is executed in the device200will be described. First, the program execution unit210starts executing the program (S31). For example, a processor core in the device200loads a program to be executed (externally input) into the memory260and executes the loaded program. Next, the program execution unit210executes an inspection instruction during the execution of the predetermined code block and calls the inspection unit220. At this time, the program execution unit210further notifies the inspection unit220of the current execution order of the code block to be executed. Then, the inspection unit220acquires the current execution order of the execution target code block (S32). Then, the inspection unit220calculates a hash value based on the acquired execution order (S33). It is assumed here that the way of calculating the hash value, the hash function to be used, etc. are similar to those in the preliminary processing (Step S24). Further, the inspection unit220causes the prediction unit230to execute preliminary calculation hash value acquisition processing independently of Step S33(S34). Note that the inspection of the control flow may be performed by not using hash values and by directly comparing the execution order of the execution target code block. Specifically, the execution order of the execution target code block may be compared with a pre-defined correct execution order and it may be inspected whether they match each other. FIG.7is a flowchart showing a flow of preliminary calculation hash value acquisition processing according to the second example embodiment. 
First, the prediction unit230analyzes the input values to the execution target code block (S341). When, for example, the processing of comparing the input values with the predetermined values is implemented in the execution target code block and the subsequent processing (code block) is branched in accordance with the result of the comparison, in Step S341, the prediction unit230performs processing of comparing the input values with the predetermined values and determines the results of the comparison to be the result of the analysis. For example, the prediction unit230sets the result of comparison indicating that the input value is smaller than or it is equal to or larger than a predetermined value as the result of the analysis. The input values greatly affect the operation of this code block or the subsequent code blocks, including branch determination. By analyzing the input values, for example, buffer overflow can be detected and the code block which is a branch destination, can be predicted more accurately. Note that the input value is one example of the prediction auxiliary information. Therefore, in Step S341, in addition to the input values or in place of the input values, the internal state variable values when the execution target code block is executed may instead be analyzed. In this case, the prediction unit230may acquire the internal state variable values of the execution target code block from the program execution unit210. Then, the prediction unit230may perform processing of comparing the internal state variable values, like the processing performed using the input values, and the result of the comparison is used as the result of the analysis. For example, even when input values are the same, the internal state variable values may be changed every time the processing is repeatedly executed. Therefore, by performing prediction in view of the internal state variable values, the code block which is the branch destination can be predicted more accurately. Alternatively, in addition to the input values and the internal state variable values, or in place of the input values and the internal state variable values, the priority of execution of paths that may be executed after being branched off from the first code block may instead be analyzed. The priority may be set for each path in preliminary processing in advance. For example, Internet of Thing (IoT) devices strongly require processing be performed in real time. Therefore, a high priority is set in the CFG in advance for a code block in which processing that requires a response to be made within a certain period of time is implemented, whereby it becomes easy to ensure real-time property. Then, the prediction unit230predicts the path of the control flow and the prefetch target based on the result of the analysis and the control flow graph240(S342). For example, the prediction unit230specifies the second code block which is the branch destination from the first code block that is currently being executed in the control flow graph240in accordance with the input values. Note that the branch destination is not limited to a part immediately after the first code block and includes code blocks in a plurality of steps. Then, the prediction unit230specifies the path of the control flow from the first code block to the second code block. For example, the specified path may include three or more code blocks. 
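The analysis of the input values (Step S341) and the resulting path prediction (Step S342) might, purely as an assumption-laden sketch, look as follows; the control flow graph encoding with per-branch predicates is illustrative only.

def predict_path(cfg, current_block, input_value):
    """`cfg` maps a block id to a list of (condition, successor) pairs, where
    each condition is a predicate over the input value (Step S341)."""
    path = [current_block]
    block = current_block
    while cfg.get(block):
        branches = cfg[block]
        # Step S341: comparing the input value with predetermined values
        # selects the branch destination that is likely to be executed.
        taken = next((succ for cond, succ in branches if cond(input_value)),
                     branches[0][1])
        path.append(taken)
        block = taken
    return path  # Step S342: this path's inspection information is the prefetch target

cfg = {"blockA": [(lambda a: a < 10, "blockB"), (lambda a: a >= 10, "blockC")],
       "blockB": [(lambda a: True, "blockD")]}
print(predict_path(cfg, "blockA", input_value=7))  # ['blockA', 'blockB', 'blockD']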
FIG.8is a diagram for describing the concept of prediction of the path to be prefetched according to the second example embodiment.FIG.8shows that a path p1is specified when, for example, the input value A is smaller than 10 and a path p2is specified when the input value A is equal to or larger than 10. The prediction unit230may specify the first position of the first code block and the second position of the second code block from the control flow graph240and may specify the path from the first position to the second position. Referring once again toFIG.7, the explanation will be continued. The prediction unit230predicts the specified path as the prefetch target. Specifically, the prediction unit230specifies the storage area of the execution order inspection information that corresponds to the respective code blocks included in the specified path from the memory260. For example, the prediction unit230refers to the control flow graph240, specifies the execution order that corresponds to the respective code blocks included in the specified path, and searches execution orders2711to27n1in the memory260using the specified execution order as a search key. The prediction unit230specifies the address range in the memory260which stores the execution order inspection information to which the execution order found by the search belongs as the storage area (prefetch target). Then, the prediction unit230prefetches hash values to be prefetched from the memory260to the cache250(S343). When, for example, the search finds the execution order2711, the prediction unit230reads out the execution order2711and the hash value2712from the specified storage area in the memory260and writes them into the cache250. Note that Step S343may be executed at another timing. After that, the prediction unit230determines whether the execution frequency of the predicted path is high (S344). The execution frequency may indicate, for example, statistical information such as the actual number of executions per unit time, the probability that it is executed or the like. It is further assumed that the threshold of the execution frequency is stored in the device200in advance or as the actual number of times the program has been executed. Note that the actual number of times the program has been executed is sequentially updated in accordance with the execution of the program. When it is determined in Step S344that the execution frequency is higher than the threshold, the prediction unit230acquires the hash value in the current execution order from the cache250(S345). The prefetch operation in Step S343may be executed after Step S345. On the other hand, when it is determined in Step S344that the execution frequency is not higher than the threshold (equal to or smaller than the threshold), the prediction unit230acquires the hash value of the current execution order from the memory260(S346). In Step S342, the prediction unit230may predict that the specified path is the prefetch target when the specified path satisfies the following condition. For example, the prediction unit230may predict, when the execution frequency of the specified path is higher than those of other paths that may be executed by being branched off from the first code block, the storage area of execution order inspection information that corresponds to the specified path as the prefetch target. 
Further, in Step S342, the prediction unit230may predict the prefetch target based on whether the execution code is an execution code where delay is allowed instead of predicting the prefetch target based on the execution frequency (S342), and execute the prefetch operation (S343). Specifically, when the execution target code block is generated, a developer specifies whether each code block allows delay. For example, in the case of a code block that is sensitive to a timing, such as device control, the developer specifies that delay should not be allowed. Then, when the prediction unit230predicts prefetch, verification information of a code block where it is specified that delay will not be allowed may be preferentially set as a prefetch target. FIG.9is a diagram for describing an example of the execution frequency of the predicted path according to the second example embodiment. It is assumed here that the path p1is known to be executed with the probability of 90% and the path p2is known to be executed with the probability of 10% for the code block321which is the branch source. Then, the path p1is executed more frequently than the path p2is, which indicates that the path p1is predicted as a prefetch target. Note that 90% and 10% are examples of the frequency of execution, and may be the history of the execution count. Alternatively, the prediction unit230may predict, when the number of code blocks included in the specified path is equal to or larger than a predetermined value, the storage area of the execution order inspection information that corresponds to the specified path as the prefetch target. In the case ofFIG.8, for example, the number of code blocks that belong to the path p1is larger than the number of code blocks that belong to the path p2and the depth of the path p1is larger than that of the path p2. Therefore, in such a case, the path p1may be specified. Referring once again toFIG.6, the explanation will be continued. After Steps S33and S34, the inspection unit220determines whether the hash value calculated in Step S33matches with the hash value acquired in Step S34(S35). When it is determined that the hash values match each other, the inspection unit220outputs information indicating that execution of the execution target code block will be permitted to the program execution unit210. Then, the program execution unit210executes the execution target code block (S36). Then, the program execution unit210determines whether the next execution target code block is present (S37). When the next execution target code block is present, the program execution unit210calls the inspection unit220again when it executes the inspection instruction, the process then proceeds to Step S32, and the following process is repeatedly executed. When it is determined in Step S37that the next execution target code block is not present, the execution of the program is ended. On the other hand, when it is determined in Step S35that the hash values do not match each other, the inspection unit220outputs information indicating that execution of the execution target code block will not be allowed (error notification) to the program execution unit210(S38). Then, the execution of the program is ended. FIG.10is a block diagram showing a configuration of an example of a device400according to the second example embodiment. The device400, which is a specific example of the device200, is, for example, application of Trusted Execution Environment (TEE). 
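Purely as an illustrative sketch of the runtime flow ofFIGS.6and7described above (Steps S343to S346and Steps S35to S38), reusing the hypothetical order_hash helper sketched earlier; the frequency threshold and data layout are assumptions.

def prefetch_path(path_orders, memory_store, cache_store):
    """Step S343: copy the predicted entries from the memory into the cache."""
    for order in path_orders:
        if order in memory_store:
            cache_store[order] = memory_store[order]

def acquire_hash(order, frequency, memory_store, cache_store, threshold=0.5):
    """Steps S344-S346: choose the cache when the predicted path's execution
    frequency is high, otherwise the memory, then read the stored hash."""
    source = cache_store if frequency > threshold else memory_store
    entry = source.get(order) or memory_store.get(order)
    return entry["hash"] if entry else None

def inspect(order, frequency, memory_store, cache_store):
    """Steps S33 and S35-S38: compare the calculated hash with the stored one."""
    calculated = order_hash(order)
    acquired = acquire_hash(order, frequency, memory_store, cache_store)
    return "execute" if acquired == calculated else "error"  # S36 vs. S38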
The device400includes a normal world410, which is a non-secure area, and a secure world420, which is a secure area. The normal world410includes a processor core411as program execution means. It is assumed that the processor core411executes an execution target code block412. The secure world420includes an execution order inspection unit421, a prefetch prediction unit422, a cache423, a memory424, and a control flow graph425. The secure world420is, for example, a TrustedZone in an ARM (registered trademark) architecture. The execution order inspection unit421, the prefetch prediction unit422, the cache423, the memory424, and the control flow graph425in the secure world420respectively correspond to the inspection unit220, the prediction unit230, the cache250, the memory260, and the control flow graph240described above. The prefetch prediction unit422predicts, in accordance with execution of an inspection instruction by the processor core411, the prefetch target in accordance with the code block in which the inspection instruction is set, and prefetches the prefetch target. Further, the prefetch prediction unit422determines that the access destination of the execution order inspection information is the cache423or the memory424in accordance with the execution of the inspection instruction by the processor core411, and acquires the first execution order inspection information that corresponds to the current execution order of the code block from the determined access destination. The execution order inspection unit421calculates the second execution order inspection information that corresponds to the current execution order of the code block in accordance with the execution of the inspection instruction by the processor core411, and compares the first execution order inspection information acquired by the prefetch prediction unit422with the calculated second execution order inspection information. The execution order inspection unit421inspects whether it is possible to execute code blocks executed after the code block where the inspection instruction is set in accordance with the result of the comparison. FIG.11is a block diagram showing a configuration of the example of an information processing apparatus500according to the second example embodiment. The information processing apparatus500includes a storage apparatus510, a control unit520, a cache530, a memory540, and an InterFace (IF) unit550. The storage apparatus510is a non-volatile storage apparatus such as a hard disk or a flash memory. The storage apparatus510stores a control flow graph511, a control flow inspection program512, and an execution target program513. The control flow graph511corresponds to the aforementioned control flow graph240. The control flow inspection program512is a computer program in which processing of the control flow inspection method according to this example embodiment is implemented. The cache530and the memory540are storage areas that respectively correspond to the cache250and the memory260described above and temporarily hold information when the control unit520performs operation. The IF unit550is an interface that receives or outputs data from or to a device provided in the outside of the information processing apparatus500. For example, the IF unit550outputs external input data to the control unit520and externally outputs data received from the control unit520. The control unit520is a processor that controls each of the components of the information processing apparatus500, that is, a control apparatus. 
For example, the control unit520may be one or more processor cores. The control unit520loads the control flow inspection program512into the memory540from the storage apparatus510and executes the control flow inspection program512. Further, the control unit520loads the control flow graph511and the execution target program513into the memory540as appropriate from the storage apparatus510and executes the loaded control flow graph511and the execution target program513. Accordingly, the control unit520implements the functions of the program execution unit210, the inspection unit220and the prediction unit230, or the execution order inspection unit421and the prefetch prediction unit422. Note that the control unit520is preferably a CPU that includes a Trusted Execution Environment (TEE). In this case, it can be said that the control flow inspection program512according to this example embodiment is executed on the CPU including the TEE. Other Example Embodiments In the above example embodiments, each of the components shown in the drawings as functional blocks which perform various kinds of processing can be configured by a Central Processing Unit (CPU), a memory, or another circuit in terms of hardware, and is achieved by a program or the like that the CPU loads into the memory and executes the loaded program in terms of software. Accordingly, it will be understood by those skilled in the art that these functional blocks can be implemented in various forms by only hardware, only software or a combination thereof. They are not limited to any one of them. Further, the above-described program can be stored and provided to a computer using any type of non-transitory computer readable media. Non-transitory computer readable media include any type of tangible storage media. Examples of non-transitory computer readable media include magnetic storage media (such as flexible disks, magnetic tapes, hard disk drives, etc.), optical magnetic storage media (e.g., magneto-optical disks), Compact Disc-Read Only Memory (CD-ROM), CD-Recordable (CD-R), CD-ReWritable (CD-R/W), and semiconductor memories (such as mask ROM, Programmable ROM (PROM), Erasable PROM (EPROM), flash ROM, Random Access Memory (RAM), etc.). The program may be provided to a computer using any type of transitory computer readable media. Examples of transitory computer readable media include electric signals, optical signals, and electromagnetic waves. Transitory computer readable media can provide the program to a computer via a wired communication line (e.g., electric wires, and optical fibers) or a wireless communication line. Note that the present disclosure is not limited to the above example embodiments and may be changed as appropriate without departing from the spirit of the present disclosure. Further, the present disclosure may be executed by combining some of example embodiments as appropriate. The whole or part of the example embodiments disclosed above can be described as, but not limited to, the following supplementary notes. 
(Supplementary Note A1) A semiconductor device comprising:first storage means for storing, in advance, a plurality of pieces of execution order inspection information used for inspection of an execution order of a plurality of code blocks in a predetermined program;second storage means, which is a cache for the first storage means; andprediction means for predicting a storage area of the execution order inspection information based on prediction auxiliary information in a first code block of the plurality of code blocks and a control flow graph of the program, the storage area being a prefetch target to be prefetched from the first storage means to the second storage means. (Supplementary Note A2) The semiconductor device according to Supplementary Note A1, whereinthe prediction means:specifies a second code block that may be executed after the first code block based on the prediction auxiliary information and the control flow graph;specifies a path of a control flow from the first code block to the second code block; andpredicts the storage area of the execution order inspection information that corresponds to the specified path as the prefetch target. (Supplementary Note A3) The semiconductor device according to Supplementary Note A2, whereinthe prediction means:specifies a first position of the first code block in the program based on the control flow graph; andspecifies a second position included in a control flow from the first position as the second code block based on the result of the analysis of the prediction auxiliary information and the control flow graph. (Supplementary Note A4) The semiconductor device according to Supplementary Note A2 or A3, wherein the prediction means predicts, when the execution frequency of the specified path is higher than those of other paths that may be executed by being branched off from the first code block, the storage area of the execution order inspection information that corresponds to the specified path as the prefetch target. (Supplementary Note A5) The semiconductor device according to any one of Supplementary Notes A2 to A4, wherein the prediction means predicts, when the number of code blocks included in the specified path is equal to or larger than a predetermined value, the storage area of the execution order inspection information that corresponds to the specified path as the prefetch target. (Supplementary Note A6) The semiconductor device according to any one of Supplementary Notes A2 to A5, wherein the prediction means specifies the path so as to include three or more code blocks. (Supplementary Note A7) The semiconductor device according to any one of Supplementary Notes A1 to A6, wherein the prediction auxiliary information includes at least one of an input value to the first code block, an internal state variable value at the time of execution of the first code block, and the priority of execution of paths that may be executed by being branched off from the first code block. (Supplementary Note A8) The semiconductor device according to any one of Supplementary Notes A1 to A7, wherein the execution order inspection information includes a hash value calculated based on a path of a control flow regarding two or more of the plurality of code blocks. 
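The path selection described in Supplementary Notes A2 to A6 can be illustrated with the following non-authoritative sketch. It assumes simplified data structures: the control flow graph is a mapping from a block to its successors, and the prediction auxiliary information is reduced to per-branch execution frequencies; the minimum path length of three blocks reflects Supplementary Note A6.

```python
# A minimal sketch of path prediction: follow the most frequently executed branch
# from the first code block (Supplementary Note A4) and return the predicted path
# if it contains at least min_blocks code blocks (Supplementary Note A6).
# The storage area holding the inspection information for the returned path would
# then be the prefetch target. All names and data are illustrative assumptions.
def predict_prefetch_path(cfg, frequencies, first_block, min_blocks=3):
    path = [first_block]
    current = first_block
    while len(path) < min_blocks:
        successors = cfg.get(current, [])
        if not successors:
            break
        # Pick the successor whose branch is executed most often.
        current = max(successors, key=lambda b: frequencies.get((path[-1], b), 0))
        path.append(current)
    return path if len(path) >= min_blocks else None

# Example: B1 branches to B2 (frequent) or B5 (rare); the predicted path is B1 -> B2 -> B3.
cfg = {"B1": ["B2", "B5"], "B2": ["B3"], "B3": ["B4"]}
freq = {("B1", "B2"): 90, ("B1", "B5"): 10, ("B2", "B3"): 90}
print(predict_prefetch_path(cfg, freq, "B1"))  # ['B1', 'B2', 'B3']
```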
(Supplementary Note A9) The semiconductor device according to any one of Supplementary Notes A1 to A8, whereinthe prediction means:predicts the storage area as the prefetch target in accordance with execution of an inspection instruction of control flow integrity configured in the first code block;prefetches the predicted storage area from the first storage means to the second storage means; anddetermines, at a time of execution of an inspection instruction of control flow integrity configured in a third code block that has been executed after the first code block, an access destination to be one of the first storage means and the second storage means based on the prediction auxiliary information in the third code block, and acquires first execution order inspection information that corresponds to the current execution order of the third code block from the determined access destination, andthe semiconductor device further comprises inspection means for calculating second execution order inspection information that corresponds to the current execution order of the third code block and inspecting whether it is possible to execute a code block executed after the third code block in accordance with the result of comparing the acquired first execution order inspection information with the calculated second execution order inspection information. (Supplementary Note A10) The semiconductor device according to Supplementary Note A9, whereinthe semiconductor device includes a secure area and a non-secure area,the secure area includes the first storage means, the second storage means, the prediction means, and the inspection means,the non-secure area includes program execution means,the prediction means predicts, in accordance with the execution of the inspection instruction by the program execution means, the prefetch target in accordance with a fourth code block in which the inspection instruction is set, prefetches the prefetch target, determines an access destination to be one of the first storage means and the second storage means based on the prediction auxiliary information in the fourth code block, and acquires third execution order inspection information that corresponds to the current execution order of the fourth code block from the determined access destination, andthe inspection means inspects whether it is possible to execute a code block executed after the code block in which the inspection instruction is set in accordance with the execution of the inspection instruction by the program execution means. (Supplementary Note B1) A control flow inspection method, whereina computer comprising:first storage means for storing, in advance, a plurality of pieces of execution order inspection information used for inspection of an execution order of a plurality of code blocks in a predetermined program;second storage means, which is a cache for the first storage means; andacquires prediction auxiliary information in a first code block of the plurality of code blocks and a control flow graph of the program; andpredicts a storage area of the execution order inspection information based on the prediction auxiliary information and the control flow graph, the storage area being a prefetch target to be prefetched from the first storage means to the second storage means. 
(Supplementary Note C1) A non-transitory computer readable medium storing a control flow inspection program causing a computer comprising: first storage means for storing, in advance, a plurality of pieces of execution order inspection information used for inspection of an execution order of a plurality of code blocks in a predetermined program; and second storage means, which is a cache for the first storage means, to execute: processing of acquiring prediction auxiliary information in a first code block of the plurality of code blocks and a control flow graph of the program; and processing of predicting a storage area of the execution order inspection information based on the prediction auxiliary information and the control flow graph, the storage area being a prefetch target to be prefetched from the first storage means to the second storage means. (Supplementary Note D1) An electronic device comprising the semiconductor device according to any one of Supplementary Notes A1 to A10. While the present application has been described with reference to the example embodiments (and the Example), the present application is not limited to the above example embodiments (and the Example). Various changes that those skilled in the art may understand within the scope of the present application can be made to the configurations and the details of the present application.
REFERENCE SIGNS LIST
100 Semiconductor Device
110 First Storage Unit
111 Execution Order Inspection Information
11n Execution Order Inspection Information
120 Second Storage Unit
130 Prediction Unit
200 Device
210 Program Execution Unit
211 Execution Target Code Block
212 Inspection Instruction
220 Inspection Unit
230 Prediction Unit
240 Control Flow Graph
250 Cache
260 Memory
270 Execution Order Inspection Information
2701 Execution Order
2702 Hash Value
271 Execution Order Inspection Information
2711 Execution Order
2712 Hash Value
27n Execution Order Inspection Information
27n1 Execution Order
27n2 Hash Value
31 Module
32 Code Block
33 Inspection Instruction
p1 Path
p2 Path
400 Device
410 Normal World
411 Processor Core
412 Execution Target Code Block
420 Secure World
421 Execution Order Inspection Unit
422 Prefetch Prediction Unit
423 Cache
424 Memory
425 Control Flow Graph
500 Information Processing Apparatus
510 Storage Apparatus
511 Control Flow Graph
512 Control Flow Inspection Program
513 Execution Target Program
520 Control Unit
530 Cache
540 Memory
550 IF Unit
11860763
DETAILED DESCRIPTION The following description and associated figures teach the best mode of the invention. For the purpose of teaching inventive principles, some conventional aspects of the best mode may be simplified or omitted. The following claims specify the scope of the invention. Note that some aspects of the best mode may not fall within the scope of the invention as specified by the claims. Thus, those skilled in the art will appreciate variations from the best mode that fall within the scope of the invention. Those skilled in the art will appreciate that the features described below can be combined in various ways to form multiple variations of the invention. As a result, the invention is not limited to the specific examples described below, but only by the claims and their equivalents. Mobile application designers often desire to make changes and updates to visual elements and other aspects of the user interface of an application. Ordinarily, such changes would require the application developers to edit program code to implement the new application design requirements. However, a framework can be installed into a mobile application which can receive and interpret changes to visual properties of display elements, providing a quick and easy way for designers to edit the user interface of a mobile application without having to write any programming code. Additionally or alternatively, various changes and feature variants may be coded into the main program code of the application, thereby enabling code toggling to turn various features on or off. Such changes and new features can then be tested on subsets of the user base using various techniques such as A/B testing, staged rollouts, and feature toggling. In some instances, these and other techniques may also be utilized to implement customized segmentation of users and the provision of different changes and features to the various user segments to create different user experiences for each of the segments. For example, the user base of any application may be segmented by demographics such as age or gender, and a different user interface variant may be provided to a younger age group than an older segment. Such segmentation and customization of the user experience may be configured manually or could be determined programmatically. For example, an application developer could run an A/B experiment on all users and the system could then analyze the results to automatically identify that early adopters who have used the application for over one year perform better with one variant than new users who exhibit better performance with another variant. Systems, methods, and software are disclosed herein that enhance application development and software design-for-test (DFT) technology utilizing an application development and optimization platform to facilitate customized segmentation of users and providing different user experiences to each of the segments. Among other benefits, the techniques described herein provide application developers the ability to manage how particular features are provided to various different user segments. Additionally, the results of an A/B test may be evaluated and turned into permanent customizations per segment based on performance analysis. In some examples, the technology described herein can accomplish these techniques by leveraging various embedded dynamic test features such as, for example, dynamic variables and/or code blocks, or other similar programmatic dynamic test features. 
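As a concrete, hedged illustration of the code-level toggling mentioned above, the following minimal sketch shows a feature flag and a dynamic variable choosing between a default and a variant code path; the flag names and the lookup mechanism are invented for the example and are not part of any particular platform.

```python
# A minimal sketch of feature toggling with a flag and a dynamic variable.
# FLAGS and DYNAMIC_VARS stand in for values that could be populated remotely
# (for example, per user segment) without changing the application code.
FLAGS = {"new_checkout_flow": True}
DYNAMIC_VARS = {"welcome_text": "WELCOME"}

def checkout():
    if FLAGS.get("new_checkout_flow"):
        return "variant checkout flow"    # feature variant under test
    return "default checkout flow"        # existing behavior

def home_banner():
    return DYNAMIC_VARS.get("welcome_text", "WELCOME")

print(checkout(), "/", home_banner())
```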
The following disclosure provides various techniques for creating and managing customizations per user segment. Referring now to the drawings,FIG.1illustrates a communication system that may be utilized to implement customizations of visual elements and other features of an application per user segment.FIG.2illustrates an operation of the communication system in an exemplary embodiment.FIGS.3-11illustrate various exemplary graphical displays of an application, whileFIG.12illustrates an exemplary computing system that may be used to perform any of the techniques, processes, and operational scenarios described herein. Turning now toFIG.1, a block diagram of communication system100is illustrated. Communication system100includes mobile device101, computing system103, communication network130, and application modification server140. Mobile device101includes operating system120and application110. Application110runs on operating system120. Mobile device101may also include a user interface that communicates with operating system120over a bus communication device. Application110comprises main program111and application modification software development kit (SDK)112, which may be implemented as different software modules of application110. Main program111comprises the primary program instructions for the functionality of the application, such as streaming video, social networking, email, instant messaging, weather, navigation, or any other mobile application. Application modification SDK112may be installed into application110to facilitate changes and updates to a user interface and other visual elements of the application110, perform A/B testing of different application design variants, and other functionality. In some examples, application modification SDK112could comprise an embedded control module of application110. Computing system103includes application editor113. Computing system103may also include an operating system and user interface, although these components are not shown for clarity. Application modification server140comprises a computing system that provides an application development and optimization platform. In some examples, application editor113may comprise a web browser application that loads the application development and optimization platform provided by application modification server140. In operation, a developer of application110may execute application editor113on computing system103to operate an application management dashboard to apply real-time changes and updates to a user interface and other visual elements of the application110, activate or deactivate features for any segment of users, perform A/B testing of different application design variants to determine how changes to application110affect user behavior, customize the application110for different user segments, and other functionality. The developer may execute application110on mobile device101for use as a test device, and the execution of application110is then mirrored in the visual editor113executing on computing system103. The mirrored execution of application110within application editor113is achieved by application modification SDK112transferring screenshots of the application110to computing system103for display within the editor113, which may communicate over web sockets. 
SDK112sends information about the user interface of application110to computing system103for display within application editor113, including the entire view hierarchy of application110, which comprises descriptions or labels of all views that exist in the current interface and screenshots of the views. In this manner, the screenshots of the views can be displayed as images on the screen within the visual application editor113and the view descriptions, labels, and any other information may be displayed in a tree structure or tree diagram that represents the view hierarchy structure in a graphical form. Once the visual application editor113receives and displays the view hierarchy of application110, the developer can then click through the various views within the view hierarchy and make changes to different visual elements of the user interface. These changes are then sent to the application modification server140which can instantly update the display of application110with the changes in real-time on mobile device101via communication with application modification SDK112. Similarly, other application management functionality of the visual application editor113may be created and communicated to application modification server140and subsequently deployed to application110on mobile device101by communicating with SDK112. Of course, any of the functionality described herein could be applied to numerous instances of application110installed on multitudes of user mobile devices which may affect some or all of the entire user base, but only one mobile device101is shown inFIG.1for clarity. An exemplary operation of communication system100will now be discussed with respect toFIG.2. FIG.2is a flow diagram that illustrates an operation of communication system100in an exemplary implementation. The operation200shown inFIG.2may also be referred to as customization process200herein. The steps of the operation are indicated below parenthetically. The following discussion of operation200will proceed with reference to elements of communication system100ofFIG.1in order to illustrate its operations, but note that the details provided inFIG.1are merely exemplary and not intended to limit the scope of process200to the specific implementation ofFIG.1. Operation200may be employed by computing system101to facilitate provision of different user experiences to different groups of users of mobile application110. As shown in the operational flow ofFIG.2, mobile device101receives a manifest provided by an application development and optimization platform that defines a plurality of user segments and a plurality of feature variants individually associated with the plurality of user segments (201). The manifest may be defined by an owner of application110or some other administrator with access to the application development and optimization platform provided by application modification server140. For example, the application developer can configure the manifest to enable particular features for certain user segments, such as demographic groups, user-defined segments, groups of devices, and any other user segments. Additionally or alternatively, the application developer can provide one or more new objects, events, or actions with which to replace existing objects, events, or actions on a per-segment basis. The application development and optimization platform processes the modification instructions and updates the corresponding manifest. 
In some examples, the manifest may be requested by mobile device101and pulled from the application development and optimization platform provided by application modification server140, or could be pushed by server140automatically for delivery to mobile device101. For example, when mobile device101launches application110, a control module embedded into application110, such as application modification SDK112in this example, may query the application development and optimization platform provided by application modification server140for the latest manifest. In other words, the manifest query may be communicated by mobile device101responsive to launching or executing application110. The application development and optimization platform would then responsively send the manifest (or updated manifest) for delivery to application110executing on mobile device101. The manifest may be used to convey information to mobile device101about changes to mobile application110for various user segments. For example, the manifest can include instructions to enable or disable certain features per segment, instructions to replace objects with different objects upon occurrence of an action or event, or any other instructions. As will be discussed in greater detail below, in some instances the manifest can include an object or data that is to replace an existing object upon occurrence of the action or event on a per-segment basis. The plurality of user segments may be defined according to demographic information, such as age and gender, user role or skill level, such as novice or advanced users, or any other categorization of groups of users. The user segments may also be user-defined, such as different classes of users defined by the owner of application110, such as standard users, green users, and gold users. Note that a particular user may be a member of more than one user segment. In some examples, the plurality of user segments may be defined manually by an application developer associated with mobile application110, but may also be defined automatically in some examples. For example, the application development and optimization platform may be configured to automatically determine user segments from the results of certain A/B experiments or other variant testing, such as by identifying that users in a particular geographic location exhibited improved performance with variant A over variant B, whereas users in another location performed better with variant B, and thus automatically identify these two user segments having members in each of these geographic locations, respectively. In some examples, the application development and optimization platform may also be configured to automatically define which of the plurality of feature variants are individually associated with the plurality of user segments based on analysis of test results for the plurality of feature variants over the plurality of user segments. Mobile device101processes the manifest to determine a segment of the plurality of user segments associated with a user of the mobile application along with a feature variant of the plurality of feature variants associated with the segment of the user (202). In some examples, a control module embedded into application110, such as SDK112, could process the manifest and determine the user segment associated with the user of application110along with the feature variant associated with that user segment. 
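Purely by way of illustration, a manifest of the kind described above might be organized as follows; the keys, segment names, event identifiers, and values are hypothetical and are not the actual format used by the application development and optimization platform.

```python
# A hypothetical manifest: user segments plus, per segment, the event that
# triggers the default feature and the replacement data for the feature variant.
# All field names and values are invented for illustration.
manifest = {
    "segments": {
        "gold":  {"user_defined": "gold"},
        "green": {"user_defined": "green"},
        "rest":  {},   # default segment for everyone else
    },
    "feature_variants": {
        "gold":  {"event": "home_screen_loaded",
                  "replace": {"text": "WELCOME GOLD USER", "background": "#D4AF37"}},
        "green": {"event": "home_screen_loaded",
                  "replace": {"text": "WELCOME GREEN USER", "background": "#2E8B57"}},
        "rest":  {},   # keep the default feature
    },
}
```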
Any objects, events, or actions specified in the manifest may optionally be stored in a local data store on mobile device101, which may later be accessed to replace a default feature with a feature variant as described below. Alternatively, in some instances, the manifest can simply enable or disable features or code blocks of embedded feature variants on a per-segment basis. In such cases, the control module/SDK112can process the manifest and enable or disable the features for the corresponding user segment immediately, without waiting for an event trigger. Mobile device101monitors execution of mobile application110for an occurrence of an event that triggers a default feature of mobile application110(203). In some examples, the event that triggers the default feature and/or the default feature itself may be specified in the manifest received by mobile device101and associated with the corresponding feature variant for the user segment of the user. Alternatively, application110could be preconfigured to automatically determine which event or events to monitor that are known to trigger the default feature that is associated with the feature variant specified in the manifest for the user segment of the user. In some examples, a control module installed into application110, such as SDK112, may monitor program instructions executing the native application110on mobile device101for occurrence of the event that triggers the default feature of mobile application110. In some examples, the event that triggers the default feature could comprise a function call, user login, execution of a code block, or any other detectable programming event. Responsive to the event, mobile device101replaces the default feature with the feature variant associated with the segment of the user (204). As discussed above, in some instances the manifest can include object data that is to replace an existing object upon occurrence of an action or event per user segment. When the event trigger occurs, application110replaces the existing object with the object data specified in the manifest for the user segment associated with the user. For example, application110could replace the default feature with the feature variant by replacing a default visual element with a variant visual element associated with the segment of the user. In some examples, a control module installed into application110, such as SDK112, may access a local data store to retrieve the object data provided in the manifest corresponding to the feature variant to replace the existing object associated with the default feature. In this manner, the control module, responsive to the event triggering the default feature, replaces the default feature with the feature variant. In some embodiments, replacing the default feature with the feature variant associated with the segment of the user comprises identifying an original function that invokes a function call associated with the event and responsively intercepting the function call and executing the feature variant instead of the function call. Advantageously, application110can receive and insert features into the deployed application110by monitoring execution of the program instructions and replacing code blocks, objects, user interfaces, and the like with changed code blocks, objects, or user interface variants for a particular user segment. 
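A minimal sketch, assuming a manifest layout like the hypothetical one shown earlier, of how an embedded control module could resolve the user's segment, watch for the triggering event, and substitute the segment's feature variant for the default feature is given below; the class, function, and attribute names are illustrative rather than the real SDK interface.

```python
# Illustrative sketch: resolve the user's segment from the manifest, then intercept
# the function call associated with the triggering event and run the feature
# variant instead of the default feature. Names and structure are assumptions.
manifest = {
    "segments": {"gold": {"user_defined": "gold"}, "rest": {}},
    "feature_variants": {
        "gold": {"event": "home_screen_loaded",
                 "replace": {"text": "WELCOME GOLD USER", "background": "#D4AF37"}},
        "rest": {},
    },
}

class ControlModule:
    def __init__(self, manifest, user_attributes):
        self.manifest = manifest
        self.segment = self._resolve_segment(user_attributes)
        self.variant = manifest["feature_variants"].get(self.segment, {})

    def _resolve_segment(self, attrs):
        for name, rule in self.manifest["segments"].items():
            if all(attrs.get(k) == v for k, v in rule.items()):
                return name
        return "rest"

    def wrap(self, event_name, default_fn):
        """Intercept the function call associated with the event and swap in the variant."""
        def wrapper(*args, **kwargs):
            if self.variant.get("event") == event_name and "replace" in self.variant:
                return render_home_screen(**self.variant["replace"])  # feature variant
            return default_fn(*args, **kwargs)                        # default feature
        return wrapper

def render_home_screen(text="WELCOME", background="#FFFFFF"):
    return f"[{background}] {text}"

module = ControlModule(manifest, {"user_defined": "gold"})
home = module.wrap("home_screen_loaded", render_home_screen)
print(home())  # "[#D4AF37] WELCOME GOLD USER"
```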
In this manner, operation of the application110can be modified on a per-segment basis without deploying different versions to the different user segments and without modifying the object code of the deployed application. In addition, SDK112can monitor feedback and/or otherwise track how the changes are received by the various user segments and provide the tracking data to the application development and optimization platform. The development and optimization platform can process the tracking data and provide usable results and metrics to the application developers, allowing for real-time dynamic feature deployments and tracking on a per-segment basis. Various operational scenarios involving mobile device101and computing system103for customizations per segment will now be described with respect toFIGS.3-11. FIG.3illustrates an exemplary graphical display301of application editor113on computing system103. In this example, an application developer is provided with an interface to manage segments associated with a mobile application, such as application110as shown inFIG.1. Defining segments allows the application developer to easily evaluate experiment results by segment and create optimized customizations for the segments. In some examples, segments may be defined by device properties, user demographics, and custom attributes, integrations with third parties, bulk upload of user identifiers into a segment or associating groups of user identifiers with different segments, conversion of test results into custom segments, and other segment definitions. For example, a developer could run an A/B test on all users of application110and visual editor113could analyze the results to identify particular age groups of users that exhibit better performance with different variants, and automatically define new segments based on the identified age groups. Some examples of segments could include business users and personal users, paid subscribers and unpaid users, veteran users and new users, or any other user segmentations. In the example shown inFIG.3on graphical display301, three user-defined segments are shown as gold users, green users, and the rest. Additionally, users are also segmented by gender into male, female, and unknown, and into age groups of child, teen, adult, and senior. A given user could belong to multiple segment dimensions in some examples. For example, a given user could be a member of the gold user segment, the male gender segment, and the adult age segment. The application developer is given the option to edit any of the segments through use of the edit buttons, or add new segment dimensions. In this manner, any number of custom segments may be defined and managed by the application developer. Referring now toFIG.4, an exemplary graphical display401of application110on mobile device101is illustrated. In this example, graphical display401provides a default home screen of application110that is displayed for all users. In particular, as shown inFIG.4, regardless of whether the user segment type is gold, green, or the rest, when these users log in to application110, they are greeted by the same default home screen and the word “WELCOME” on a plain white background as shown on graphical display401. A technique to customize the home screen displayed for each of these different user segments will now be described with respect toFIG.5-9. FIG.5illustrates an exemplary graphical display501of application editor113on computing system103. 
In this example, an application developer is provided with an application management dashboard to create various projects for a mobile application, such as application110as shown inFIG.1. As shown in graphical display501, the developer is presented with the option to create a new project involving a feature flag to turn specific features on or off for any segment of users, an A/B experiment to determine how changes to a mobile application affect user behavior, an instant update to make real-time changes to a mobile application, and a customization option to customize a mobile application for different segments. As shown in graphical display501, in this example the application developer has selected the “Customization” option to customize application110for different segments. The result of selecting the “Customization” option is next shown in graphical display601A ofFIG.6. FIG.6illustrates an exemplary graphical display601A of application editor113on computing system103. Graphical display601A is displayed as a result of the application developer selecting the “Customization” option shown in graphical display501to customize application110for different segments as discussed above with respect toFIG.5. In the “Customization” display shown in graphical display601A, the application developer is provided fields to enter the customization details in order to name, describe, and tag the customization for easy reference. After entering the customization details, the user can select the desired segment dimension and edit the application for each segment. The link to “Manage Segments” would direct the user to the segment management screen as shown in graphical display301ofFIG.3in order to edit existing segments or add new segment dimensions as described above. As shown in graphical display601A, in this example the user has selected the “User-Defined Segments” segment dimension from the dropdown menu for which to apply the new customization. The “Customization” option screen is continued in graphical display601B ofFIG.7. FIG.7illustrates an exemplary graphical display601B of application editor113on computing system103. Graphical display601B is a continuation of the “Customization” option screen of graphical display601A. Graphical display601B provides the lower portion of the “Customization” option screen that the user began filling out as shown in graphical display601A, where the user has scrolled the screen down to expose the content shown in graphical display601B. Since the user has selected the “User-Defined Segments” segment dimension from the dropdown menu as discussed above with respect to graphical display601A ofFIG.6, the user-defined segments of “Gold”, “Green” and the “Rest” are shown to provide the application developer the ability to edit various elements of the application for each of these segments. In this example, the “Visual” option is selected, enabling the developer to view and edit different visual elements of the application for each of the segments. The developer may also select the “Code Block” and “Dynamic Variables” options to edit these properties of the application for each of the segments as well. 
As shown in graphical display601B, in this example the user has edited the home screen for the “Gold” user segment to display the text “WELCOME GOLD USER” and has changed the background color to gold (as indicated by the lighter gray shading color), and edited the home screen for the “Green” user segment to display the text “WELCOME GREEN USER” and has changed the background color to green (as indicated by the darker gray shading color). The home screen for the “Rest” user segment remains unchanged as the default home screen with the word “WELCOME” on a plain white background. Of course, any other visual elements of the home screen could be edited in this manner for each of the segments, including changing the arrangement of visual elements per segment, adding different text, buttons, images, video, navigation options, and any other elements per segment, or any other edits to the visual elements for each of the segments. Further, any other screens or views within the view hierarchy of the application110could be selected and edited per segment in this manner, such as a login screen, main application screen, terms of service screen, and the like. Managing the selection of which version to display depending on the user properties is a complex and difficult task, but is handled through the user interface of application editor113in this example, which provides for management of this complexity on the server side over a longer term or permanent basis, without having to code that selection complexity into the application itself. When the application developer has finished custom editing the visual elements for each of the segments, the user clicks the “Launch Customization” button at the bottom of graphical display601B. The result of selecting the “Launch Customization” button on graphical display601B is shown in graphical display601C ofFIG.8. FIG.8illustrates an exemplary graphical display601C of application editor113on computing system103. Graphical display601C is displayed as a result of the application developer selecting the “Launch Customization” button shown in graphical display601B to customize application110for different segments as discussed above with respect toFIG.7. As shown in graphical display601C, a notification dialog box is overlaid on top of the “Customization” option screen shown in graphical display601B ofFIG.7as a result of the user selecting the “Launch Customization” button. The notification dialog box warns the user that launching the customization will push the changes to each of the segments, and prompts the user to “Cancel” or “Continue”. In this example, the application developer selects the “Continue” button to apply the changes and deploy the customization to each of the segments. The result of the user launching the customization is shown inFIG.9. FIG.9illustrates the result of the application developer launching the customization as discussed above with respect toFIG.8. In this example, each of the different user segments are greeted with customized home screens as defined by the application developer in the customization described above with respect to graphical display601B ofFIG.7. In particular, as shown inFIG.9, after the “Gold” user segment logs in to the application, the “Gold” user is greeted with a customized home screen displaying the text “WELCOME GOLD USER” and a gold background color (as indicated by the lighter gray shading color). 
Similarly, after the "Green" user segment logs in to the application, the "Green" user is greeted with a customized home screen displaying the text "WELCOME GREEN USER" and a green background color (as indicated by the darker gray shading color). Finally, when the "Rest" user segment logs in to the application, the home screen for the "Rest" user segment remains unchanged as the default home screen with the word "WELCOME" on a plain white background. In this manner, each of the different user segments is provided a different experience when accessing application 110 based on the customization defined by the application developer. Referring now to FIG. 10, an exemplary graphical display 1001 of application editor 113 on computing system 103 is illustrated. Graphical display 1001 provides an example of an interface to enable an application developer to create a segmented customization from the results of running an A/B experiment. In this example, a special onboarding flow experiment is run with two variants, A and B. After acquiring results, application editor 113 evaluates the data and can turn these results into a permanent customization per segment. The user-defined segment dimension having segments of "Gold", "Green", and the "Rest" is displayed on graphical display 1001, along with the option to "Customize". The result of the application developer selecting the "Customize" option is shown in graphical display 1101 of FIG. 11. FIG. 11 illustrates an exemplary graphical display 1101 of application editor 113 on computing system 103. Graphical display 1101 provides an example of a display screen that may result from an application developer selecting the "Customize" option as shown in graphical display 1001 of FIG. 10 to create a segmented customization from the results of running an A/B experiment. This option effectively stops the A/B experiment and turns it into a persistent customization optimized for each user segment. In this example, the "Gold" user segment receives the "Original" variant, the "Green" user segment receives the "SpecialOnboardingFlowA" variant, and the "Rest" user segment receives the "SpecialOnboardingFlowB" variant. The application editor 113 may determine these segmented customizations automatically by analyzing the results of the special onboarding flow experiment and recognizing that the assigned variants produced the best results for their respective user segments. Accordingly, what was formerly an onboarding flow experiment becomes an onboarding flow customization. In this manner, application editor 113 is able to automatically create a segmented customization from the results of running an A/B experiment, thereby greatly facilitating this process for application developers. In addition, the segmented customization techniques disclosed herein may enable anomaly detection, such as the ability to run a test and determine anomalies per segment or segment combination. For example, if an application developer ran a test on an application comparing "Flow A" versus "Flow B" versus "Flow C" and found that "Flow B" was best for male users and "Flow C" for female users, the developer would be inclined to set up that long-term segmented customization. However, during this testing process it could be determined that, while overall "Flow B" was best for male users, males in Europe had their conversion rate drop to zero percent while other geographic regions exhibited much better conversion rates, indicating a major problem for the combination of "Flow B" and male users in Europe.
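The per-segment selection and the anomaly check described above can be illustrated with a short sketch; the data, the conversion-rate metric, and the collapse threshold are invented for the example and do not reflect the platform's actual analysis.

```python
# Illustrative only: (a) pick the winning variant per segment from A/B results and
# (b) flag segment combinations whose conversion collapses relative to the parent
# segment for the same variant, as in the "Flow B" example above.
from collections import defaultdict

results = [  # (variant, segment, region, converted)
    ("Flow B", "male", "North America", 1), ("Flow B", "male", "North America", 1),
    ("Flow B", "male", "Europe", 0),        ("Flow B", "male", "Europe", 0),
    ("Flow C", "female", "Europe", 1),      ("Flow C", "female", "North America", 1),
]

def rate(records):
    return sum(r[-1] for r in records) / len(records) if records else 0.0

def winners_and_anomalies(results, collapse_ratio=0.25):
    by_variant_segment = defaultdict(list)
    by_combination = defaultdict(list)
    for rec in results:
        variant, segment, region, _ = rec
        by_variant_segment[(variant, segment)].append(rec)
        by_combination[(variant, segment, region)].append(rec)

    winners = {}  # segment -> best-performing variant
    for (variant, segment), recs in by_variant_segment.items():
        if rate(recs) > rate(by_variant_segment.get((winners.get(segment), segment), [])):
            winners[segment] = variant

    # Flag combinations whose rate falls far below the parent (variant, segment) rate.
    anomalies = [combo for combo, recs in by_combination.items()
                 if rate(recs) < collapse_ratio * rate(by_variant_segment[combo[:2]])]
    return winners, anomalies

print(winners_and_anomalies(results))
# ({'male': 'Flow B', 'female': 'Flow C'}, [('Flow B', 'male', 'Europe')])
```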
Accordingly, a different variant could be tested and deployed to male users in Europe instead of “Flow B’. Now referring back toFIG.1, mobile device101comprises a processing system and communication transceiver. Mobile device101may also include other components such as a user interface, data storage system, and power supply. Mobile device101may reside in a single device or may be distributed across multiple devices. Examples of mobile device101include mobile computing devices, such as cell phones, tablet computers, laptop computers, notebook computers, and gaming devices, as well as any other type of mobile computing devices and any combination or variation thereof. Examples of mobile device101may also include desktop computers, server computers, and virtual machines, as well as any other type of computing system, variation, or combination thereof. Computing system103comprises a processing system and communication transceiver. Computing system103may also include other components such as a user interface, data storage system, and power supply. Computing system103may reside in a single device or may be distributed across multiple devices. Examples of computing system103include mobile computing devices, such as cell phones, tablet computers, laptop computers, notebook computers, and gaming devices, as well as any other type of mobile computing devices and any combination or variation thereof. Examples of computing system103also include desktop computers, server computers, and virtual machines, as well as any other type of computing system, variation, or combination thereof. Communication network130could comprise multiple network elements such as routers, gateways, telecommunication switches, servers, processing systems, or other communication equipment and systems for providing communication and data services. In some examples, communication network130could comprise wireless communication nodes, telephony switches, Internet routers, network gateways, computer systems, communication links, or some other type of communication equipment, including combinations thereof. Communication network130may also comprise optical networks, packet networks, local area networks (LAN), metropolitan area networks (MAN), wide area networks (WAN), or other network topologies, equipment, or systems—including combinations thereof. Communication network130may be configured to communicate over metallic, wireless, or optical links. Communication network130may be configured to use time-division multiplexing (TDM), Internet Protocol (IP), Ethernet, optical networking, wireless protocols, communication signaling, or some other communication format, including combinations thereof. In some examples, communication network130includes further access nodes and associated equipment for providing communication services to several computer systems across a large geographic region. Application modification server140may be representative of any computing apparatus, system, or systems that may connect to another computing system over a communication network. Application modification server140comprises a processing system and communication transceiver. Application modification server140may also include other components such as a router, server, data storage system, and power supply. Application modification server140may reside in a single device or may be distributed across multiple devices. 
Application modification server140may be a discrete system or may be integrated within other systems, including other systems within communication system100. Some examples of application modification server140include desktop computers, server computers, cloud computing platforms, and virtual machines, as well as any other type of computing system, variation, or combination thereof. Referring now toFIG.12, a block diagram that illustrates computing system1200in an exemplary implementation is shown. Computing system1200provides an example of mobile device101, computing system103, application modification server140, or any computing system that may be used to execute customization process200or variations thereof, although such systems could use alternative configurations. Computing system1200includes processing system1201, storage system1203, software1205, communication interface1207, and user interface1209. Software1205includes application1206which itself includes customization process200. Customization process200may optionally be implemented separately from application1206as indicated by the dashed lines surrounding process200inFIG.12. Computing system1200may be representative of any computing apparatus, system, or systems on which application1206and customization process200or variations thereof may be suitably implemented. Computing system1200may reside in a single device or may be distributed across multiple devices. Examples of computing system1200include mobile computing devices, such as cell phones, tablet computers, laptop computers, notebook computers, and gaming devices, as well as any other type of mobile computing devices and any combination or variation thereof. Note that the features and functionality of computing system1200may apply as well to desktop computers, server computers, and virtual machines, as well as any other type of computing system, variation, or combination thereof. Computing system1200includes processing system1201, storage system1203, software1205, communication interface1207, and user interface1209. Processing system1201is operatively coupled with storage system1203, communication interface1207, and user interface1209. Processing system1201loads and executes software1205from storage system1203. When executed by computing system1200in general, and processing system1201in particular, software1205directs computing system1200to operate as described herein for each implementation or variations thereof. Computing system1200may optionally include additional devices, features, or functionality not discussed herein for purposes of brevity. Referring still toFIG.12, processing system1201may comprise a microprocessor and other circuitry that retrieves and executes software1205from storage system1203. Processing system1201may be implemented within a single processing device but may also be distributed across multiple processing devices or sub-systems that cooperate in executing program instructions. Examples of processing system1201include general purpose central processing units, application specific processors, and logic devices, as well as any other type of processing device, combinations, or variations thereof. Storage system1203may comprise any computer-readable storage media capable of storing software1205and readable by processing system1201. 
Storage system1203may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Storage system1203may be implemented as a single storage device but may also be implemented across multiple storage devices or sub-systems co-located or distributed relative to each other. Storage system1203may comprise additional elements, such as a controller, capable of communicating with processing system1201. Examples of storage media include random-access memory, read-only memory, magnetic disks, optical disks, flash memory, virtual memory and non-virtual memory, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and that may be accessed by an instruction execution system, as well as any combination or variation thereof, or any other type of storage media. In no case is the computer-readable storage media a propagated signal. In operation, in conjunction with user interface1209, processing system1201loads and executes portions of software1205, such as customization process200, to facilitate provision of different user experiences to different groups of users of a mobile application as described herein. Software1205may be implemented in program instructions and among other functions may, when executed by computing system1200in general or processing system1201in particular, direct computing system1200or processing system1201to receive a manifest provided by an application development and optimization platform that defines a plurality of user segments and a plurality of feature variants individually associated with the plurality of user segments. Software1205may further direct computing system1200or processing system1201to process the manifest to determine a segment of the plurality of user segments associated with a user of the mobile application along with a feature variant of the plurality of feature variants associated with the segment of the user. Additionally, software1205may direct computing system1200or processing system1201to monitor execution of the mobile application for an occurrence of an event that triggers a default feature of the mobile application. Finally, software1205may direct computing system1200or processing system1201to, responsive to the event, replace the default feature with the feature variant associated with the segment of the user. Software1205may include additional processes, programs, or components, such as operating system software or other application software. Examples of operating systems include Windows®, iOS®, and Android®, as well as any other suitable operating system. Software1205may also comprise firmware or some other form of machine-readable processing instructions executable by processing system1201. In general, software1205may, when loaded into processing system1201and executed, transform computing system1200overall from a general-purpose computing system into a special-purpose computing system customized to facilitate provision of different user experiences to different groups of users of a mobile application as described herein for each implementation or variations thereof. For example, encoding software1205on storage system1203may transform the physical structure of storage system1203. 
The specific transformation of the physical structure may depend on various factors in different implementations of this description. Examples of such factors may include, but are not limited to the technology used to implement the storage media of storage system1203and whether the computer-readable storage media are characterized as primary or secondary storage. In some examples, if the computer-readable storage media are implemented as semiconductor-based memory, software1205may transform the physical state of the semiconductor memory when the program is encoded therein. For example, software1205may transform the state of transistors, capacitors, or other discrete circuit elements constituting the semiconductor memory. A similar transformation may occur with respect to magnetic or optical media. Other transformations of physical media are possible without departing from the scope of the present description, with the foregoing examples provided only to facilitate this discussion. It should be understood that computing system1200is generally intended to represent a computing system with which software1205is deployed and executed in order to implement application1206and/or customization process200to operate as described herein for each implementation (and variations thereof). However, computing system1200may also represent any computing system on which software1205may be staged and from where software1205may be distributed, transported, downloaded, or otherwise provided to yet another computing system for deployment and execution, or yet additional distribution. For example, computing system1200could be configured to deploy software1205over the internet to one or more client computing systems for execution thereon, such as in a cloud-based deployment scenario. Communication interface1207may include communication connections and devices that allow for communication between computing system1200and other computing systems (not shown) or services, over a communication network1211or collection of networks. In some implementations, communication interface1207receives dynamic data1221over communication network1211. Examples of connections and devices that together allow for inter-system communication may include network interface cards, antennas, power amplifiers, RF circuitry, transceivers, and other communication circuitry. The aforementioned network, connections, and devices are well known and need not be discussed at length here. User interface1209may include a voice input device, a touch input device for receiving a gesture from a user, a motion input device for detecting non-touch gestures and other motions by a user, and other comparable input devices and associated processing elements capable of receiving user input from a user. Output devices such as a display, speakers, haptic devices, and other types of output devices may also be included in user interface1209. In some examples, user interface1209could include a touch screen capable of displaying a graphical user interface that also accepts user inputs via touches on its surface. The aforementioned user input devices are well known in the art and need not be discussed at length here. User interface1209may also include associated user interface software executable by processing system1201in support of the various user input and output devices discussed above. 
Separately or in conjunction with each other and other hardware and software elements, the user interface software and devices may provide a graphical user interface, a natural user interface, or any other kind of user interface. User interface1209may be omitted in some implementations. The functional block diagrams, operational sequences, and flow diagrams provided in the Figures are representative of exemplary architectures, environments, and methodologies for performing novel aspects of the disclosure. While, for purposes of simplicity of explanation, methods included herein may be in the form of a functional diagram, operational sequence, or flow diagram, and may be described as a series of acts, it is to be understood and appreciated that the methods are not limited by the order of acts, as some acts may, in accordance therewith, occur in a different order and/or concurrently with other acts from that shown and described herein. For example, those skilled in the art will understand and appreciate that a method could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all acts illustrated in a methodology may be required for a novel implementation. The above description and associated figures teach the best mode of the invention. The following claims specify the scope of the invention. Note that some aspects of the best mode may not fall within the scope of the invention as specified by the claims. Those skilled in the art will appreciate that the features described above can be combined in various ways to form multiple variations of the invention. As a result, the invention is not limited to the specific embodiments described above, but only by the following claims and their equivalents.
11860764
In the drawings, the following reference numerals are used:
Numeral    Meaning
101-103    Steps
200    Input layer
201    Probability of an error-prone pattern
202    Hidden layer
204    Output layer
400    Softmax layer
600    Maximum probability layer
601    Minimize Upfront Design
602    Single Responsibility Principle
603    Separation of Concerns
300    Code repository
301    Code to be evaluated
302    Static scanning tool
303    Error-prone pattern detector
304    Artificial neural network
305    Prediction result
400    Apparatus for evaluating code design quality
401    Determining module
402    Prediction result determining module
403    Evaluation module
500    Apparatus for evaluating code design quality
501    Processor
502    Memory
DETAILED DESCRIPTION OF THE EXAMPLE EMBODIMENTS
A method of an embodiment for evaluating code design quality comprises: determining the probabilities of error-prone patterns in a code based on the result of static scanning of the code; inputting the probabilities into an artificial neural network and, based on the artificial neural network, determining a prediction result of whether the code violates predetermined design principles and a quantified degree to which the code violates the design principles; and evaluating the design quality of the code based on the prediction result. It can be seen that, through detecting error-prone patterns in a code, the embodiments of the present invention predict whether key design principles are violated in the software design process and the quantified degree to which they are violated, thereby evaluating the design quality of the code. As a result, the code review process is fully automated, eliminating the disadvantage of too many false alarms that results from direct evaluation of the design quality based on quality inspection rules in the prior art, and providing high evaluation accuracy. In one embodiment, the error-prone patterns include at least one of the following: shotgun surgery; divergent change; big design up front; scattered/redundant functionality; cyclic dependency; bad dependency; complex class; long method; code duplication; long parameter list; message chain; and unused method; and/or the design principles include at least one of the following: Separation of Concerns; Single Responsibility Principle; Least Knowledge; Don't Repeat Yourself; and Minimize Upfront Design. It can be seen that the embodiments of the present invention can further improve review accuracy by use of a plurality of predetermined error-prone patterns and predetermined design principles. In one embodiment, the method further comprises: receiving a modification record of the code; and determining the probabilities of error-prone patterns in a code based on the result of static scanning of the code comprises: determining the probability of an error-prone pattern in the code based on a compound logic conditional expression comprising the result of static scanning and/or the modification record. Therefore, the embodiments of the present invention can improve the accuracy in detecting error-prone patterns based on a compound logic conditional expression comprising the modification record of the code.
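By way of illustration only, one such compound logic conditional expression could combine metrics from the static scan with the modification record as in the following sketch; the particular combination of conditions, the thresholds, and the returned probability values are assumptions made for the example, not the formulas of the embodiments.

```python
# Illustrative (not authoritative) compound logic conditional expression for the
# divergent change pattern: revision_number comes from the modification record,
# while instability and afferent_coupling come from the static scanning result.
# Conditions, thresholds, and probabilities are invented for this sketch.
def divergent_change_probability(revision_number, instability, afferent_coupling,
                                 rev_threshold=20, instability_threshold=0.7,
                                 coupling_threshold=5):
    if (revision_number > rev_threshold
            and instability > instability_threshold
            and afferent_coupling > coupling_threshold):
        return 0.9   # all conditions met: pattern very likely present
    if revision_number > rev_threshold and instability > instability_threshold:
        return 0.5   # conditions partially met
    return 0.1       # pattern unlikely

print(divergent_change_probability(revision_number=35, instability=0.8, afferent_coupling=8))  # 0.9
```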
In one embodiment, determining the probability of an error-prone pattern in the code based on a compound logic conditional expression comprising the result of static scanning and/or the modification record comprises at least one of the following:determining the probability of the existence of a shotgun surgery based on a compound logic conditional expression comprising the metrics of afferent coupling, efferent coupling and changing method;determining the probability of the existence of a divergent change based on a compound logic conditional expression comprising the metrics of revision number, instability and afferent coupling;determining the probability of the existence of a big design up front based on a compound logic conditional expression comprising the metrics of line of code, line of changed code, class number, changed class number and the statistical average of the above metrics;determining the probability of the existence of a scattered/redundant functionality based on a compound logic conditional expression comprising the metrics of structure similarity and logic similarity;determining the probability of the existence of a long method based on a compound logic conditional expression comprising the metric of cyclomatic complexity;determining the probability of the existence of a complex class based on a compound logic conditional expression comprising the metrics of line of code, attribute number, method number and maximum method cyclomatic complexity;determining the probability of the existence of a long parameter list based on a compound logic conditional expression comprising the metric of parameter number; anddetermining the probability of the existence of a message chain based on a compound logic conditional expression comprising the metric of indirect calling number. It can be seen that the embodiments of the present invention provide a detection method based on quantifiable metrics representing the attribute of the cause of different error-prone patterns, so that the error-prone patterns are detected automatically. In one embodiment, the predetermined thresholds in the compound logic conditional expression are adjustable. Therefore, it can be applicable to a plurality of applications through the adjustment of the predetermined thresholds in the compound logic conditional expression. In one embodiment, the artificial neural network comprises connections between error-prone patterns and design principles, and determining, based on the artificial neural network, a prediction result of whether the code violates predetermined design principles and a quantified degree to which the code violates the design principles comprises:based on the connections in the artificial neural network and the probabilities of error-prone patterns, determining a prediction result of whether the code violates the design principles and a quantified degree to which the code violates the design principles. Therefore, evaluation efficiency is ensured through automatic generation of the prediction result by use of an artificial neural network. In one embodiment, the method further comprises:adjusting the weights of the connections in the artificial neural network based on a self-learning algorithm. It can be seen that the artificial neural network becomes more and more accurate when adjusted based on a self-learning algorithm. 
In one embodiment, evaluating the design quality of the code based on the prediction result comprises at least one of the following:evaluating the design quality of the code based on whether it violates a predetermined design principle, wherein not violating the predetermined design principle is better in the design quality than violating the predetermined design principle; andevaluating the design quality of the code based on the quantified degree to which it violates the design principle, wherein the quantified degree to which it violates the design principle is inversely proportional to how good the design quality is evaluated to be. In one embodiment, the connections between error-prone patterns and design principles include at least one of the following:a connection between shotgun surgery and Separation of Concerns; a connection between shotgun surgery and Single Responsibility Principle; a connection between divergent change and Separation of Concerns; a connection between divergent change and Single Responsibility Principle; a connection between big design up front and Minimize Upfront Design; a connection between scattered functionality and Don't Repeat Yourself; a connection between redundant functionality and Don't Repeat Yourself; a connection between cyclic dependency and Separation of Concerns; a connection between cyclic dependency and Least Knowledge; a connection between bad dependency and Separation of Concerns; a connection between bad dependency and Least Knowledge; a connection between complex class and Single Responsibility Principle; a connection between long method and Single Responsibility Principle; a connection between code duplication and Don't Repeat Yourself; a connection between long parameter list and Single Responsibility Principle; a connection between message chain and Least Knowledge; and a connection between unused method and Minimize Upfront Design. It can be seen that the embodiments of the present invention optimize the connection form of the artificial neural network through analysis of the correspondence between error-prone patterns and design principles. An apparatus of an embodiment for evaluating code design quality, comprises:a determining module, configured to determine the probabilities of error-prone patterns in a code based on the result of static scanning of the code;a prediction result determining module, configured to input the probabilities into an artificial neural network, and, based on the artificial neural network, to determine a prediction result of whether the code violates predetermined design principles and a quantified degree to which the code violates the design principles; andan evaluation module, configured to evaluate the design quality of the code based on the prediction result. It can be seen that, through detecting error-prone patterns in a code, the embodiments of the present invention predict whether key design principles are violated in the software design process and the quantified degree to which they are violated, thereby evaluating the design quality of the code, so that the code review process is fully automated, eliminating the disadvantage of too many false alarms resulting from direct evaluation of the design quality based on quality inspection rules in the prior art and providing high evaluation accuracy. 
In one embodiment, the error-prone patterns include at least one of the following: shotgun surgery; divergent change; big design up front; scattered/redundant functionality; cyclic dependency; bad dependency; complex class; long method; code duplication; long parameter list; message chain; and unused method; and/orthe design principles include at least one of the following: Separation of Concerns; Single Responsibility Principle; Least Knowledge; Don't Repeat Yourself; and Minimize Upfront Design. It can be seen that the embodiments of the present invention can further improve review accuracy by use of a plurality of predetermined error-prone patterns and predetermined design principles. In one embodiment, the determining module is further configured to receive a modification record of the code, wherein determining the probabilities of error-prone patterns in a code based on the result of static scanning of the code comprises: determining the probability of an error-prone pattern in the code based on a compound logic conditional expression comprising the result of static scanning and/or the modification record. Therefore, the embodiments of the present invention can improve the accuracy in detecting error-prone patterns based on a compound logic conditional expression comprising the modification record of the code. In one embodiment, determining the probability of an error-prone pattern in the code based on a compound logic conditional expression comprising the result of static scanning and/or the modification record comprises at least one of the following:determining the probability of the existence of a shotgun surgery based on a compound logic conditional expression comprising the metrics of afferent coupling, efferent coupling and changing method;determining the probability of the existence of a divergent change based on a compound logic conditional expression comprising the metrics of revision number, instability and afferent coupling;determining the probability of the existence of a big design up front based on a compound logic conditional expression comprising the metrics of line of code, line of changed code, class number, changed class number and the statistical average of the above metrics;determining the probability of the existence of a scattered/redundant functionality based on a compound logic conditional expression comprising the metrics of structure similarity and logic similarity;determining the probability of the existence of a long method based on a compound logic conditional expression comprising the metric of cyclomatic complexity;determining the probability of the existence of a complex class based on a compound logic conditional expression comprising the metrics of line of code, attribute number, method number and maximum method cyclomatic complexity;determining the probability of the existence of a long parameter list based on a compound logic conditional expression comprising the metric of parameter number; anddetermining the probability of the existence of a message chain based on a compound logic conditional expression comprising the metric of indirect calling number. It can be seen that the embodiments of the present invention provide a detection method based on quantifiable metrics representing the attribute of the cause of different error-prone patterns, so that the error-prone patterns are detected automatically. 
In one embodiment, the artificial neural network comprises connections between error-prone patterns and design principles; and the prediction result determining module is further configured to determine, based on the connections in the artificial neural network and the probabilities of error-prone patterns, a prediction result of whether the code violates the design principles and a quantified degree to which the code violates the design principles. Therefore, evaluation efficiency is ensured through automatic generation of the prediction result by use of an artificial neural network. In one embodiment, the evaluation module is configured to evaluate the design quality of the code based on whether it violates a predetermined design principle, wherein not violating the predetermined design principle is better in the design quality than violating the predetermined design principle; and to evaluate the design quality of the code based on the quantified degree to which it violates the design principle, wherein the quantified degree to which it violates the design principle is inversely proportional to how good the design quality is evaluated to be. In one embodiment, the connections between error-prone patterns and design principles include at least one of the following:a connection between shotgun surgery and Separation of Concerns; a connection between shotgun surgery and Single Responsibility Principle; a connection between divergent change and Separation of Concerns; a connection between divergent change and Single Responsibility Principle; a connection between big design up front and Minimize Upfront Design; a connection between scattered functionality and Don't Repeat Yourself; a connection between redundant functionality and Don't Repeat Yourself; a connection between cyclic dependency and Separation of Concerns; a connection between cyclic dependency and Least Knowledge; a connection between bad dependency and Separation of Concerns; a connection between bad dependency and Least Knowledge; a connection between complex class and Single Responsibility Principle; a connection between long method and Single Responsibility Principle; a connection between code duplication and Don't Repeat Yourself; a connection between long parameter list and Single Responsibility Principle; a connection between message chain and Least Knowledge; and a connection between unused method and Minimize Upfront Design. It can be seen that the embodiments of the present invention optimize the connection form of the artificial neural network through analysis of the correspondence between error-prone patterns and design principles. A system of an embodiment for evaluating code design quality, comprises:a code repository, configured to store a code to be evaluated;a static scanning tool, configured to statically scan the code to be evaluated;an error prone pattern detector, configured to determine the probability of an error-prone pattern in the code based on the result of static scanning output by the static scanning tool; andan artificial neural network, configured to determine, based on the probability, a prediction result of whether the code violates a predetermined design principle and a quantified degree to which the code violates the design principle, wherein the prediction result is used to evaluate the design quality of the code. 
Therefore, the embodiments of the present invention detect the probability of an error-prone pattern based on the result of static scanning by a static scanning tool, predict whether key design principles are violated during the software design process and a quantified degree to which the key design principles are violated, and evaluate the design quality of the code based thereon, whereby providing the advantage of high accuracy in the evaluation and effectively preventing false alarms. In one embodiment, the code repository is further configured to store a modification record of the code; andthe static scanning tool is configured to determine the probability of an error-prone pattern in the code based on a compound logic conditional expression comprising the result of static scanning and/or the modification record. Therefore, the embodiments of the present invention can improve the accuracy in detecting error-prone patterns based on a compound logic conditional expression comprising the modification record of the code. An apparatus of an embodiment for evaluating code design quality, comprisesa processor and a memory; whereinthe memory stores an application that can be executed by the processor, which is used to cause the processor to execute the method of an embodiment for evaluating code design quality as described in any of the paragraphs. A computer-readable storage medium, wherein a computer-readable instruction is stored in it, and the computer-readable instruction is used to execute the method of an embodiment for evaluating code design quality as described in any of the paragraphs. The present invention is further described in detail with reference to the drawings and the embodiments, so that its technical solution and advantages become clearer. It should be understood that the specific embodiments described here are only used to illustratively explain the present invention, and are not used to limit the scope of the present invention. In order to be concise and intuitive in the description, the solution of the present invention is described below with reference to several representative embodiments. The large amount of details in the embodiments is only used to help to understand the solution of the present invention. However, it is obvious that the technical solution of the present invention may not be limited to these details. In order to avoid unnecessarily obscuring the solution of the present invention, some embodiments are not described in detail, but only a framework is given. Hereinafter, “including” means “including but not limited to”, and “according to . . . ” means “at least according to . . . , but not limited to . . . ”. Due to Chinese language habits, when the quantity of an element is not specified hereinafter, it means that there may be one or several of the elements, or it can be understood as there is at least one of it. In the embodiments of the present invention, the result of static scanning of a code and/or the modification record (comprising the history of addition/deletion/change/revision) of the code may be used to estimate whether key software design principles of the reviewed source code are followed in the development process and how well they are implemented. It is based on an obvious reason: if a key design principle is fully considered in the design process, the possibility of identifying related error-prone patterns arising from violating the principle in the source code will be reduced as a result. 
Therefore, by detecting whether there are related error-prone patterns in the source code, it can be estimated to what extent the software has followed the key design principle during the design process, thereby the design quality of the code can be evaluated and the areas for quality improvement can be determined. FIG.1is the method for evaluating code design quality of the embodiments of the present invention. As shown inFIG.1, the method comprises: Step101: determining the probability of an error-prone pattern in a code based on the result of static scanning of the code. Here, static scanning of a code refers to the code analysis technique whereby a program code is scanned through lexical analysis, syntactic analysis, control flow and data flow analysis and other techniques without running the code, to verify whether the code meets the indicators including the specifications, security, reliability, maintainability, etc. For example, static code scanning tools may include: Understand, Checkstyle, FindBugs, PMD, Fortify SCA, Checkmarx, CodeSecure, etc. And static scanning results may include: code metrics, quality defect warnings and related findings, etc. Wherein, code metrics may include: maintainability index, cyclomatic complexity, depth of inheritance, class coupling, line of code, etc. Here, error-prone patterns are usually also called bad code smells, which are diagnostic symptoms that indicate the software design may have quality issues. Error-prone patterns may exist in source code units at different levels. For example, error-prone patterns may include at least one of the following: shotgun surgery, divergent change, big design up front (BDUF), scattered functionality, redundant functionality, cyclic dependency, bad dependency, complex class, long method, code duplication, long parameter list, message chain, unused method, etc. Wherein, divergent change means that a class is always passively modified repeatedly for different reasons. Shot modification is similar to divergent change and means that, whenever a class needs some kind of change, many small modifications must be made in many other different classes accordingly, i.e., shotgun modifications. Big design up front means a large amount of design in advance in the early stages of the project, especially when the requirements are incomplete or unclear. Scattered functionality and redundant functionality mean that the same function/high-level concern is repeatedly implemented by a plurality of methods. Cyclic dependency means that two or more classes/modules/architecture components have direct or indirect dependencies on each other. Bad dependency means that a class/module needs to use information of other classes/modules that should not be closely related to it. Complex class means that a class is too complex. Long method means there are too many logic branches in a method. Code duplication means that the same code structure repeatedly appears at different locations in the software. It mainly includes: two functions in the same class have the same expression; two brother subclasses have the same expression in them; two completely unrelated classes have the same expression, etc. Long parameter list means that a method requires too many parameters. Message chain means that a class/method uses a method or attribute of a class that is not its direct friend. Unused method means a method that will never be used/called in the code. For example, Table 1 is an example description of the error-prone patterns of C++ source code. 
TABLE 1
Error-prone pattern: Description
Shotgun surgery: Several other classes/modules must be modified in order to make a small change to a class/module.
Divergent change: A class/module is frequently changed in different ways and for different reasons.
Big design up front (BDUF): Large-scale pre-design is done too early, for example, when the requirements are still unclear or incomplete.
Scattered/redundant functionality: A plurality of methods repeatedly implements the same function/high-level concern.
Cyclic dependency: Two or more classes/modules/architecture components have direct or indirect dependencies on each other.
Bad dependency: A class/module needs to use information of other classes/modules that should not be closely related to it.
Complex class: A class has too many lines of code and many member variables, attributes and methods, and is responsible for executing too much work.
Long method: A method is too big and too complex, with many logic branches and bearing too many responsibilities.
Code duplication: The same code structure appears repeatedly at different locations.
Long parameter list: A method requires too many parameters.
Message chain: A class/method uses a method or attribute of a class that is not its direct friend.
Unused method: A method will never be used/called by other methods/classes.
The above exemplarily describes some typical examples of error-prone patterns. Those skilled in the art may realize that this description is only exemplary and is not used to limit the scope of the embodiments of the present invention. In one embodiment, the method further comprises: receiving a modification record of the code; and determining the probability of an error-prone pattern in a code based on the result of static scanning of the code in Step101 comprises: determining the probability of an error-prone pattern in the code based on a compound logic conditional expression, wherein the compound logic conditional expression comprises both the result of static scanning and the modification record, or comprises the result of static scanning but not the modification record, or comprises the modification record but not the result of static scanning. Preferably, the probability of an error-prone pattern in the code may be determined based on a compound logic conditional expression comprising the result of static scanning and the modification record. Or, the probability of an error-prone pattern in the code may be determined based on a compound logic conditional expression comprising only the result of static scanning but not the modification record.
Preferably, determining the probability of an error-prone pattern in the code based on a compound logic conditional expression comprising the result of static scanning and/or the modification record comprises at least one of the following:(1) Determining the probability of the existence of a shotgun surgery based on a compound logic conditional expression comprising the metrics of afferent coupling (Ca), efferent coupling (Ce) and changing method (CM).(2) Determining the probability of the existence of a divergent change based on a compound logic conditional expression comprising the metrics of revision number (RN), instability (I) and afferent coupling, wherein instability I=Ce/(Ca+Ce).(3) Determining the probability of the existence of a big design up front based on a compound logic conditional expression comprising the metrics of line of code (LOC), line of changed code (LOCC), class number (CN), changed class number and the statistical average (AVERAGE) of the above metrics.(4) Determining the probability of the existence of a scattered/redundant functionality based on a compound logic conditional expression comprising the metrics of structure similarity (SS) and logic similarity (LS).(5) Determining the probability of the existence of a long method based on a compound logic conditional expression comprising the metric of cyclomatic complexity (CC).(6) Determining the probability of the existence of a complex class based on a compound logic conditional expression comprising the metrics of line of code, attribute number (AN), method number (AN) and maximum method cyclomatic complexity (MCCmax).(7) Determining the probability of the existence of a long parameter list based on a compound logic conditional expression comprising the metric of parameter number.(8) Determining the probability of the existence of a message chain based on a compound logic conditional expression comprising the metric of indirect calling number (ICN). More preferably, the predetermined thresholds in any of the compound logic conditional expressions above are adjustable. The typical algorithms for determining error-prone patterns are described below. For example, the compound logic conditional expression (Ca, TopValues (10%)) OR (CM, HigherThan (10))) AND (Ce, HigherThan (5)) is a typical example of compound logic conditional expressions used to determine a shotgun surgery. The compound logic conditional expression combines Ca, CM and Ce. Wherein, “OR” stands for the OR logic, while “AND” stands for the AND logic. Wherein, the metric Ce is used to calculate the number of classes that have an attribute needed to be accessed or a method to be called by a given class. The metric CM is used to calculate the number of methods that need to access an attribute or call a method of a given class. TopValues and HigherThan are parameterized filtering mechanisms that have specific values (thresholds). TopValues is used to select, from all given members, those members with the value of a specific indicator falling within a specified highest range. HigherThan is used to select, from all the members, all of those members with the value of a specific indicator higher than a certain given threshold. 
Therefore, the determination strategy based on the above compound logic conditional expression means that: if the value of the metric Ca of a class falls within the highest 10% of all classes, or the value of its metric CM is higher than 10, and at the same time the value of its metric Ce is higher than 5, it should be a suspicious factor leading to a shotgun surgery. In order to improve the flexibility and applicability of the determination algorithm, three risk levels, namely low, medium and high, can be introduced. Moreover, different thresholds (limits) will be assigned to different risk levels. For example, Table 2 lists the typical examples of compound logic conditional expressions for determining shotgun surgeries based on the three risk levels:

TABLE 2
Risk Level: Detecting Strategy
Low: ((Ca, TopValues(15%)) OR (CM, HigherThan(5))) AND (Ce, HigherThan(3))
Medium: ((Ca, TopValues(10%)) OR (CM, HigherThan(10))) AND (Ce, HigherThan(5))
High: ((Ca, TopValues(5%)) OR (CM, HigherThan(15))) AND (Ce, HigherThan(6))

The algorithm for determining shotgun surgeries described in Table 2 is based on the metrics of afferent coupling (Ca), efferent coupling (Ce) and changing method (CM). For example, Table 3 lists the typical examples of compound logic conditional expressions for determining divergent changes based on the three risk levels:

TABLE 3
Risk Level: Detecting Strategy
Low: (RN, TopValues(15%)) OR ((I, HigherThan(0.7)) AND (Ca, HigherThan(1)))
Medium: (RN, TopValues(10%)) OR ((I, HigherThan(0.8)) AND (Ca, HigherThan(1)))
High: (RN, TopValues(5%)) OR ((I, HigherThan(0.9)) AND (Ca, HigherThan(1)))

In Table 3, the algorithm based on the compound logic conditional expressions determines divergent changes based on the metrics of instability (I), afferent coupling (Ca) and revision number (RN). The metric of revision number (RN) is the total number of changes made to a given class and can be obtained directly from the change history. Moreover, instability (I) is calculated by comparing the afferent and efferent dependencies expressed by afferent coupling (Ca) and efferent coupling (Ce):

Instability I = Ce / (Ca + Ce)

The basic principle behind the algorithm in Table 3 is that: if the previous revision history of a class shows that it is frequently changed (measured by the number of revisions), or we predict that it will probably be changed in the future (measured by instability), it should be regarded as a suspicion of divergent changes. For example, Table 4 lists the typical examples of compound logic conditional expressions for determining big designs up front based on the three risk levels:

TABLE 4
Risk Level: Detecting Strategy
Low: (CCN/CN > 1.33 × AVERAGE) OR (LOCC/LOC > 1.33 × AVERAGE)
Medium: (CCN/CN > 1.5 × AVERAGE) OR (LOCC/LOC > 1.5 × AVERAGE)
High: (CCN/CN > 1.66 × AVERAGE) OR (LOCC/LOC > 1.66 × AVERAGE)

Wherein, the metric of line of code (LOC) is used to calculate all the lines of the source code; the line of changed code (LOCC) is the number of lines of the source code that have been changed (added, revised or deleted) in a given period of time. The metric of class number (CN) is used to calculate the total number of classes, while the metric of changed class number (CCN) is used to calculate the number of changed classes in a given period of time. The tag AVERAGE is used to stand for the average of a given metric calculated based on a set of history data.
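Before describing the principle behind Table 4, the instability formula and the Table 3 strategy for divergent changes can be sketched in the same illustrative spirit; the ChangeMetrics type and the divergentChangeRisk function are hypothetical, and the TopValues thresholds for RN are assumed to have been precomputed over all classes.

#include <string>

// Hypothetical inputs for one class: revision number (RN) from the change
// history, plus afferent (Ca) and efferent (Ce) coupling from static scanning.
struct ChangeMetrics {
    double rn;
    double ca;
    double ce;
};

// Instability as defined above: I = Ce / (Ca + Ce).
double instability(const ChangeMetrics& m) {
    double denom = m.ca + m.ce;
    return denom > 0.0 ? m.ce / denom : 0.0;
}

// Divergent change risk level following the strategies of Table 3.
// rnTop15, rnTop10 and rnTop5 are the precomputed TopValues(15%/10%/5%)
// thresholds for the revision number.
std::string divergentChangeRisk(const ChangeMetrics& m,
                                double rnTop15, double rnTop10, double rnTop5) {
    double i = instability(m);
    if (m.rn >= rnTop5 || (i > 0.9 && m.ca > 1)) return "High";
    if (m.rn >= rnTop10 || (i > 0.8 && m.ca > 1)) return "Medium";
    if (m.rn >= rnTop15 || (i > 0.7 && m.ca > 1)) return "Low";
    return "None";
}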
The basic principle of the algorithm is clear: if too many changes are made to the source code in a relatively short period of time, it is very likely that a big design up front has occurred during the design process. For C++ source code, redundant functionalities (the same or similar functions repeatedly implemented at different locations) should be determined at the method/function level. The determination algorithm should be able to compare the similarity of different methods/functions. For example, Table 5 lists the typical examples of compound logic conditional expressions for determining scattered/redundant functionalities based on the three risk levels:

TABLE 5
Risk Level: Detecting Strategy
Low: (SS = LOW) OR (SS = MEDIUM) OR ((SS = HIGH) AND (LS = LOW))
Medium: (SS = HIGH) AND (LS = MEDIUM)
High: (SS = HIGH) AND (LS = HIGH)

In Table 5, if the code structure and logic control flow of two given methods are highly similar, they will be regarded as suspicious of implementing similar functions. The algorithm determines scattered/redundant functionalities based on the composite metric of structure similarity (SS) and logic similarity (LS). The measurement methods for structure similarity and logic similarity are explained below. For example, Table 6 lists the typical examples of compound logic conditional expressions for determining the structure similarity of code based on the three similarity levels:

TABLE 6
Similarity Level: Detecting Strategy
Low: (PS > 0.8) AND (DLOC < 15) AND (DLOC/LOCmax < 0.2)
Medium: (RVS = 1) AND (PS > 0.8) AND (DLOC < 10) AND (DLOC/LOCmax < 0.1)
High: (RVS = 1) AND (PS > 0.9) AND (DLOC < 5) AND (DLOC/LOCmax < 0.05)

In Table 6, code structure similarity is determined based on parameter similarity (PS), return value similarity (RVS) and difference in lines of code (DLOC). Wherein, parameter similarity (PS) is used to measure the similarity of the input parameters between two given methods, and is calculated as follows:

PS = STPN / PNmax

where PN stands for the number of input parameters, PNmax means comparing the metric PN of the two given methods and selecting the maximum one, and STPN stands for the number of input parameters of the same data type. For example, suppose the parameter lists of two given methods are: method A (float a, int b, int c) and method B (float d, int e, long f, char g). Then, STPN is 2, because the two methods both have at least one input parameter of the float (floating point) type and one input parameter of the int (integer) type. The PN value of method A is 3, the PN value of method B is 4, and therefore the value of PNmax is 4. Thus, the calculated metric PS is 0.5 (2/4). The metric of return value similarity (RVS) is either 0 or 1. RVS is 1 when the two given methods have the same type of return values; otherwise, RVS is 0. The difference in lines of code (DLOC) means the difference in lines of code (LOC) between two given methods. It can only be a non-negative value. Therefore, if method A has 10 lines of source code while method B has 15 lines of source code, the DLOC between method A and method B is 5. LOCmax means comparing the LOC of two given methods and selecting the maximum value between them. For example, Table 7 lists the typical examples of compound logic conditional expressions for determining the logic similarity of code based on the three similarity levels, which are used to compare the similarity of the logic flow between two given methods.
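Before turning to Table 7, the structure similarity metrics just described can be computed directly. The following sketch is illustrative only (the MethodSignature type and the function names are hypothetical); it treats STPN as the size of the multiset intersection of the two parameter type lists, which reproduces the worked example above (one shared float and one shared int give STPN = 2).

#include <algorithm>
#include <cstdlib>
#include <map>
#include <string>
#include <vector>

// Hypothetical description of a method signature used only for this sketch.
struct MethodSignature {
    std::vector<std::string> paramTypes;   // e.g. {"float", "int", "int"}
    std::string returnType;
    int linesOfCode;
};

// STPN: number of input parameters of the same data type, counted as the
// multiset intersection of the two parameter type lists.
int sameTypeParamNumber(const MethodSignature& a, const MethodSignature& b) {
    std::map<std::string, int> counts;
    for (const auto& t : a.paramTypes) ++counts[t];
    int stpn = 0;
    for (const auto& t : b.paramTypes) {
        if (counts[t] > 0) { --counts[t]; ++stpn; }
    }
    return stpn;
}

// Parameter similarity PS = STPN / PNmax.
double parameterSimilarity(const MethodSignature& a, const MethodSignature& b) {
    std::size_t pnMax = std::max(a.paramTypes.size(), b.paramTypes.size());
    return pnMax == 0 ? 1.0
                      : static_cast<double>(sameTypeParamNumber(a, b)) / pnMax;
}

// Return value similarity RVS: 1 when the return types match, otherwise 0.
int returnValueSimilarity(const MethodSignature& a, const MethodSignature& b) {
    return a.returnType == b.returnType ? 1 : 0;
}

// Difference in lines of code DLOC (a non-negative value).
int differenceInLinesOfCode(const MethodSignature& a, const MethodSignature& b) {
    return std::abs(a.linesOfCode - b.linesOfCode);
}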
TABLE 7
Similarity Level: Detecting Strategy
Low: (DCC < 5) AND (DCC/CCmax < 15) AND (CFS > 0.85)
Medium: (DCC < 3) AND (DCC/CCmax < 10) AND ((CFS = 1) OR (CFSS = 1))
High: (DCC < 3) AND (DCC/CCmax < 10) AND (CFS = 1) AND (CFSS = 1)

In Table 7, the logic similarity (LS) of code is compared based on the metrics of cyclomatic complexity (CC), difference cyclomatic complexity (DCC) and control flow similarity (CFS). Cyclomatic complexity (CC) is a well-known metric, which measures the complexity of a method by calculating the number of independent logical paths. Many static source code analysis tools provide the capability of calculating this metric. Difference cyclomatic complexity (DCC) is calculated based on the metric CC. It can only be a non-negative value. Assuming that the value of metric CC of method A is 20, and the value of metric CC of method B is 23, then the difference cyclomatic complexity (DCC) between method A and method B is 3. CCmax means comparing the metric CC of two given methods and selecting the maximum one. Control flow similarity (CFS) is used to measure the similarity of the control flow statements between two given methods. This will be explained taking the C++ language as an example. In C++, logic control statements include if, then, else, do, while, switch, case, for loop, etc. CFS is calculated based on the logic control blocks in a given method. Assuming there are two given methods, A and B, we put the control flow statements used in the methods respectively in two sets, i.e., setA and setB:

setA = {if, for, switch, if, if}
setB = {if, switch, for, while, if}

Then the intersection between setA and setB means the logic control blocks that have appeared in both methods:

intersection = setA ∩ setB = {if, for, switch, if}

The metric CFS is calculated by the following equation:

CFS = (NE of the intersection) / MAX(NE of setA, NE of setB) = 4/5 = 0.8

Here, NE stands for the number of elements in a set; MAX(NE of setA, NE of setB) stands for the maximum value between the NE of setA and the NE of setB. The metric CFSS is used to calculate the similarity between control flow sequences, and is valid only when the logic control blocks of the same type in one method are all included in the other method. It takes one of two values: 0 and 1. CFSS will be 1 when all the logic control blocks in one method appear in the same order in the other method; CFSS should be 0 when any control block appears in a different order. For example, if the control blocks of three given methods are as follows:

setA = {if, if, for, if}
setB = {if, for, if}
setC = {switch, if, if, for}

it can be seen that setA fully includes setB. Each logic control block in setB appears in setA in the same order, and therefore the CFSS between method A and method B is 1. Although setC includes all the elements in setB, they appear in a different order, and therefore the CFSS between method B and method C is 0. For the purpose of description, some examples are given below for code that is different in form but logically repeated.
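Before turning to those examples, the CFS and CFSS computations can also be sketched. This is illustrative only; in particular, CFSS is implemented here as a check that the shorter control-flow sequence appears as an ordered subsequence of the longer one, which is one possible reading of the definition above, and the function names are hypothetical.

#include <map>
#include <string>
#include <utility>
#include <vector>

// Control flow similarity:
// CFS = (NE of the intersection) / MAX(NE of setA, NE of setB),
// where the "sets" keep duplicates, so the intersection is taken as a multiset.
double controlFlowSimilarity(const std::vector<std::string>& setA,
                             const std::vector<std::string>& setB) {
    std::map<std::string, int> counts;
    for (const auto& s : setA) ++counts[s];
    int intersection = 0;
    for (const auto& s : setB) {
        if (counts[s] > 0) { --counts[s]; ++intersection; }
    }
    std::size_t maxSize = setA.size() > setB.size() ? setA.size() : setB.size();
    return maxSize == 0 ? 1.0 : static_cast<double>(intersection) / maxSize;
}

// Control flow sequence similarity CFSS: 1 when every control block of the
// shorter sequence appears in the longer sequence in the same order,
// otherwise 0 (assumed interpretation of the definition above).
int controlFlowSequenceSimilarity(std::vector<std::string> a, std::vector<std::string> b) {
    if (a.size() > b.size()) std::swap(a, b);
    std::size_t i = 0;
    for (const auto& s : b) {
        if (i < a.size() && a[i] == s) ++i;
    }
    return i == a.size() ? 1 : 0;
}

With the example sequences above, controlFlowSimilarity returns 4/5 = 0.8, and controlFlowSequenceSimilarity returns 1 for setA and setB but 0 for setB and setC.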
Code 1:
bool PVProcessing::isVrefInRange(const Float32_t vRefIntVoltage)
{
    bool retVal = FALSE;
    if (vRefIntVoltage <= pidSssVrefIntLowerLimit)
    {
        itsIDiagnoisticsHandler.setCondition(VREF_INT_TOO_LOW);
    }
    else
    {
        if (vRefIntVoltage >= pidSssVrefIntUpperLimit)
        {
            itsIDiagnoisticsHandler.setCondition(VREF_INT_TOO_HIGH);
        }
        else
        {
            retVal = TRUE;
        }
    }
    return retVal;
}

Code 2:
bool PVProcessing::isVddhInRange(const Float32_t vddhVoltage)
{
    bool retVal = FALSE;
    if (vddhVoltage <= pidSssVddIntLowerLimit)
    {
        itsIDiagnoisticsHandler.setCondition(VDDH_TOO_LOW);
    }
    else
    {
        if (vddhVoltage >= pidSssVddhUpperLimit)
        {
            itsIDiagnoisticsHandler.setCondition(VDDH_TOO_HIGH);
        }
        else
        {
            retVal = TRUE;
        }
    }
    return retVal;
}

Code 1 and code 2 above are different in form but repeated in logic. It is very easy for a human evaluator to point out the redundant functionality of code 1 and code 2, because the two methods implement very similar functions. However, it is very difficult for most static analysis tools to detect such cases where the code is different but the logic is similar. These tools generally can only look for simple repetitions of code, i.e., repeated code generated through "copy-paste" of the same code block. The two methods here implement the same function, but the variables used have different names. Therefore, although the internal logic is very similar, due to some differences in the source code, static analysis tools cannot determine that these are two segments of logically duplicated code. For the embodiments of the present invention, the metric of structure similarity (SS) of the code is calculated first. Since both methods have a float parameter and return a bool value, the values of metrics PS and RVS are both 1. The two methods have the same metric LOC, and therefore the metric DLOC = 0. Then, the metric structure similarity (SS) of the two given methods is classified as HIGH. For the metric of logic similarity (LS), since the two methods have the same value of the metric cyclomatic complexity (CC), the value of the metric DCC is 0. The two methods have the same logic control blocks ({if/else, if/else}) appearing in the same sequence, and therefore the values of both CFS and CFSS are 1. Therefore, the metric logic similarity (LS) of the two given methods is also classified as HIGH. After the metrics SS and LS are calculated, by applying the algorithm in Table 5, the risk that the two methods have the problem of redundant functionality will be classified as HIGH. For example, Table 8 lists the typical examples of compound logic conditional expressions for determining long methods based on the three risk levels.

TABLE 8
Risk Level: Detecting Strategy
Low: (CC, HigherThan(15))
Medium: (CC, HigherThan(20))
High: (CC, HigherThan(25))

It can be seen from Table 8 that long methods can be determined based on cyclomatic complexity (CC). For example, Table 9 lists the typical examples of compound logic conditional expressions for determining complex classes based on the three risk levels.

TABLE 9
Risk Level: Detecting Strategy
Low: (LOC, TopValues(15%)) AND (((AN, TopValues(15%)) AND (MN, TopValues(15%))) OR (MCCmax, TopValues(15%)))
Medium: (LOC, TopValues(10%)) AND (((AN, TopValues(10%)) AND (MN, TopValues(10%))) OR (MCCmax, TopValues(10%)))
High: (LOC, TopValues(5%)) AND (((AN, TopValues(5%)) AND (MN, TopValues(5%))) OR (MCCmax, TopValues(5%)))

In Table 9, complex classes are determined based on the metrics of line of code (LOC), attribute number (AN), method number (MN) of the source code and maximum method cyclomatic complexity (MCCmax) of the given class.
Wherein, the attribute number (AN) is used to calculate the number of attributes of the given class, and the method number (MN) is used to calculate the number of methods in the given class. The maximum method cyclomatic complexity (MCCmax) of the given class is the maximum value of CC among the methods in the given class. The basic principle of Table 9 is: a class with too many lines of code that also has too many attributes and methods, or that has at least one very complex method, will be identified as suspicious of being a complex class. For example, Table 10 lists the typical examples of compound logic conditional expressions for determining long parameter lists based on the three risk levels.

TABLE 10
Risk Level: Detecting Strategy
Low: (PN, HigherThan(4))
Medium: (PN, HigherThan(6))
High: (PN, HigherThan(8))

In Table 10, the parameter number (PN) is used to measure the number of the input parameters of a given method. The basic principle of the algorithm shown in Table 10 is: a method with more than 4 parameters should be regarded as suspicious of having a long parameter list. For example, Table 11 lists the typical examples of compound logic conditional expressions for determining message chains based on the three risk levels.

TABLE 11
Risk Level: Detecting Strategy
Low: (ICN, TopValues(15%)) AND (ICN > 3)
Medium: (ICN, TopValues(10%)) AND (ICN > 5)
High: (ICN, TopValues(5%)) AND (ICN > 6)

In Table 11, message chains are determined based on the indirect calling number (ICN). The metric ICN counts, among all the efferent references of a given method, those that are not calls to its direct friends. Here, calling a direct friend means that:
(1) the given method directly calls other methods in the class where it is;
(2) the given method directly calls public variables or methods in a range visible to it;
(3) the given method directly calls methods in the objects of its input parameters; or
(4) the given method directly calls methods of the local objects that it creates.
Except for the four types above, all the other efferent calls made by the given method will be classified as indirect calls and counted in ICN. Those skilled in the art can realize that error-prone patterns such as code duplication, unused methods, cyclic dependency, and bad dependency can be directly identified by static code analysis tools, and therefore their determination algorithms will not be described in detail in the embodiments of the present invention. Step102: inputting the probability determined in Step101 into an artificial neural network (ANN), and, based on the artificial neural network, determining a prediction result of whether the code violates a predetermined design principle and a quantified degree to which the code violates the design principle. After determining the probabilities of the error-prone patterns, the embodiments of the present invention can evaluate the software design quality according to the prediction result of the artificial neural network. This can be achieved by mapping the probabilities of error-prone patterns to key design principles. Good software design should focus on reducing the business risks associated with building technical solutions. It needs to be sufficiently flexible to adapt to technical changes in hardware and software, as well as changes in user needs and application scenarios.
In the software industry, people generally agree that effectively following some key design principles can minimize R&D costs and maintenance workload, and improve software availability and scalability. Practice has proved that the implementation and execution of these design principles is essential for ensuring software quality. For example, design principles may include at least one of the following: Separation of Concerns, Single Responsibility Principle, Least Knowledge, Don't Repeat Yourself, Minimize Upfront Design, etc. Table 12 is an example description of the five most well-known principles for software design.

TABLE 12
Design principle: Description
Separation of Concerns (SOC): The functions of the entire system should be separated into distinct sections, and overlapping should be minimized.
Single Responsibility Principle (SRP): Each component or module should be relatively independent, and should be responsible for only one specific feature or function point.
Least Knowledge (LOD): A component or object should not know the internal details of other components or objects.
Don't Repeat Yourself (DRY): Any feature or function should be implemented at only one location of the software. It should not be repeated in other components or modules.
Minimize Upfront Design (YAGNI): Big design up front for an entire system should be avoided if the requirements may change.

The above design principles will be described in more detail below. The principle of Separation of Concerns (SOC) is one of the basic principles of object-oriented programming. If correctly followed, it will enable software to have the characteristics of loose coupling and high cohesion. Therefore, error-prone patterns such as cyclic dependency and bad dependency, which reflect the tight coupling characteristic and the symptoms of incorrect use of component functions, are obvious signs that the principle of SOC has not been followed in the design process. In addition, SOC also helps to minimize the amount of work required to change the software. Therefore, it is also related to error-prone patterns (for example, shotgun surgery, divergent change, etc.) that cause difficulties in implementing changes. Single Responsibility Principle (SRP): if the scope is narrowed to the level of classes, this principle may be interpreted as "There should not be more than one reason for changing a class". Therefore, error-prone patterns (for example, shotgun surgery and divergent change) that represent frequent code changes may be related to violations of SRP. In addition, methods/classes taking on too many responsibilities are generally logically very complex. Therefore, error-prone patterns (for example, complex class, long method and long parameter list) indicating internal complexity are also related to this principle. The principle of Least Knowledge (LOD): this principle is also called the Law of Demeter, or LoD, which states, "Don't talk to strangers". It indicates that a specific class should only talk to its "close friends" but not to "friends of its friends". Therefore, error-prone patterns like message chain indicate that this principle is not followed. This principle also opposes entanglement of the details of one class with other classes across different architectural levels. Therefore, error-prone patterns such as cyclic dependency and bad dependency are also related to it. Don't Repeat Yourself (DRY): this principle aims to prevent redundancy in the source code, which may otherwise lead to logical contradictions and unnecessary maintenance work.
The error-prone patterns of code duplication and redundant functionality are direct indications of violations of the principle of DRY. Minimize Upfront Design: in general, this principle states that "big" design is unnecessary, and most of the design should be implemented throughout the entire software development process. Therefore, the error-prone pattern BDUF is an obvious violation of this principle. Minimize Upfront Design is also called YAGNI ("You are not gonna need it"), which means that we should only do the designs strictly required for achieving the objectives. Therefore, an identification of an unused method also indicates that this principle is possibly not followed. In the embodiments of the present invention, both the error-prone patterns and the design principles can be flexibly expanded. If needed, more error-prone patterns and design principles can be added, in order to improve flexibility and applicability. The above exemplarily describes some typical examples of design principles. Those skilled in the art may realize that this description is only exemplary and is not used to limit the scope of the embodiments of the present invention. Table 13 lists the correspondence between error-prone patterns and design principles. Wherein, each error-prone pattern is listed together with the design principles that it violates.

TABLE 13
Shotgun surgery: Separation of Concerns; Single Responsibility Principle
Divergent change: Separation of Concerns; Single Responsibility Principle
Big design up front: Minimize Upfront Design
Scattered/redundant functionality: Don't Repeat Yourself
Cyclic dependency: Separation of Concerns; Least Knowledge
Bad dependency: Separation of Concerns; Least Knowledge
Complex class: Single Responsibility Principle
Long method: Single Responsibility Principle
Code duplication: Don't Repeat Yourself
Long parameter list: Single Responsibility Principle
Message chain: Least Knowledge
Unused method: Minimize Upfront Design

The embodiments of the present invention may use artificial intelligence (AI) algorithms to simulate the relationships in Table 13 above, and then evaluate the compliance with these key design principles based on the occurrence of related error-prone patterns. In one embodiment, the artificial neural network comprises connections between error-prone patterns and design principles; and determining, based on the artificial neural network, a prediction result of whether the code violates predetermined design principles and a quantified degree to which the code violates the design principles comprises: based on the connections in the artificial neural network and the probabilities of the error-prone patterns, determining a prediction result of whether the code violates the design principles and a quantified degree to which the code violates the design principles. Here, the artificial neural network is a computing model, which comprises a large number of nodes (or neurons) connected to each other. Each node represents a specific output function, called the activation function. Each connection between two nodes represents a weighted value of the signal passing through the connection, called a weight, which is equivalent to the memory of an artificial neural network. The output of the network varies as the connection method of the network, the weight value and the activation function change. Preferably, the method further comprises: adjusting the weight of the connection in the artificial neural network based on a self-learning algorithm.
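As a minimal sketch of how the Table 13 correspondence might be held in memory, for example to decide which connections between input units (error-prone patterns) and output groups (design principles) the network should contain, the mapping could be encoded as follows. The function name is hypothetical and the abbreviations reuse those introduced in Table 12.

#include <map>
#include <set>
#include <string>

// Table 13 as a lookup table: each error-prone pattern maps to the design
// principles it violates (SOC, SRP, LOD, DRY, YAGNI as in Table 12).
std::map<std::string, std::set<std::string>> patternToPrinciples() {
    return {
        {"shotgun surgery",         {"SOC", "SRP"}},
        {"divergent change",        {"SOC", "SRP"}},
        {"big design up front",     {"YAGNI"}},
        {"scattered functionality", {"DRY"}},
        {"redundant functionality", {"DRY"}},
        {"cyclic dependency",       {"SOC", "LOD"}},
        {"bad dependency",          {"SOC", "LOD"}},
        {"complex class",           {"SRP"}},
        {"long method",             {"SRP"}},
        {"code duplication",        {"DRY"}},
        {"long parameter list",     {"SRP"}},
        {"message chain",           {"LOD"}},
        {"unused method",           {"YAGNI"}}
    };
}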
Preferably, connections between error-prone patterns and design principles include at least one of the following: a connection between shotgun surgery and Separation of Concerns; a connection between shotgun surgery and Single Responsibility Principle; a connection between divergent change and Separation of Concerns; a connection between divergent change and Single Responsibility Principle; a connection between big design up front and Minimize Upfront Design; a connection between scattered functionality and Don't Repeat Yourself; a connection between redundant functionality and Don't Repeat Yourself; a connection between cyclic dependency and Separation of Concerns; a connection between cyclic dependency and Least Knowledge; a connection between bad dependency and Separation of Concerns; a connection between bad dependency and Least Knowledge; a connection between complex class and Single Responsibility Principle; a connection between long method and Single Responsibility Principle; a connection between code duplication and Don't Repeat Yourself; a connection between long parameter list and Single Responsibility Principle; a connection between message chain and Least Knowledge; and a connection between unused method and Minimize Upfront Design. Wherein, in order to infer the target (violating design principles (such as SOC, SRP, LOD, DRY, and YAGNI)) from an error-prone pattern (called extracted feature in the following algorithm), artificial intelligence algorithms and artificial intelligence neural networks are applied to establish the relationship between error-prone patterns and design principles. FIG.2is a structural diagram of the artificial neural network of the embodiments of the present invention. InFIG.2, the artificial neural network comprises an input layer200, a hidden layer202, and an output layer204, wherein the output layer204comprises a softmax layer400and a maximum probability layer600. The maximum probability layer600comprises various predetermined design principles, such as Minimize Upfront Design601, Single Responsibility Principle602, Separation of Concerns603, etc. InFIG.2, the neuron (also called unit) represented by a circle in the input layer200is an error-prone pattern201, which is inputted through linear transformation by a weight matrix, and nonlinear transformation is performed on the input by the activation function, in order to improve the variability of the entire model and the representational power of the knowledge model. The significance of the artificial neural network is based on the assumption that there is a hidden mathematical equation that can represent the relationship between the input of an extracted feature and the output of a violation of a design principle, and the parameters in the mathematical equation are unknown. For the artificial neural network, a training dataset may be used to train the model, and a large number (for example, hundreds) of internal parameters recorded in the dataset are iteratively updated until the parameters are so trained that the equation conforms to the relationship between the input and the output. At this point, these parameters reflect the essence of the model; the model can represent the connections between error-prone patterns and design principles. Input layer200: the input to the network is a feature (probability of an error-prone pattern) extracted in the code quality inspection. 
For example, if there is a segment of code that has a high risk of shotgun surgery, a number can be used to indicate the risk level (for example, 0.8 for high risk, 0.5 for medium risk, and 0.3 for low risk). For example, if the model input is defined as 12 patterns, 12 numbers are used to represent the quality characteristics of the code. These numbers form digital vectors and will be used as the model input. The output of the output layer204is the predicted type of a violation of the design principles. For each code snippet, an attempt will be made to detect the probability of the violation of a principle, and the probability will be classified as one of the three levels: low, medium and high risks. Therefore, for each of the various design principles such as Minimize Upfront Design601, Single Responsibility Principle602and Separation of Concerns603, there will be three output units corresponding to three types. Specifically, the output of the hidden layer202, a 3-element vector, is used as the input vector of the softmax layer400in the output layer204. In the softmax layer400, the elements in the vector are normalized to decimals in the range of 0 to 1 and totaling to 1 by the softmax function, in order to convert these elements into the probability of a type. The maximum probability layer600predicts the probability that the analyzed code violates a specific principle by selecting the type with the highest value as the final output. For the artificial neural network shown inFIG.2, its training process involves forward propagation and back propagation. Forward propagation is a network process that, based on current parameters (weights), enhances the degree of freedom of the model and generates a prediction result (i.e., the probability of the violation of a principle) by use of linear transformation of matrix multiplication and nonlinear transformation of excitation functions (sigmoid, tank, and relu). In combination with back propagation, the parameters used to calculate the prediction can be better updated to generate a prediction closer to the true label (a principle violation does exist in this code, etc.). After a large amount of training (for example, thousands of steps), the prediction is almost the same as the true label, which means that the parameters in the model can form the relationship between the input (feature or metric) and the output (violated principle). In the back propagation step, the gap (“loss”) is first calculated as the difference between the predicted value and the actual label, and the loss is generated by a cost function. The “loss” provides a numerical measure of prediction error for the network. By applying chain rules, the loss value can be propagated to each neuron in the network, and the parameters related to that neuron can be modified accordingly. By repeating this process many times (for example, a thousand times), the loss value becomes lower and lower, and the predicted value becomes closer and closer to the true label. The training data of the artificial neural network comprises error-prone patterns and manual evaluation of whether design principles are violated. The training data may be collected from different projects, which have completely different characteristics. Therefore, after training, the artificial neural network model can reflect project characteristics in different industries, different development processes and different situations. 
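Before discussing the flexibility of a trained model, the forward pass described above can be summarized in a short, purely illustrative sketch: the error-prone-pattern probabilities go in, a hidden layer applies a weighted sum and a nonlinear activation, three scores per design principle are normalized by softmax, and the maximum probability layer picks the risk class. The layer sizes, the weight layout and the choice of a sigmoid activation are assumptions made only for this example, not a definitive implementation.

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <iterator>
#include <vector>

// One dense layer: out[j] = activation(b[j] + sum_i w[j][i] * in[i]).
std::vector<double> denseLayer(const std::vector<double>& in,
                               const std::vector<std::vector<double>>& w,
                               const std::vector<double>& b,
                               bool applySigmoid) {
    std::vector<double> out(b);
    for (std::size_t j = 0; j < out.size(); ++j) {
        for (std::size_t i = 0; i < in.size(); ++i) out[j] += w[j][i] * in[i];
        if (applySigmoid) out[j] = 1.0 / (1.0 + std::exp(-out[j]));
    }
    return out;
}

// Softmax layer: normalize the scores to values in [0, 1] that sum to 1.
std::vector<double> softmax(const std::vector<double>& scores) {
    double maxScore = *std::max_element(scores.begin(), scores.end());
    std::vector<double> probs(scores.size());
    double sum = 0.0;
    for (std::size_t i = 0; i < scores.size(); ++i) {
        probs[i] = std::exp(scores[i] - maxScore);
        sum += probs[i];
    }
    for (double& p : probs) p /= sum;
    return probs;
}

// Maximum probability layer for one design principle: feed the pattern
// probabilities through the hidden layer and the 3-unit output layer, then
// return the risk class (0 = low, 1 = medium, 2 = high) with the highest
// softmax probability.
std::size_t predictRiskClass(const std::vector<double>& patternProbabilities,
                             const std::vector<std::vector<double>>& hiddenWeights,
                             const std::vector<double>& hiddenBias,
                             const std::vector<std::vector<double>>& outputWeights,
                             const std::vector<double>& outputBias) {
    std::vector<double> hidden =
        denseLayer(patternProbabilities, hiddenWeights, hiddenBias, true);
    std::vector<double> probs =
        softmax(denseLayer(hidden, outputWeights, outputBias, false));
    return static_cast<std::size_t>(
        std::distance(probs.begin(), std::max_element(probs.begin(), probs.end())));
}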
That is, a well-trained model can be very flexible and scalable, and can be used to analyze different types of projects without pre-configuration according to the characteristics of a project, because the model can learn through training data. Another benefit of the use of the artificial neural network is that the model is based on self-learning algorithms. The model can update internal parameters to dynamically adjust its new data input. When the trained model receives a new input and propagates it forward to predict the violation of a principle, a human reviewer can check the prediction result and provide judgment. If the judgment is negative to the prediction, the system can obtain this round of forward propagation of the input and the final human judgment to generate a new data sample point as a future training set, and the model is trained on previous errors and becomes more and more accurate after deployment. Prior to this step, the code review system has successfully completed the steps of accessing the user source code, code scanning for code quality analysis, identifying error-prone patterns, and classifying error-prone patterns to match violations of design principles. Step103: evaluating the design quality of the code based on the prediction result. In order to evaluate software quality, a basic principle to be considered may be that, if a design principle is followed correctly during the design process, the related error-prone patterns should not be identified in the source code. That is, if a certain error-prone pattern is identified, the principle may be violated. Therefore, by observing the occurrence of error-prone patterns, we can measure the compliance with key design principles, and then use traceability to evaluate design quality. In one embodiment, evaluating the design quality of the code based on the prediction result comprises at least one of the following: evaluating the design quality of the code based on whether it violates a predetermined design principle, wherein not violating the predetermined design principle is better in the design quality than violating the predetermined design principle; and evaluating the design quality of the code based on the quantified degree to which it violates the design principle, wherein the quantified degree to which it violates the design principle is inversely proportional to how good the design quality is evaluated to be. In one embodiment, in a typical software development environment, the source code is stored in a central repository, and all changes are controlled. The source code to be reviewed and its revision/change history information (which can be obtained from software configuration management tools such as ClearCase, Subversion, Git, etc.) will be used as the input of the code evaluation method of the present invention. When applying the code evaluation method of the present invention, the user firstly accesses the source code repository and the revision/change history, and generates findings related to code metrics and quality with the help of static code analysis tools. An error-prone pattern detector captures this information together with the revision/change information, and calculates the possibility of any error-prone pattern in the source code. Then, an AI model (for example, an artificial neural network) reads these probabilities as the input, to predict the probabilities of the violation of design principles based on the pre-trained connection between error-prone patterns and design principles. 
Then, a design quality report can be generated based on the prediction result, including information on the predicted violations of design principles and error-prone patterns, and the user can improve the code quality accordingly. Based on the above description, the embodiments of the present invention also provide a system for evaluating code design quality. FIG.3is a structural diagram of the system for evaluating code design quality of the embodiments of the present invention. As shown inFIG.3, the system for evaluating code design quality comprises:a code repository300, configured to store a code to be evaluated301;a static scanning tool302, configured to statically scan the code to be evaluated301;an error prone pattern detector303, configured to determine the probability of an error-prone pattern in the code based on the result of static scanning output by the static scanning tool302; andan artificial neural network304, configured to determine, based on the probability, a prediction result305of whether the code violates a predetermined design principle and a quantified degree to which the code violates the design principle, wherein the prediction result305is used to evaluate the design quality of the code. For example, the artificial neural network304can automatically evaluate the design quality of the code based on the prediction result305. Preferably, the artificial neural network304displays the prediction result305, and the user manually evaluates the design quality of the code based on the display interface of the prediction result305. In one embodiment, the code repository300is further configured to store a modification record of the code; and the error prone pattern detector303is configured to determine the probability of an error-prone pattern in the code based on a compound logic conditional expression comprising the result of static scanning and/or the modification record. Based on the above description, the embodiments of the present invention also provide an apparatus for evaluating code design quality. FIG.4is a structural diagram of the apparatus for evaluating code design quality of the embodiments of the present invention. As shown inFIG.4, the apparatus for evaluating code design quality400comprises:a determining module401, configured to determine the probability of an error-prone pattern in a code based on the result of static scanning of the code;a prediction result determining module402, configured to input the probability into an artificial neural network, and, based on the artificial neural network, to determine a prediction result of whether the code violates a predetermined design principle and a quantified degree to which the code violates the design principle; andan evaluation module403, configured to evaluate the design quality of the code based on the prediction result. In one embodiment, the error-prone patterns include at least one of the following: shotgun surgery; divergent change; big design up front; scattered/redundant functionality; cyclic dependency; bad dependency; complex class; long method; code duplication; long parameter list; message chain; and unused method. In one embodiment, the design principles include at least one of the following: Separation of Concerns; Single Responsibility Principle; Least Knowledge; Don't Repeat Yourself; and Minimize Upfront Design.
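By way of illustration only, the enumerated error-prone patterns and design principles, together with the connections between them that are listed in detail further below, may be encoded as a simple mapping; the following sketch is one possible encoding and is not a definitive or exhaustive representation.

PATTERN_TO_PRINCIPLES = {
    "shotgun surgery": ["Separation of Concerns", "Single Responsibility Principle"],
    "divergent change": ["Separation of Concerns", "Single Responsibility Principle"],
    "big design up front": ["Minimize Upfront Design"],
    "scattered functionality": ["Don't Repeat Yourself"],
    "redundant functionality": ["Don't Repeat Yourself"],
    "cyclic dependency": ["Separation of Concerns", "Least Knowledge"],
    "bad dependency": ["Separation of Concerns", "Least Knowledge"],
    "complex class": ["Single Responsibility Principle"],
    "long method": ["Single Responsibility Principle"],
    "code duplication": ["Don't Repeat Yourself"],
    "long parameter list": ["Single Responsibility Principle"],
    "message chain": ["Least Knowledge"],
    "unused method": ["Minimize Upfront Design"],
}

def principles_at_risk(pattern_risks, threshold=0.5):
    # Map every high-risk pattern to the design principles it is connected to.
    return {principle
            for pattern, risk in pattern_risks.items() if risk >= threshold
            for principle in PATTERN_TO_PRINCIPLES.get(pattern, [])}

at_risk = principles_at_risk({"long method": 0.8, "message chain": 0.4})
# -> {"Single Responsibility Principle"}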
In one embodiment, the determining module401is further configured to receive a modification record of the code, wherein determining the probability of an error-prone pattern in a code based on the result of static scanning of the code comprises: determining the probability of an error-prone pattern in the code based on a compound logic conditional expression comprising the result of static scanning and the modification record. In one embodiment, determining the probability of an error-prone pattern in the code based on a compound logic conditional expression comprising the result of static scanning and/or the modification record comprises at least one of the following: determining the probability of the existence of a shotgun surgery based on a compound logic conditional expression comprising the metrics of afferent coupling, efferent coupling and changing method; determining the probability of the existence of a divergent change based on a compound logic conditional expression comprising the metrics of revision number, instability and afferent coupling; determining the probability of the existence of a big design up front based on a compound logic conditional expression comprising the metrics of line of code, line of changed code, class number, changed class number and the statistical average of the above metrics; determining the probability of the existence of a scattered functionality based on a compound logic conditional expression comprising the metrics of structure similarity and logic similarity; determining the probability of the existence of a redundant functionality based on a compound logic conditional expression comprising the metrics of structure similarity and logic similarity; determining the probability of the existence of a long method based on a compound logic conditional expression comprising the metric of cyclomatic complexity; determining the probability of the existence of a complex class based on a compound logic conditional expression comprising the metrics of line of code, attribute number, method number and maximum method cyclomatic complexity; determining the probability of the existence of a long parameter list based on a compound logic conditional expression comprising the metric of parameter number; determining the probability of the existence of a message chain based on a compound logic conditional expression comprising the metric of indirect calling number, etc. In one embodiment, the artificial neural network comprises connections between error-prone patterns and design principles; and the prediction result determining module402is further configured to determine, based on the connections in the artificial neural network and the probabilities of error-prone patterns, a prediction result of whether the code violates the design principles and a quantified degree to which the code violates the design principles. In one embodiment, the evaluation module403is configured to evaluate the design quality of the code based on whether it violates a predetermined design principle, wherein not violating the predetermined design principle is better in the design quality than violating the predetermined design principle; and to evaluate the design quality of the code based on the quantified degree to which it violates the design principle, wherein the quantified degree to which it violates the design principle is inversely proportional to how good the design quality is evaluated to be. 
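The compound logic conditional expressions enumerated above may be sketched, by way of example and not limitation, as simple threshold tests over the named metrics; the concrete thresholds and the returned risk values (0.8/0.5/0.3) are illustrative assumptions only.

def shotgun_surgery_probability(afferent_coupling, efferent_coupling, changing_methods):
    # Compound logic conditional expression over coupling and change metrics (thresholds assumed).
    if afferent_coupling > 20 and efferent_coupling > 20 and changing_methods > 10:
        return 0.8   # high risk
    if afferent_coupling > 10 and changing_methods > 5:
        return 0.5   # medium risk
    return 0.3       # low risk

def long_method_probability(cyclomatic_complexity):
    # Single-metric expression for the long method pattern (thresholds assumed).
    if cyclomatic_complexity > 15:
        return 0.8
    if cyclomatic_complexity > 10:
        return 0.5
    return 0.3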
In one embodiment, the connections between error-prone patterns and design principles include at least one of the following: a connection between shotgun surgery and Separation of Concerns; a connection between shotgun surgery and Single Responsibility Principle; a connection between divergent change and Separation of Concerns; a connection between divergent change and Single Responsibility Principle; a connection between big design up front and Minimize Upfront Design; a connection between scattered functionality and Don't Repeat Yourself; a connection between redundant functionality and Don't Repeat Yourself; a connection between cyclic dependency and Separation of Concerns; a connection between cyclic dependency and Least Knowledge; a connection between bad dependency and Separation of Concerns; a connection between bad dependency and Least Knowledge; a connection between complex class and Single Responsibility Principle; a connection between long method and Single Responsibility Principle; a connection between code duplication and Don't Repeat Yourself; a connection between long parameter list and Single Responsibility Principle; a connection between message chain and Least Knowledge; and a connection between unused method and Minimize Upfront Design. FIG.5is a structural diagram of the apparatus having a processor and a memory for evaluating code design quality of the embodiments of the present invention. As shown inFIG.5, the apparatus for evaluating code design quality500comprises a processor501and a memory502. The memory502stores an application that can be executed by the processor501, which is used to cause the processor501to execute the method for evaluating code design quality as described above. Wherein, the memory502may be specifically implemented as a variety of storage media such as electrically erasable programmable read-only memory (EEPROM), flash memory, programmable read-only memory (PROM), etc. The processor501may be implemented to comprise one or more central processing units or one or more field-programmable gate arrays, wherein the field-programmable gate array integrates the core(s) of one or more central processing units. Specifically, the central processing unit or central processing unit core may be implemented as a CPU or MCU. It should be noted that not all steps and modules in the above flowcharts and structural diagrams are necessary, and some steps or modules can be ignored based on actual needs. The sequence of execution of the steps is not fixed, and can be adjusted as needed. A functional division of the modules is used only to facilitate the description. In actual implementation, a module may be implemented by multiple modules, and the functions of multiple modules may be implemented by a single module. These modules may be located in a single device or in different devices. The hardware modules in each embodiment may be implemented mechanically or electronically. For example, a hardware module may comprise specially designed permanent circuits or logic devices (for example, dedicated processors, such as FPGA or ASIC) to complete specific operations. A hardware module may also comprise programmable logic devices or circuits temporarily configured by software (for example, general-purpose processors or other programmable processors) for performing specific operations. 
Whether to specifically use mechanical methods or dedicated permanent circuits or temporarily configured circuits (such as software configuration) to implement hardware modules may be determined according to cost and schedule considerations. An embodiment of the present invention also provides a machine-readable storage medium, which stores instructions used to cause a machine to execute the method described herein. Specifically, a system or device equipped with a readable storage medium may be provided, the software program code for implementing the functions of any of the above embodiments is stored on the readable storage medium, and a computer (or CPU or MPU) of the system or device is configured to read and execute the program code stored in the storage medium. In addition, the operating system operating on the computer may also be used to perform part or all of the actual operations through instructions based on the program code. It is also possible to write the program code read from the storage medium to the memory provided in an expansion board inserted into the computer or to the memory provided in an expansion unit connected to the computer, and then the program code-based instructions cause the CPU, etc. mounted on the expansion board or the expansion unit to perform part or all of the actual operations, so as to implement the functions of any of the above embodiments. Implementations of the storage media used to provide the program code include floppy disks, hard disks, magneto-optical disks, optical disks (such as CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD+RW), magnetic tapes, non-volatile memory cards and ROMs. Optionally, the program code may be downloaded from a server computer or a cloud via a communication network. The above are only the preferred embodiments of the present invention and are not intended to limit the scope of the present invention. Any modification, equivalent replacement and improvement made without departing from the motivation and principle of the present invention shall be included in the scope of the present invention. It should be noted that not all steps and modules in the above processes and system structural diagrams are necessary, and some steps or modules may be ignored based on actual needs. The sequence of execution of the steps is not fixed, and can be adjusted as needed. The system structure described in the above embodiments may be a physical structure or a logical structure, i.e., some modules may be implemented by the same physical entity, some modules may be implemented by multiple physical entities, or some modules may be implemented by certain components in several independent devices working together. The present invention has been demonstrated and described in detail through the drawings and preferred embodiments above. However, the present invention is not limited to these disclosed embodiments. Based on the above embodiments, those skilled in the art will appreciate that the code review methods in the different embodiments above may be combined to obtain further embodiments of the present invention, and these embodiments also fall within the scope of the present invention.
76,952
11860765
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT Hereinafter, an embodiment of the present invention will be described in detail with reference to the accompanying drawings. FIG.1is a view showing the overall structure of a fuzzing system100according to an embodiment of the present invention. The overall operation of the fuzzing system will be described with reference toFIG.1. The fuzzing system according to the present invention may include a type reasoner110for automatically inferring type information of system calls, and a type-based fuzzer120for performing system call fuzzing using the type information obtained through the inference. For the description ofFIG.1,FIG.3is referred to first. At step310ofFIG.3, the type reasoner110may automatically infer type information of a system call using a library file provided by a computer operating system. At this point, the computer operating system may include Linux-based operating systems that open the source code and type information of system calls, or Windows operating systems that do not open the source code and type information of system calls. In the embodiment according to the present invention, kernel fuzzing operation of the Windows operating systems that do not open the source code and type information of system calls will be described. For example, the type reasoner110may automatically infer types of Windows system calls. The type reasoner110may receive a library file101provided by the Windows operating system and infer type information102of system calls. At step320ofFIG.3, the type-based fuzzer120may perform system call fuzzing on the basis of the type information of system calls obtained through the inference. The type-based fuzzer120may receive the type information102of system calls and a seed application103, finally search for a kernel error104, and output the searched kernel error104as a result. FIG.2is a view for explaining the operation of calling a system call according to an embodiment of the present invention. InFIG.2, an application210calls an API function defined in a library file provided by a computer operating system (e.g., Windows). Then, the called API function goes through a series of internal function calls, finally reaches a system call stub, and makes a system call that sends a request to the kernel230at the reached system call stub. For example, when the library file is provided by the Windows operating system, the source code is not open, and only a binary code obtained by compiling the source code is provided. In this case, since application developers should be able to call the API function, information on API functions that exist and types of function parameters are formally documented. InFIG.2, the type reasoner110may observe the function call flow occurring in the library file using a static analysis240technique. Static analysis means an automated technique that makes a rough estimate of what will happen in the software without actually executing the software. The type reasoner110may determine a system call to which each parameter of the API function is passed, through the static analysis. The type reasoner110may infer type information of a system call, i.e., the type of each parameter of the system call, through the officially documented API function information. FIG.4is a view for explaining the operation of inferring type information of a system call by the type reasoner110. 
The type reasoner110may perform the steps of binary parsing410, analysis order determination420, function-specific summary generation430, and system call type inference440. Hereinafter, the operation of inferring Windows system call types performed by the type reasoner110will be described. At the binary parsing410step ofFIG.4, the type reasoner110may read and convert the Windows library file into an intermediate language form for static analysis. Specifically, the type reasoner110may first detect the functions existing in the library file, and then convert the machine instruction code included in each function into assembly code. The type reasoner110may express the code of each function in an intermediate language by converting the assembly code into the intermediate language. The binary parsing410step may be implemented by utilizing a binary analysis platform. A binary analysis platform referred to as B2R2 disclosed in the non-patent document of "M. Jung, S. Kim, H. Han, J. Choi, and S. K. Cha, 'B2R2: Building an efficient front-end for binary analysis,' in Proceedings of the NDSS Workshop on Binary Analysis Research, 2019" may be referenced as the binary analysis platform. FIG.6is a view showing a brief summary of the structure (syntax) of the converted intermediate language. In the process of converting into an intermediate language, since the execution flow information of the program is separately collected to generate a control-flow graph, instructions related to the program execution flow are omitted in the drawings. At the analysis order determination420step ofFIG.4, the type reasoner110may determine an order of analysis for the detected functions. The type reasoner110may identify relation information between the detected functions and the other functions called from each of the detected functions, and may generate a function call graph on the basis of the identified relation information. The type reasoner110may determine an order in which a called function (callee function) is analyzed before a calling function (caller function) on the basis of the generated function call graph (topological ordering). At the function-specific summary generation430step ofFIG.4, the type reasoner110may analyze behavior information, including system calls and memory updates occurring in each function, using a modular analysis technique, and generate a summary including the analyzed behaviors. As the modular analysis technique, the method disclosed in the non-patent document of "A. Aiken, S. Bugrara, I. Dillig, T. Dillig, B. Hackett, and P. Hawkins, 'An overview of the Saturn project,' in Proceedings of the ACM SIGPLAN-SIGSOFT Workshop on Program Analysis for Software Tools and Engineering, 2007, pp. 43-48" may be referenced. When a function calls another function and a summary of the called function has already been generated, which system calls the function call generates and how the function call changes the memory are identified using the already generated summary, without the need to repeatedly analyze the called function. In order to reuse summaries in this way, the called function should be analyzed and summarized before the calling function, and thus the analysis order determination420step described above must be performed first. The reason for generating and reusing summaries is to reduce the cost of analysis. When the called function is repeatedly analyzed whenever a function call occurs, the cost of analysis increases dramatically.
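By way of a non-limiting sketch of the topological ordering described above, the following example orders functions so that each callee is summarized before any of its callers; the call graph contents are hypothetical, and Python's standard graphlib module (rather than any component of the disclosed type reasoner) is used for the ordering.

from graphlib import TopologicalSorter

# Hypothetical call graph: each function maps to the set of functions it calls.
call_graph = {
    "OpenHelper": {"CreateFileW"},
    "CreateFileW": {"NtCreateFile"},
    "NtCreateFile": set(),            # system call stub: no further internal calls
}

# static_order() emits dependencies (callees) before dependents (callers),
# so every called function is analyzed and summarized before its callers.
analysis_order = list(TopologicalSorter(call_graph).static_order())
# -> ['NtCreateFile', 'CreateFileW', 'OpenHelper']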
The type reasoner110according to an embodiment of the present invention may perform static analysis using an abstract interpretation technique. The abstract interpretation technique is one of the representative techniques of static analysis. As the abstract interpretation technique, the method disclosed in the non-patent document of "P. Cousot and R. Cousot, 'Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints,' in Proceedings of the ACM Symposium on Principles of Programming Languages, 1977, pp. 238-252" may be referenced. The abstract interpretation technique defines abstract domains and abstract semantics, and analyzes what happens in the program on the basis of the abstract domains and abstract semantics.FIGS.7and8express the abstract domains and abstract semantics used in a static analyzer. For example, in analyzing the binary code using the abstract interpretation technique, an interval domain may be used as an abstract domain to analyze memory access, or the static analysis may track memory access using a constant offset. In an embodiment of the present invention, a method of tracking memory access using a constant offset may be used. Although this choice may cause the analysis result to miss some of the behaviors occurring in a target program (false negatives), the cases in which behaviors that do not occur in the program are falsely included in the analysis result (false positives) may be reduced instead. In addition, the use of the abstract domains and abstract semantics facilitates application of modular analysis to a binary code. Although many studies have used modular analysis for source code analysis, the cases of successfully applying modular analysis to a binary code are limited. When the conventional method of applying the abstract interpretation technique to a binary code is combined with modular analysis, it is difficult to properly summarize the memory updates that occur due to function calls. This is because the upper and lower limits of the interval domain are unclear in many cases. However, the static analyzer according to the present invention, which does not use the interval domain, may generate a summary including memory updates. At the system call type inference440step ofFIG.4, the type reasoner110may infer the type information of each system call by synthesizing the parameter information passed to the system call on the basis of the summary information collected for each function. The type reasoner110may finally infer the type information of each system call by acquiring information on the types of the parameters passed to the system call on the basis of the summary information collected for each function. When a parameter passed to the system call has a simple type such as an integer or a handle, or a parameter of an API function is directly passed, the type reasoner110may directly determine the type information of the system call parameter. On the other hand, when a pointer value indicating a memory area is passed to the system call as a parameter, the type reasoner110should also infer the type of the content pointed to by the pointer with reference to an analyzed memory state. First, when the size of the memory space pointed to by the pointer is passed as another parameter of the system call, it may be inferred that the content pointed to by the pointer is an array.
When this condition is not satisfied, the content pointed to by the pointer is regarded as a structure, and the types are recursively inferred for the content stored in the memory and adopted as the types of the structure fields. In this process, the pointer type of a single integer is regarded as the pointer type of a structure having that integer as its only field. In the process of inferring the structure type, it is important to determine how far the structure extends. To this end, in the embodiment of the present invention, the memory access pattern is observed through data flow analysis, and the range of the structure may be inferred based on this. As the data flow analysis, the method disclosed in the non-patent document of "A. V. Aho, M. S. Lam, R. Sethi, and J. D. Ullman, Compilers: Principles, Techniques, and Tools, 2nd edition. Addison Wesley, 2006" may be referenced. A problem that may occur in the process of inferring type information of a system call is that, when one system call is called from several points, the type information of the system call may be inferred differently at each of the points. Conventionally, when conflicting results are obtained, it is concluded that the type cannot be determined, whereas in the embodiment of the present invention, the type statistically observed at more points is selected. Through this method, fuzzing efficiency may be increased by providing type information that is highly likely to be correct, although not perfect, to the fuzzing module. FIG.5is a view for explaining the operation of performing system call fuzzing in a fuzzing system according to an embodiment of the present invention. The type-based fuzzer120of the present invention may intercept system call parameters generated by executing a seed application and mutate them. Any software executed on the Windows operating system may be used as the seed application. When the seed application is executed, a great many system calls may be called internally to interact with the kernel. The type-based fuzzer120may intercept the called system call in transit and operate by mutating the values of the system call parameters. At this point, the type-based fuzzer120may mutate the call parameter values by using the type information of the system call. For example, when an integer type parameter is mutated, simply the integer value thereof needs to be mutated, whereas when the value of a pointer type parameter is mutated, the content stored at the location pointed to by the pointer should be mutated. When an appropriate mutation is made considering the type information in this way, it is possible to test the kernel code and find errors more efficiently. Referring toFIG.5, the type-based fuzzer120may perform the steps of collecting seed application information510, executing the seed application520, mutating system call parameters530, observing kernel errors540, and storing error information550. At the step of collecting seed application information510ofFIG.5, the type-based fuzzer120may collect seed application information related to the number of system calls made by one execution of the seed application. The type-based fuzzer120may measure how many system calls are made on average by one execution of the seed application. Here, the measured information may be used to estimate the execution progress of the seed application at the subsequent step of mutating system call parameters530.
For example, when it is assumed that N (N is a natural number) system calls are made on average while the seed application is executed, and the seed application has just called the M-th (M≤N) system call, the progress of the application at that point in time may be estimated as M/N. At the step of executing the seed application520ofFIG.5, the type-based fuzzer120may execute a given seed application. When the seed application needs to receive an input through a command line interface or a graphical user interface, an interface input may also be provided together. As the seed application is executed, system calls for interacting with the kernel may be called. At the step of mutating system call parameters530ofFIG.5, the type-based fuzzer120may intercept the content of a system call generated by the seed application, and randomly mutate the parameter values of the intercepted content of the system call. For example, there are various techniques that can be used to intercept a system call, such as a technique of using API functions provided by Windows for debugging or a technique of overwriting the system service descriptor table of the Windows kernel. The type-based fuzzer120may stochastically mutate the system call parameters as it successfully intercepts the system call. For example, the type-based fuzzer120may determine a ratio for mutating the parameter values of the content of the system call on the basis of the execution progress of the seed application, and mutate the system call parameters according to the determined ratio. An effective fuzzing strategy may be freely sought by utilizing the progress of the seed application. In other words, the type-based fuzzer120does not mutate all system call parameters, but may selectively mutate some of the system call parameters, and the ratio of the number of parameters to be mutated may be freely set. When a target for mutating the system call parameters is selected, the values of the selected system call parameters may be mutated using the type information of the system call. At the step of observing kernel errors540ofFIG.5, the type-based fuzzer120may observe whether an error occurs in the kernel code as the values of the mutated system call parameters are passed to the kernel. When the parameters of the system call are mutated and passed to the kernel, an error may occur with a low probability while the kernel code processes the abnormal inputs. When a kernel error does not occur and execution of the seed application is normally completed, the process may return to the step of executing the seed application520and repeat a new execution. On the contrary, when an error occurs in the kernel, the system is rebooted, and a memory dump may be automatically generated. At the step of storing error information550ofFIG.5, the type-based fuzzer120may store and report information on the kernel error when the kernel error and system rebooting are confirmed. A design of reporting the kernel error and returning to a previous step to continue fuzzing without terminating the process is also possible as needed. Whether the type reasoner according to the present invention is able to correctly infer the parameter types of Windows system calls may be tested on the April 2018 version of Windows 10. A total of 7 core Windows library files (ntdll.dll, kernelbase.dll, kernel32.dll, win32u.dll, gdi32.dll, gdi32full.dll, user32.dll) can be analyzed.
In order to measure the accuracy of the type information of the system calls inferred through the analysis, 64 system call functions documented on the Microsoft official website will be used as a benchmark. The 64 system call functions have a total of 326 parameters, and it may be measured how correctly the type reasoner proposed in the embodiment of the present invention infers the types of the system call function parameters. As a result of the test, the type reasoner may correctly infer the types for 69% of the parameters and infer partially correct types for the remaining parameters. From this result, the type reasoner may be expected to show good accuracy even for the remaining system calls that have not been documented. In addition, whether the type information of the system calls obtained by the type reasoner improves the fuzzing effect may be tested on the same April 2018 version of Windows 10. The type-based fuzzer has performed fuzzing for each of 8 seed applications for 48 hours. As a result, when type information is provided, 1.7 times more kernel crashes can be detected compared to a case where the type information is not provided. Finally, it is possible to test whether the technique devised in the present invention can detect previously unknown errors in the latest version of Windows. As a result of testing the January 2020 version of Windows 10, i.e., the latest version at the time of the invention, a total of 11 errors are found. As a result of reporting the found errors to Microsoft, four of the errors are acknowledged as important security vulnerabilities and assigned vulnerability management numbers (CVE-2020-0792, CVE-2020-1246, CVE-2020-1053, CVE-2020-17004). The experimental results show that the technique can effectively detect errors in the Windows kernel. Even if a person does not manually analyze and identify the type information of Windows system calls, system call fuzzing utilizing the type information may be performed. In addition, as the type information is utilized for system call fuzzing, errors in the kernel code may be found more effectively. The device described above may be implemented as hardware components, software components, and/or a combination of the hardware components and the software components. For example, the device and components described in the embodiments may be implemented using one or more general purpose or special purpose computers such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, and any other devices capable of executing and responding to instructions. A processing device may execute an operating system (OS) and one or more software applications executed on the operating system. In addition, the processing device may also access, store, manipulate, process, and generate data in response to execution of the software. Although it has been described that one processing device is used in some cases for convenience of understanding, those skilled in the art will know that the processing device may include a plurality of processing elements and/or a plurality of types of processing elements. For example, the processing device may include a plurality of processors or one processor and one controller. In addition, other processing configurations, such as parallel processors, are also possible.
The software may include computer programs, codes, instructions, or a combination of one or more of those, and it is possible to configure a processing device to operate as desired or to independently or collectively issue a command to the processing device. The software and/or data may be embodied in any kind of machines, components, physical devices, virtual equipment, computer storage media or devices to be interpreted by the processing device or to provide instructions or data to the processing device. The software may be distributed to computer systems connected through a network to be stored or executed in a distributed manner. The software and data may be stored in one or more computer-readable recording media. The method according to the embodiments may be implemented in the form of program instructions that can be executed through various computer means and recorded in computer-readable media. The computer-readable media may store program instructions, data files, data structures, and the like independently or in combination. The program instructions recorded in the media may be specially designed and configured for the embodiment, or may be known and available to those skilled in the art of computer software. Examples of the computer-readable recording media include magnetic media such as hard disks, floppy disks and magnetic tapes, optical media such as CD-ROMs and DVDs, magneto-optical media such as floptical disks, and hardware devices specially configured to store and execute the program instructions, such as ROM, RAM, flash memory, and the like. Examples of the program instructions include high-level language codes that can be executed by a computer using an interpreter or the like, as well as machine language codes such as those generated by a compiler. Although the embodiments have been described above with reference to limited embodiments and drawings, various changes and modifications are possible from the above description by those skilled in the art. For example, an appropriate result may be achieved although the described techniques are performed in an order different from that of the method described above, and/or the described components of the systems, structures, apparatuses, circuits, and the like are coupled or combined in a form different from those of the method described above, or replaced or substituted by other components or equivalents. Therefore, other implementations, other embodiments, and matters equivalent to the claims are also within the scope of the claims described below.
23,444
11860766
The features and advantages of the embodiments described herein will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number. DETAILED DESCRIPTION I. Introduction The following detailed description discloses numerous example embodiments. The scope of the present patent application is not limited to the disclosed embodiments, but also encompasses combinations of the disclosed embodiments, as well as modifications to the disclosed embodiments. References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Numerous exemplary embodiments are described as follows. It is noted that any section/subsection headings provided herein are not intended to be limiting. Embodiments are described throughout this document, and any type of embodiment may be included under any section/subsection. Furthermore, embodiments disclosed in any section/subsection may be combined with any other embodiments described in the same section/subsection and/or a different section/subsection in any manner. II. Example Embodiments As described above, notebooks provide an interactive environment for programmers to develop code, analyze data and inject interleaved visualizations in a single environment. Despite their flexibility, a major pitfall data scientists encounter is unexpected behavior caused by the out-of-order execution model of notebooks. As a result, data scientists face various challenges ranging from notebook correctness, reproducibility and cleaning. Methods and systems are provided that include a framework for performing static analyses on notebook semantics. This framework is general in the sense that it may accommodate a wide range of analyses that are useful for various notebook use cases. This framework has been instantiated on a diverse set of analyses, which have been evaluated on numerous real-world notebooks. 1. Introductory Summary Notebooks have become an increasingly popular development environment for data science. As described above, notebooks provide a dynamic read-eval-print-loop (REPL) experience where developers can rapidly prototype code while interleaving data visualization including graphs, textual descriptions, tables etc. A notable peculiarity of notebooks is that the program i.e., notebook, may be divided into non-scope inducing blocks of code called cells (i.e., code cells). Cells may be added, edited, and deleted on demand by the user. Moreover, cells, regardless of their order in the notebook, may be executed (and re-executed) by the user in any given sequence. 
This feature provides a level of incrementalism that improves productivity and flexibility. At the same time, such execution semantics make notebook behavior notoriously difficult to predict and reproduce. Studies have shown difficulty in reproducing notebooks. In one example, from a large set of notebooks, only 25% of notebooks could be executed without an error and less than 5% were trivially reproducible. Moreover, an abundance of code smells and bugs have been observed in real world notebooks. In the following example, code analysis tooling is introduced to improve notebook quality and communication of the outcomes of various cell execution scenarios.FIG.1Ashows a system100comprising an example notebook analyzer user interface (UI) displaying a plurality of cells entered in a notebook, according to an example embodiment.FIG.1Bshows an example result of a notebook analyzer what-if analysis displayed in the UI of system100, where a potential data leak has been discovered by the notebook analyzer, according to an example embodiment.FIG.1Cshows an example result of a notebook analyzer what-if analysis displayed in the UI of system100, where a potential stale state has been discovered by the notebook analyzer, according to an example embodiment. The example notebook analyzer UI shown inFIGS.1A,1B, and1Cis shown by way of example and not limitation. Other UI embodiments will be apparent to persons skilled in the relevant art(s) based on the discussions and descriptions provided herein. Example 1.1 (a motivating example). System100comprises an example notebook102that has five cells (i.e., code cells), which are displayed via a user interface (UI). The cells are numbered from 1 to 5 inFIGS.1A,1B, and1C. If the cells were part of a script instead of being part of a notebook, execution would proceed as if the cells were merged into a single cell and each statement would be executed as dictated by the regular program control flow (e.g., statements in cell 1 are executed sequentially, followed by cell 2, cell 3, and so on). However, in a notebook, any given cell may be executed at any given time (or in any order) by the user. This may produce a potentially infinite space of possible execution paths due to a lack of constraints on the order in which cells can be executed. Referring again toFIG.1A, in a machine learning example, notebook102may read data from a file into a data frame in cell 1 and in cell 3. Cell 2 may standardize the data, and cell 4 may split the data into test and training segments. In cell 5, the model is trained, tested, and assessed for accuracy. It can be seen that several different orders of execution exist for this particular notebook. For example, one sequence of executing cells could be executing cell 3, cell 4, and cell 5. Another sequence could be executing cell 1, followed by cell 2, cell 4, and cell 5. Furthermore, consider the following example scenario: a user may execute a sequence of cells comprising cell 1, cell 2, cell 4, and cell 5 (e.g., skipping cell 3). However, this execution sequence may result in a data leakage bug (e.g., leakage between training and test data) because the function in cell 2 normalizes the data, and then cell 4 splits the data into train and test data after the normalization, thus resulting in a data leak. If the user, after some investigation, identifies this problem, they may re-execute cell 1, skipping cell 2, and then execute cell 4 and cell 5. The user may be perplexed as the same issue re-occurs, as illustrated by the hypothetical cells sketched below.
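The exact contents of the cells of notebook102are not reproduced in this description; the following is a hypothetical sketch, assuming the pandas and scikit-learn libraries, of cells of the kind described above (the file name and the preprocessing details are illustrative assumptions only).

# Cell 1: read data from a file into a data frame
import pandas as pd
d = pd.read_csv("data.csv")

# Cell 2: standardize the data (note: fitted over the entire data set)
from sklearn.preprocessing import StandardScaler
x = pd.DataFrame(StandardScaler().fit_transform(d), columns=d.columns)

# Cell 3: read the raw data again, directly into x
x = pd.read_csv("data.csv")

# Cell 4: split the data into training and test segments
from sklearn.model_selection import train_test_split
train, test = train_test_split(x, test_size=0.2)

# Cell 5: train a model on `train`, test it on `test`, and assess its accuracy (details omitted)

With cells of this kind, executing cells 1, 2, 4, and 5 splits data that was already standardized over the entire data set in cell 2, which is the leak the notebook analyzer warns about.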
The problem is that the user executed cell 4 which referred to the variable x, which was previously computed by cell 2. As can be seen, a user may quickly get into a confusing situation even for relatively simple notebooks, as the one shown inFIG.1B. Each of the bugs described above demonstrates the ease at which a seemingly simple data science script can result in unforeseen behavior in a notebook environment. Moreover, establishing the root cause is similarly difficult without engaging in time-consuming debugging endeavors. On the other hand, restricting notebook execution semantics removes the flexibility that makes notebooks popular. The present disclosure provides for the use of static analyses, applicable to notebook execution semantics, to retain notebook flexibility while reducing errors and debugging efforts, and includes warning to users, ahead of time, of hypothetical erroneous and/or safe actions. To this end, a notebook analyzer system is provided (e.g., notebook analyzer204shown inFIGS.2and3), which offers notebook users the ability to perform, among other things, a what-if analysis on their actions or potential actions in a notebook programming environment. Actions (or events) may comprise, for example, opening a notebook, cell changes, cell executions, cell creation, cell deletion, etc. This notebook analyzer system204may report potential issues that may occur if an action is undertaken. For instance, referring to notebook102described in Example 1.1 and shown inFIG.1B, notebook analyzer system204may warn the user that the event of executing cell 1 may lead to a data leakage by then executing cell 2, cell 4, and cell 5. Moreover, as shown inFIG.1C, notebook analyzer system204may warn that the event of executing cell 1 can result in a stale state if cell 4 is executed before cell 2. Furthermore, the notebook analyzer204may recommend executing the sequence of cells 3, 4, and 5 as safe to execute after cell 1 is executed. The notebook analyzer system204may also support a wider range of static analyses. For example, further use cases of what-if analyses that may be implemented in the notebook analyzer are described below. Several important notebook development use cases including development time error detection, notebook reproducibility, notebook cleaning, among others may be facilitated and automated using these what-if analyses in the notebook analyzer system204. The notebook analyzer system204employs the theory of Abstract Interpretation to perform static analyses on cell code, thus guaranteeing in-cell termination for the price of an approximate analysis result (it is noted that static analysis is undecidable, in general, for Turing complete languages). The key idea is to over-approximate notebook semantics and computational state σ and instead produce an abstract state σ#which comprises an element of an abstract domain that encodes the analysis property of interest. When analyses are triggered by an event, an inter-cell analysis may be performed by propagating the analyses results to valid successor cells in the notebook. To select valid successor cells the notion of cell propagation dependencies is introduced, which allows pruning away unnecessary sequences of cell executions on-the-fly, and is parametrized by the current abstract state. In this way, abstract state is propagated efficiently while ensuring soundness and termination. 
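A minimal, non-limiting sketch of this bounded propagation is given below; the cell representation, the dependency test, and the depth bound are simplified assumptions intended only to illustrate the idea, not the framework's actual data structures (for brevity, no de-duplication of already-visited states is performed).

def what_if(sigma_source, cells, depth_k):
    # Propagate the abstract state of a source cell to dependent cells, up to depth K.
    # cells: mapping cell_id -> (pre_summary, transformer); pre_summary is the set of
    # unbound variables of the cell, transformer is its abstract cell transformer.
    results = []
    frontier = [(sigma_source, 0)]
    while frontier:
        sigma, depth = frontier.pop()
        if depth == depth_k:
            continue
        for cell_id, (pre_summary, transformer) in cells.items():
            # Propagation dependency: the incoming state can supply the cell's unbound variables.
            if pre_summary <= set(sigma):
                sigma_out = transformer(sigma)   # intra-cell analysis, incoming state as initial state
                results.append((cell_id, depth + 1, sigma_out))
                frontier.append((sigma_out, depth + 1))
    return results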
This framework for performing static analyses on notebook semantics has been instantiated for several analyses tailored to data science and notebook programming. Notebook analyzer204has been evaluated on numerous real-world notebooks and has demonstrated its utility and scalability by an experimental evaluation. At least the following contributions are provided:(1) What-if analysis for notebooks.(2) A what-if framework that supports abstract domains for static analyses.(3) Maintain and analyze phases: this architecture allows for an on-demand what-if analysis for instances where a user may not want a what-if analysis on all actions, all the time. This architecture allows for what-if analyses to be triggered by the user.(4) Cell propagation dependency graph: to avoid unnecessary propagation, the use of pre-summary may determine if the state should be propagated.(5) Instantiated analyses defined for notebooks.(6) Custom properties rules i.e., “contracts”: users may be allowed to specify correctness criteria on cells. 2. Overview An overview is provided for the notebook analyzer (i.e., NBLYZER) static analysis framework for notebooks with reference toFIGS.2and3. For instance,FIG.2is a high-level diagram of a system200comprising a notebook interactive programming environment configured to perform maintenance and/or what-if analysis, according to an example embodiment. As shown inFIG.2, a system200comprises a notebook client202and the notebook analyzer system204(i.e., notebook analyzer204). Notebook analyzer204may comprise an event handler206, an intra-cell analysis engine208, and an inter-cell analysis engine210. FIG.3is a block diagram of a system300showing additional details of the system shown inFIG.2, according to an example embodiment. As shown inFIG.3, system300comprises a computing device302. Computing device302comprises a processor304and memory306. Memory306may comprise a notebook analyzer system204that comprises a static analysis engine310, cells320, events322, analyses324, terminating criteria326, correctness criteria328, and a user interface330. Static analysis engine310may comprise an event handler206, an intra-cell analysis engine208, an inter-cell analysis engine210, global abstract states312, pre-summaries314, and abstract semantics of cells316. These features of systems200and300are described in further detail as follows. Processor304may include one processor or any suitable number of processors, which may include, for example, central processing units (CPUs), microprocessors, multi-processors, processing cores, and/or any other hardware-based processor types described herein or otherwise known. Processor304may be implemented in any type of mobile or stationary computing device. Examples of mobile computing devices include but are not limited to a Microsoft® Surface® device, a personal digital assistant (PDA), a laptop computer, a notebook computer, a tablet computer such as an Apple iPad™, a netbook, a smart phone (such as an Apple iPhone, a phone implementing the Google® Android™ operating system), a wearable computing device (e.g., a head-mounted device including smart glasses such as Google® Glass™, or a virtual headset such as Oculus Rift® by Oculus VR, LLC or HoloLens® by Microsoft Corporation). Examples of stationary computing devices include but are not limited to a desktop computer or PC (personal computer), a server computer (e.g., a headless server), or a gaming console. 
Processor(s)304may run any suitable type of operating system, including, for example, Microsoft Windows®, Apple Mac OS® X, Google Android™, and Linux®. Memory306may comprise one or more memory devices, which may include any suitable type(s) of physical storage mechanism, including, for example, magnetic disc (e.g., in a hard disk drive), optical disc (e.g., in an optical disk drive), solid-state drive (SSD), a RAM (random access memory) device, a ROM (read only memory) device, and/or any other suitable type of physical, hardware-based storage medium. Cells320may store cells comprising code such as the cells 1, 2, 3, 4, and 5 represented in notebook102. Although a single computing device302is shown inFIG.3, system300may be implemented over a plurality of computer devices302and a variety of platforms. Moreover, notebook client202and computing device302may be implemented in a single computing device or over a plurality of computing devices. UI330may be implemented in notebook analyzer system204and/or in a remote notebook client202. In one example embodiment, notebook analyzer system204may be implemented into a notebook server (e.g., in computing device302) as an extension, and notebook client202may be implemented in a user device. Notebook client202and the notebook server may communicate through communication channels via a network (not shown). Notebook client202may advise notebook analyzer204in the notebook server of events and send code to be executed to the notebook server. The notebook server in turn may perform the static analysis described herein and execute the cells using a run-time system. Based on the static analysis, resulting information may be transmitted back to notebook client202, which may display or highlight, in UI330, the cells, cell sequences, and/or lines of code, which may warn the user of potential problems in various scenarios of cell execution (e.g., as shown inFIGS.1A,1B, and1C). In some embodiments, a user may perform actions (or events322) in notebook100via user interface330, such as opening the notebook, adding cells, changing cells, executing cells, and deleting cells, among other things. For each event322, the user may want to initiate a what-if analysis, essentially asking notebook analyzer204, for example, what can occur if cell 1 is executed? This allows the user to, ahead of time, avoid putting notebook102in a state that will lead to an error. Conversely, the user may ask notebook analyzer204which executions are safe if cell 1 is executed. This allows the user to choose from a set of possible safe execution paths. Other examples of what-if questions include: which cells will become isolated if d is renamed to x in cell 2, and which cells are redundant in the opened notebook102, etc. Each of these what-if questions may be useful for further use cases including reproducibility, security auditing, notebook cleaning and simplification, debugging, and education, among other things. From a systems perspective a what-if analysis is a notebook event322that is associated or configured to a set of analyses324. For example, asking about notebook safety will entail a certain set of analyses324, and asking about notebook cleanliness will entail a different set of analyses324. Notebook analyzer204therefore intercepts an event322from notebook client202and determines the appropriate mode of operation. The modes of operation are described below. Maintenance mode. 
In the case that an event322comprises a cell execution and the user has not attached any analyses324to this event (e.g., has not attached a what-if analysis), notebook analyzer204may perform cell maintenance (i.e., intra-cell analysis) for the executed cell. Since a cell execution may result in the concrete state (not shown) of notebook102being updated, notebook analyzer204may provide for the global abstract state312of future invoked analyses to be maintained. In addition, code summaries that enable faster analyses are also to be updated. Notebook analyzer204may perform maintenance on a cell by updating (if the code has changed) intermediate program representations, including parsing the cell code into an abstract syntax tree (AST), converting the AST to a control flow graph (CFG), and producing use definition (U-D) chains. If the cell code has not changed, these intermediate program representations may be retrieved from a cache, for example. Using the CFG, static analyses (e.g., intra-cell analyses) are performed to update notebook102's abstract state (i.e., the resultant abstract state from a static analysis, which is used to perform static analyses in the future). In Section 3.2.1 a more detailed account of the maintenance process is provided. What-If mode. For a what-if analysis (e.g., conducted for an event having a subset of analyses324associated with it), an inter-cell analysis may be performed. Here, starting from the global notebook abstract state312, a set of possible output abstract states is computed corresponding to the set of possible cell executions up to a limit K depth of cells, or until no new information can be obtained from additional cell executions. In this inter-cell analysis process, for each cell (e.g., of notebook102), inter-cell analysis engine210is configured to check which other cells have a propagation dependency, and propagate the computed abstract state to the dependent cells, for which the incoming abstract state is treated as an initial state. For each cell the output abstract state is checked against correctness criteria328; if an error is found, a report may be updated, which may serve as an instruction for notebook client202to notify the user as to the consequences of the event. A report may include information such as the affected cell, line number, and bug type, as well as metrics such as runtime, memory usage, etc.FIGS.1B and1Cshow user interface330displays based on such a report. In Section 3.2.2 a more detailed account of the what-if (inter-cell) analysis process is provided. In the case that the event is ignored by notebook analyzer204(i.e., a non-execution event with no associated analyses) the notebook (e.g., notebook102) may be executed as normal. 3. Technical Description In this section a technical description of the notebook analyzer framework is provided. 3.1 Notebook Program Model 3.1.1 Notebook. A notebook N consists of a set of cells ci∈N. A cell ci comprises a sequence of code statements stji(l, l′) from a location l to a location l′ in a control flow graph (CFG). As an abuse of notation, ci is allowed to be used as a label. 3.1.2 Cell Execution. An execution of a cell ci over a state space Σ=V→D, where V is the set of notebook variables and D is the concrete domain of execution, is denoted by σi+1=ci+1(σi). Here, σi+1∈Σ is the output state, and σi∈Σ is the input state previously computed by a cell cj earlier in the execution sequence. 3.1.3 Notebook Execution. A notebook execution is a potentially infinite execution sequence σ0→ci σ1→cj . . .
where ∀k≥0, ck∈N, σk∈Σ and i=j∨i≠j. The choice of the next cell in an execution sequence may be determined by the user from the space of all cells in a notebook. 3.2 Analysis Framework 3.2.1 Intra-Cell Analysis Events and Analyses (e.g., events322and analyses324). The inter-cell analysis may be triggered by an event e∈Event. An event may be attached to a set of analyses A′⊂A by a mapping M: Event→℘(A). An analysis a is a tuple of an abstraction label abs and condition cond. The condition cond is an assertion on an abstract state of the analysis of type abs. Abstract state computation. From the sequence of statements in a cell, intra-cell analysis engine208is configured to construct a control flow graph (CFG), which is a directed graph that encodes the control flow of the statements in a cell. A CFG is defined as <L, E> where an edge (l, st, l′)∈E reflects the semantics of the cell statement st associated with the CFG edge from locations l to l′ in the cell. A sound over-approximation σ#of a state σ may be computed, by intra-cell analysis engine208, by iteratively solving the semantic fixed-point equation σ#=σ0#⊔〚st〛#(σ#), using the abstract semantics 〚st〛# (e.g., abstract semantics316) for statements st in the cell and the initial abstract state σ0#. At the cell level, this computation is defined as Fci, which may be referred to as an abstract cell transformer. Fcimay take an abstract state and compute a fix-point solution in the abstract domain. Since a what-if analysis may not be triggered on every event322, and yet a cell 320 is executed by the user, it is of small cost to maintain the abstract state312along with the concrete state (not shown), as the analyses are designed to be faster than performing a concrete execution. Therefore, intra-cell analysis engine maintains an abstract state σ#which may be updated, each time a cell is executed, in parallel with the concrete executions of a notebook cell. At each execution, a cell transformer Fcifor a cell ciis applied with the current global state312, returning an updated global state, for example, Fci(σ#)=σ#′. This process is depicted inFIG.4. For instance,FIG.4is a diagram showing execution of abstract semantics of a cell and updating a current global abstract state, which may be utilized in executing abstract semantics of another cell, according to an example embodiment. Intra-cell analysis engine208performs this maintenance for at least two reasons. Firstly, a static analysis may be performed just before cell execution, blocking execution if an error is found. Secondly, the global abstract state may be utilized to initiate a what-if analysis, once the what-if analysis is triggered by a user. To analyze a cell, the static analysis problem may be reduced to the computation of the least solution of a fix-point equation σ#=Fci(σ#), σ#∈Σ#where Σ#is a domain of abstract properties, and Fciis the abstract transformer for the cell (i.e., a composition of abstract statement transformers in the cell fix-point computation to solve the static analysis problem). Within the abstract interpretation framework, several analyses can co-exist by constructing an independent product of abstract domains. Executing several transformers in parallel for cell cimay be denoted as FciAwhere A is a set of analyses (e.g., analyses324).
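By way of a non-limiting illustration, the abstract cell transformer Fci may be realized as a worklist fixed-point computation over the cell's CFG. The following Python sketch is an assumption-laden outline only: the domain operations bottom and join and the per-statement transfer function stand in for abstract semantics316and are not defined by the embodiments.

def cell_transformer(cfg, entry, transfer, join, bottom, initial_state):
    # cfg maps a location l to a list of (statement, l') edges of the cell.
    # transfer is the abstract semantics of one statement; join and bottom are
    # the lattice operations of the chosen abstract domain.
    locations = set(cfg)
    for edges in cfg.values():
        for _, succ in edges:
            locations.add(succ)
    state_at = {loc: bottom() for loc in locations}
    state_at[entry] = initial_state
    worklist = [entry]
    while worklist:
        loc = worklist.pop()
        for stmt, succ in cfg.get(loc, ()):
            out = transfer(stmt, state_at[loc])
            joined = join(state_at[succ], out)
            if joined != state_at[succ]:    # abstract state grew, keep iterating
                state_at[succ] = joined
                worklist.append(succ)
    return state_at

# Toy usage with a powerset domain over variable names (join is set union):
# each statement is (target, dependencies) and marks the target when any
# dependency is already in the incoming set.
cfg = {"l0": [(("x", ["d"]), "l1")], "l1": [(("y", ["x"]), "l2")]}
def transfer(stmt, state):
    target, deps = stmt
    return state | {target} if any(v in state for v in deps) else state
final = cell_transformer(cfg, "l0", transfer, set.union, set, {"d"})
# final["l2"] == {"d", "x", "y"}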
Cell summary computation. Apart from computing the abstract state, cell pre-summaries314may be computed. Pre-summaries314comprise intra-cell computed pre-conditions on a cell that are used to determine if an abstract state should be propagated to that cell. Pre-summaries314may be computed for each cell at a notebook initialization time and/or during cell code changes. In order to compute a pre-summary precifor cell ci, a use-def (U-D) structure may be constructed using standard data-flow techniques. U-Ds provide mappings between variable usages and their definitions. A variable is a defined variable if it appears on the left-hand side of an assignment statement or if it is defined by a function statement st. A variable is used if it appears in the right-hand side of an assignment statement or in a function statement st. Thus, given a cell c the following sets of variables can be defined, where the sets of variables define definitions and usages. def(c)={v|∃st∈c s.t. v is defined in st} and use(c)={v|∃st∈c s.t. v is used in st}. The U-D structure may be computed using a reaching definition data-flow analysis and provides a mapping use-def for all symbols v∈V in the cell. If a v∈use(c) has no definition, it is mapped to ⊥. Using the U-D structure, the set of all unbounded variables in a cell may be computed: unbound(c)={v|v∈use(c)∧use-def(v)=⊥}. Thus, the most generic pre-summary precis defined as: prec=unbound(c). Depending on the analysis, the definition of precimay be expanded. For example, for access violation, variables in cells may be ignored where no access patterns occur, and a variable may not be used to change and propagate information (e.g., simply printing data). 3.2.2 Inter-Cell Analysis State propagation. Inter-cell analysis engine210may be configured to compute a set of abstract states312for the entire notebook up to a depth K or as a fixpoint solution. The abstract state from a source cell is propagated to other cells if and only if there exists an edge that satisfies a cell propagation dependency. In some embodiments, when the propagation occurs, an intra-cell analysis computation is performed that treats the incoming cell abstract state as the initial state. FIG.5is a diagram showing inter-cell analysis propagation, according to an example embodiment. Inter-cell analysis comprises recursive execution of abstract semantics316of a plurality of cells320using pre-propagation summaries (i.e., pre-summaries314) of successor cells and abstract states312generated by predecessor cells for determining propagation dependencies and pruning independent cells from abstract state propagation paths. Referring toFIG.5, a what-if analysis may be triggered by an event e for a source cell ci. A pre-defined value of K∈{1, . . . , ∞} is defined where K=∞ means computation continues until a fix-point is reached, which may determine the depth of the analysis. The dependency is defined by determining if the abstract state σ′ciof the cell cican be combined with the pre-summary precjof another cell cj(which may be cell ciitself). If there is a dependency, the unbounded vars in cjconsume their values from σ′ci. This propagation may be continued until a limit K where K=1 would mean only one successor cell is analyzed, and K=∞ means until a fix-point is reached. The choice of K may be user configured and/or analysis dependent.
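As a purely illustrative sketch (and not a definition from this disclosure), the per-cell sets def(c), use(c) and unbound(c), and hence the generic pre-summary prec, can be approximated for Python cell code with the standard ast module. A full reaching-definitions analysis over the CFG would refine the result; the flow-insensitive approximation below is merely indicative.

import ast

def cell_summary(code):
    # Approximate def(c) and use(c) by walking the cell's AST: names stored to
    # (or defined as functions/classes) are definitions, names loaded are uses.
    tree = ast.parse(code)
    defined, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                defined.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                used.add(node.id)
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            defined.add(node.name)
    unbound = {v for v in used if v not in defined}   # generic pre-summary pre_c
    return defined, used, unbound

# The cell "x = fit_transform(d)" defines x and leaves d (and the helper name
# fit_transform) unbound, so its pre-summary asks a predecessor cell for d.
defs, uses, pre = cell_summary("x = fit_transform(d)")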
Cell dependencies may be formalized in the form of a graph definition. Note, in some embodiments, the graph may be constructed lazily during abstract state propagation. Definition 3.1 (Cell Propagation Dependency Graph). Assume the sequence of cells forms a directed dependency graph G=<N, R> where N is a finite set of cells and R is a finite set of arcs; an arc (ci, cj)∈R from cell ci∈N to cell cj∈N exists iff ϕ(σci#, precj). How ϕ(σci#, precj) is defined may be analysis specific. In Section 4 examples are provided for how analyses can be defined to fit into the notebook analyzer framework. 3.2.3 Implementation Examples
Method 1 - Event_Handler(code, ci, e, K)
1: global variables
2: σ# (global abstract state)
3: pre (pre-summary mapping)
4: end global variables
5: A′ = M(e)
6: if A′ = ∅ ∧ e = execute then
7: σ# := Maintain(code, ci, σ#, A)
8: else
9: report := InterCell(σ#, ci, K, [ ], A′)
10: return report
An example technique is described in Method 1, where event handler206may be configured to receive an event and determine if the method should proceed in maintenance mode (intra-cell analysis) or what-if analysis mode (inter-cell analysis). Given that an event e occurs, event handler206may be configured to obtain the following information: the source cell code code, the cell identifier ci, the event e, and the global abstract state σ#. At line 5, event handler206determines if there exist any analyses A′⊆A that are attached to the event e. If not, maintenance is performed in line 7, whereby intra-cell analysis engine208is configured to call Maintain(code, ci, σ#, A) (e.g., intra-cell analysis) and update the global abstract state σ#(e.g., global abstract state312) as shown inFIG.4. Otherwise, inter-cell analysis engine210may be configured to proceed with a what-if analysis by calling InterCell(σ#, ci, K, [ ], A′) in line 9 and returning the results of the analysis, for example, to notebook analyzer system204.
Method 2 - Maintain(code, ci, σ#, A)
1: if code not changed then
2: σ#′ := FciA(cfg[ci], σ#)
3: return σ#′
4: else
5: ast := parse(code)
6: cfg[ci] := getCfg(ast)
7: ud := getUD(cfg)
8: pre[ci] := getPre(ud)
9: σ#′ := FciA(cfg, σ#)
10: return σ#′
In Method 2, intra-cell analysis, namely cell maintenance, is described. In the function Maintain, intra-cell analysis engine208may be configured to first check to see if a code change occurred. If so, intra-cell analysis engine208may be configured to re-build the pre-summary preciand perform an intra-cell static analysis Fci(cfg, σ#) to produce a new abstract state σ#′. If the code has not changed, since the abstract state may have changed in the meantime, intra-cell analysis engine208may be configured to perform an intra-cell analysis, for example, FciA(cfg, σ#) for all analyses in A. Note that CFGs, U-Ds, pre-summaries314, and abstract states312may be cached so that they may be computed only when needed, for example, for the code changes.
Method 3 - InterCell(σ#, ci, K, report, A′)
1: if K = 0 then return report
2: σ#′ := FciA′(σ#)
3: report′ := Check(σ#′, A′, report)
4: if σ#′ = σ# then return report′
5: for all cj ∈ N do
6: if ϕ(σ#′, pre[cj]) then
7: report′ := report′ + InterCell(σ#′, cj, K − 1, report′, A′)
8: return report′
For the inter-cell method described in Method 3, inter-cell analysis engine210may be configured to perform a what-if analysis. Here, inter-cell analysis engine210may be configured to execute analyses in A′ on cells, starting with the source cell ciin lines 2 and 3 of Method 3, and propagating the abstract state to cells that have a dependency, i.e., that satisfy ϕ(σ#′, pre[cj]), as shown in lines 6 and 7 of Method 3. If K=0 (line 1), meaning the required depth has been reached, or a fixpoint is detected (line 4) (e.g., terminating criteria326), the method terminates.
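A minimal Python rendering of the dispatch in Method 1 and the bounded recursion in Method 3 is sketched below for illustration only; the callables transformer, check, phi and analyses_for are hypothetical stand-ins for FciA′, Check, ϕ and the mapping M, respectively, and are not definitions from the embodiments.

def event_handler(event, cell, state, pre, cells, K,
                  transformer, check, phi, analyses_for):
    # Method 1: no attached analyses means maintenance mode, otherwise what-if.
    if not analyses_for(event):
        return transformer(cell, state), []          # updated global state, no report
    return state, inter_cell(state, cell, K, [], pre, cells,
                             transformer, check, phi)

def inter_cell(state, cell, K, report, pre, cells, transformer, check, phi):
    # Method 3: analyze the source cell, then propagate to dependent cells.
    if K == 0:
        return report
    new_state = transformer(cell, state)              # line 2
    report = report + check(cell, new_state)          # line 3
    if new_state == state:                            # fixpoint reached (line 4)
        return report
    for succ in cells:                                # lines 5-7
        if phi(new_state, pre[succ]):
            report = inter_cell(new_state, succ, K - 1, report,
                                pre, cells, transformer, check, phi)
    return report

# Toy usage with a "changed variables" powerset domain: each cell adds its
# output variable when any of its inputs has already changed.
cells = {"c1": ("d", []), "c2": ("x", ["d"]), "c4": ("y", ["x"])}
pre = {c: set(ins) for c, (_, ins) in cells.items()}
def transformer(c, s):
    out, ins = cells[c]
    return s | {out} if (not ins) or any(v in s for v in ins) else s
report = inter_cell(set(), "c1", 3, [], pre, cells, transformer,
                    lambda c, s: [c + ": changed=" + ",".join(sorted(s))],
                    lambda s, p: bool(p & s))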
The method (or algorithm) complexity is O(n^K) in the number of cells n for a given K. In some embodiments, an operation for some analyses may be to perform inter-cell widening. This operation will result in an extra condition in the code that checks if the abstract state increases on a given variable. If so, the value for that variable may be added as the top element. A narrowing pass can also be performed to improve precision. Thus far, only numerical analyses utilize this addition. Notebook analyzer system204may be configured in various ways, and may operate in various ways, to perform these and further functions. For instance,FIG.6is a flowchart600of a method for performing intra-cell (e.g., maintenance) and inter-cell (e.g., what-if) analyses in an interactive programming environment with an out-of-order execution model (e.g., a notebook program), according to an example embodiment. Flowchart600may be implemented in systems200and300. For purposes of illustration, flowchart600is described with reference toFIG.1A,FIG.1B,FIG.1C,FIG.2, andFIG.3. Other structural and operational embodiments will be apparent to persons skilled in the relevant art(s) based on the following discussion regarding flowchart600. Flowchart600ofFIG.6begins with step602. In step602, an event related to a first cell is received. For example, event handler206may be configured to receive an event322. The event may be received for various reasons, for example, when a notebook is opened (e.g., opening notebook102), cells are changed (e.g., a user may make changes to code in one or more of the cells of notebook102), cell(s) are executed (e.g., one or more of cells 1-5 may be executed), cells are created (e.g., a user may create a new cell in notebook102or in another notebook), cells are deleted (e.g., a user may delete one or more cells in notebook102), etc. The cells and/or information related to the cells may be stored as cells320in memory306. In step604, in response to determining that no specified analysis is associated with the event, the following steps may be performed. For example, event handler206may be configured to determine whether any specific analyses are associated with the received event. In instances where there are no specified analyses associated with the event, the method may proceed as follows. In step606, intra-cell analysis may be executed for the first cell based on a current global abstract state and abstract semantics of the first cell. For example, intra-cell analysis engine208may be configured to determine whether code in the first cell has changed. In instances where the code has not changed, intra-cell analysis engine208may be configured to retrieve abstract semantics316(e.g., the CFG for the first cell) from memory306(e.g., from a cache) to perform intra-cell analysis based on the current global abstract state312(e.g., σ#) and abstract semantics316of the first cell, e.g., F1A(cfg, σ#), for all analyses A. In instances where the code of the first cell has changed, intra-cell analysis engine208may be configured to convert the code of the first cell to intermediate program representations resulting in a format suitable for performing intra-cell analysis. For example, intra-cell analysis engine208may be configured to parse the cell code of the first cell into an abstract syntax tree (AST), convert the AST to a control flow graph (CFG), and generate use definition chains (U-D) based on the code of the first cell.
Using this CFG (i.e., abstract semantics) for the first cell (e.g., cfgc1), intra-cell analysis engine208may be configured to perform intra-cell analysis based on the current global abstract state312(e.g., σ#) comprising σ#′=F1A(cfg, σ#). Moreover, intra-cell analysis engine208may be configured to determine a pre-summary314(e.g., prec1) for the first cell based on the U-D and AST, which may be utilized in performing inter-cell analysis in notebook102. In step608, an updated global abstract state generated based on the intra-cell analysis of the first cell may be stored in memory. For example, in some embodiments, intra-cell analysis engine208may be configured to store one or more of the resultant abstract state σ#′, the determined CFG, the determined U-D, and/or the pre-summary for the first cell in memory306for use in later intra-cell and/or inter-cell analyses. In step610, in response to determining that a specified analysis is associated with the event the following steps may be performed. For example, event handler206may be configured to determine that one or more specified analyses are associated with the received event. For example, the association may be configured as a default setting or configured by a user in notebook102. In instances where there are one or more specified analyses associated with the event, the method may proceed as follows. In step612, starting with the stored global abstract state, inter-cell analysis may be recursively executed, until a terminating criteria is reached, on each successor cell of a plurality of cells including the first cell, for which the successor cell has a propagation dependency relative to a global abstract state generated by a respective predecessor cell of the successor cell. For example, inter-cell analysis engine310may be configured to perform a what-if analysis where one or more specified analyses (e.g., A′) are executed on cells in a notebook, starting with a source cell, where an output abstract state σ#′ generated based on the source cell, may be propagated to one or more successor cells that have a propagation dependency relative to the output abstract state from the source cell. In this regard, the source cell may be referred to as a predecessor cell of the successor cells. This process may be repeated where each successor cell may become a predecessor cell to other cells in the notebook, and an output abstract state that is generated based on a predecessor cell, is propagated to dependent successor cells in the notebook. A successor cell may have a propagation dependency relative to a predecessor cell if the successor cell comprises unbounded variables such that the output abstract state σ#′ of the predecessor cell can be applied to abstract semantics of the successor cell. Inter-cell analysis engine310may be configured to determine dependency of a successor cell based on a pre-summary of the successor cell (i.e., cells that satisfy ϕ(σ#, precj) as described with respect to Method 3 above. A global abstract state σ#′ may be generated by execution of abstract semantics of a predecessor cell, propagated to a respective dependent successor cell, and applied to execution of abstract semantics of the dependent successor cell in the inter-cell analysis. Inter-cell analysis may be terminated when a prescribed depth has been reached (e.g., K=0), or if a fixpoint solution is detected (e.g., σ#′=σ#). In step614, information related to outcomes of one or both of the intra-cell analysis and the inter-cell analysis may be communicated. 
For example, for each cell included in the inter-cell analysis, the output abstract state may be checked for errors based on a correctness criteria328. If one or more errors are found, a report may be generated. In some embodiments, the report may be updated with each iteration of the inter-cell analysis. The report may comprise information such as identifying an affected cell, an affected line number, a bug type, metrics (e.g., runtime, memory usage, etc.), etc. Moreover, the outcomes information may be communicated for use in notebook client202, for displaying feedback and/or instructions for users via user interface330. The information related to outcomes of inter-cell analysis may indicate potential outcomes that will occur in execution of concrete semantics of the same cells for the specified analyses (e.g., a data leak analysis, stale state analysis, etc.). The information in the report may be utilized to generate feedback and/or instructions via user interface330, which may be displayed by computing device302and/or notebook client202.FIGS.1B and1Cshow example feedback displayed in a user interface, which is based on the inter-cell analysis results and report. Also, information based on intra-cell analysis may be indicated in the user interface. For example, the user interface may indicate unbounded variables in cells. The user interface may indicate the consequences of the event that triggered the intra-cell and/or inter-cell analyses. 3.2.4 Analysis Criteria and Contracts The Check function of the inter-cell analysis of Method 3 checks the abstract state after a cell execution, and depending on correctness criteria328, determines if a violation has occurred. For standard built-in analyses (see Section 4) this correctness criteria may be hard coded into notebook analyzer204. However, for the available abstract domains, a user can define contracts on lines of code, pre- or post-conditions on cells, or on the global notebook. Notebook analyzer204may expose the set of available abstractions, which can be seen as a schema for which users can define queries in a logic-based domain specific language (DSL) that can assert expected behavior. The analysis may provide a set of finite sets of objects from the AST and analysis results that the user can formulate as an error condition, attached to a notebook, cell, or code line. Languages that map to first order logic (e.g., with finite domains) can be used. For example, Datalog or structured query language (SQL) are both candidates. 4 Instantiated Analyses In this section a brief outline of several instantiations of the analysis framework is provided. 4.1 Use Case I: Machine Learning (ML) Data Leakage Data leakage is a bug specific to data science. In machine learning applications, models typically require normalization of the input data, especially neural networks. Commonly, data is normalized by performing a division of the existing data by its average or maximum. Likewise, data is typically split into training and test subsets. If the normalization is performed using the overall data set, then information from the test set will now be influencing the training subset. For this reason, any normalization should be applied individually on the test and training subsets. Data leakage is a common problem in data science scripts and the chance of it occurring is increased under the execution semantics of notebooks. To this end, light-weight analysis may be implemented to detect potential data leakages in notebooks.
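For illustration only (this fragment is not part of the disclosed embodiments), the leakage pattern described above can be written with scikit-learn-style calls; the leaky variant fits the scaler on the full data set before splitting, while the safe variant fits it on the training subset only.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

d = np.random.rand(100, 4)

# Leaky: the scaler is fit on the whole of d, so statistics of the eventual
# test rows influence the transformation applied to the training rows.
x = StandardScaler().fit_transform(d)
x_train, x_test = train_test_split(x, test_size=0.25)

# Safe: split first, fit the scaler on the training subset only, and reuse the
# fitted scaler to transform the test subset.
d_train, d_test = train_test_split(d, test_size=0.25)
scaler = StandardScaler().fit(d_train)
x_train, x_test = scaler.transform(d_train), scaler.transform(d_test)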
Our abstraction tracks which variable points to which data source. When an operation is performed on data that can introduce a leak, e.g., normalization, extrapolation, etc., the data source propagation is reset. When variables are input into test and train functions, the system asserts that they do not point to the same data source. 4.1.1 Abstract Semantics An abstract domain is defined which maps a variable v to a set of variables or data source locations (e.g., filenames), i.e., ∀v∈V·σ#(v)⊆V∪Loc, where Loc denotes the set of data source locations. For each variable, a partial order is defined by the subset relation such that for a given variable v: σ#(v)⊑σ#′(v) iff σ#(v)⊆σ#′(v). Meet and join are similarly defined using set intersection and union, respectively. Abstract semantics are defined for two categories of operations, namely:
reset: λσ#·〚y¯=f(x¯)〛={∀y∈y¯·σ#[y↦x¯] iff f∈KBreset} (1)
propagate: λσ#·〚y¯=f(x¯)〛={∀y∈y¯·σ#[y↦σ#(y)⊔⨆x∈x¯σ#(x)] iff f∉KBreset} (2)
The reset operations forget any previous mappings and assign the left-hand-side variable(s) to the variable (or filename) that is being read or transformed. The operations that are classed as resets are obtained by a knowledge base KB that comprises context on used libraries etc. f can be any operation, including the identity operation, i.e., simple assignment. Lastly, to enable inter-cell propagation the following rule is defined: ϕci(σci#,precj)=precj⊆dom(σci#)∧∀v∈precj:σci#[v]≠∅. Joins and meets that arise from control flow are handled by the join operations of the abstract domain, i.e., pointwise set union and disjunction. This analysis can be performed on a variety of sizes of K and may be user dependent. In some embodiments, users may achieve good results with K≈3. 4.1.2 Analysis Example Considering the example inFIGS.1A,1B, and1C, as noted above, a potential data leakage is shown inFIG.1B, which depends on the execution order of the cells, for example, if cell 2 is executed before cells 4 and 5. The following describes how a what-if analysis can detect this violation. Assume a what-if analysis is triggered for the event of executing cell 1. In other words, a question may be asked as to what can happen in future executions if cell 1 is executed. First an abstract state is computed for cell 1, which is: σc1#=d↦{data.csv}. Using the abstract state and preconditions of other cells, a value is assessed for: ϕc1(σc1#, precj) for all cells cjin the notebook. It may be found that ϕc1holds for cell 2. Next, the abstract state for cell 2 is computed with the abstract state of cell 1 as the initial state, obtaining: σc2#=d↦{data.csv}, x↦{d} as fit_transform∈KBreset. The following is evaluated: ϕc2(σc2#,precj) for all cells cjin the notebook, and it may be found that ϕc2holds for cell 4. Here, all split variables map to d. Again, it may be found that propagation can proceed to cell 5 and the data leakage condition may be applied: if any arguments of train and test functions point to the same data, a potential data leak may occur. More formally this can be defined as a contract as follows: Error in notebook={∃f∈TrainCall, ∃g∈TestCall, a∈Args(f), b∈Args(g), Points(a)∩Points(b)≠∅}. Here, TrainCall, TestCall, Args are relations obtained from the AST and Points is obtained from the abstract domain. With this analysis condition, NBLYZER (a notebook analyzer) may warn the user that the execution sequence of cells <1, 2, 4, 5> may result in a data leakage in cell 5 and no alternative safe execution path may exist that is predicated on the event of cell 1 being executed.
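The data-source tracking domain and the leakage contract can be sketched in Python as follows; this is an assumed illustration in which KB_RESET, transfer and leak_check are hypothetical names standing in for the knowledge base KBreset, the reset/propagate semantics and the contract check.

KB_RESET = {"read_csv", "fit_transform"}        # assumed knowledge-base entries

def transfer(stmt, state):
    # stmt = (targets, func, args); state maps a variable to its data sources.
    targets, func, args = stmt
    new = dict(state)
    if func in KB_RESET:                          # reset: forget prior mappings
        for y in targets:
            new[y] = set(args)
    else:                                         # propagate: union of argument sources
        sources = set().union(*(state.get(x, {x}) for x in args)) if args else set()
        for y in targets:
            new[y] = new.get(y, set()) | sources
    return new

def leak_check(state, train_args, test_args):
    # Mirrors the contract: train and test arguments must not share a source.
    train_src = set().union(*(state.get(a, set()) for a in train_args))
    test_src = set().union(*(state.get(a, set()) for a in test_args))
    return train_src & test_src

# Cells 1, 2 and 4 of the running example, then the check applied at cell 5.
state = transfer((["d"], "read_csv", ["data.csv"]), {})
state = transfer((["x"], "fit_transform", ["d"]), state)
state = transfer((["x_train", "x_test"], "train_test_split", ["x"]), state)
assert leak_check(state, ["x_train"], ["x_test"])     # both point to d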
4.2 Use Case II: Code Impact Analysis When a change occurs, users may want to know what other code is affected or unaffected by that change. This has a number of usages including assisting in notebook reproducibility, stale state detection, code cleanup and code simplification. For scripts, many of these analyses are a straightforward information flow analysis; however, due to the semantics of notebooks, where any cell can be executed in any order, determining the impact of a change may become more challenging. 4.2.1 Abstract Semantics An abstract domain may be defined which maps a variable or symbol (function names, etc.) v to a Boolean t or f indicating whether the variable has changed or not. Practically, the abstract domain may be implemented as a set of variables. If a variable is in the set it has changed, otherwise it hasn't. Thus, the lattice may be a standard powerset lattice common in data flow analyses. When a variable on the right-hand-side of a statement has changed, the left-hand-side variable may be inserted in the set. The propagation semantics for selected statements are stated below.
Assignment: λσ#·〚y=f(x¯)〛={σ#∪{y} iff f∈σ#∨∃x∈x¯ s.t. x∈σ#} (1)
Functions: λσ#·〚f(x¯){y¯}〛={σ#∪{f} iff ∃x∈x¯ s.t. x∈σ#∨∃y∈y¯ s.t. y∈σ#} (2)
Similarly, joins and meets that arise from control flow may be handled by the join operations of the abstract domain, i.e., set union and disjunction. 4.2.2 Analysis Variations Safe cell analysis. Safe cell analysis identifies cells which have a direct dependency, that is, all variables will have the most updated values in the incoming state. For this analysis, the same domain of the change impact analysis is used, but ϕ is redefined to correctly propagate the abstract state as follows: ϕci(σci#,precj)=precj⊆{v:v∈σci#}. This analysis may be useful for guiding users to which cells can be executed to avoid staleness; thus it may be used to improve notebook reproducibility. Stale cell analysis. Stale cell analysis may be the opposite of the above safe analysis. It highlights cells that have intermediate safe cells between them and the source cell. For this analysis ϕ is defined as the following: ϕci(σci#,precj)={v:v∈σci#}∩precj≠Ø. Isolated cell analysis. An isolated cell is a cell that does not have any dependency with other cells. Such cells are typically found during experimentation phases of development and may be identified as candidates for cleanup. This analysis is performed on K=1. It has the negated ϕ condition of freshness and staleness: ϕci(σci#,precj)={v:v∈σci#}∩precj=Ø as well as the symmetric condition ϕcj(σcj#, preci). Idle cell analysis. Idle cells are cells that do not contribute to the computation and, if pruned, will not affect the end result of the notebook computation. They are cells that may have previously been used for debugging or experimentation and are candidates for notebook cleanup. This analysis is performed on K=1 for all cells in a notebook. 4.2.3 Analysis Example Considering the example inFIG.1C, it can be seen that the execution of cell 1 followed by cell 4 will create staleness. This is because cell 2 is fresh and is the intermediate cell between cell 1 and cell 4 dependencies. If the file in cell 1 is changed, then the variable d is in the abstract domain. As before, this is propagated to cell 2 and hence, x is also in the abstract domain. When further propagating to cell 4 (i.e., K=2), reporting may indicate that all the rhs variables are stale if the cell execution sequence 1, 4 is performed.
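For illustration only, the change-impact domain and the ϕ variants above can be sketched in Python; transfer, phi_safe, phi_stale and phi_isolated are assumed names and the fragment is not a definition of the embodiments.

def transfer(stmt, changed):
    # stmt = (target, rhs_vars): the target becomes "changed" when any
    # right-hand-side variable has already changed.
    target, rhs_vars = stmt
    return changed | {target} if any(v in changed for v in rhs_vars) else changed

def phi_safe(changed, pre):        # every variable the cell needs is freshly updated
    return pre <= changed

def phi_stale(changed, pre):       # at least one needed variable is affected
    return bool(changed & pre)

def phi_isolated(changed_i, pre_j, changed_j, pre_i):   # no dependency either way
    return not (changed_i & pre_j) and not (changed_j & pre_i)

# Running example: the file read in cell 1 changes, so d is marked changed;
# propagating through cell 2 marks x; cell 4, which reads x, is then reachable.
changed = transfer(("d", ["data.csv"]), {"data.csv"})
changed = transfer(("x", ["d"]), changed)
assert phi_safe(changed, {"x"}) and phi_stale(changed, {"x", "y"})

5.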
Integration into Notebook As described above, in one example embodiment, what-if analysis techniques may be implemented in a notebook server (e.g., in in computing device302) where a notebook client (e.g., notebook client202) and the server may communicate through communication channels. The notebook client may advise the server of events and send code to be executed. The server, in-turn, may perform the static analysis and execute the code using a run-time system. When the analysis is complete, information may be sent back to the notebook client. The information may include, for example, cells, cell sequences, and lines of code to highlight and warn the user. This implementation may target the python language. For example, notebook analyzer204may parse the code into an AST from which it constructs a control flow graph and usage-definition chains. These low-level code representations may be used to perform the static analyses implemented in the notebook analyzer204framework. In some embodiments, a user can manually trigger the what-if analysis and pre-select which built-in analyses are turned on. The user may be warned, in notebook client202, of potential code violations through use of graphical code, cell highlighting, and messages. The notebook client user interface vary depending on the client used. III. Example Computer System Implementation Embodiments described herein may be implemented in hardware, or hardware combined with software and/or firmware. For example, embodiments described herein may be implemented as computer program code/instructions configured to be executed in one or more processors and stored in a computer readable storage medium. Alternatively, embodiments described herein may be implemented as hardware logic/electrical circuitry. As noted herein, the embodiments described, including but not limited to, systems200and300along with any components and/or subcomponents thereof, as well any operations and portions of flowcharts/flow diagrams described herein and/or further examples described herein, may be implemented in hardware, or hardware with any combination of software and/or firmware, including being implemented as computer program code configured to be executed in one or more processors and stored in a computer readable storage medium, or being implemented as hardware logic/electrical circuitry, such as being implemented together in a system-on-chip (SOC), a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), a trusted platform module (TPM), and/or the like. A SOC may include an integrated circuit chip that includes one or more of a processor (e.g., a microcontroller, microprocessor, digital signal processor (DSP), etc.), memory, one or more communication interfaces, and/or further circuits and/or embedded firmware to perform its functions. Embodiments described herein may be implemented in one or more computing devices similar to a mobile system and/or a computing device in stationary or mobile computer embodiments, including one or more features of mobile systems and/or computing devices described herein, as well as alternative features. The descriptions of computing devices provided herein are provided for purposes of illustration, and are not intended to be limiting. Embodiments may be implemented in further types of computer systems, as would be known to persons skilled in the relevant art(s). FIG.7is a block diagram of an example processor-based computer system700that may be used to implement various embodiments. 
System700may include any type of computing device, mobile or stationary, such as a desktop computer, a server, a video game console, etc. For example, system700may comprise any type of mobile computing device (e.g., a Microsoft® Surface® device, a personal digital assistant (PDA), a laptop computer, a notebook computer, a tablet computer such as an Apple iPad™, a netbook, etc.), a mobile phone (e.g., a cell phone, a smart phone such as a Microsoft Windows® phone, an Apple iPhone, a phone implementing the Google® Android™ operating system, etc.), a wearable computing device (e.g., a head-mounted device including smart glasses such as Google® Glass™, Oculus Rift® by Oculus VR, LLC, etc.), a stationary computing device such as a desktop computer or PC (personal computer), a gaming console/system (e.g., Microsoft Xbox®, Sony PlayStation®, Nintendo Wii® or Switch®, etc.), etc. System700may be implemented in one or more computing devices containing features similar to those of computing device700in stationary or mobile computer embodiments and/or alternative features. The description of computing device700provided herein is provided for purposes of illustration, and is not intended to be limiting. Embodiments may be implemented in further types of computer systems, as would be known to persons skilled in the relevant art(s). As shown inFIG.7, computing device700includes one or more processors, referred to as processor circuit702, a system memory704, and a bus706that couples various system components including system memory704to processor circuit702. Processor circuit702is an electrical and/or optical circuit implemented in one or more physical hardware electrical circuit device elements and/or integrated circuit devices (semiconductor material chips or dies) as a central processing unit (CPU), a microcontroller, a microprocessor, and/or other physical hardware processor circuit. Processor circuit702may execute program code stored in a computer readable medium, such as program code of operating system730, application programs732, other programs734, etc. Bus706represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. System memory704includes read only memory (ROM)708and random-access memory (RAM)710. A basic input/output system712(BIOS) is stored in ROM708. Computing device700also has one or more of the following drives: a hard disk drive714for reading from and writing to a hard disk, a magnetic disk drive716for reading from or writing to a removable magnetic disk718, and an optical disk drive720for reading from or writing to a removable optical disk722such as a CD ROM, DVD ROM, or other optical media. Hard disk drive714, magnetic disk drive716, and optical disk drive720are connected to bus706by a hard disk drive interface724, a magnetic disk drive interface726, and an optical drive interface728, respectively. The drives and their associated computer-readable media provide nonvolatile storage of computer-readable instructions, data structures, program modules and other data for the computer. Although a hard disk, a removable magnetic disk and a removable optical disk are described, other types of hardware-based computer-readable storage media can be used to store data, such as flash memory cards, digital video disks, RAMs, ROMs, and other hardware storage media. 
A number of program modules may be stored on the hard disk, magnetic disk, optical disk, ROM, or RAM. These programs include operating system730, one or more application programs732, other programs734, and program data736. Application programs732or other programs734may include, for example, computer program logic (e.g., computer program code or instructions) for implementing processor(s)304, memory106, notebook analyzer204, static analysis engine206, intra-cell analysis engine208, intracell analysis engine210, notebook client202, event handler206, and flowchart600(including any step thereof), and/or further embodiments described herein. Program data736may include cell 1, cell 2, cell 3, cell 4, cell 5, global abstract states312, pre-summaries314, abstract semantics316, cells320, events322, analyses324, terminating criteria326, correctness criteria328, and/or further embodiments described herein. A user may enter commands and information into computing device700through input devices such as keyboard738and pointing device740. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, a touch screen and/or touch pad, a voice recognition system to receive voice input, a gesture recognition system to receive gesture input, or the like. These and other input devices are often connected to processor circuit702through a serial port interface742that is coupled to bus706, but may be connected by other interfaces, such as a parallel port, game port, or a universal serial bus (USB). A display screen744is also connected to bus706via an interface, such as a video adapter746. Display screen744may be external to, or incorporated in computing device700. Display screen744may display information, as well as being a user interface for receiving user commands and/or other information (e.g., by touch, finger gestures, virtual keyboard, etc.). In addition to display screen744, computing device700may include other peripheral output devices (not shown) such as speakers and printers. Computing device700is connected to a network748(e.g., the Internet) through an adaptor or network interface750, a modem752, or other means for establishing communications over the network. Modem752, which may be internal or external, may be connected to bus706via serial port interface742, as shown inFIG.7, or may be connected to bus706using another interface type, including a parallel interface. As used herein, the terms “computer program medium,” “computer-readable medium,” and “computer-readable storage medium” are used to refer to physical hardware media such as the hard disk associated with hard disk drive714, removable magnetic disk718, removable optical disk722, other physical hardware media such as RAMs, ROMs, flash memory cards, digital video disks, zip disks, MEMs, nanotechnology-based storage devices, and further types of physical/tangible hardware storage media. Such computer-readable storage media are distinguished from and non-overlapping with communication media (do not include communication media). Communication media embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wireless media such as acoustic, RF, infrared and other wireless media, as well as wired media. 
Embodiments are also directed to such communication media that are separate and non-overlapping with embodiments directed to computer-readable storage media. As noted above, computer programs and modules (including application programs732and other programs734) may be stored on the hard disk, magnetic disk, optical disk, ROM, RAM, or other hardware storage medium. Such computer programs may also be received via network interface750, serial port interface742, or any other interface type. Such computer programs, when executed or loaded by an application, enable computing device700to implement features of embodiments discussed herein. Accordingly, such computer programs represent controllers of computing device700. Embodiments are also directed to computer program products comprising computer code or instructions stored on any computer-readable medium. Such computer program products include hard disk drives, optical disk drives, memory device packages, portable memory sticks, memory cards, and other types of physical storage hardware. IV. Additional Example Embodiments In an embodiment, a system for communicating potential cell execution outcomes in an interactive programming environment comprises a processor and a memory device. The memory device stores program code to be executed by the processor. The program code comprises an analysis engine configured to receive an event related to a first cell. Based at least on determining that no analysis is associated with the event, the analysis engine executes intra-cell analysis for the first cell based on a current global abstract state and abstract semantics of the first cell and stores in memory an updated global abstract state generated based on the intra-cell analysis of the first cell. Based at least on determining that an analysis is associated with the event, starting with the stored global abstract state, the analysis engine recursively executes, until a terminating criteria is reached, inter-cell analysis on each successor cell of a plurality of cells including the first cell for which the successor cell has a propagation dependency relative to a global abstract state generated by a respective predecessor cell of the successor cell, and communicates information related to outcomes of one or both of the intra-cell analysis and the inter-cell analysis. In an embodiment of the foregoing system, prior to executing intra-cell analysis for the first cell, based at least on determining that code of the first cell has changed, the analysis engine is further configured to convert the code of the first cell to the abstract semantics of the first cell comprising a format suitable for executing the intra-cell analysis on the first cell. In an embodiment of the foregoing system, the analysis engine is further configured to execute intra-cell analysis on each of the plurality of cells prior to executing the inter-cell analysis on each successor cell. In an embodiment of the foregoing system, the execution of the intra-cell analysis on each of the plurality of cells includes performing a pre-summary for each cell for determining whether each cell has a propagation dependency for receiving a global abstract state propagated from a respective predecessor cell. 
In an embodiment of the foregoing system, a successor cell has a propagation dependency on a respective predecessor cell if an abstract state generated by execution of abstract semantics of the respective predecessor cell is propagatable to the successor cell in the inter-cell analysis based on unbounded variables in the successor cell. In an embodiment of the foregoing system, a global abstract state generated by execution of abstract semantics of a predecessor cell is propagated to a respective successor cell and applied to execution of abstract semantics of the successor cell in the inter-cell analysis. In an embodiment of the foregoing system, the analysis engine is further configured to check for errors in the generated global abstract state based on a correctness criteria to generate the information related to the outcomes. In an embodiment of the foregoing system, the terminating criteria is based on a parameter configured in the interactive programming environment or is based on results of the abstract cell execution. In an embodiment of the foregoing system, the analysis is configurable via a user interface. In an embodiment of the foregoing system, the analysis comprises a stale state analysis, a machine learning data leakage analysis, a numerical out-of-bounds error analysis, an analysis to detect application programming interface (API) contract violations, or an analysis to detect logic errors causing a cell not to be executed. In an embodiment of the foregoing system, the interactive programming environment is a notebook. In an embodiment, a method for communicating potential cell execution outcomes in an interactive programming environment comprises performing an analysis. The analysis comprises receiving an event related to a first cell. Based at least on determining that no analysis is associated with the event, the analysis further comprises executing intra-cell analysis for the first cell based on a current global abstract state and abstract semantics of the first cell, and storing in memory an updated global abstract state generated based on the intra-cell analysis of the first cell. Based at least on determining that an analysis is associated with the event, the analysis comprises, starting with the stored global abstract state, recursively executing, until a terminating criteria is reached, inter-cell analysis on each successor cell of a plurality of cells including the first cell for which the successor cell has a propagation dependency relative to a global abstract state generated by a respective predecessor cell of the successor cell, and communicating information related to outcomes of one or both of the intra-cell analysis and inter-cell analysis. In an embodiment of the foregoing method, prior to executing intra-cell analysis for the first cell, based at least on determining that code of the first cell has changed, converting the code of the first cell to the abstract semantics of the first cell comprising a format suitable for executing the intra-cell analysis on the first cell. In an embodiment of the foregoing method, intra-cell analysis is executed on each of the plurality of cells prior to executing the inter-cell analysis on each successor cell. In an embodiment of the foregoing method, the executing of the intra-cell analysis on each of the plurality of cells includes performing a pre-summary for each cell for determining whether each cell has propagation dependency for receiving a global abstract state propagated from a respective predecessor cell. 
In an embodiment of the foregoing method, a successor cell has a propagation dependency on a respective predecessor cell if an abstract state generated by execution of abstract semantics of the respective predecessor cell is propagatable to the successor cell in the inter-cell analysis based on unbounded variables in the successor cell. In an embodiment of the foregoing method, a global abstract state generated by execution of abstract semantics of a predecessor cell is propagated to a respective successor cell and applied to execution of abstract semantics of the successor cell in the inter-cell analysis. In an embodiment of the foregoing method, errors in the generated global abstract state are checked for based on a correctness criteria to generate the information related to the outcomes. In an embodiment, a computer-readable medium having program code recorded thereon that when executed by at least one processor causes the at least one processor to perform a method for communicating potential cell execution outcomes in an interactive programming environment. The method comprises performing an analysis. The analysis comprises receiving an event related to a first cell. Based at least on determining that no analysis is associated with the event, the analysis further comprises executing intra-cell analysis for the first cell based on a current global abstract state and abstract semantics of the first cell, and storing in memory an updated global abstract state generated based on the intra-cell analysis of the first cell. Based at least on determining that an analysis is associated with the event, the analysis further comprises starting with the stored global abstract state, recursively executing, until a terminating criteria is reached, inter-cell analysis on each successor cell of a plurality of cells including the first cell for which the successor cell has a propagation dependency relative to a global abstract state generated by a respective predecessor cell of the successor cell, and communicating information related to outcomes of one or both of the intra-cell analysis and the inter-cell analysis. In an embodiment of the foregoing computer-readable medium, errors in the generated global abstract state are checked for based on a correctness criteria to generate the information related to the outcomes. VI. Conclusion While various embodiments of the present disclosed subject matter have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be understood by those skilled in the relevant art(s) that various changes in form and details may be made therein without departing from the spirit and scope of the disclosed subject matter as defined in the appended claims. Accordingly, the breadth and scope of the disclosed subject matter should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
11860767
The technologies described herein will become more apparent to those skilled in the art from studying the Detailed Description in conjunction with the drawings. Embodiments or implementations describing aspects of the invention are illustrated by way of example, and the same references can indicate similar elements. While the drawings depict various implementations for the purpose of illustration, those skilled in the art will recognize that alternative implementations can be employed without departing from the principles of the present technologies. Accordingly, while specific implementations are shown in the drawings, the technology is amenable to various modifications. DETAILED DESCRIPTION Disclosed here is a system and method to allow users with a disability to interact with computer programs. To ensure that the computer programs are accessible to users with a disability, prior to releasing the computer program, the system generates an indication of an appropriate test to perform to ensure that a user interface can provide intended information even to a user with a disability. The system can obtain a representation of the user interface to present to a user. The representation can be a design document describing the user interface, a sketch of the user interface, or a functioning user interface. The user interface can be a graphical user interface, an audio interface, or a haptic user interface. The system can identify an element associated with the user interface, such as a menu, a button, a link, etc., where the element is configured to provide information to the user, however, the user interface presentation of the element at least partially fails to provide the information to the user, due to the user's particular disability or severity of that disability. Based on the element, the system can determine an appropriate test to perform, where the appropriate test indicates a test to perform with a keyboard, a gesture test to perform with a mobile screen reader (e.g. TalkBack or VoiceOver), and/or an audio test to perform with a screen reader. The system can generate the indication of the appropriate test. For example, when the appropriate test includes the test to perform with the keyboard, the system can indicate a keyboard key and an effect caused by activating the keyboard key. When the appropriate test includes the gesture test, the system can indicate a gesture and an effect caused by performing the gesture. When the appropriate test includes the audio test, the system can indicate audio and a function to be performed by the audio. The system can provide the presentation prior to releasing the user interface to the user, to ensure that the appropriate tests are performed. A screen reader is a form of assistive technology (AT) that renders text and image content as speech or haptic output, e.g. braille output. A refreshable braille display or braille terminal is an electro-mechanical device for displaying characters, usually by means of round-tipped pins raised through holes in a flat surface. Visually impaired computer users who cannot use a standard computer monitor can use it to read text output. Deafblind computer users may also use refreshable braille displays. Speech synthesizers are also commonly used for the same task, and a blind user may switch between the two systems or use both at the same time depending on circumstances. Screen readers are essential to people who are blind, and are useful to people who are visually impaired, illiterate, or have a learning disability. 
Screen readers are software applications that attempt to convey what people with normal eyesight see on a display to their users via non-visual means, like text-to-speech, sound icons, or a Braille device. They do this by applying a wide variety of techniques that include, for example, interacting with dedicated accessibility APIs, using various operating system features (like inter-process communication and querying user interface properties), and employing hooking techniques. In computer programming, the term hooking covers a range of techniques used to alter or augment the behavior of an operating system, of applications, or of other software components by intercepting function calls or messages or events passed between software components. Code that handles such intercepted function calls, events or messages is called a hook. Examples of hooking can include intercepting an output from an application sent to the display, to add audio or Braille indications of the output. The description and associated drawings are illustrative examples and are not to be construed as limiting. This disclosure provides certain details for a thorough understanding and enabling description of these examples. One skilled in the relevant technology will understand, however, that the invention can be practiced without many of these details. Likewise, one skilled in the relevant technology will understand that the invention can include well-known structures or features that are not shown or described in detail, to avoid unnecessarily obscuring the descriptions of examples. Ensuring That Computer Programs Are Accessible to Users with a Disability FIG.1shows a user interface to generate an indication of an appropriate test to perform to ensure that a user interface provides intended information to a large set of users, including users with disabilities. The user interface100enables a user to indicate the type of tests to perform on a program, to ensure that a program110is accessible to users with a disability. The users with a disability may not be able to use the mouse or the keyboard, or they may not be able to read due to blindness, limited literacy, or other reasons. The program110can include web programs, such as web pages110A or native applications110B. The native applications110B can be associated with various devices such as mobile phones, computers, tablets, wearable devices, etc. User interfaces of programs110A,110B can include graphical user interfaces, audio user interfaces, and/or haptic user interfaces. FIG.2shows user interface categories associated with the web page. A hardware or software processor executing instructions described in this application can determine the type of program associated with the user interface, and based on the type of program can determine the multiple categories associated with user interface elements. When the program is a web page110A inFIG.1, the processor creates the following categories: HTML200, header (or banner)210, navigation220, main230, form240, and footer (contentinfo)250. Other categories can be included such as aside, article, section. The aside HTML element represents a portion of a document whose content is only indirectly related to the document's main content. Asides are frequently presented as sidebars or call-out boxes. The article HTML element represents a self-contained composition in a document, page, application, or site, which is intended to be independently distributable or reusable (e.g., in syndication). 
Examples include: a forum post, a magazine or newspaper article, or a blog entry, a product card, a user-submitted comment, an interactive widget or gadget, or any other independent item of content. The section element is a structural HTML element used to group together related elements. Each section typically includes one or more heading elements and additional elements presenting related content. The HTML200element indicates that the document is an HTML document. HyperText Markup Language, or HTML, is the standard markup language for documents designed to be displayed in a web browser. The HTML200element can include user interface elements such as basic web page201, skip link202, header/banner203, navigation menu204, main landmark205, and/or footer/content info206. The header210is a content sectioning element allowing organization of the document content into logical pieces. Header210represents introductory content, typically a group of introductory or navigational aids. Header210may contain some heading elements but also a logo, a search form, an author name, and other elements. The header210element can include user interface elements such as search input, header/banner, and/or navigation menu (not pictured). The navigation220element represents a section of a page that serves the purpose of providing navigation links, either within the current document or to other documents. Common examples of navigation sections are menus, tables of contents, and indexes. The navigation220element can include user interface elements such as search input221, pagination nav222, button223, link224, and/or navigation menu225. The main230element represents the dominant content of the <body>of a document. The main content area consists of content that is directly related to or expands upon the central topic of a document, or the central functionality of an application. The main230element can include user interface elements such as: Alert; Animation; Expander accordion; Figure: map, chart, table; Heading: h1, h2, h3; Image: jpg, gif, png, svg; Modal Dialog; Separator/horizontal rule; Sticky element; Table; Toast Snackbar; Tooltip; Video/audio player; Pagination nav; Progress bar; Link; Carousel; and/or Main landmark (not pictured). The form240element represents a document section containing interactive controls for submitting information. The form240element can include user interface elements such as: Alert; Checkbox; Date picker dialog; Dropdown listbox select; Hint, help, or error; Listbox with inline autocomplete; Number input; Password input; Radio button; Range slider input; Separator/horizontal rule; Star rating; Stepper input; Text input; Toast Snackbar; Toggle switch; Progress bar; and/or Button (not pictured). The footer250element is a content sectioning element. The footer250element represents a footer for its nearest sectioning content or sectioning root element. A <footer>typically contains information about the author of the section, copyright data, or links to related documents. The footer250element can include user interface elements such as navigation menu and/or footer/content info (not pictured). FIG.3shows user interface categories associated with a native application. A processor based on the type of program can determine the multiple categories associated with user interface elements. When the program is a native application110B inFIG.1, the processor creates the following categories: controls300and notifications310. 
The processor chooses the categories so that user interface elements belonging to native applications running on various platforms such as Android and iOS can be categorized in the categories300,310. The processor can also create categories specific to a particular platform, such as iOS. Those categories can include bars and views. Controls300are user interface elements that display content or enable interaction. Controls are the building blocks of the user interface. Controls300can include the following user interface elements: Button; Captcha; Carousel; Checkbox; Link; Menu; Pagination control; Picker/Spinner/Dropdown; Radio button; Range slider; Segmented Control/Tab; Stepper; Table row button; Text input; and/or Toggle switch. The notifications310include Alert/Modal Dialog and/or Toast/snack bar/banner notification. Alert/Modal Dialog notifications interrupt users and demand an action. They are appropriate when a user's attention needs to be directed toward important information. Toast, snackbar, and banner all refer to a graphical control element that communicates certain events to the user without forcing the user to react to this notification immediately, unlike conventional pop-up windows. Desktop notifications usually disappear automatically after a short amount of time. Often their content is then stored in some widget that allows the users to access past notifications at a more convenient time. A bar can include a navigation bar, search bar, sidebar, status bar, tab bar, toolbar, etc. The bar can provide various information to the user or receive input from the user. The bar can be vertical or horizontal on the screen. A view represents a single item on the user interface. It could be a control such as a button or slider, an input text field, or perhaps an image. A view covers a specific area on the screen and can be configured to respond to touch events. FIGS.4A-4Bshow selection of an appropriate test. When the appropriate user interface element400shown inFIGS.4A-4B,410shown inFIG.4B(only 2 labeled for brevity) is selected, the processor can generate the presentation460including the indication420of the appropriate test. The processor can select the user interface elements400,410by analyzing the user interface and determining the user interface elements400,410included in the user interface. Alternatively, the user can manually select the user interface elements400,410. The indication420of the appropriate test can be split into three categories: keyboard test430A, gesture test440A, and audio test450A. When the appropriate test includes the keyboard test430A, the test indicates a keyboard key and an effect caused by activating the keyboard key. For example, for testing the user interface element checkbox, the test indicates:
1. Test with the keyboard
Tab or arrow keys: Focus visibly moves to the checkbox
Spacebar: Activates on iOS and Android
Enter: Activates on Android.
The tests430A,440A,450A can indicate keys to test specific operating systems and expected effects that depend on the operating system. When the appropriate test includes the gesture test440A, the test indicates a gesture and an effect caused by performing the gesture. For example, for testing the user interface element checkbox, the test indicates:
Swipe: Focus moves to the element, expresses its name, role, state
Doubletap: Checkbox toggles between checked and unchecked states.
When the appropriate test includes the audio test450A, the test indicates audio and a function to be performed by the audio.
For example, for testing the user interface element checkbox, the test indicates that the audio should include:
Name: Describes the purpose of the control and matches the visible label
Role: Identifies itself as a checkbox in Android and a Button in iOS
Group: Visible label is grouped with the checkbox in a single swipe
State: Expresses its state (disabled/dimmed, checked, not checked).
For both native applications and web programs, the group indicates to check that labels and controls are a single object to the user. For example, if the user interface element is a checkbox, tapping on the checkbox label should activate the checkbox rather than forcing the user to tap on the small checkbox itself. The presentation460can be made in a human-readable format or in a machine-readable format because the testing can be performed automatically or manually. The indication420of the appropriate test can be editable to allow modification to the indicated tests430A,440A,450A. FIG.4Bshows selection of multiple appropriate tests. InFIG.4B, both the checkbox400and the menu410have been selected. The processor appends the indication420,470of the appropriate tests to create the presentation460. For testing the user interface element "menu," the keyboard test430B indicates:
1. Test with the keyboard
Tab or arrow keys: Focus visibly moves, confined within the menu
Escape: The menu closes and returns focus to the button that launched it
Space: Any buttons or links are activated on iOS and Android
Enter: Any buttons or links are activated on Android.
When the appropriate test includes the gesture test440B, the test indicates a gesture and an effect caused by performing the gesture. For testing the user interface element "menu," the gesture test440B indicates:
Swipe: Focus moves, confined within the menu.
Doubletap: This typically activates most elements.
The gesture test440B tests traversing and interacting with the screen for people who are blind and cannot see the screen. The gesture test can test that swiping across the screen moves the screen reader "focus" to different elements one by one. When the appropriate test includes the audio test450B, the test indicates audio and a function to be performed by the audio. For example, for testing the user interface element "menu," the audio test450B indicates that the audio should include:
Name: Purpose of menu is clear.
Role: May identify itself as a menu, sidebar, or panel. Confining the user within the menu communicates the context to the screen reader user that there is a menu or modal present.
State: When open, other content is inert. Expands/collapses, closes/opens states are typically announced for a menu, sidebar, or panel.
The tests430,440,450can be written in a human-readable format and/or in a machine-readable format. The tests430,440,450can be performed manually or automatically. To perform an automated test, the processor can receive the indication of the appropriate tests430,440,450in either a human-readable format or a machine-readable format. Based on the indication of the appropriate tests430,440,450, the processor can determine which tests to perform. For example, if the test indicates an audio test450A,450B, the processor can execute the program under test, and the audio test for the particular element under test. In addition, the processor can play the audio associated with the particular element and perform natural language processing on the audio to determine the content of the audio and whether the content of the audio matches the particular element under test.
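The following Python sketch illustrates, under simplifying assumptions, how such a comparison might be performed for a checkbox once the audio announcement has been transcribed to text; the individual components of the comparison are elaborated in the following paragraph. The function name, the expected keywords, and the substitution of simple keyword matching for the natural language processing described above are illustrative assumptions rather than a required implementation.

# A simplified sketch of checking the four components of the checkbox audio
# test 450A against a transcribed screen reader announcement. Speech-to-text
# and the expected values are assumed inputs; keyword matching stands in for
# the natural language processing described above.
def check_checkbox_audio(transcript: str, visible_label: str, platform: str) -> dict:
    text = transcript.lower()
    expected_role = "checkbox" if platform == "android" else "button"  # per test 450A
    results = {
        "Name": visible_label.lower() in text,                 # announces the visible label
        "Role": expected_role in text,                         # identifies its role
        # Simplification: label and role appearing in one announcement stands in
        # for "grouped with the checkbox in a single swipe".
        "Group": visible_label.lower() in text and expected_role in text,
        "State": any(s in text for s in ("checked", "unchecked", "dimmed", "disabled")),
    }
    results["passed"] = all(results.values())
    return results

# Example: a single swipe announcement captured from a screen reader.
print(check_checkbox_audio("Accept terms, not checked, checkbox", "Accept terms", "android"))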
For example, if the element is a checkbox, the processor can determine whether the audio corresponds to the output specified in the "Name," "Role," "Group," and "State" requirements of the audio test450A. In particular, to pass the "Name" component of the audio test450A, the processor can determine whether the audio contains the specified output, namely whether the audio correctly states the name of the checkbox. To pass the "Role" component of the audio test450A, the processor can determine whether the audio contains the specified output, namely whether the audio correctly states that the element under test is a checkbox in Android, or a button in iOS. To pass the "Group" component of the audio test450A, the processor can determine whether the audio contains the specified output, namely whether the audio correctly states that the name of the element under test is grouped with the checkbox. To pass the "Group" component of the audio test450A, the processor can also determine that tapping on the checkbox label activates the checkbox, rather than forcing the user to tap on the small checkbox itself. The label and the checkbox should act as a single unit. Finally, to pass the "State" component of the audio test450A, the processor can determine whether the audio contains the specified output, namely whether the audio correctly states if the element under test is checked or unchecked. If the audio passes all four of the tests, the processor can determine that the program satisfies the audio test450A. If the audio does not pass all four of the tests, the processor can indicate that the program has failed the audio test. FIG.5is a flowchart that illustrates a process to generate an indication of an appropriate test to perform to ensure that a user interface provides intended information to a user. In step500, a hardware or software processor executing instructions described in this application can obtain a representation of the user interface to present to a user. The representation of the user interface can be a design describing the user interface, and/or a functioning user interface. The user interface can include a graphical user interface, an audio user interface, and/or a haptic user interface. In step510, the processor can determine an element associated with the user interface such as a radio button, text input, captcha, etc. The element is configured to provide information to the user; however, the user interface presentation of the element at least partially fails to provide the information to the user. For example, the user can have a disability, may not be able to see, and may need to interact with the user interface using audio or gestures. The user may not be able to use the mouse and may need to interact with the user interface using a keyboard. In step520, based on the element, the processor can determine an appropriate test to perform. To accommodate various disabilities, the appropriate test can indicate a test to perform with a keyboard, a gesture test to perform with a mobile screen reader, and an audio test to perform with a screen reader. In step530, the processor can generate the indication of the appropriate test categorized by the type of test. When the appropriate test includes the test to perform with the keyboard, the appropriate test indication can include a keyboard key and an effect caused by activating the keyboard key. When the appropriate test includes the gesture test, the appropriate test indication can include a gesture and an effect caused by performing the gesture.
When the appropriate test includes the audio test, the appropriate test indication can include audio and a function to be performed by the audio. The processor can provide the presentation prior to releasing the user interface to the user. In step540, the processor can provide the indication of the appropriate test prior to releasing the user interface to the user. The indication can be a presentation such as a text presentation, an audio presentation, a multimedia presentation, etc. The indication can be editable. Prior to releasing the user interface to the user, the processor can perform the appropriate test on the user interface and can indicate when the user interface passed the appropriate test. The processor can determine a type of program associated with the user interface. Based on the type of program, the processor can determine multiple categories associated with the user interface. The processor can determine a type of element by categorizing the element associated with the user interface into a category among the multiple categories. Finally, the processor can enable a selection of the appropriate test by presenting the multiple categories, the element, and the category associated with the element. The user interface can be a web page. When the user interface is a web page, the processor can determine multiple categories associated with the user interface including at least four of: HTML, header, navigation, main, form, and footer. The processor can categorize the element associated with the user interface into a category among the multiple categories. The processor can enable a selection of the appropriate test by presenting the multiple categories, the element, and the category associated with the element. The selection can be performed by a user or automatically. The user interface can be associated with a native application. When the user interface is associated with the native application, the processor can determine multiple categories associated with the user interface including controls and notifications. The processor can categorize the element associated with the user interface into a category among the multiple categories. The processor can enable a selection of the appropriate test by presenting the multiple categories, the element, and the category associated with the element. As mentioned above, the selection can be performed by a user or automatically. The processor can test the performance of the program. The processor can obtain a program associated with the user interface and the indication of the appropriate test, where the appropriate test includes the audio test. The processor can execute the program associated with the user interface and the audio test. The processor can perform natural language processing on the audio captured during the audio test to determine whether the audio corresponds to an output indicated in the appropriate test. Upon determining that the audio does not correspond to the output indicated in the appropriate test, the processor can indicate that the program did not pass the test.
Computer System
FIG.6is a block diagram that illustrates an example of a computer system600in which at least some operations described herein can be implemented.
As shown, the computer system600can include: one or more processors602, main memory606, non-volatile memory610, a network interface device612, a video display device618, an input/output device620, a control device622(e.g., keyboard and pointing device), a drive unit624that includes a storage medium626, and a signal generation device630that are communicatively connected to a bus616. The bus616represents one or more physical buses and/or point-to-point connections that are connected by appropriate bridges, adapters, or controllers. Various common components (e.g., cache memory) are omitted fromFIG.6for brevity. Instead, the computer system600is intended to illustrate a hardware device on which components illustrated or described relative to the examples of the Figures and any other components described in this specification can be implemented. The computer system600can take any suitable physical form. For example, the computer system600can share a similar architecture as that of a server computer, personal computer (PC), tablet computer, mobile telephone, game console, music player, wearable electronic device, network-connected (“smart”) device (e.g., a television or home assistant device), AR/VR systems (e.g., head-mounted display), or any electronic device capable of executing a set of instructions that specify action(s) to be taken by the computer system600. In some implementations, the computer system600can be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC), or a distributed system such as a mesh of computer systems, or it can include one or more cloud components in one or more networks. Where appropriate, one or more computer systems600can perform operations in real time, near real time, or in batch mode. The network interface device612enables the computer system600to mediate data in a network614with an entity that is external to the computer system600through any communication protocol supported by the computer system600and the external entity. Examples of the network interface device612include a network adapter card, a wireless network interface card, a router, an access point, a wireless router, a switch, a multilayer switch, a protocol converter, a gateway, a bridge, a bridge router, a hub, a digital media receiver, and/or a repeater, as well as all wireless elements noted herein. The memory (e.g., main memory606, non-volatile memory610, machine-readable medium626) can be local, remote, or distributed. Although shown as a single medium, the machine-readable medium626can include multiple media (e.g., a centralized/distributed database and/or associated caches and servers) that store one or more sets of instructions628. The machine-readable (storage) medium626can include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the computer system600. The machine-readable medium626can be non-transitory or comprise a non-transitory device. In this context, a non-transitory storage medium can include a device that is tangible, meaning that the device has a concrete physical form, although the device can change its physical state. Thus, for example, non-transitory refers to a device remaining tangible despite this change in state. Although implementations have been described in the context of fully functioning computing devices, the various examples are capable of being distributed as a program product in a variety of forms. 
Examples of machine-readable storage media, machine-readable media, or computer-readable media include recordable-type media such as volatile and non-volatile memory devices610, removable flash memory, hard disk drives, optical disks, and transmission-type media such as digital and analog communication links. In general, the routines executed to implement examples herein can be implemented as part of an operating system or a specific application, component, program, object, module, or sequence of instructions (collectively referred to as “computer programs”). The computer programs typically comprise one or more instructions (e.g., instructions604,608,628) set at various times in various memory and storage devices in computing device(s). When read and executed by the processor602, the instruction(s) cause the computer system600to perform operations to execute elements involving the various aspects of the disclosure. Remarks The terms “example,” “embodiment,” and “implementation” are used interchangeably. For example, references to “one example” or “an example” in the disclosure can be, but not necessarily are, references to the same implementation; and, such references mean at least one of the implementations. The appearances of the phrase “in one example” are not necessarily all referring to the same example, nor are separate or alternative examples mutually exclusive of other examples. A feature, structure, or characteristic described in connection with an example can be included in another example of the disclosure. Moreover, various features are described which can be exhibited by some examples and not by others. Similarly, various requirements are described which can be requirements for some examples but no other examples. The terminology used herein should be interpreted in its broadest reasonable manner, even though it is being used in conjunction with certain specific examples of the invention. The terms used in the disclosure generally have their ordinary meanings in the relevant technical art, within the context of the disclosure, and in the specific context where each term is used. A recital of alternative language or synonyms does not exclude the use of other synonyms. Special significance should not be placed upon whether or not a term is elaborated or discussed herein. The use of highlighting has no influence on the scope and meaning of a term. Further, it will be appreciated that the same thing can be said in more than one way. Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense; that is to say, in the sense of “including, but not limited to.” As used herein, the terms “connected,” “coupled,” or any variant thereof means any connection or coupling, either direct or indirect, between two or more elements; the coupling or connection between the elements can be physical, logical, or a combination thereof. Additionally, the words “herein,” “above,” “below,” and words of similar import can refer to this application as a whole and not to any particular portions of this application. Where context permits, words in the above Detailed Description using the singular or plural number may also include the plural or singular number respectively. 
The word “or” in reference to a list of two or more items covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list. The term “module” refers broadly to software components, firmware components, and/or hardware components. While specific examples of technology are described above for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize. For example, while processes or blocks are presented in a given order, alternative implementations can perform routines having steps, or employ systems having blocks, in a different order, and some processes or blocks may be deleted, moved, added, subdivided, combined, and/or modified to provide alternative or sub-combinations. Each of these processes or blocks can be implemented in a variety of different ways. Also, while processes or blocks are at times shown as being performed in series, these processes or blocks can instead be performed or implemented in parallel, or can be performed at different times. Further, any specific numbers noted herein are only examples such that alternative implementations can employ differing values or ranges. Details of the disclosed implementations can vary considerably in specific implementations while still being encompassed by the disclosed teachings. As noted above, particular terminology used when describing features or aspects of the invention should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the invention with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the invention to the specific examples disclosed herein, unless the above Detailed Description explicitly defines such terms. Accordingly, the actual scope of the invention encompasses not only the disclosed examples, but also all equivalent ways of practicing or implementing the invention under the claims. Some alternative implementations can include additional elements to those implementations described above or include fewer elements. Any patents and applications and other references noted above, and any that may be listed in accompanying filing papers, are incorporated herein by reference in their entireties, except for any subject matter disclaimers or disavowals, and except to the extent that the incorporated material is inconsistent with the express disclosure herein, in which case the language in this disclosure controls. Aspects of the invention can be modified to employ the systems, functions, and concepts of the various references described above to provide yet further implementations of the invention. To reduce the number of claims, certain implementations are presented below in certain claim forms, but the applicant contemplates various aspects of an invention in other forms. For example, aspects of a claim can be recited in a means-plus-function form or in other forms, such as being embodied in a computer-readable medium. A claim intended to be interpreted as a means-plus-function claim will use the words “means for.” However, the use of the term “for” in any other context is not intended to invoke a similar interpretation. The applicant reserves the right to pursue such additional claim forms in either this application or a continuing application.
34,864
11860768
DETAILED DESCRIPTION Aspects and applications of the invention presented herein are described below in the drawings and detailed description of the invention. Unless specifically noted, it is intended that the words and phrases in the specification and the claims be given their plain, ordinary, and accustomed meaning to those of ordinary skill in the applicable arts. In the following description, and for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the various aspects of the invention. It will be understood, however, by those skilled in the relevant arts, that the present invention may be practiced without these specific details. In other instances, known structures and devices are shown or discussed more generally in order to avoid obscuring the invention. In many cases, a description of the operation is sufficient to enable one to implement the various forms of the invention, particularly when the operation is to be implemented in software. It should be noted that there are many different and alternative configurations, devices and technologies to which the disclosed inventions may be applied. The full scope of the inventions is not limited to the examples that are described below. Supply chain process control systems are built on different computing platforms and spread throughout the supply chain network. As supply chains and other industries integrate the systems that underlie these processes with cloud-based systems, the tools for testing the quality assurance and performance (QAP) of the systems need to improve. QAP testing comprises a mixture of cloud-based and non-cloud-based technologies and requires testing tools that can handle a wide variety of clients, such as, for example, HTML-based web user interfaces, rich user interfaces having integrated spreadsheets, thick clients, and the like. QAP testing would be benefited by having a standard test case format that is usable across different platforms, different clients, and different testing types. QAP testing of supply chain systems and programs include regression testing and performance testing. Regression testing may be done automatically using an automation testing framework. Automation testing framework uses automation tools to test a program by automatically performing predefined actions described in a test case. The automation tools then compare the test program results with the expected results. If the results align, the program is behaving properly. If the results do not align, the program likely contains one or more errors. Fixing these errors often comprises, for example, examining the program code, identifying and altering the code to fix errors, and running additional tests until the actual and expected outcomes align. In addition to regression testing, QAP testing includes performance testing. Performance testing determines the responsiveness and stability of a system under a particular workload. It can also serve to investigate, measure, validate, or verify other quality attributes of the system, such as scalability, reliability, and resource usage. As described below, embodiments of the following disclosure comprise an automation testing framework having an infrastructure highly adapted to advanced automation of QAP testing of multi-solution, platform-independent programs and systems. 
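Purely as an illustrative sketch, and not as a description of the disclosed framework itself, the following Python fragment shows the core regression-testing loop described above, in which predefined actions from a test case are performed against the program under test and the observed results are compared with the expected results. The function and parameter names and the stubbed program under test are assumptions made for illustration.

# An illustrative sketch of the regression loop: perform predefined actions,
# compare observed results with expected results, and report misalignments.
from typing import Callable, Iterable, Tuple

def run_regression(actions: Iterable[Tuple[str, str]],
                   perform: Callable[[str], str]) -> list:
    """Each action is (action_description, expected_result); 'perform' is an
    assumed callable that executes the action against the program under test
    and returns the observed result."""
    failures = []
    for description, expected in actions:
        observed = perform(description)
        if observed != expected:            # results do not align -> likely error
            failures.append((description, expected, observed))
    return failures

# Example with a stubbed program under test.
stub = lambda action: "order saved" if action == "click save" else "unknown"
print(run_regression([("click save", "order saved"), ("click cancel", "order cleared")], stub))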
According to particular aspects, embodiments include a user interface package for an automation testing framework to perform multi-threaded cloud-based regression testing, smoke testing, performance testing, load testing, issue replication, performance benchmarking, and baselining, all of which are easily configured using the automation tools of testing system interface400and a unified test case writing technique that is product-, solution-, and technology-independent. As described below, embodiments comprise a single workflow that extends across regression and performance testing and translates standardized test case files to directly invoke the appropriate testing components. These and other embodiments described in the following disclosure reduce cycle time, extend across many supply chain systems and programs, run tests significantly faster than human users, replicate issues, reliably perform the same operation, allow comprehensive testing of nearly every feature in software applications (including sophisticated tests that bring hidden information to light), have zero defect seepage and no overhead, and provide a central repository. FIG.1illustrates exemplary distributed supply chain network100, according to an embodiment. Distributed supply chain network100comprises one or more QAP servers110, one or more external testing components112, one or more application and/or web servers120, one or more data nodes130, one or more infrastructure servers and services140, database150, inventory system160, transportation network170, one or more supply chain entities180, computer190, network198, and communication links199a-199j. Although one or more QAP servers110, one or more external testing components112, one or more application and/or web servers120, one or more data nodes130, one or more infrastructure servers and services140, a single database150, a single inventory system160, a single transportation network170, one or more supply chain entities180, a single computer190, a single network198, and one or more communication links199a-199jare shown and described, embodiments contemplate any number of QAP servers, external testing components, application and/or web servers, data nodes, infrastructure servers and services, databases, inventory systems, transportation networks, supply chain entities, computers, networks, and communication links, according to particular needs. One or more QAP servers110comprise user action processor114, test processor116, and testing components118that support various QAP testing tools based on an automated testing framework, provide the ability to automate test cases for both web- and operating system (OS)-based programs (such as, for example, WINDOWS operating system-based programs), and generate detailed reports dynamically for regression testing and Performance, Scalability, and Reliability (PSR) testing. As described in more detail below, user action processor114receives and processes a user input (such as, for example, receiving a user input from a graphical user interface, described in more detail below) and decides the action to be invoked. Test processor116receives test case and properties files, processes the files, and then invokes one or more testing components118and/or one or more external testing components112comprising automation and testing tools for multi-threaded QAP testing. As explained in more detail below, the test cases are written in a standardized format that allows regression testing and performance testing for both OS and web applications.
As explained in more detail below, test processor116automatically chooses the correct one or more testing components118and external testing components112by invoking the appropriate underlying Application Program Interfaces APIs156based on the standardized format in the test cases. According to embodiments, QAP server110supports technology stacks including, the web, a thick client, JAVA applications, rich user interface, Putty, and Citrix Web. Although QAP server110is described in connection with supply chain systems and programs, embodiments contemplate regression and performance testing for applications including workforce management, assortment planning, allocation planning, order promising, enterprise simulation planning, sales and operation planning, order optimization, inventory optimization, demand planning, flow casting, order fulfillment, dispatching and transportation management. Additionally, QAP server110is industry-agnostic and may perform QAP testing in any industry, including, for example, retail, supply chain, warehouse management, transportation management, workforce management, and other areas. As described in more detail below, external testing components112may comprise one or more modules that support processes to perform testing on one or more programs of one or more application and/or web servers120and is capable of storing, receiving, processing, communicating, and modifying data stored at one or more data nodes130or database150. According to embodiments, one or more application and/or web servers120store, receive, process, communicate, and modify data at one or more data nodes130and/or database150. One or more application and/or web servers120host applications that support one or more supply chain processes, including, for example, supply chain management, inventory optimization, or retail, manufacturing, enterprise, or utility planning. In addition, one or more data nodes130and database150may comprise any physical or virtual server, and any supporting hardware or software, supporting the storage of data at one or more locations local to, or remote from, QAP server110and one or more application and/or web servers120. Database150comprises one or more databases or other data storage arrangements at one or more locations, local to, or remote from, QAP server110, one or more application and/or web servers120, one or more data nodes130, supply chain entities180, and computer190. According to embodiments, database150comprises supply chain data152, test case data154, and APIs156. Supply chain data152may comprise, for example, metadata, which is comprised of dimensions, hierarchies, levels, members, attributes, and member attribute values, and fact data, comprising measure values for combinations of members. Data stored in database150may be, for example, various decision variables, business constraints, goals, and objectives of supply chain entities180. According to other embodiments, supply chain data152may comprise for example, various decision variables, business constraints, goals and objectives of one or more supply chain entities150. According to some embodiments, supply chain data152may comprise hierarchical objectives specified by, for example, business rules, master planning requirements along with scheduling constraints and discrete constraints, such as, for example, sequence dependent setup times, lot-sizing, storage, shelf life, and other like constraints. According to an embodiment, test case data154may comprise test case files and properties files. 
According to embodiments, QAP server110accepts test cases that are formatted as comma-separated files (CSV) and include the properties identified in a properties file associated with the particular test case type. The test cases may be bundled with the programs they test and may function automatically behind a cloud and a firewall. According to embodiments, test case files of the test cases comprise five elements: test case name, action type, selector, value, and/or dataset. Each test case comprises line items that specify the test case name and subsequent line items comprise actions to execute the test case, as described in more detail below. Additionally, a properties file of the test cases comprises a map between a user interface element and its locator. According to some embodiments, a properties file may comprise an object map between user interface elements and the way to find it, such as, for example, a mapping between a web element and the corresponding locator type and value. According to embodiments, the mapping for the various elements in an application are stored in different locations, depending on the framework of the application. For example, some frameworks store an object repository in a properties file, and some store it inside an XML or a database file, such as, for example, a MICROSOFT EXCEL spreadsheet format. Some embodiments of automation testing framework200include test cases that are written with a standardized methodology that is similar for regression and performance testing, and re-usable for different versions of the same software. While test case format for regression and performance testing is similar, according to embodiments, there is difference in what is provided for regression and for performance testing. Similar to a TestCaseName column, a performance test case format may comprise a scenario (Performance Test Case Scenario) column. Each Test Case Scenario may comprise multiple transactions (Transaction_Name column) and each transaction may comprise one or more requests to send to the server (Request_URL). Each request may comprise a post parameter associated based on a request type. Also, each request may comprise other required data, such as, for example, a regular expression to retrieve dynamic request parameters and host/port details if a request is made for another server. The test case processor, as described in more detail below, may identify if a user requested regression testing or performance testing based on the user input and the test case file format. The test case process may invoke a performance testing component in case of performance testing and, for regression testing, test processor116may identify the proper testing component based on the action type and invoke the proper testing component using one or more testing APIs156. According to embodiments, the test case files are the same for both regression and performance testing and for windows and web applications. Test processor116translates test case files to invoke the underlying testing APIs156. According to embodiments, QAP server110is deployed on a cloud with access to APIs156available for integration. Here, integration indicates that, although the QAP processes may be run from testing system interface400, one or more supply chain entities may use one or more scripts, commands, and/or APIs156from the backend to utilize the testing APIs156from QAP server110to integrate QAP testing with their supply chain processes. Inventory system160comprises server162and database164. 
Server162of inventory system160is configured to receive and transmit item data, including item identifiers, pricing data, attribute data, inventory levels, and other like data about one or more items at one or more locations in distributed supply chain network100. Server162stores and retrieves item data from database164of inventory system160, database150, or one or more locations in distributed supply chain network100. Transportation network140comprises server142and database144. According to embodiments, transportation network140directs one or more transportation vehicles146to ship one or more items between one or more supply chain entities150at least partially based on an inventory policy, target service levels, the number of items currently in stock at one or more supply chain entities150, the number of items currently in transit in the transportation network140, forecasted demand, a supply chain disruption, and/or one or more other factors described herein. Transportation vehicles146comprise, for example, any number of trucks, cars, vans, boats, airplanes, unmanned aerial vehicles (UAVs), cranes, robotic machinery, or the like. Transportation vehicles146may comprise radio, satellite, or other communication that communicates location information (such as, for example, geographic coordinates, distance from a location, global positioning satellite (GPS) information, or the like) with supply chain planner110, one or more imaging devices120, inventory system130, transportation network140, and one or more supply chain entities150to identify the location of transportation vehicle146and the location of any inventory or shipment located on transportation vehicle146. According to embodiments, transportation vehicles may be associated with one or more suppliers, manufacturers, distributors, or retailers, or another supply chain entity, according to particular needs and be directed by automated navigation including GPS guidance. As shown inFIG.1, distributed supply chain network100comprising QAP server110, external testing components112, one or more application and/or web servers120, one or more data nodes130, one or more infrastructure servers and services140, database150, inventory system160, transportation network170, supply chain entities180, computer190, network198, and communication links199a-199jmay operate on one or more computers that are integral to or separate from the hardware and/or software that support QAP server110, external testing components112, one or more application and/or web servers120, one or more data nodes130, one or more infrastructure servers and services140, database150, inventory system160, transportation network170, supply chain entities180, computer190, network198, and communication links199a-199j. Computer190may include any suitable input device192, such as a keypad, mouse, touch screen, microphone, or other device to input information. Output device194may convey information associated with the operation of distributed supply chain network100, including digital or analog data, visual information, or audio information. Computer190may include fixed or removable computer-readable storage media, including a non-transitory computer readable medium, magnetic computer disks, flash drives, CD-ROM, in-memory device or other suitable media to receive output from and provide input to distributed supply chain network100. 
Computer190may include one or more processors196and associated memory to execute instructions and manipulate information according to the operation of the distributed supply chain network100and any of the methods described herein. In addition, or as an alternative, embodiments contemplate executing the instructions on computer190that cause computer190to perform functions of the method. Further examples may also include articles of manufacture including tangible non-transitory computer-readable media that have computer-readable instructions encoded thereon, and the instructions may comprise instructions to perform functions of the methods described herein. QAP server110, external testing components112, one or more application and/or web servers120, one or more data nodes130, one or more infrastructure servers and services140, database150, inventory system160, transportation network170, and supply chain entities180may each operate on one or more separate computers, a network of one or more separate or collective computers, or may operate on one or more shared computers. In addition, distributed supply chain network100may comprise a cloud-based computing system having processing and storage devices at one or more locations, local to, or remote from QAP server, external testing components112, one or more application and/or web servers120, one or more data nodes130, one or more infrastructure servers and services140, database150, inventory system160, transportation network170, and supply chain entities180. In addition, each of the one or more computers190may be a work station, personal computer (PC), network computer, notebook computer, tablet, personal digital assistant (PDA), cell phone, telephone, smartphone, mobile device, wireless data port, augmented or virtual reality headset, or any other suitable computing device. In an embodiment, one or more users may be associated with QAP server110, external testing components112, one or more application and/or web servers120, one or more data nodes130, one or more infrastructure servers and services140, database150, inventory system160, transportation network170, and supply chain entities180. These one or more users may include, for example, a “manager” or a “planner” handling automated application testing and/or one or more related tasks within distributed supply chain network100. In addition, or as an alternative, these one or more users within distributed supply chain network100may include, for example, one or more computers programmed to autonomously handle, among other things, production planning, demand planning, option planning, sales and operations planning, order placement, automated warehouse operations (including removing items from and placing items in inventory), robotic production machinery (including building or items), and/or one or more related tasks within distributed supply chain network100. One or more supply chain entities180may comprise server182and database184. One or more supply chain entities180represent one or more suppliers, manufacturers, distributors, and retailers in one or more supply chain networks, including one or more enterprises. A manufacturer may be any suitable entity that manufactures at least one product. A manufacturer may use one or more items during the manufacturing process to produce any manufactured, fabricated, assembled, or otherwise processed item, material, component, good or product. 
In one embodiment, a product represents an item ready to be supplied to, for example, another one or more supply chain entities180, such as a supplier, an item that needs further processing, or any other item. A manufacturer may, for example, produce and sell a product to a supplier, another manufacturer, a distributor, a retailer, a customer, or any other suitable person or an entity. Such manufacturers may comprise automated robotic production machinery that produce products based, at least in part, on a supply chain plan, a material or capacity reallocation, current and projected inventory levels, and/or one or more additional factors described herein. One or more suppliers may be any suitable entity that offers to sell or otherwise provides one or more components to one or more manufacturers. Suppliers may comprise automated distribution systems that automatically transport products to one or more manufacturers based, at least in part, on a supply chain plan, a material or capacity reallocation, current and projected inventory levels, and/or one or more additional factors described herein. One or more distributors may be any suitable entity that offers to sell, warehouse, transport, or distribute at least one product to one or more retailers and/or customers. Distributors may, for example, receive a product from a first supply chain entity in distributed supply chain network100and store and transport the product for a second supply chain entity. Such distributors may comprise automated warehousing systems that automatically transport to one or more retailers or customers and/or automatically remove from or place into inventory products based, based, at least in part, on a supply chain plan, a material or capacity reallocation, current and projected inventory levels, and/or one or more additional factors described herein. One or more retailers may be any suitable entity that obtains one or more products to sell to one or more customers. In addition, one or more retailers may sell, store, and supply one or more components and/or repair a product with one or more components. One or more retailers may comprise any online or brick and mortar location, including locations with shelving systems. Shelving systems may comprise, for example, various racks, fixtures, brackets, notches, grooves, slots, or other attachment devices for fixing shelves in various configurations. These configurations may comprise shelving with adjustable lengths, heights, and other arrangements, which may be adjusted by an employee of one or more retailers based on computer-generated instructions or automatically by machinery to place products in a desired location. Although one or more suppliers, manufacturers, distributors, and retailers are shown and described as separate and distinct entities, the same entity may simultaneously act as any one or more suppliers, manufacturers, distributors, and retailers. For example, one or more supply chain entities180acting as a manufacturer could produce a product, and the same entity could act as a supplier to supply a product to another one or more supply chain entities180. Although one example of distributed supply chain network100is shown and described; embodiments contemplate any configuration of distributed supply chain network100, without departing from the scope of the present disclosure. 
In one embodiment, QAP server110may be coupled with network198using communications link199a, which may be any wireline, wireless, or other link suitable to support data communications between QAP server110and network198during operation of distributed supply chain network100. External testing components112may be coupled with network198using communications link199b, which may be any wireline, wireless, or other link suitable to support data communications between external testing components112and network198during operation of distributed supply chain network100. One or more application and/or web servers120may be coupled with network198using communications link199c, which may be any wireline, wireless, or other link suitable to support data communications between one or more application and/or web servers120and network198during operation of distributed supply chain network100. One or more data nodes130may be coupled with network198using communications link199d, which may be any wireline, wireless, or other link suitable to support data communications between one or more data nodes130and network198during operation of distributed supply chain network100. One or more infrastructure servers and services140may be coupled with network198using communications link199e, which may be any wireline, wireless, or other link suitable to support data communications between one or more infrastructure servers and services140and network198during operation of distributed supply chain network100. Database150may be coupled with network198using communications link199f, which may be any wireline, wireless, or other link suitable to support data communications between database150and network198during operation of distributed supply chain network100. Inventory system160may be coupled with network198using communications link199g, which may be any wireline, wireless, or other link suitable to support data communications between inventory system160and network198during operation of distributed supply chain network100. Transportation networks may be coupled with network198using communications link199h, which may be any wireline, wireless, or other link suitable to support data communications between transportation networks and network198during operation of distributed supply chain network100. One or more computers may be coupled with network198using communications link199i, which may be any wireline, wireless, or other link suitable to support data communications between the one or more computers and network198during operation of distributed supply chain network100. One or more supply chain entities180may be coupled with network198using communications link199j, which may be any wireline, wireless, or other link suitable to support data communications between one or more supply chain entities180and network198during operation of distributed supply chain network100. 
Although communication links199a-199jare shown as generally coupling QAP server110, external testing components112, one or more application and/or web servers120, one or more data nodes130, one or more infrastructure servers and services140, database150, inventory system160, transportation network170, one or more supply chain entities180, and computer190to network198, each of QAP server110, external testing components112, one or more application and/or web servers120, one or more data nodes130, one or more infrastructure servers and services140, database150, inventory system160, transportation network170, one or more supply chain entities180, and computer190may communicate directly with each other, according to particular needs. In another embodiment, network198includes the Internet and any appropriate local area networks (LANs), metropolitan area networks (MANs), or wide area networks (WANs) coupling QAP server110, external testing components112, one or more application and/or web servers120, one or more data nodes130, one or more infrastructure servers and services140, database150, inventory system160, transportation network170, one or more supply chain entities180, and computer190. For example, data may be maintained by locally or externally of QAP server110, external testing components112, one or more application and/or web servers120, one or more data nodes130, one or more infrastructure servers and services140, database150, inventory system160, transportation network170, one or more supply chain entities180, and computer190and made available to one or more associated users of QAP server110, external testing components112, one or more application and/or web servers120, one or more data nodes130, one or more infrastructure servers and services140, database150, inventory system160, transportation network170, one or more supply chain entities180, and computer190using network198or in any other appropriate manner. For example, data may be maintained in a cloud database at one or more locations external to QAP server110, external testing components112, one or more application and/or web servers120, one or more data nodes130, one or more infrastructure servers and services140, database150, inventory system160, transportation network170, one or more supply chain entities180, and computer190and made available to one or more associated users of QAP server110, external testing components112, one or more application and/or web servers120, one or more data nodes130, one or more infrastructure servers and services140, database150, inventory system160, transportation network170, one or more supply chain entities180, and computer190using the cloud or in any other appropriate manner. Those skilled in the art will recognize that the complete structure and operation of network198and other components within distributed supply chain network100are not depicted or described. Embodiments may be employed in conjunction with known communications networks and other components. In accordance with the principles of embodiments described herein, applications supported by one or more application and/or web servers120may reallocate inventory of one or more items among demands or orders of one or more supply chain entities180. 
Furthermore, applications supported by one or more application and/or web servers120may instruct automated machinery (i.e., robotic warehouse systems, robotic inventory systems, automated guided vehicles, mobile racking units, automated robotic production machinery, robotic devices and the like) to adjust product mix ratios, inventory levels at various stocking points, production of products of manufacturing equipment, proportional or alternative sourcing of one or more supply chain entities180, and the configuration and quantity of packaging and shipping of items based on current inventory or production levels. For example, methods described herein may include computers190receiving product data from automated machinery having at least one sensor and the product data corresponding to an item detected by the automated machinery. The received product data may include an image of the item, an identifier, as described above, and/or other product data associated with the item (dimensions, texture, estimated weight, and any other like data). The method may further include computers190looking up the received product data in a database system associated with applications supported by one or more application and/or web servers120to identify the item corresponding to the product data received from the automated machinery. Computers190may also receive from the automated machinery, a current location of the identified item. Based on the identification of the item, computers may also identify (or alternatively generate) a first mapping in distributed supply chain network110, where the first mapping is associated with the current location of the identified item. Computers may also identify a second mapping in distributed supply chain network110, where the second mapping is associated with a past location of the identified item. Computers may also compare the first mapping and the second mapping to determine if the current location of the identified item in the first mapping is different than the past location of the identified item in the second mapping. Computers may then send instructions to the automated machinery based, as least in part, on one or more differences between the first mapping and the second mapping such as, for example, to locate items to add to or remove from an inventory of one or more supply chain entities. FIG.2illustrates high-level architecture diagram200of QAP server110ofFIG.1, in accordance with an embodiment. High-level architecture diagram200comprises QAP server110, external testing components112, one or more clients210, and communication links220a-220b. As discussed above in connection withFIG.1, QAP server110comprises user action processor114, test processor116, and one or more testing components118and is communicatively coupled with one or more external testing components112and one or more client computers210. QAP server110communicates with one or more client computers210and the one or more external testing components112using one or more communications links220a-220b, which may be any wireline, wireless, or other link suitable to support data communications between QAP server110, one or more external testing components112, one or more clients210, and network198during operation of distributed supply chain network100. As discussed above, user action processor114receives and processes a user input at input process232and decides the action to be invoked at user action process234. 
For example, QAP server110receives test cases and test case parameters by a user interface, such as, for example, a graphical user interface (GUI), that provides for the selection and input of one or more options, parameters, and other variables associated with a test case for a particular application to be tested, as described in more detail herein. Also, as discussed herein, test processor116receives test case and properties files at tests process236, processes the files, and, if test processor116validates the received files, creates a test case for use during application testing. Additionally, test processor116may invoke an external testing component for testing one or more applications by calling one or more APIs156that execute one or more processes of external testing components112. According to embodiments, test processor116executes the validated test cases which comprise a test case file and a properties file. As described above, the test case files are written in a standardized format that ignores the software environment of the tested application. However, test processor116cannot use the same external testing component or API calls for any application, or even within the same application, because different software environments require different testing processes. Instead, and as described in more detail herein, test processor116analyzes the test case file and the properties file to determine the appropriate external testing component and API call for each action required by the test case. For example, a test case may require testing of the application itself, a third-party remote application, a spreadsheet application, any of which may be in different software environments. By reading the test case file and the properties file, test processor116invokes one or more external testing components112using APIs156of QAP server110. Additionally, embodiments may also include parallel execution of web application test cases by multi-threading. Testing components118of QAP server110may comprise webpage automation tool240, OS automation tool242, and regression and smoke reporting module244. Based on the identified action type for the user actions, test processor116calls the appropriate external testing components112that support testing that action of the application. According this example, test processor116would call an API of webpage automation tool240in response to selection of an add to cart button during regression testing of the application. By way of a further example of regression analysis, an application may comprise an OS-based application (such as, for example, a WINDOWS-based application) that receives data entry by a MICROSOFT EXCEL spreadsheet. A test case file for this type of application will include actions, such as, for example, open database worksheet, type in data, put data into the first row, or other like user actions. Test processor116may then read the test case file and based on the action type of the test case, test processor116will call the appropriate external testing component, which, in this example, would comprise the OS automation tool242. 
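As a non-limiting illustration of this dispatch decision, the following Python sketch routes each test case step to a web or OS testing component based on its action type; the action-type sets, the perform() component interface, and the function names are assumptions introduced only to make the routing concrete and are not the actual implementation.

    # Hypothetical routing sketch; action names and component interface are assumed.
    WEB_ACTIONS = {"open", "click", "sendKeys", "verifyText", "dropDown"}
    OS_ACTIONS = {"openWorksheet", "typeData", "putDataIntoFirstRow"}

    def select_component(action_type, web_tool, os_tool):
        """Return the testing component that supports the given action type."""
        if action_type in WEB_ACTIONS:
            return web_tool            # e.g., the webpage automation tool
        if action_type in OS_ACTIONS:
            return os_tool             # e.g., the OS automation tool
        raise ValueError("Unsupported action type: %s" % action_type)

    def execute_step(step, web_tool, os_tool):
        """Dispatch one test case step to the component that supports it; each
        component is assumed to expose a perform(selector, value, dataset) API."""
        component = select_component(step["action_type"], web_tool, os_tool)
        return component.perform(step.get("selector"),
                                 step.get("value"),
                                 step.get("dataset"))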
QAP server110leverages regression and smoke reporting module244to log information from each step in each test case, and, at the end of the test suite execution, generate a report with detailed information, such as, for example, time taken for test suite execution, time taken for each test case, number of passed and failed test cases, and details at which step a test case has failed, and one or more screenshots identifying reasons for the test case failure. External testing components112of QAP server110may comprise performance testing module250and performance statistics reporting module252. Additionally, or in the alternative, one or more external testing components112are located external to QAP server110. According to this embodiment, QAP server110acts as a wrapper around performance testing module250and uses scripting to invoke performance testing by performance testing module250. When QAP server110passes control to performance testing module250, QAP server110calls performance testing module250to retrieve the status of the current testing and periodically generates a report using a call (such as, for example, an AJAX call) on the browsers. Although performance testing module250is illustrated and described as external to QAP server110, performance testing module250and any of the other one or more testing components118and one or more external testing components112may be located at any suitable location in distributed supply chain network100, internal or external, of QAP server110, according to particular needs. According to embodiments, performance statistics reporting module252may generate detailed explanations including, for example, response times for each node, identify errors, response time differences compared to the number of requests, and percentile-based quoting. To further illustrate testing using QAP server110, an example is now given. When a new cloud-based application is created, such as, for example, a demand forecast program for a retail planning solution, user interface of QAP server110may be used to create a folder corresponding to the new product. Test cases may then be created using a spreadsheet editing program to create test case files and properties file to represent the parameters for the new test cases. The test cases are then associated with the URL pointing to one or more application and/or web servers120that hosts the newly-created program which is being tested, and QAP server110executes the test cases on the newly-created program. Additionally, QAP server110provides a user interface through which users can initiate supported actions including, for example, uploading and downloading input files, starting a regression suite for a selected product and customer combination, and PSR testing for any selected product. According to embodiments, one or more of the described supported actions of testing system interface400is also available as an API via batch to be triggered via different means for seamless execution and integration. FIG.3illustrates automation testing framework300, in accordance with a cloud-based architecture embodiment. According to this embodiment, automation testing framework300comprises QAP server110, one or more application and/or web servers120, one or more client computers210, and one or more infrastructure services and servers140. 
According to one embodiment, automation testing framework300comprises a cloud-based architecture that supports one or more supply chain network cloud-based applications hosted on one or more application and/or web servers120which may be tested by QAP server110. According to embodiments, QAP server110of cloud-based architecture300may comprise one or more components of an application server and/or a web server. For example, user action processor114receives user inputs, calls test processor116service, and displays results dynamically, such as by QAP web server302. Test processor116comprises required business logic to invoke testing components118and/or external testing components112, such as by QAP app server304. In addition, and as described in more detail herein, one or more application and web servers120may comprise one or more web servers306and one or more application servers308. As described above, the cloud-based applications may comprise programs for one or more supply chain network processes and be accessed by one or more clients210including, for example, supply chain planning and optimization client (SCPO)310a, transportation management systems client (TMS)310b, warehouse management systems client (WMS)310c, workforce management client (WFMR)310d, and one or more additional supply chain network process clients310n. As discussed above, cloud-based applications may be tested using QAP server110. Automation testing framework300may include one or more client computers210hosted on internal network312, which may comprise an intranet of one or more supply chain entities180. One or more client computers210may host a thin or thick client to access the cloud-based applications supported by one or more application and/or web servers120and/or the one or more web servers208. According to an embodiment, automation testing framework300comprises four organizational layers314a-314d: untrust layer314a, web services layer314b, app services layer314c, and infrastructure layer314d. QAP web server302and any other web servers may be present in web services layer314b. QAP web server302supports a user interface for configuring and initiating automation testing that may be accessed by the one or more client computers210. QAP web server302may receive test case files and properties files from one or more client computers210which define the parameters for the application testing. QAP web server302and any other web servers may send and receive data from QAP app server304and any one or more web servers306and/or one or more application servers308supporting cloud-based applications. As described in more detail below, QAP app server204may initiate one or more tests on one or more web servers306and/or one or more application servers308based on communications received form QAP web server302and according to the configurations indicated in the test case files. According to embodiments, QAP app server304initiates a test by sending a request to one or more web servers306and/or one or more application servers308supporting the tested cloud-based application. One or more web servers306and/or one or more application servers308may then execute the request in parallel on one or more web servers306supporting the cloud-based application. After testing, one or more web servers306may then communicate results and reports from the testing to QAP app server304. Additionally, automation testing framework300comprises infrastructure layer314dcomprising one or more infrastructure servers and services140. 
According to embodiments, infrastructure layer314dcomprises a collection of multiple common services shared by one or more application/web servers and may include, for example, SMTP server providing E-mail services to send regression and performance testing reports to a requester, an antivirus server to protect servers from computer viruses and/or malware, an AD server, data gateway services, and the like. Although automation testing framework200is described as comprising a single QAP web server302, a single QAP app server304, one or more web servers306, one or more application servers308, one or more client computers210, and one or more infrastructure services and servers140, embodiments contemplate an automation testing framework300comprising any suitable number or configuration of QAP web servers, QAP app servers, application servers, web servers, client computers, and infrastructure services and servers, according to particular needs. FIG.4illustrates exemplary testing system interface400, in accordance with an embodiment. Testing system interface400generates for display one or more graphical elements and user-selectable elements, which, upon selection, perform one or more actions, as described herein. According to an embodiment, testing system interface400provides test suite input402, product name input404, customer name input406, and email ID input408. Testing system interface400may receive one or more inputs for test suite input402, product name input404, customer name input406, and email ID input408. In response and at least partially based on the received one or more inputs for test suite input402, product name input404, customer name input406, and email ID input408, QAP server110may execute a test case or upload or download a test case file. In addition, testing system interface400of QAP server110provides for uploading, downloading, and executing test cases. In addition, testing system interface400of QAP server110provides for resetting one or more inputs for test suite input402, product name input404, customer name input406, and email ID input408. As described herein, after creating a test case file and properties file, these files may be uploaded using testing system interface400(here, comprising a user interface webpage) of QAP web server302. Testing system interface400may comprise a main navigation screen (here, comprising a user interface homepage) providing one or more user-interactive elements including, for example, dropdown selection lists, selection buttons, text entry boxes, and other like user-interactive elements. A main navigation screen of testing system interface400comprises test suite dropdown selection list402, product name dropdown selection box404, customer name dropdown selection box406, requester email address input408, execute selection button410, upload selection button412, download selection button414, and reset selection button416. In response to a user selection of an execute button, testing system interface400communicates to test processor116of QAP web server302to begin a selected test suite (including, for example, regression, smoke, performance, and the like), for the selected product (including, for example, SCPO, TMS, WMS, WFMR, and the like) for the selected customer (including, for example, any of supply chain entities180using the selected product) and transmit the error log and completed report to the email address entered into the email ID text entry box. 
In response to a user selection of upload selection button412, testing system interface400displays one or more graphical elements providing for locating, selecting, and uploading a test case file and a properties files to allow the test case described by the documents to be selected for testing. After the test case file and the properties file are successfully uploaded, the test case described by the uploaded files may be downloaded by selection of download selection button414. Testing system interface400may be accessed by one or more client computers210using a web browser and navigating a URL hosting testing system interface400of QAP server110. Embodiments contemplate accessing QAP server110by, for example, QAP web server302using one or more client computers210comprising any suitable client, such as, for example, a rich-user interface client, a thick client, a thin client, or the like. As described above, testing system interface400provides four configurable options: test suite, product name, customer name, and requester email ID. After selection, the four options selected or entered will be used when the test case file and the properties file are uploaded. In response to selection of an execute button (such as, for example, by a user selection using an input device), testing system interface400will communicate to test processor116of QAP web server302to begin the selected test suite (such as, for example, regression), for the selected product (here, SCPO) for the selected customer (such as, for example, customer2) and transmit the error log and completed report to the email address entered into the email ID text entry box (e.g., [email protected]). According to embodiments, testing system interface400provides for selecting and uploading a test case file and a properties file. In response to selection of the upload button of the main navigation screen, testing system interface400displays an upload overlay window. Here, a test case file and a corresponding properties file for a particular test case may be selected for upload. In response to selection of upload button412, QAP web server202searches for the selected files, and in response to locating and uploading the selected files, testing system interface400may display a confirmation that the files were uploaded and, in addition, make available for selection the test case described by the test case file and the properties file. In response to a successful upload of the test case file and the properties file, testing system interface400may display an upload confirmation message, such as, for example, an upload confirmation message displayed on an overlay window with text indicating, “Files uploaded Successfully”. Testing system interface400may return to a main navigation screen, either automatically or in response to user selection of a close button. In addition, embodiments of testing system interface400provide for executing a test case. After testing system interface400returns to the main navigation screen, execute selection button410may be selected. In response to user selection of execute selection button410, testing system interface400may begin the selected test suite for the selected product for the customer indicated and send the report to the specified email address. 
By way of an example and not of limitation, in response to a user selection of an execute button, testing system interface400communicates to test processor116of QAP web server302to begin a selected test suite (including, for example, regression, smoke, performance, and the like), for a selected product (including, for example, SCPO, TMS, WMS, WFMR, and the like) for the selected customer (including, for example, any of supply chain entities180using the selected product) and transmit the error log and completed report to the email address entered into the email ID text entry box408. In response to a user selection of upload selection button412, testing system interface400displays one or more graphical elements providing for locating, selecting, and uploading a test case file and a properties files to allow the test case described by the documents to be selected for testing. After the test case file and the properties file are successfully uploaded, the test case described by the uploaded files may be downloaded by selection of download selection button414. According to embodiments, QAP server110accepts test cases that are formatted as comma-separated files (CSV) and comprise the properties identified in a properties file associated with the particular test case type. As discussed herein, a properties file comprises a map between a user interface element and its locator. According to embodiments, QAP server110accepts test case files comprising five elements: test case name, action type, selector, value, and/or dataset. Each test case includes two or more line items where the first line item of the test case specifies the test case name and subsequent line items comprise steps to execute the test case. Each step indicates one user action and for each user action, a test case input CSV may comprise an Action Type, Selector, Value and/or Dataset. An action type element indicates an action performed during user interaction with the application, such as, for example, open, click, and the like. According to embodiments, QAP server110comprises predefined list of Actions which identify all possible user interactions through User Interface. A Selector element specifies the attribute name which will be used to locate a web element. A Value element specifies the value to be used to match against a specified Selector. A Dataset element specifies a value to be entered in an input element. To eliminate redundancy and for ease of maintenance, embodiments contemplate identifying the actual values for Selector, Value and Dataset elements in the properties file as key=value pairs, where the key is specified in above mentioned fields. Before a test case may be invoked on QAP server110, correctly-formatted test case CSV files and associated properties files are uploaded through user interface of QAP server110. QAP server110checks the uploaded files and, if validated, stores the uploaded files as an invokable test case on QAP server110web server. QAP server110also provides for downloading input files through user interface of QAP server110. This allows downloaded input files to be modified and then uploaded as updated input files to QAP server110. After uploading and validation one or more test cases, QAP server110may execute regression testing of one or more programs based on the test cases. User interface of QAP server110receives requests for regression test case execution that include selected product/customer combination. 
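A minimal sketch of how such a test case file and properties file might be parsed and resolved is given below in Python; the function names, the fallback behavior when a key is absent from the properties file, and the handling of the first row are assumptions for illustration only, not the actual parser of QAP server110.

    import csv

    def load_properties(path):
        """Read a properties file of key=value pairs into a dictionary."""
        props = {}
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#"):   # '#' marks a comment
                    continue
                key, _, value = line.partition("=")
                props[key.strip()] = value.strip()
        return props

    def load_test_case(csv_path, props):
        """Read a five-column test case CSV and resolve Value/Dataset keys
        against the properties file (falling back to the literal text)."""
        with open(csv_path, newline="") as fh:
            rows = [row + [""] * 5 for row in csv.reader(fh)]
        name = rows[0][0]                              # first line carries the name
        steps = []
        for row in rows:
            _, action, selector, value, dataset = row[:5]
            if not action:
                continue                               # skip rows with no action
            steps.append({
                "action_type": action,
                "selector": selector,
                "value": props.get(value, value),
                "dataset": props.get(dataset, dataset),
            })
        return {"name": name, "steps": steps}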
QAP server110may then validate the product/customer combination, verify if the input files are available, and invoke test case execution. During regression execution, QAP server110application parses input files, resolves properties, and executes test cases by invoking a webpage automation tool API and/or OS automation tool API based on the application test case. According to embodiments, QAP server110spawns multiple threads to execute web application test cases in parallel mode by leveraging the capabilities of a multithreading framework and provides an implementation for each predefined Action Type, in which it locates user interface Web Elements using values from the Selector and Value columns and performs the specified actions by invoking webpage automation tool240, OS automation tool242and/or their respective APIs. According to embodiments, QAP server110dynamically displays results during execution of the test cases and sends reports after all test cases are executed. To further illustrate regression testing according to QAP server110, an example is now given. According to the following non-limiting example, a newly-developed web application provides for adding one or more accessories to an internet shopping cart by an end user of the application. To test adding accessories to an internet shopping cart by an end user for this exemplary web application, a test case file would be created that represents the actions taken by the end user to add some accessories to the cart. For example, the test case file may represent each of these actions, such as, for example, the end user logs into the website, the end user then searches for a particular accessory (such as, for example, a laptop bag), the end user navigates to the product page, and any additional actions. According to embodiments, the test case file associates each of these actions with the corresponding user action that takes place in the application. For example, if the test case comprised an additional action, such as, for example, an add to cart action, the corresponding user action in the application will be a user clicking on the add to cart button. As described in more detail below, test processor116reads the test case file and identifies the action type of each user action.

TABLE 1
Solution | Test Cases | Normal Run Time (min) | QAP server Run Time (min)
SCPO     | 21         | 25                    | 1.28
WMS      | 13         | 20                    | 2.51
WFMR     | 2          | 7                     | 1.5

TABLE 1 illustrates runtimes of test cases created using QAP server110, comparing the previous method of running against the QAP server110integrated parallel run. For the first of the three selected products, where the run time for twenty-one test cases was previously twenty-five minutes, with QAP server110using multiple parallel threads it came down to approximately one and a quarter minutes (i.e., 1.28 minutes). According to embodiments, regression testing by QAP server110begins with preparing a test case file and a properties file. The syntax and format of the test case file will be discussed first, followed by the discussion of the performance test. A test case file comprises a set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. The QAP server110test case file template may comprise five columns.

TABLE 2
Test case Name | Action Type | Selector | Value | Dataset

TABLE 2 illustrates the five columns of a test case file, according to an embodiment.
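The parallel execution mode described above may be sketched as follows; this Python fragment assumes a run_test_case callable (for example, one that iterates over the steps produced by the parser sketched earlier) and is offered only as an illustration of multi-threaded test suite execution, not as the actual multithreading framework of QAP server110.

    from concurrent.futures import ThreadPoolExecutor, as_completed

    def run_suite(test_cases, run_test_case, max_threads=5):
        """Execute web application test cases in parallel and collect results."""
        results = {}
        with ThreadPoolExecutor(max_workers=max_threads) as pool:
            futures = {pool.submit(run_test_case, tc): tc["name"] for tc in test_cases}
            for future in as_completed(futures):
                name = futures[future]
                try:
                    results[name] = ("PASSED", future.result())
                except Exception as exc:              # a failing step surfaces here
                    results[name] = ("FAILED", str(exc))
        return results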
Although the test case file is described as a comma-separated values (CSV) file, embodiments contemplate arranging the contents of the test case file in any suitable format of any suitable file type, according to particular needs, including, for example, spreadsheets, text documents, databases, and the like. The use and syntax of each of the five columns will be explained in more detail below.

TABLE 3
Test case Name | Action Type | Selector | Value | Dataset
Login          |             |          |       |

TABLE 3 illustrates the name column of the test case file, according to an embodiment. Test case name indicates the name that will appear in testing system interface400to identify the test case. Here, testing system interface400will identify the test case as "Login."

TABLE 4
Test case Name | Action Type | Selector | Value | Dataset
Login          | open        |          |       |
               | click       |          |       |

TABLE 4 illustrates the action type column of the test case file, according to an embodiment. Action type comprises simulated interactions between a user and a program that may be automated during testing of the program. These interactions may simulate input of a mouse, keyboard, touchpad, and/or one or more other input devices, such as, for example, simulating the actions of a user when using the one or more input devices to perform one or more actions. According to embodiments, the action types may include, for example, open, sendKeys, click, clear, hoverAndSelect, switchFrame, switchToDefaultFrame, dropDown, verifyText, switchWindow, createNewBrowser, switchToOriginalBrowser, getCurrentWindowHandle, swithToChildWindow, swithToParentWindow, wait, and the like. The third column comprises a selector. Selectors may comprise selection criteria (such as locators or object identifiers) that allow automation tools to find elements of a web page or application. According to embodiments, the selectors may be prioritized in order of speed, uniqueness, likelihood of mistake with other elements, and other like considerations. According to embodiments, the selectors are prioritized (from highest priority to lowest priority) according to: ID, name, link text, partial link text, tag name, class name, css, and/or xpath. Although the selectors are described according to a particular priority, embodiments contemplate any suitable selection and prioritization of selectors, according to particular needs. Each of the listed selectors locates an element by a slightly different method. For example, an id selector may select an element with the specified @id attribute. A name selector may select the first element with the specified @name attribute. A link text selector may select a link (anchor tag) element which contains text matching the specified link text. A partial link text selector may select a link (anchor tag) element which contains text matching the specified partial link text. A tag name selector may locate an element using a tag name. A class name selector may locate an element using a class name. A css selector may select the element using css selectors. Finally, an xpath selector may locate an element using an xpath expression.

TABLE 5
Test case Name | Action Type | Selector | Value | Dataset
Login          | open        | id       |       |

TABLE 5 illustrates the selector column of the test case file, according to an embodiment locating an element by ID. The most efficient and preferred way to locate an element on a web page is by ID. An ID for an element will be unique, which easily identifies the element.
IDs are the safest and fastest locator option and should always be the first choice, even when there are multiple choices. For example, if the webpage element <input id="email" class="required" type="text"/> is being located by id "email", then the test case file may comprise id for the selector.

TABLE 6
Test case Name | Action Type | Selector | Value | Dataset
Login          | Click       | Name     |       |

TABLE 6 illustrates the selector column of the test case file, according to an embodiment locating an element by name. An element may also be selected by a name attribute. However, if there is more than one element with the same name attribute, an automation testing tool using the test case file may select only the first element with a matching name attribute. If the element <input name="register" class="required" type="text"/> is being located by name "register", then the test case file may comprise name for the selector.

TABLE 7
Test case Name | Action Type | Selector | Value | Dataset
Login          | Click       | LinkText |       |

TABLE 7 illustrates the selector column of the test case file, according to an embodiment locating an element by link text. An element may also be selected by link text. However, if there are multiple links with the same link text (such as repeated header and footer menu links), the automation tool may select only the first element matching the link text. If the element <a href="http://www.abc123456.org">Downloads</a> is being located by link text "Downloads", then the test case file may comprise LinkText for the selector.

TABLE 8
Test case Name | Action Type | Selector         | Value | Dataset
Login          | Click       | Partial LinkText |       |

TABLE 8 illustrates the selector column of the test case file, according to an embodiment locating an element by partial link text. An element may also be selected by partial link text. However, if there are multiple links with the same partial link text, the automation tool may select only the first element matching the partial link text. If the element <a href="http://www.abc123456.org">Download new software</a> is being located by partial link text "Download", then the test case file may comprise Partial LinkText for the selector.

TABLE 9
Test case Name | Action Type | Selector | Value | Dataset
Login          | Click       | Tag Name |       |

TABLE 9 illustrates the selector column of the test case file, according to an embodiment locating an element by tag name. TagName may be used to select group elements such as, for example, select boxes, check boxes, and dropdowns. For example, if the element <a input id="email" class="required" type="text"/>LOGIN<d> is being located by the name "LOGIN", then the test case file may comprise TagName for the selector.

TABLE 10
Test case Name | Action Type | Selector | Value | Dataset
Login          | Click       | Class    |       |

TABLE 10 illustrates the selector column of the test case file, according to an embodiment locating an element by class. Class name may be used to select elements, but, as described above, some embodiments of automation tools will only select the first element with a matching class name. If more than one element has the same class that is being used to locate an element, then the selector needs to be extended using the class name and its subelements. For example, if the element <a input id="email" class="required" type="text"/>LOGIN<d> is being located by class "required", then the test case file may comprise class for the selector.

TABLE 11
Test case Name | Action Type | Selector | Value | Dataset
Login          | Click       | CSS      |       |

TABLE 11 illustrates the selector column of the test case file, according to an embodiment locating an element by CSS.
CSS may be used to provide style rules for web pages, but it may also be used by automation tools to select elements. Locating elements by CSS may be faster than locating elements by XPath and is usually the fastest and easiest selector for locating complex elements. For example, if an element is being located by the css selector input[id='email'], then the test case file may comprise css for the selector.

TABLE 12
Test case Name | Action Type | Selector | Value | Dataset
Login          | Click       | xpath    |       |

TABLE 12 illustrates the selector column of the test case file, according to an embodiment locating an element by XPath. XPath is designed to allow the navigation of XML documents to select individual elements, attributes, or some other part of an XML document for specific processing. There are two types of XPath: native XPath and relative XPath. A native XPath directs the XPath to go in a direct way, such as:

html/head/body/table/tr/td

Here, the advantage of specifying a native path is that finding an element from the direct path is straightforward, but if the path changes, the XPath will break. In relative XPath, the relative path is specified, so that XPath finds an element by the path in between. The relative XPath has the advantage that, if there is any change in the webpage coding, the relative XPath may still locate the element, but not if the particular path to the element has changed. If the path changes, finding the address would be difficult and may require checking each node to find the path. For example, for the relative XPath //table/tr/td, the following listed example syntaxes may be used to locate the elements indicated.

Example syntax to work with an image:
xpath=//img[@alt='image alt text goes here']

Example syntax to work with a table:
xpath=//table[@id='table1']//tr[4]/td[2]
xpath=(//table[@class='nice'])//th[text()='headertext']/

Example syntax to work with an anchor tag:
xpath=//a[contains(@href,'href goes here')]
xpath=//a[contains(@href,'#id1')]/@class

Example syntax to work with input tags:
xpath=//input[@name='name2' and @value='yes']

TABLE 13
Test case Name | Action Type | Selector | Value          | Dataset
Login          | open        | Link     | Baseurl        |
               | Sendkeys    | xpath    | Username_xpath |

TABLE 13 illustrates the value column of the test case file, according to an embodiment. The value column is used to pass the location of the element stored in the properties file. For example, if the following: Username_xpath=//input[@name='name2' and @value='yes'] is being used to locate one or more elements, then the test case file may comprise the values as indicated.

TABLE 14
Test case Name | Action Type | Selector | Value          | Dataset
Login          | open        | Link     | Baseurl        |
               | Sendkeys    | xpath    | Username_xpath | Username

TABLE 14 illustrates the dataset column of the test case file, according to an embodiment. The dataset column may be used to pass input data of the element that is stored in the properties file. For example, for the following elements:

Baseurl=www.gmail.com
Username=System
Password=Password

then the test case file may comprise the entries indicated for the five columns indicated above. In addition to preparing the test case file, the data properties file may also be prepared. The data properties file is a map between a user interface element and its locator, which can also be written as an object map between a user interface element and the way to find it. According to other embodiments, the properties file comprises a mapping between web elements and their corresponding locator types and values.
For example, the three distinct values of element name, locator type, and locator value may be used to locate an element uniquely on a web page. According to embodiments, the properties file may be stored in various locations within the system depending on many factors, including the framework used for the application. For example, some frameworks store an object repository in a properties file and others store the object repository inside an XML file or a spreadsheet file. Although these values may be stored in a properties file, embodiments contemplate using the values directly in the test code. FIG.5illustrates first exemplary properties file500, according to an embodiment. According to embodiments, a property file stores information as key-value pair502. Key-value pair502is represented by two string values separated by an equal sign ("="). Here, a value504("India") is matched to a key506("Location"). By way of further explanation, and not by limitation, an example comprising a test case file and a property file is now given.

TABLE 15
Test case Name | Action Type | Selector | Value          | Dataset
Login          | open        | Link     | Baseurl        |
               | Sendkeys    | xpath    | Username_xpath | Username
               | Sendkeys    | xpath    | Password_xpath | Password
               | Click       | id       | Login_id       |

TABLE 15 illustrates an exemplary test case file, according to a first embodiment. The exemplary test case file may be associated with a properties file comprising the following data:

Properties:
Baseurl=https://www.gmail.com
Username_xpath=*//[@name=uname]
Password_xpath=*//[@name=password]
Login_id=login
Username=System
Password=Password

The properties data may be saved in any suitable file type or format, such as a properties file format with the particular data structures described herein and according to particular needs. FIG.6illustrates a second exemplary properties file600for the test case file of the example in TABLE 15, according to an embodiment. Second exemplary properties file600indicates three elements, first name field602, last name field604, and link element606, with "Partial Link Test" as the link text. Continuing with the above example, three values will be stored in the properties file: element name610, locator type612, and locator value614. To separate values, value separator616, such as a colon (:), may be used. Here, the properties value comprises an element name (PartialLink), a locator (PartialLinkText), and a locator value (Partial Link Test), with the locator and the locator value separated by a colon. Additionally, any comments may be added to the properties file by using the # keyword. As shown above, the text displayed after the # will be green and will not be read by the properties file reader.

TABLE 16
Test case Name | Action Type | Selector | Value          | Dataset
Login          | open        | Link     | Baseurl        |
               | Sendkeys    | xpath    | Username_xpath | Username

TABLE 16 illustrates an exemplary test case file, according to a second embodiment. All test case data are fetched from this properties file only; generally, it is an object repository. After selection of the execute button, testing system interface400may display an alert overlay window with information describing the status of the executed test suite. An overlay window may open which dynamically displays the status of each test case execution in a test suite. This information may include, for example, a time stamp, the name of the test cases being tested, the current status of each test case being tested, the thread number of the test case being executed, and the like.
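To illustrate how the object repository described above might be consumed by a webpage automation tool, the sketch below assumes a Selenium-style WebDriver; the mapping from locator-type keywords to WebDriver strategies and the exact keyword spellings are assumptions for illustration only, not the actual implementation of webpage automation tool240.

    from selenium.webdriver.common.by import By

    # Assumed mapping from the locator-type keywords discussed above to
    # WebDriver locator strategies; keyword spellings are illustrative.
    LOCATOR_STRATEGIES = {
        "id": By.ID,
        "name": By.NAME,
        "LinkText": By.LINK_TEXT,
        "PartialLinkText": By.PARTIAL_LINK_TEXT,
        "TagName": By.TAG_NAME,
        "ClassName": By.CLASS_NAME,
        "css": By.CSS_SELECTOR,
        "xpath": By.XPATH,
    }

    def parse_repository_entry(value):
        """Split a 'locatorType:locatorValue' properties entry, e.g.
        'PartialLinkText:Partial Link Test'."""
        locator_type, _, locator_value = value.partition(":")
        return locator_type, locator_value

    def find_element(driver, properties, element_name):
        """Look up an element by name in the object repository and locate it."""
        locator_type, locator_value = parse_repository_entry(properties[element_name])
        return driver.find_element(LOCATOR_STRATEGIES[locator_type], locator_value)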
As stated above, after regression testing, QAP server110may display results and reports of the test cases executed during the regression testing. After test suite execution is finished, an alert overlay window may display a message indicating the test suite is completed, such as, for example, a message that reads, “Test Suite Execution has completed, Reports will be sent shortly”. The message indicates that all test cases in the test case suite have been executed. Also, as indicated above, a report may be sent to an indicated email address. The report may be sent as an attachment to the email as a zipped file. Although the report is described as being sent by email as a zipped attachment, embodiments contemplate transmitting the report in any suitable format of any suitable file type (such as, for example, PDF, text documents, spreadsheets, or the like) through email, instant messaging, uploading the reports to a network location, or the like. After receiving the report, the results may be compared with the expected results. The report may indicate, for example, failed test cases, at which step the test case failed, and a reason for the failure. If the report indicates that one or more errors occurred, the code for the tested programs may be altered to correct for the errors. After the code is altered, the test suite may be run again to look for further errors. This repetition of running the test suite and fixing any error that occurred may be repeated as many times as necessary to remove all of the errors from the tested programs. Application testing using QAP server is not limited only to regression testing, load testing, and PSR testing. Instead, embodiments of QAP server include issue replication, benchmarking and baselining. For example, QAP server110may replicate an issue by automatically running particular keystrokes or processes for short periods of time (such as, for example, several days) to replicate issues (such as, for example, computer errors) that are caused by a combination of factors that take long periods of time to occur. For example, an exemplary supply chain process application may receive an infrequent error that occurs once every few weeks or months (such as, for example, a search for an item by SKU gives an error once every six months, but otherwise behaves normally). According to embodiments, QAP server automatically replicates the issue by running a particular key stroke entry associated with the error repeatedly to simulate the time period over which the error occurs. For example, a particular QAP server may replicate an issue by automatically and repeatedly running the software under parameters associated with the error. By running the software for a couple days, QAP server can replicate issues which typically would appear only after many months of normal usage of the software. Additionally, embodiments of QAP server perform performance benchmarking by establishing a performance baseline of a particular software application by, for example, executing a predefined set of business critical workflows under an expected ideal load on the server and measuring various metrics, including, for example, Average Response Time, Request Rate, CPU Usage, Memory Usage, and the like based on the results that QAP server110is able to provide. The results of performance benchmarking may be used to compare and measure a software application's performance, scalability, and reliability at different application loads and in different environments. 
Additionally, or in the alternative, embodiments of QAP server110to simulate test cases for different Internet browsers on which the tests may be executed and also choosing the number of threads to execute each test suite. An end user may select any suitable internet browser to run regression test cases and the number of threads to be used. Furthermore, embodiments of QAP server110contemplate providing test suite execution on multiple servers with one request and, in addition, providing an administration console for solution setup. Embodiments of QAP server110may create significant value proposition in terms of hardware reductions and cycle time reductions. FIG.7illustrates a case study for cloud services for an exemplary manufacturer, according to an embodiment. Time taken for testing700is reduced by 10× for an Enterprise Application. This is because QAP server removes manual intervention; QAP server is multithreaded (the above ran on five parallel internet browser windows); defect seepage702is reduced by 16× for an enterprise application; and stack upgrades will never break existing functionalities. According to embodiments, QAP server110displays the status of each test case on testing system interface400updated in substantially real-time. As discussed herein, QAP server110provides PSR testing. According to embodiments, QAP server110acts as a wrapper over performance testing module250, which executes performance testing, including load testing and PSR testing. Load testing comprises sending a large number of regressions to an application and monitoring the response from the application. PSR testing comprises calculation of a performance baseline and response times and labeling the response from the application at various loads. Performance testing module250comprises a test plan that may be configured to execute test case references in a test case file for a predefined number of iterations and for a predefined number of concurrent users. After PSR testing is completed, QAP server110may generate one or more reports and accompanying graphics and charts. At the end of PSR testing, QAP server110may generate a report by leveraging response times for each step in each test case scenario along with various graphical representations which can be used for analysis of potential performance issues. This framework eliminates creation and maintenance of a test plan for each product separately and a load testing report helps immediately and easily identify any performance bottlenecks. Reference in the foregoing specification to “one embodiment”, “an embodiment”, or “some embodiments” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment. While the exemplary embodiments have been shown and described, it will be understood that various changes and modifications to the foregoing embodiments may become apparent to those skilled in the art without departing from the spirit and scope of the present invention.
While embodiments are described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that the embodiments are not limited to the embodiments or drawings described. It should be understood, that the drawings and detailed description thereto are not intended to limit embodiments to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). The words “include,” “including,” and “includes” indicate open-ended relationships and therefore mean including, but not limited to. Similarly, the words “have,” “having,” and “has” also indicate open-ended relationships, and thus mean having, but not limited to. The terms “first,” “second,” “third,” and so forth as used herein are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.) unless such an ordering is otherwise explicitly indicated. “Based On.” As used herein, this term is used to describe one or more factors that affect a determination. This term does not foreclose additional factors that may affect a determination. That is, a determination may be solely based on those factors or based, at least in part, on those factors. Consider the phrase “determine A based on B.” While B may be a factor that affects the determination of A, such a phrase does not foreclose the determination of A from also being based on C. In other instances, A may be determined based solely on B. The scope of the present disclosure includes any feature or combination of features disclosed herein (either explicitly or implicitly), or any generalization thereof, whether or not it mitigates any or all of the problems addressed herein. Accordingly, new claims may be formulated during prosecution of this application (or an application claiming priority thereto) to any such combination of features. In particular, with reference to the appended claims, features from dependent claims may be combined with those of the independent claims and features from respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the appended claims. DETAILED DESCRIPTION In various embodiments, an application test execution and maintenance system may be implemented. The test execution and maintenance system may include an application learner and a test execution and repair manager. The application learner may be trained, based on one or more learning tests, to develop knowledge as to operations of an application. In training, the application learner may execute learning tests with explorations to navigate operations of the application. For instance, if a given learning test may be viewed as a sequence of operations of the application (hereinafter “path”), the application leaner may deviate from a path prescribed by the given learning test to explore one or more alternative sequences of operations to complete the learning test. The explorations allow the application leaner to acquire knowledge of operations of the application. 
Once trained, the test execution and maintenance system may be deployed to execute and maintain a functional test of the application. For instance, the test execution and maintenance system may use the test execution and repair manager to execute the application, following a functional test definition. The test execution and repair manager may detect a failure in the functional test. Responsive to detecting the failure, the test execution and repair manager may repair the functional test based on the acquired knowledge of operations of the application. In particular, the test execution and repair manager may determine a repair, for instance, by identifying an alternative sequence of operations to circumvent the failure. The test execution and repair manager may then apply the repair to patch the functional test. The automatic repair of a broken test can avoid undesired interruptions and improve efficiency of automatic testing. According to some embodiments, responsive to detecting a failure, the test execution and maintenance system may optionally command the application learner to perform a re-learning process. The re-learning may update the knowledge of the application, especially to reflect constraints to operations of the application. For instance, a failure may break one or more sequences of operations, and a previously identified path may not be a viable alternative path any more. Thus, the test execution and maintenance system need to update its knowledge base in order to perform repairs appropriately. According to some embodiments, the application learner may leverage a transfer training for re-learning. For instance, instead of start from scratch, the application learner may re-learn with parameters developed from prior training. The transfer learning may save time and improve efficiency of the application learner. According to some embodiments, the application learner may be implemented based on reinforcement learning. The reinforcement learning may train the application leaner with rewards. Compared with existing techniques with random input, the reinforcement learning can provide a more directional learning experience to the test execution and maintenance system. In some embodiments, the application learner may also implement an asynchronous learning to develop knowledge of operations of multiple applications. This provides the test execution and maintenance system abilities to test and maintain multiple applications. FIG.1shows an example test execution and maintenance system, according to some embodiments. As shown inFIG.1, test execution and maintenance system100may include application learner110and test execution and repair manager115. Application learner110may receive a learning test definition, for instance, from test definitions repository135through storage interface120. Test definitions repository135may store one or more testing definitions for an application, for instance, a mobile shopping application. One test definition may provide one or more cases each with a set of test steps of the application. In other words, a test definition may prescribe one or more sequences of operations of the application. The learning test definition may be same as a functional test definition that prescribes a functional test to be tested by execution and repair manager115. Alternatively, the learning test definition may prescribe a different sequence of operations from the functional test definition. 
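A highly simplified sketch of this execute-detect-repair flow is given below; the run_step and find_alternative_path callables and the control flow shown are assumptions used only to make the description concrete, not the actual implementation of the test execution and repair manager.

    def execute_with_repair(steps, run_step, find_alternative_path, start_state, goal_state):
        """Run a functional test step by step; on a failure, patch the remaining
        steps with an alternative sequence of operations from the knowledge base."""
        state, i = start_state, 0
        while i < len(steps):
            ok, state = run_step(steps[i], state)    # feedback from the system under test
            if ok:
                i += 1
                continue
            # Failure detected: query the learned application knowledge for an
            # alternative path from the current state to the goal state.
            detour = find_alternative_path(state, goal_state)
            if detour is None:
                raise RuntimeError("Test failed at step %d and no repair was found" % i)
            steps = steps[:i] + detour               # patch the functional test
        return state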
The purpose of the learning test definition is to provide a starting point for application learner to run and explore operations of the application. For example, the learning test may prescribe the sequence of operations of the application for one test case—performing an item search in the mobile shopping application. The case may include steps, such as, (1) displaying the homepage; (2) receiving input in the search bar, for example, a title of a book; (3) conducting a search; and (4) returning a search result, for example, a list of items relevant to the book. Based on the learning test definition, application learner110may send task actions to system under test105through test interface125to command executions of the learning test of the application, but with explorations. With the explorations, application leaner110may command task actions to deviate from the prescribed learning test definition. This allows application learner110to navigate operations of the application. In return, application learner110may receive feedback from system under test105. The feedback may indicate status of the application following a commanded task action, for instance, whether an action is executed successfully, whether a webpage is properly loaded, etc. Based on the explorations and feedback, application learner110may develop knowledge of operations of the application and store the application knowledge, for instance, in application knowledge database130through storage interface120. Note that system under test105is basically a system for application execution. Thus, system under test105may be implemented by any applicable testing system, such as a simulator, an emulator, a real device (e.g., an Android or iOS smart phone), or the like. In a regular functional test, test execution and repair manager115may receive a functional test definition, for instance, from test definitions repository135through storage interface120. The functional test definition may prescribe a set of instructions for a functional test. Test execution and repair manager115may command executions of the functional test of the application, step by step following the functional test definition, on system under test105through test interface125. Test execution and repair manager115may receive feedback from system under test105, based on which, test execution and repair manager115may detect whether there is a failure of the functional test. Responsive to detecting the failure, test execution and repair manager115may obtain knowledge of operations of the application from application knowledge database130through storage interface120. Based on the knowledge information, test execution and repair manager115may determine a repair, for instance, an alternative sequence of operations of the application to circumvent the failure. Test execution and repair manager115may then apply the repair to path the functional test. By repairing the functional test, test execution and maintenance system100can allow for automatic testing without an interruption requesting for QAE's manual corrections. This may accelerate testing speed and improve efficiency. FIGS.2A and2Billustrate simplified operations of application learner110and test execution and repair manager115, respectively. As shown inFIG.2A, application learner110may first receive a learning test definition prescribing a learning test, for instance, from test definitions repository135(block205). 
Application learning110may execute explorations of the application based on the learning test definition, for instance, on system under test105(block210). If the learning test definition is viewed as a prescribed sequence of operations for the application, application learner110may use the prescribed sequence as a baseline and deviate from the baseline to explore other possible operations of the application. Application learner110may receive feedback indicating status of operations of the application in the explorations, for instance, from system under test105(block215). Application learner110may develop knowledge of operations of the application based on the explorations and feedback, and store the application knowledge in application knowledge database, for example (block220). For instance, based on the explorations and associated feedback, application learner110may learn one or more sequences of operations of the application, alternative to the “baseline” sequence prescribed by the learning test definition, to complete the learning. Knowledge of the alternative sequences of operations of the application may form the knowledge of operations of the application. As shown inFIG.2B, test execution and repair manager115may receive a functional test definition prescribing a regular functional test, for instance, from test definitions repository135(block225). Test execution and repair manager115may command executions of the functionality following the functional test definition step by step, for instance, on system under test105(block230). Test execution and repair manager115may receive feedback from system under test105and detect whether there is a failure of the functional test (block235). Responsive to detecting the failure, test execution and repair manager115may obtain application knowledge, for instance, from application knowledge database130and accordingly repair the functional test (block240). Test execution and maintenance system100may implement application learner110based on machine learning algorithms. Various machine learning approaches may be used, for instance, reinforcement learning, deep learning neural networks, convolutional neural networks, or tabular methods. For purposes of illustration, this disclosure uses Q-learning based reinforcement learning, as one example, to describe operations of application learner110. The operations of application learner110may be represented by a set of states s and actions a. A state s describes a current situation of the software, and an action a is what an agent can do in each state. Regarding the above example of the mobile shopping application, the state may correspond to one webpage of the application, and actions may represent various customer interactions via the application. Referring back to the item search example, state s(1) may represent the homepage of the application. Receiving input data in the search bar may then be denoted as action a(1). When input data in the search bar is successfully verified, the application may proceed to conduct the search and thus transition to a next state s(2). Following this example, for given test case(s) and step(s) prescribed by a test script, the states and actions may be defined accordingly. A sequence of states, connected by associated actions, may form a sequence of operations of the application (or a “path”). Thus, a functional test may be represented by a path: s(1)→a(1)→s(2)→a(2)→s(3) . . . 
→s(goal), where s(goal) represents the final state (or the final webpage of the application after a test). Note that the states and actions may be defined flexibly in testing. Even in this book search example, a state may or may not be a visible webpage. Instead, states may signify an "intermediate" internal condition of the application along the path of a test. What is more important is to complete the test, i.e., being able to arrive at an objective state from an initial state. One goal of application learner110is to navigate the application to develop knowledge as to operations of the application. When test execution and maintenance system100detects a failure in a functional test, test execution and maintenance system100may use the application knowledge to identify an alternative sequence of operations to repair the failure. The alternative sequence of operations may represent an alternative path (including states and actions) to circumvent the failure and arrive at the objective state s(goal). According to some embodiments, an "optimal" path, for example, with a minimum number of operations to reach the objective state, may be selected as the repair. Consider testing the application as navigating a maze. The entrance to the maze is, for example, the homepage of the application, with an exit being the functionality to be tested. The optimal path may map the shortest "distance" between the entrance and exit. With regards to a functional test, using a shortest path to patch a break of the test may reduce testing time, improve testing efficiency, save compute resources, and provide a better testing experience. In the context of reinforcement learning, system under test105(with test interface125) may represent an environment to application learner110. An environment is an external system to a reinforcement learning agent, which may be viewed as a stochastic state machine with finite input (e.g., actions sent from application learner110) and output (e.g., observations and rewards sent to application learner110). For purposes of illustration, the environment may be formulated based on the Markov Decision Process (MDP), for example, where future states are independent of any previous state history given the current state and action. During the interactions with application learner110, the environment may evaluate an action a(i) taken by application learner110in a current state s(i), determine an observation such as a next state s(i+1), and provide a reward r(i) for the taken action a(i). The reward may be positive or negative. A positive reward may represent a positive evaluation as to the action of application learner110, while a negative reward may signify a "punishment." As described above, since application learner110aims at identifying the optimal path to repair a failure, the reward may thus be associated with the length of a path. For example, application learner110may receive a (positive or negative) reward for each operation of the application along a path, and the entire path may then be rated with a total reward. A shorter path may render a higher total reward, while a longer path may yield a lower total reward to application learner110. The goal of the reinforcement learning is to maximize the total reward, with which application learner110may be trained to recognize the optimal path.
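To make the environment formulation above concrete, the following is a minimal sketch of an environment interface in which each operation receives a small negative reward, so that shorter paths to s(goal) accumulate a higher total reward. The class name, the transition table, and the specific reward values are assumptions chosen for illustration; they are not prescribed by the disclosure.

import random

class SystemUnderTestEnv:
    # A stochastic state machine with finite states and actions, standing in
    # for system under test 105 together with test interface 125.
    def __init__(self, transitions, goal_state):
        # transitions maps (state, action) to a list of possible next states
        self.transitions = transitions
        self.goal = goal_state

    def step(self, state, action):
        # Apply a task action and return (next_state, reward).
        candidates = self.transitions.get((state, action))
        if not candidates:
            return state, -10.0                  # action unavailable: punishment
        next_state = random.choice(candidates)   # stochastic transition
        reward = 0.0 if next_state == self.goal else -1.0
        return next_state, reward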
Compared to existing testing tools based on random input, the reinforcement learning may provide test execution and maintenance system100with a more directional, intelligent and efficient learning experience (to search for an optimal repair). This again may save computing resources and improve testing speed. For purposes of illustration, training of application learner110may be performed, for example, based on a Q-learning algorithm with function approximation. The goal of Q-learning is to learn an optimal action-selection path, which maximizes an expected value of the total reward for application learner110over any and all successive steps, starting from the current state. Q-learning aims at maximizing not the immediate reward received by an action at a current state but the total (including immediate and future) reward until reaching the final objective state. To do this, the Q-learning algorithm may define a value function Q(s, a) to predict the total reward expected to be received by application learner110, given that at state s, the agent takes action a. The value function Q(s, a) may be defined by adding the maximum reward attainable from future states to the reward for achieving its current state, effectively influencing the current action by the potential future reward. Thus, Q(s, a) is an indication of how good it is for an agent to pick an action a while being in state s. For application learner110to identify an optimal path, this is essentially equivalent to searching for the maximum (or optimal) value function Q*(s, a). The optimal value function Q*(s, a) means that application learner110, starting in state s, picks action a and then behaves optimally afterwards. In training, the value function Q(s, a) may start from an initialized value, for example, an arbitrary fixed value. Then, at each training step k, application learner110may select action a(k) in state s(k), receive an immediate reward r(k), and estimate the expected total reward G, for example, according to equation (1):
G=r(k)+γ max Qk(s(k+1), a)  (1)
where G represents an estimate of the Q value (i.e., the expected total reward) for taking action a(k) in state s(k), r(k) represents the immediate reward received from the environment, max Qk(s(k+1), a) represents an estimate of the optimal future return attainable from the next state s(k+1) (maximized over the available actions a), and γ is a discount factor. Application learner110may take a set of different actions a(k) at each state s(k). Each pair of state and action may represent one operating circumstance of the software, and a sequence of circumstances may form a path. For each chosen path, the Q-learning algorithm may predict an estimated total reward, for example, according to equation (1). Some paths may cause higher total rewards, some may render lower total rewards, and some may result in a failure. Based on the estimated total rewards, application learner110may learn, among the different actions a(k), which one renders an optimal action, at state s(k), to arrive at the goal state s(goal). The Q-learning algorithm may further be combined with function approximation to achieve further improved performance. The function approximation may represent the value function Q(s, a) by a set of predetermined functions q(s, a). For example, the value function Q(s, a) may be approximated by a linear combination of functions q(s, a) according to equation (2):
Q(s, a, θ)=Σi qi(s, a)θi  (2)
where the sum runs over i=1, . . . , M, qi(s, a) is the i-th approximation function, and θi is the corresponding i-th parameter θ (or weight).
Thus, the problem of searching for the optimal Q*(s, a) may now become a problem of identifying (or training) the parameters θ. One approach to tune the parameters θ may be based on the gradient descent of the value function Q(s, a), for example, according to equation (3):
θi(k+1)=θi(k)+α[G-Qk(s(k), a(k), θ)]∂Qk(s(k), a(k), θ)/∂θi  (3)
where θi represents the i-th parameter of θ, G and Qk(s(k), a(k), θ) represent the respective new and old estimates of the total reward, ∂ represents a differentiation operation, and α is a learning rate. The training process of application learner110may begin with one or more prior functional test(s). For example, a first training may include a first sequence of circumstances: s(1)→a(1)→s(2)→a(2)→s(3)→a(3)→s(4), while a second case may comprise a second sequence of circumstances: s(1)→a(4)→s(5)→a(5)→s(6)→a(6)→s(7)→a(7)→s(4), as shown inFIG.3, according to some embodiments. InFIG.3, each node represents one individual state of the application, for example, one webpage of the mobile shopping application; each arrow corresponds to one action by application learner110(corresponding to one task action); and each path represents one sequence of operations of the application to complete the test. Besides the prescribed path, application learner110may navigate a neighborhood around the given path to develop knowledge of operations of the application. For example, at each state, application learner110may deviate from a prescribed path, for example, by taking an action different from the prescribed action, to explore an adjacent region of the given path—the shaded area as shown inFIG.4. The chance for application learner110to deviate from a path may be determined, for instance, by a probability ε. With probability ε, application learner110may take a deviation to explore. Alternatively, with probability (1-ε), application learner110may follow the prescribed test definition without exploration. Explorations may help application learner110to develop knowledge about operations of the application, which may then be used to repair a failure in a regular functional test. Note thatFIGS.3and4depict only simple examples for purposes of illustration. The real scenarios may be more complicated. For instance, the sequences of operations of the application (or paths) may become longer, involving more operations of the application; there may be more different ways to deviate from the prescribed sequence of operations; and the deviation may create more alternative sequences of operations of the application. FIG.5shows an example training process of application learner110, according to some embodiments. As shown inFIG.5, training process500may begin with initialization (block505). Initialization may provide initial values for training parameters, such as the Q value, discount factor γ, learning rate α, parameters θi (i=1 . . . M) and deviation probability ε. Next, application learner110may receive a learning test definition prescribing a sequence of operations of the application (block510). Application learner110may command operations of the application from an initial state (block515). Taking the above mobile shopping application as an example, the initial state may represent the homepage of the application. Next, application learner110may execute explorations to develop knowledge of operations of the application.
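Before walking through the remaining blocks ofFIG.5, the following minimal sketch shows how a single exploration step might combine the probability-ε deviation with the updates of equations (1) through (3) under a linear function approximation. The helper names (q_features, env_step) and the parameter values are assumptions made for illustration only.

import random

def q_value(theta, features):
    # Equation (2): Q(s, a, theta) = sum over i of q_i(s, a) * theta_i
    return sum(t * f for t, f in zip(theta, features))

def training_step(theta, state, prescribed_action, actions, q_features,
                  env_step, alpha=0.1, gamma=0.9, epsilon=0.2):
    # Block 520: deviate from the prescribed sequence with probability epsilon.
    if random.random() < epsilon:
        action = random.choice(actions)      # exploration
    else:
        action = prescribed_action           # follow the learning test definition

    # Blocks 525/530: send the task action and observe the next state and reward.
    next_state, reward = env_step(state, action)

    # Equation (1): G = r(k) + gamma * max over a of Q_k(s(k+1), a).
    future = max(q_value(theta, q_features(next_state, a)) for a in actions)
    G = reward + gamma * future

    # Equation (3): gradient update of theta. For the linear approximation of
    # equation (2), the partial derivative of Q with respect to theta_i is q_i(s, a).
    features = q_features(state, action)
    error = G - q_value(theta, features)
    theta = [t + alpha * error * f for t, f in zip(theta, features)]

    # Block 545: move to the next state.
    return theta, next_state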
For instance, in these explorations, application learner110may take the sequence of operations prescribed by the learning test definition as a "baseline" and determine whether to deviate from the given sequence, for instance, based on a probability ε (block520). Following the decision, application learner110may choose an action a(i) at a state s(i) and send the corresponding task action, for instance, to system under test105ofFIG.1(block525). Application learner110may then receive output, such as the observation and reward as described above (block530). Application learner110may update an estimate of the Q function, for example, based on equation (1) as described above (block535). Next, application learner110may update the parameters θi (i=1 . . . M) of the function approximation of the value function Q(s, a, θ), for example, based on equation (3) (block540). Then, application learner110may move to a next state s(i+1) (block545). Training process500may determine whether application learner110has arrived at the objective state s(goal) (i.e., completed the learning test) and, as needed, repeat the above operations (block550). When application learner110deviates from the prescribed sequence of operations to reach the objective state s(goal), application learner110essentially identifies an alternative sequence, different from the prescribed sequence, of operations of the application to finish the prescribed test. Application learner110may store acquired knowledge about the alternative sequence of operations, for instance, into application knowledge database130ofFIG.1. Application learner110may then return to the initial state to repeat the above operations until explorations finish (block555). Application learner110may determine to finish the explorations based on one or more criteria, for instance, a duration for which application learner110has been executing explorations, whether or not application learner110learns alternative sequences of operations, and how many alternative sequences are identified. Repeating the explorations gives application learner110a chance to learn additional alternative sequence(s) of operations because deviation may be controlled based on the probability ε such that application learner110does not simply repeat the same sequence of deviational operations. When explorations complete, all the identified alternative sequences of operations may become part of the knowledge of operations of the application. According to some embodiments, application learner110may further execute explorations based on different learning test definitions, instead of repeatedly using the same prescribed sequence of operations as the "baseline." This may permit application learner110to navigate an even larger region of operations and develop a more comprehensive knowledge of the application. Once trained, test execution and maintenance system100may be deployed to execute and maintain an application functional test based on acquired knowledge of operations of the application.FIG.6shows an operation600of test execution and repair manager115in deployment, according to some embodiments. As shown inFIG.6, test execution and repair manager115may receive a functional test definition, for instance, from test definitions repository135(block605). The test script may prescribe a sequence of operations of the application for the functional test. Test execution and repair manager115may follow the given functional test definition, step-by-step (block610).
For instance, at each prescribed state s(i), test execution and repair manager115may take a prescribed action a(i) and send the corresponding task action to system under test105. In return, test execution and repair manager115may receive feedback (block615). Based on the feedback, test execution and repair manager115may detect whether there is a failure in the functional test (block620). According to some embodiments, test execution and repair manager115may detect a failure if the task action does not exist any more. For example, if test execution and repair manager115commands system under test105to display a list of found items but system under test105returns an error that the task action cannot be executed, test execution and repair manager115may determine that a failure occurs. Alternatively, test execution and repair manager115may also detect a failure if system under test105"executes" the test action but lands on a state (or condition) different from the state prescribed by the functional test definition. Following the same example above, if system under test105erroneously switches to a webpage displaying the shopping cart instead of the found items, test execution and repair manager115may also determine that a failure occurs in the functional test. When test execution and repair manager115does not detect a failure, test execution and repair manager115may continue the test following the given functional test definition (block625). Conversely, when a failure happens, test execution and repair manager115may determine that a failure is detected (block630). When a failure occurs, the environment may vary as well. As explained above, the environment may be viewed as a state machine with finite states and actions. Thus, when the test breaks, one or more states and/or actions may not exist any more. That means one or more previously learnt paths may have been lost in the environment.FIG.7shows an example scenario of a functional test failure, according to some embodiments.FIG.7depicts three different paths from state s(1) to state s(goal). Path #1 may represent the sequence of operating circumstances of the application based on a functional test. Initially, path #2 may be the optimal alternative path for path #1 because it includes a minimum number of circumstances (e.g., six circumstances) among all the alternative paths to complete the functional test. Path #3 is less optimal than path #2, because path #3 includes more (e.g., seven) circumstances. However, if the failure is associated with state s(4), as shown inFIG.7, not only can the test not be completed following the prescribed path #1, path #2 is not a viable path any more either. As a result, test execution and maintenance system100may now select path #3, rather than path #2, to repair the failure of the functional test. To accommodate the environmental variation due to failures, when a failure is detected, application learner110may optionally re-learn the "changed" application to update knowledge of operations of the application (with the detected failure). The re-learning process may be performed the same as an initial learning as described above with regards toFIG.5. Alternatively, according to some embodiments, application learner110may use transfer learning to transfer knowledge from previous learning to the re-learning process. For instance, instead of initializing parameters, for instance θi (i=1 . . . M), to random values, application learner110may perform training with previous values.
The previous values of the parameters carry application knowledge from previous learning and thus avoid re-learning from scratch, which can accelerate the re-learning process.FIG.8shows an example operation to repair a broken functional test by test execution and repair manager115, according to some embodiments. As shown inFIG.8, repair process800may begin with detection of a failure in a functional test by test execution and repair manager115(block805). Responsive to detecting the failure, application learner110may optionally perform a re-learning process and update the application knowledge based on the re-learning, for instance, stored in application knowledge database130(block810). As described above, the failure may cause one or more constraints as to operations of the application. Thus, application learner110may utilize the re-learning to update previously acquired application knowledge (block815). The re-learning may be performed following the same steps described above with regards toFIG.5. Alternatively, application learner110may use transfer learning to implement re-learning with previously acquired parameter values. Once the re-learning finishes, test execution and repair manager115may determine a repair for the failure based on the (updated) knowledge of operations of the application (block820). As described above, test execution and repair manager115may obtain the (updated) knowledge of operations of the application, for instance, from application knowledge database130. Test execution and repair manager115may repair the functional test, for instance, by identifying an alternative sequence of operations with a minimum number of operations (block825). FIG.9shows example operation900of test execution and maintenance system100, according to some embodiments. InFIG.9, test execution and maintenance system100may be used to run and maintain a functional test of an application. As shown inFIG.9, test execution and maintenance system100may first receive a functional test definition which prescribes a sequence of operations of an application (block905). Test execution and maintenance system100may execute the functional test, following the given functional test definition (block910). For instance, test execution and maintenance system100may take a prescribed action a in a given state s, and send the corresponding task action to system under test105through test interface125. Test execution and maintenance system100may detect a failure in the functional test, for instance, based on feedback from system under test105(block915). Optionally, test execution and maintenance system100may demand application learner110to re-learn operations of the application associated with one or more constraints caused by the failure (block920). Test execution and maintenance system100may update the application knowledge based on the re-learning (block925). Test execution and maintenance system100may determine a repair (block930) and repair the broken functional test based on the (updated) knowledge of operations of the application (block935). According to some embodiments, test execution and maintenance system100may include a set of application learners110to execute and maintain applications for different test systems. For example, test execution and maintenance system100may use a first application learner110to develop knowledge of operations of an application on a first system under test105representing an Android device. This knowledge may be stored in application knowledge database130.
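For illustration of the failure detection and repair selection described above (e.g., blocks620and630and blocks820-825), the following is a minimal sketch. The data shapes are assumptions: known_paths stands in for the alternative sequences of operations stored as application knowledge, and the example loosely mirrors the scenario ofFIG.7, in which the shortest alternative path is not viable because it passes through the failed state.

def detect_failure(expected_state, observed_state, action_exists):
    # A failure is detected if the task action no longer exists, or if the
    # application lands on a state different from the prescribed state.
    return (not action_exists) or (observed_state != expected_state)

def select_repair(known_paths, failed_states):
    # Choose the alternative sequence of operations with the minimum number
    # of operations that does not pass through any failed state.
    viable = [path for path in known_paths
              if not any(state in failed_states for state in path)]
    return min(viable, key=len) if viable else None

# Hypothetical paths from s(1) to s(goal); the failure is associated with s(4),
# so the shorter alternative through s(4) is discarded and the longer viable
# alternative is selected as the repair.
paths = [
    ["s1", "s2", "s3", "s4", "sgoal"],         # prescribed path
    ["s1", "s5", "s4", "sgoal"],               # shorter alternative, not viable
    ["s1", "s5", "s6", "s7", "sgoal"],         # longer alternative, viable
]
print(select_repair(paths, failed_states={"s4"}))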
In addition to the first application learner110described above, test execution and maintenance system100may use a second application learner110to develop knowledge of operations of the application on a second system under test105representing an iOS device. This knowledge may also be stored in the same application knowledge database130, or alternatively, in a different application knowledge database130. Alternatively, test execution and maintenance system100may use the same, central application learner110to execute and maintain applications for different test systems.FIG.10shows an example asynchronous training process1000of application learner110. As shown inFIG.10, a plurality of systems under test may be represented by a set of environments1005(1)-(n). For instance, environment1005(1) may correspond to a system under test for a Samsung Galaxy S10, environment1005(2) may correspond to a Pixel 4, environment1005(3) may represent an iPhone 11, etc. Application learner110may train to develop knowledge of operations of an application on the different testing systems, with one single set of parameters, for instance, θi (i=1 . . . M). According to some embodiments, in training, application learner110may update the parameters based on interactions with environments1005(1)-(n), asynchronously. For instance, in a given state s(k), application learner110may take an action a(k)(1) for environment1005(1) to explore operations of the application on environment1005(1) and update the parameters θi based on feedback from environment1005(1), following the process as described above with regards toFIG.5. Next, application learner110may take an action a(k)(2) for environment1005(2) to explore operations of the application on environment1005(2) and update the parameters θi based on feedback from environment1005(2), and so on. With the asynchronous training, application learner110may train the parameters sequentially based on interactions with environments1005(1)-(n), without waiting for all the feedback to become available before updating the parameters. FIG.11is a block diagram showing provision of application test execution and maintenance as a provider network service, according to some embodiments. InFIG.11, provider network1100may be a private or closed system or may be set up by an entity such as a company or a public sector organization to provide one or more services (such as various types of cloud-based storage) accessible via the Internet and/or other networks1270to one or more client(s)1105. Provider network1100may be implemented in a single location or may include numerous data centers hosting various resource pools, such as collections of physical and/or virtualized computer servers, storage devices, networking equipment and the like (e.g., computing system1200described below with regard toFIG.12), needed to implement and distribute the infrastructure and storage services offered by provider network1100.
In some embodiments, provider network1100may implement various computing resources or services, such as a data storage service(s)1110(e.g., object storage services, block-based storage services, or data warehouse storage services), application test execution and maintenance service(s)1115, as well as other service(s)1120, which may include a virtual compute service, data processing service(s) (e.g., map reduce, data flow, and/or other large scale data processing techniques), and/or any other type of network based services (which may include various other types of storage, processing, analysis, communication, event handling, visualization, and security services not illustrated). Data storage service(s)1110may implement different types of data stores for storing, accessing, and managing data on behalf of client(s)1105as a network-based service that enables one or more client(s)1105to operate a data storage system in a cloud or network computing environment. For example, data storage service(s)1110may include various types of database storage services (both relational and non-relational) or data warehouses for storing, querying, and updating data. Such services may be enterprise-class database systems that are scalable and extensible. Queries may be directed to a database or data warehouse in data storage service(s)1110that is distributed across multiple physical resources, and the database system may be scaled up or down on an as needed basis. The database system may work effectively with database schemas of various types and/or organizations, in different embodiments. In some embodiments, clients/subscribers may submit queries in a number of ways, e.g., interactively via an SQL interface to the database system. In other embodiments, external applications and programs may submit queries using Open Database Connectivity (ODBC) and/or Java Database Connectivity (JDBC) driver interfaces to the database system. Data storage service(s)1110may also include various kinds of object or file data stores for putting, updating, and getting data objects or files, which may include data files of unknown file type. Such data storage service(s)1110may be accessed via programmatic interfaces (e.g., APIs) or graphical user interfaces. Data storage service(s)1110may provide virtual block-based storage for maintaining data as part of data volumes that can be mounted or accessed similar to local block-based storage devices (e.g., hard disk drives, solid state drives, etc.) and may be accessed utilizing block-based data storage protocols or interfaces, such as internet small computer interface (iSCSI). In some embodiments, application test execution and maintenance service(s)1115may be provided by provider network1100as a network-based service to test and maintain clients' applications. For instance, provider network1100may include an application test and maintenance repository, in data storage service(s)1110or other service(s)1120. The application test and maintenance repository may contain application test and maintenance models, each representing one test execution and maintenance system100(including application learner110and test execution and repair manager115) as described above, for various applications, tests and testing systems. Client(s)1105may send a request to provider network1100for test execution and maintenance service(s)1115through network1125to test and maintain an application uploaded by client(s)1105. 
The request may further provide learning test definitions and specify a functional test definition for a specified system under test. Upon receiving the request, test execution and maintenance service(s)1115may identify an appropriate test execution and maintenance model in the repository (for the specified system under test, for example), load the identified model as an instance, and explore operations of the client's application based on the client's learning test definitions, for instance, with the application learner. By exploration, the test execution and maintenance model may develop knowledge of operations of the client's application on the specified system under test. Next, test execution and maintenance service(s)1115may execute the functional test of the client's application, detect failures and repair the functional test based on the application knowledge, for instance, with the test execution and repair manager. Other service(s)1120may include various types of data processing services to perform different functions (e.g., anomaly detection, machine learning, querying, or any other type of data processing operation). For example, in at least some embodiments, data processing services may include a map reduce service that creates clusters of processing nodes that implement map reduce functionality over data stored in one of data storage service(s)1110. Various other distributed processing architectures and techniques may be implemented by data processing services (e.g., grid computing, sharding, distributed hashing, etc.). Note that in some embodiments, data processing operations may be implemented as part of data storage service(s)1110(e.g., query engines processing requests for specified data). Generally speaking, client(s)1105may encompass any type of client configurable to submit network-based requests to provider network1100via network1125, including requests for storage services (e.g., a request to create, read, write, obtain, or modify data in data storage service(s)1110, a request to perform application test execution and maintenance at test execution and maintenance service1115, etc.). For example, a given client1105may include a suitable version of a web browser, or may include a plug-in module or other type of code module configured to execute as an extension to or within an execution environment provided by a web browser. Alternatively, a client1105may encompass an application such as a database application (or user interface thereof), a media application, an office application or any other application that may make use of storage resources in data storage service(s)1110to store and/or access the data to implement various applications. In some embodiments, such an application may include sufficient protocol support (e.g., for a suitable version of Hypertext Transfer Protocol (HTTP)) for generating and processing network-based services requests without necessarily implementing full browser support for all types of network-based data. That is, client1105may be an application configured to interact directly with provider network1100. In some embodiments, client(s)1105may be configured to generate network-based services requests according to a Representational State Transfer (REST)-style network-based services architecture, a document- or message-based network-based services architecture, or another suitable network-based services architecture. 
In various embodiments, network1125may encompass any suitable combination of networking hardware and protocols necessary to establish network-based communications between client(s)1105and provider network1100. For example, network1125may generally encompass the various telecommunications networks and service providers that collectively implement the Internet. Network1125may also include private networks such as local area networks (LANs) or wide area networks (WANs) as well as public or private wireless networks. For example, both a given client1105and provider network1100may be respectively provisioned within enterprises having their own internal networks. In such an embodiment, network1125may include the hardware (e.g., modems, routers, switches, load balancers, proxy servers, etc.) and software (e.g., protocol stacks, accounting software, firewall/security software, etc.) necessary to establish a networking link between a given client1105and the Internet as well as between the Internet and provider network1100. It is noted that in some embodiments, client(s)1105may communicate with provider network1100using a private network rather than the public Internet. Test execution and maintenance system100, including application learner110and test execution and repair manager115, described herein may in various embodiments be implemented by any combination of hardware and software. For example, in one embodiment, test execution and maintenance system100may be implemented by a computer system (e.g., a computer system as inFIG.12) that includes one or more processors executing program instructions stored on a computer-readable storage medium coupled to the processors. In the illustrated embodiment, computer system1200includes one or more processors1210coupled to a system memory1220via an input/output (I/O) interface1230. Computer system1200further includes a network interface1240coupled to I/O interface1230. WhileFIG.12shows computer system1200as a single computing device, in various embodiments a computer system1200may include one computing device or any number of computing devices configured to work together as a single computer system1200. In various embodiments, computer system1200may be a uniprocessor system including one processor1210, or a multiprocessor system including several processors1210(e.g., two, four, eight, or another suitable number). Processors1210may be any suitable processors capable of executing instructions. For example, in various embodiments, processors1210may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors1210may commonly, but not necessarily, implement the same ISA. System memory1220may be configured to store instructions and data accessible by processor(s)1210. In various embodiments, system memory1220may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory. In the illustrated embodiment, program instructions (e.g., code) and data implementing one or more desired functions, such as application learning and test execution and repair, are shown stored within system memory1220as code & data1226and code & data1227.
In one embodiment, I/O interface1230may be configured to coordinate I/O traffic between processor1210, system memory1220, and any peripheral devices in the device, including network interface1240or other peripheral interfaces. In some embodiments, I/O interface1230may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory1220) into a format suitable for use by another component (e.g., processor1210). In some embodiments, I/O interface1230may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface1230may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments some or all of the functionality of I/O interface1230, such as an interface to system memory1220, may be incorporated directly into processor1210. Network interface1240may be configured to allow data to be exchanged between computer system1200and other devices1260attached to a network or networks1250, such as system under test105, application knowledge database130, test definitions repository135, and/or other computer systems or devices as illustrated inFIGS.1through11, for example. In various embodiments, network interface1240may support communication via any suitable wired or wireless general data networks, such as types of Ethernet network, for example. Additionally, network interface1240may support communication via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks, via storage area networks such as Fiber Channel SANs, or via any other suitable type of network and/or protocol. In some embodiments, system memory1220may be one embodiment of a computer-accessible medium configured to store program instructions and data as described above forFIG.1-11. Generally speaking, a computer-accessible medium may include non-transitory storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD coupled to computer system1200via I/O interface1230. A non-transitory computer-accessible storage medium may also include any volatile or non-volatile media such as RAM (e.g. SDRAM, DDR SDRAM, RDRAM, SRAM, etc.), ROM, etc., that may be included in some embodiments of computer system1200as system memory1220or another type of memory. Further, a computer-accessible medium may include transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface1240. Various embodiments may further include receiving, sending or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-accessible medium. Generally speaking, a computer-accessible medium may include storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM, volatile or non-volatile media such as RAM (e.g. SDRAM, DDR, RDRAM, SRAM, etc.), ROM, etc., as well as transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as network and/or a wireless link. The various methods as illustrated in the figures and described herein represent exemplary embodiments of methods. 
The methods may be implemented in software, hardware, or a combination thereof. The order of the methods may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. Various modifications and changes may be made as would be obvious to a person skilled in the art having the benefit of this disclosure. It is intended to embrace all such modifications and changes and, accordingly, the above description is to be regarded in an illustrative rather than a restrictive sense.
51,812
11860770
While the invention is amenable to various modifications and alternative forms, specific embodiments are shown by way of example in the drawings and are described in detail. It should be understood, however, that the drawings and detailed description are not intended to limit the invention to the particular form disclosed. The intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present invention as defined by the appended claims. DETAILED DESCRIPTION In the following description, for the purposes of explanation, numerous specific details are set forth to provide a thorough understanding of the present invention. It will be apparent, however, that the present invention may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form to avoid unnecessarily obscuring the description. In recent decades, a number of software development best practices have emerged. One such practice is the use of feature flags (also referred to as, for example, feature toggles, feature switches, or conditional features). A given feature flag relates to deployed source code, which (on execution) provides one or more features in a software product. In its simplest form, a feature flag essentially provides a conditional logic wrapper to the feature that enables it to be switched on (i.e., made available) or off (i.e., made unavailable). For example, when a feature flag is set to true, the software product when executed makes the feature(s) associated with that flag available. Conversely, when a feature flag is set to false, the software product when executed makes the feature(s) associated with that flag unavailable. This provides software developers the ability to control whether features included in a deployed/released code base are available or unavailable to customers (i.e., end users of the software product). A simplified example of a feature flag is illustrated in Table A—
TABLE A
Example Feature Flag
if (isFeatureFlagOn("featureflagKey")) {
  runNewCode( );
} else {
  runOldCode( );
}
Feature flags are often used to roll out a code refactoring, which is the process of restructuring existing software code, i.e., changing the factoring of the code, without changing its external behavior, e.g., to improve code readability, simplify code structure, improve performance, extensibility, etc. Consider the example of a piece of software code in an issue tracking system, such as Jira, that sequentially loads issues from a database. It may be inefficient to load issues one-by-one. To rectify this, a developer may decide to refactor this piece of software code such that all active issues are loaded using one database call. In this scenario, the developer could use a feature flag to progressively switch from the old implementation to the new implementation. In addition to code refactoring, feature flags can be used to introduce a new feature. In such cases, feature flags may be utilized to incrementally roll out the new feature to ensure that addition of the feature does not adversely affect the software application. Similarly, feature flags can be used to remove a feature. In such cases, feature flags may be utilized to roll back the feature incrementally to ensure that removal of the feature does not adversely affect the software application.
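One common way such a progressive switch from an old implementation to a new implementation might be evaluated is to place each user deterministically into a rollout bucket and enable the flag only for buckets below a rollout percentage. The following sketch is illustrative only; the function names, the hashing scheme, and the rollout values are assumptions and do not describe the API of LaunchDarkly, Rollout, or any other particular feature flag system.

import hashlib

def is_feature_flag_on(flag_key, user_id, rollout_percentage):
    # Deterministically place the user into a bucket from 0 to 99 and enable
    # the flag for users whose bucket falls below the rollout percentage.
    digest = hashlib.sha256(f"{flag_key}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percentage

def load_issues(user_id, issue_ids, db):
    # Mirrors the refactoring example above: progressively switch from
    # loading issues one-by-one to loading them with a single database call.
    if is_feature_flag_on("bulk-issue-load", user_id, rollout_percentage=10):
        return db.load_many(issue_ids)              # new implementation
    return [db.load_one(i) for i in issue_ids]      # old implementation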
As used in this disclosure, the term feature refers to a unit of functionality offered by a software application that is either visible to users (e.g., a behavior of a software application in response to a user input, etc.) or not visible to users (e.g., a background task that removes unnecessary records from a database). Further, the term old feature or original feature refers to a feature that is provided when the original or old code is compiled and executed and the term new feature refers to the feature that is provided when the new code is compiled and executed. The use of feature flags provides a number of advantages. For example, feature flags allow software developers granular control over how a given feature is actually made available to customers. Using a feature flag, delivery of a feature may be controlled, for example, to specific customers, customers in a specific user tier, customers in a particular geographic region, customers with a particular product configuration, or a set number/percentage of random customers. This allows for software testing and user acceptance testing by a selected group or segment of customers before the feature is rolled out to a wider customer base. As another example, where availability of a feature is controlled by a feature flag, the feature can easily be rolled back (made unavailable) in the event that customer feedback is sufficiently negative or an issue with the feature is identified. Various software products/systems have been developed to assist with using feature flags—for example LaunchDarkly® and Rollout®. For ease of reference, products/systems such as these will be referred to as feature flag systems. Generally speaking, feature flag systems provide mechanisms for creating feature flags and controlling the roll out and roll back of features enabled by feature flags. In LaunchDarkly, for example, roll out of a feature is controlled by customizable target rules, which dictate when, and to what users/user groups, features behind feature flags are made available. The rules associated with active feature flags may be forwarded to the computing system (server and/or user device) that is configured to execute the software application such that the computing system can determine based on the rules associated with a feature flag which version of the feature to provide when the feature is invoked. In addition to feature flags, in recent years, developers have also begun to use timers in their code. Similar to a feature flag, a timer may be used as a wrapper around certain functionalities. For example, a timer can be wrapped around a piece of code, e.g., an update, a new feature, and/or an enhancement along with a feature flag. At runtime, whenever the corresponding feature is invoked, the timer starts before and stops right after that particular feature finishes executing. An event log of the time taken to execute the feature may then be created and stored along with a unique identifier of the timer. In this manner, a timer can be used to determine how long a particular piece of code or a corresponding feature takes to execute. This provides software developers the ability to monitor execution times and to determine whether changes to the source code improve performance or not. When timers are used in conjunction with feature flags, the timers can be linked to the feature flags, e.g., by employing the same identifier for the timers as that used for the corresponding feature flag. A simplified example of a timer is illustrated in table B. 
As shown in this table, the timer t can have the same identifier "featureflagKey" as the corresponding feature flag.
TABLE B
Example timer
Timer t = new Timer("featureflagKey");
t.start( );
if (isFeatureFlagOn("featureflagKey")) {
  runNewCode( );
} else {
  runOldCode( );
}
t.stop( );
Embodiments of the present disclosure are directed to systems and methods for utilizing these feature flags and timers in the underlying source code to identify and determine performance regression caused by portions of the source code and to identify the developer/team responsible for the corresponding source code. In particular, for every feature flag in the source code, using the corresponding timer, the systems and methods monitor the time taken by the corresponding feature to execute with the feature flag turned on (i.e., when the new feature is executed) and with the feature flag turned off (i.e., when the original feature is executed) in a particular period of time. If for a given feature flag it is determined that the corresponding feature takes longer to execute with the feature flag turned on than it did with the feature flag turned off, the systems and methods determine that the performance of the feature associated with that given feature flag has regressed. Alternatively, if it is determined that the feature takes a shorter period of time to execute with the feature flag turned on than it does with the feature flag turned off, the systems and methods disclosed herein determine that the feature associated with that given feature flag has improved in performance. Further still, the systems and methods disclosed herein may be configured to generate an alert upon detecting a performance regression and may forward the generated alert to the developer(s)/user(s) associated with the corresponding feature flag. In this manner, the disclosed systems and methods can identify even small performance regressions and can notify the responsible party so that corrective action can be taken. In certain embodiments, an alert is only generated if the regression (i.e., the difference in execution times between the original feature and the new feature) is above a threshold value (e.g., 10 ms) or varies by a threshold percentage (e.g., 10%). Any regressions below this threshold value are ignored. An overview of one example environment illustrating different systems involved in certain embodiments will be described, followed by a description of a computer system, which can be configured in various ways to perform the embodiments/various features thereof as described herein. Operations for detecting performance regression will then be described. Environment Overview FIG.1illustrates an example environment100in which embodiments and features of the present disclosure are implemented. In particular,FIG.1illustrates the systems and platforms that may be utilized to detect performance regression. Example environment100includes a communications network102, which interconnects one or more user devices110, a product platform120, a feature flag server system140, a logging system130, and a regression detection system150. For ease of reference, the acronym FF will be used herein in place of "feature flag". In general, the product platform120is a system entity that hosts one or more software applications and/or content. The platform120may include one or more servers122for hosting corresponding software application(s) and one or more storage devices124for storing application specific data.
Examples of software applications hosted by product platforms120may include interactive chat applications (e.g., Slack™, Stride™) collaborative applications (e.g., Confluence™), software code management systems (e.g., Bitbucket™), and issue tracking applications (e.g., Jira™). Jira, Confluence, BitBucket, and Stride are all offered by Atlassian, Inc. It will be appreciated that the software applications need not be offered by the same organization and that the presently disclosed invention can be used with any product platform. In order to run a particular application, the product platform server122includes one or more application programs, libraries, APIs or other software elements that implement the features and functions of the application. For example, in case the product platform120is an issue tracking system such as Jira, the server122allows users to perform various actions with respect to issues—for example, create issues, associate issues with projects and/or other issues, transition issues between workflow states, add/edit information associated with issues, assign issues to specific people/teams, view issues, and/or search for issues. The issue tracking system also allows for management of an issue, for example, user permissions defining: users that can see an issue and its associated information; users who can edit an issue; users who can transition an issue into/out of a particular workflow state; users who should be automatically notified any time an issue changes (either any change or a particular change), etc. While single server architecture has been described herein, it will be appreciated that one or more of the product platform server122can be implemented using alternative architectures. For example, in certain cases a clustered architecture may be used where multiple server computing instances (or nodes) are instantiated to meet system demand. Conversely, in the case of small enterprises with relatively simple requirements, a product platform120may be a stand-alone implementation (i.e., a single computer directly accessed/used by the end user). The product platform server122may be a web server (for interacting with web browser clients) or an application server (for interacting with dedicated application clients). While the product platform120has been illustrated with a single server122, in some embodiments it may provide multiple servers (e.g., one or more web servers and/or one or more application servers). The FF system140(as described previously) provides mechanisms for creating FFs and controlling the rollout and rollback of features enabled by FFs. Further, the FF system140may communicate the rules associated with active FFs to the product platform120such that the product platform120can execute the correct feature (e.g., the original feature or the new feature) at execution time based on the FF rules. In addition to this, in some cases, the FF system140may receive event log data from the product platform120related to the usage of the FFs during execution. This log data may include, e.g., a count of the number of times the FF was switched on and off within a given period and/or operating environment. In order to perform these functions, the FF system140includes an FF server142and an FF data store144. The FF server142configures the FF server system140to provide server side functionality—e.g., by receiving and responding to requests from FF clients (e.g., client114) and storing/retrieving data from the FF data store144as required. 
The FF data store144stores the information related to FFs. This information may include, e.g., for each FF, a unique identifier for the FF, an FF name, the rules associated with the FF, the owner of the FF and/or any other users/developers associated with the FF. Further still, the FF system140may require an organization associated with the product platform to register a product account and developer accounts with the FF system140such that any FFs created by developers from the organization can be associated with the developer that created the FF and with the corresponding product platform120. The product and developer account information is also stored in the FF data store144. The FF server142may be a web server (for interacting with web browser clients) or an application server (for interacting with dedicated application clients). While FF server system140has been illustrated with a single server142, it may provide multiple servers (e.g. one or more web servers and/or one or more application servers). In certain embodiments, FF server system140is a scalable system including multiple distributed server nodes connected to the shared data store144(e.g. a shared file server). Depending on demand from clients (and/or other performance requirements), FF server system140server nodes can be provisioned/de-provisioned on demand to increase/decrease the number of servers offered by the FF server system140. Each FF server142may run on a separate computer system and include one or more application programs, libraries, APIs or other software that implement server-side functionality. Similarly, FF data store144may run on the same computer system as FF server142, or may run on its own dedicated system (accessible to FF server(s)142either directly or via a communications network). The user device110, e.g., user device110A may be utilized by consumers to access the product platform120. Further, the user device110, e.g., user device110B may be utilized by developers to update/change a software application offered by the product platform120, e.g., to include feature extensions or program enhancements and/or to fix bugs in the software application. When the user device110is utilized by a consumer of the product platform120, the user device110has a product platform client112installed and/or executable thereon. The user device110may also have other applications installed/running thereon, for example, an operating system and a source code management/development client. When executed by the user device110, the product platform client112configures the user device110to provide client-side product platform functionality. This involves communicating (using a communication interface such as218described below) with the product platform120. The product platform client112may be a dedicated application client that communicates with the product platform120using an API. Alternatively, the product platform client112may be a web browser (such as Chrome, Safari, Internet Explorer, Firefox, or an alternative web browser) which communicates with the product platform120using http/https protocols. When the user device110B is utilized by a developer to change/update a software application offered by the product platform120, the user device110includes an FF client114in addition to the product platform client112. The FF client application114configures the user device110to provide client-side FF system functionality. 
This involves providing a communication interface between the user device110and the FF system140(and, in particular, the FF server142). In some examples, the FF client114may provide an interface for a developer to create a new FF or manage an existing FF. In addition, the FF client114may communicate with the FF server142to allow a developer to view the performance of an FF, for example. The FF client114may be a dedicated application client that communicates with the FF server142using an API. Alternatively, FF client114may be a web browser, which communicates with an FF web server using http/https protocols. While user device110B has been shown with separate product platform and FF clients112and114, a single application may be used as both a product platform and an FF client (e.g., a web browser, in which case the product platform and FF servers are web servers). User device110may be any form of computing device. Typically, user device110is a personal computing device—e.g., a desktop computer, laptop computer, tablet computer, and in some instances even a mobile phone. While only two user devices110have been illustrated, an environment would typically include multiple user devices110used by consumers for interacting with the product platform120and multiple user devices used by developers for updating software applications hosted by the product platform120and creating/managing FFs using the FF system140. The product platform120and the product platform client112(running on the user device110) operate together to provide the functionality offered by the product platform120. For example, consumers may utilize the product platform client112to access and/or interact with the products/services offered by the product platform120. Similarly, the FF server142(running on FF system140) and FF client114(running on user device110) operate together to provide FF system functionalities. For example, the FF server142and the FF client114may operate together to allow a developer to create a new FF, manage an FF, create an account, etc. FF operations involving the display of data (e.g., performance metrics for a feature flag) involve the user device110as controlled by the FF client114. The data displayed, however, may be generated by the FF server142and communicated to the FF client114. Similarly, FF operations involving user input (e.g., to create an FF) involve the user device110receiving user input (e.g., at input device214ofFIG.2) and passing that input to the FF client114to create and store the feature flag, e.g., in the FF data store144. The information input may be processed by the FF client114itself, or communicated by the FF client114to the FF server142to be processed by the FF server142. FF operations involving writing data to the FF data store144involve the FF server142. The data written to the FF data store144may, however, be communicated to the FF server142by the FF client114. Returning toFIG.1, the logging system130stores event log data associated with the product platform120. For example, each time a user device110interacts with a product/service offered by the product platform120, the product platform may generate an event log and forward the event log for storing in the logging system130. The event log may include, e.g., the time of the interaction, the particular type of interaction, a user ID of the user attempting the interaction (if available), status of the interaction (e.g., successful, unsuccessful), etc.
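As a minimal illustration of how a product platform might time a flag-guarded feature and forward a corresponding event log to a logging system, consider the following sketch. The helper names (evaluate_flag, run_new_code, run_old_code, send_to_logging_system) and the record fields are assumptions for illustration; they loosely follow the timer event logs described in this section rather than any specific logging system's API.

import time
from datetime import datetime, timezone

def run_with_timer(flag_key, evaluate_flag, run_new_code, run_old_code,
                   send_to_logging_system, url):
    flag_on = evaluate_flag(flag_key)
    start = time.monotonic()
    result = run_new_code() if flag_on else run_old_code()
    elapsed_ms = int((time.monotonic() - start) * 1000)

    event_log = {
        "feature_flag_states": {flag_key: flag_on},
        "timers": {flag_key: elapsed_ms},   # the timer shares the flag's key
        "url": url,
        "time": datetime.now(timezone.utc).isoformat(),
    }
    send_to_logging_system(event_log)
    return result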
The product platform120can configure the type of event data that it captures and stores. Further, the product platform120can configure the frequency at which it captures and stores event data. For example, in some cases the product platform may only capture and store event data for failures and/or if an operation takes longer than a threshold time to execute or times out. In other cases, the product platform may only capture and store event data for a small percentage of interactions (e.g., one in every 1000, one in every 100, etc.). Examples of event log data include application logs (i.e., a log of each time an application is accessed), and performance data logs (e.g., a log of the state of the CPU, disk, memory, etc., when an application is accessed). In addition to this, the logging system130stores log data associated with active FFs and log data corresponding to timers in the executed code. For example, if, on execution of a particular user action, the product platform server122or client112encounters an FF, the event log data may include a unique identifier of the FF, a state of the FF (e.g., true or false), and any performance data related to execution of the feature(s) corresponding to the FF. Similarly, if during execution of a particular user action (e.g., opening a webpage), the product platform server122or client112encounters one or more timers in the executable code, the product platform120may determine the time taken to perform the corresponding feature and then create a timer event log including, e.g., a unique identifier of the timer, the time taken to execute the feature, and (if available) the identifier of any FF associated with the feature and the state of the FF, or an identifier of the feature executed (e.g., the original feature or a new feature). A simplified example of an event log is shown in Table C below.

TABLE C
Example event log
{
  tenantId: XXXX,
  requestId: YYYY,
  feature_flag_states: {
    feature_flag_1: true,
    feature_flag_2: false,
    ...
  },
  timers: {
    feature_flag_1: 100,
    feature_flag_2: 200,
    ...
  },
  url: "/editIssue.jsp",
  time: "2018-08-09T11:12:39Z"
}

The example event log depicted in Table C includes:
A tenantID, which is a unique identifier for the product platform associated with that event;
A requestID, which is a unique identifier for the event log;
Feature flag states, which include the states for the feature flags that were invoked during execution of the event. For each feature flag, the feature flag states include the unique identifier of the feature flag and the state of the feature flag (i.e., true=feature flag on, false=feature flag off). In this example, feature flag 1 was turned on and feature flag 2 was turned off;
Timers, which include the time taken (in milliseconds) to execute the functionalities wrapped in timers during execution of the event. For each timer, this includes the unique identifier of the timer (which is the same as the unique identifier for the corresponding feature flag) and the time taken to provide the feature. In this example, the time taken to provide the new feature wrapped in feature flag 1 is 100 ms and the time taken to execute the original feature wrapped in feature flag 2 is 200 ms;
A uniform resource locator (URL) of the software application where the event was logged; and
The time at which the event was logged.

The product platform120can configure the type of event data that it captures and stores in the logging system130. Further, the product platform120can configure the frequency at which it captures and stores event data.
For example, in some cases the product platform may only capture and store event data for failures and/or if an operation takes longer than a threshold time to execute or times out. In other cases, the product platform may only capture and store event data for a small percentage of events (e.g., one in every 1000, one in every 100, etc.). One example of a logging system is Splunk®. However, the embodiments described herein are not limited to be used with Splunk and can be used with any other data logging system or database. In certain embodiments, the logging system130indexes the log data before storing so that the log data can be easily searched and retrieved. Further, in these embodiments, the logging system130includes a search engine (not shown) which may be queried to retrieve data logs. The regression detection system150detects performance regressions (if any) in a software application (e.g., a software product or service) offered by the product platform120. To this end, the regression detection system150communicates with the FF system140and the logging system130to retrieve data associated with FFs and timer event data, respectively. The regression detection system150then analyzes the retrieved timer event data to determine whether the performance of the software application offered by the product platform120has regressed or improved. Operations of the regression detection system150will be described in detail with reference toFIG.3. InFIG.1, the regression detection system150is illustrated as a system separate from the product platform120. However, in some embodiments, the regression detection system150may be executed by the product platform120and, in these embodiments, the regression detection system150may form a part of the product platform120itself. Further, in other embodiments, the regression detection system150may be executed on or form part of the FF system140. In these embodiments, the FF system140may directly communicate with the logging system130or the product platform120to retrieve timer event logs corresponding to the FFs maintained by the FF system140to determine whether the rollout of a feature has resulted in a regression or an improvement. Communications between the various systems in environment100are via the communications network102. Communications network102may be a local area network, a public network (e.g. the Internet), or a combination of both. While environment100has been provided as an example, alternative system environments/architectures are possible. The embodiments and features described herein are implemented by one or more special-purpose computing systems or devices. For example, in environment100each of the user device110, the product platform120, the logging system130, the FF system140, and/or the regression detection system150is or includes a type of computing system. A special-purpose computing system may be hard-wired to perform the relevant operations. Alternatively, a special-purpose computing system may include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the relevant operations. Further, alternatively, a special-purpose computing system may include one or more general-purpose hardware processors programmed to perform the relevant operations pursuant to program instructions stored in firmware, memory, other storage, or a combination. 
A special-purpose computing system may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the relevant operations described herein. A special-purpose computing system may be a desktop computer system, a portable computer system, a handheld device, a networking device or any other device that incorporates hard-wired and/or program logic to implement relevant operations. By way of example,FIG.2provides a block diagram that illustrates one example of a computer system200, which may be configured to implement the embodiments and features described herein. Computer system200includes a bus202or other communication mechanism for communicating information, and a hardware processor204coupled with bus202for processing information. Hardware processor204may be, for example, a general-purpose microprocessor, a graphical processing unit, or other processing unit. Computer system200also includes a main memory206, such as a random access memory (RAM) or other dynamic storage device, coupled to bus202for storing information and instructions to be executed by processor204. Main memory206also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor204. Such instructions, when stored in non-transitory storage media accessible to processor204, render computer system200into a special-purpose machine that is customized to perform the operations specified in the instructions. Computer system200further includes a read only memory (ROM)208or other static storage device coupled to bus202for storing static information and instructions for processor204. A storage device210, such as a magnetic disk or optical disk, is provided and coupled to bus202for storing information and instructions. In case the computer system200is the user device110, the computer system200may be coupled via bus202to a display212(such as an LCD, LED, touch screen display or other display), for displaying information to a computer user. An input device214, including alphanumeric and other keys, may be coupled to the bus202for communicating information and command selections to processor204. Another type of user input device is cursor control216, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor204and for controlling cursor movement on display212. According to one embodiment, the techniques herein are performed by computer system200in response to processor204executing one or more sequences of one or more instructions contained in main memory206. Such instructions may be read into main memory206from another storage medium, such as a remote database. Execution of the sequences of instructions contained in main memory206causes processor204to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. The term “storage media” as used herein refers to any non-transitory media that stores data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device210. Volatile media includes dynamic memory, such as main memory206. 
Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, and EPROM, a FLASH-EPROM, NVRAM, or any other memory chip or cartridge. Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus202. Transmission media can also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications. Computer system200also includes a communication interface218coupled to bus202. Communication interface218provides a two-way data communication coupling to a communication network, for example communication network102of environment100. For example, communication interface218may be an integrated services digital network (ISDN) card, cable modem, satellite modem, etc. As another example, communication interface218may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface218sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information. Computer system200can send messages and receive data, including program code, through the network(s)102, network link220and communication interface218. As noted, computer system200may be configured in a plurality of useful arrangements, and while the general architecture of system200may be the same regardless of arrangements, there will be differences. For example, where computer system200is configured as a server computer (e.g., such as product platform120, logging system130, FF system140, or regression detection system150), it will typically be provided with higher end hardware allowing it to process data, access memory, and perform network communications more rapidly than, for example, a user device (such as device110). The various systems and platforms depicted inFIG.1communicate with the other systems in network100via suitable communication networks102. For example, the user devices110may communicate with the product platforms120via public networks, whereas the regression detection system150may communicate with the FF system140and/or the logging system130via one or more private or public networks. It will be appreciated that based on the required implementation, any suitable communication networks102may be utilized to allow communication between the systems in environment100. Performance Detection Process FIG.3is a flowchart illustrating an example method300for detecting performance regression in a software application using one or more of the systems described with reference toFIG.1. This method is described with respect to a single software application. However, it will be appreciated that in actual implementation the method can be scaled to detect performance regression in multiple software applications. At step302, FFs associated with a product platform/software application are retrieved. 
In some embodiments, the regression detection system150communicates with the logging system130to retrieve event logs corresponding to FFs that have corresponding timers for a particular product platform120. For example, if the product platform120is a Jira platform, the regression detection system150may request the logging system130to retrieve and forward event logs stored in the logging system130that include feature flags that have corresponding timers and correspond to the Jira platform. In some embodiments, the regression detection system150may request the logging system130to forward the event logs corresponding to all the FFs associated with a requested product platform account. In other embodiments, the regression detection system150may request the logging system to forward relevant data pertaining to these feature flag event logs instead of forwarding the entire event logs the logging system130maintains for the product platform120. For example, the regression detection system150may request the logging system130to forward a list of unique identifiers for the FFs that have corresponding timers. Along with the unique identifiers for the FFs, the regression detection system150may request the logging system130to forward the state of the FFs, a count of the number of times the FF was executed in a threshold period of time, a count of the number of times the FF was turned on and/or a count of the number of times the FF was turned off in that threshold period of time, a rollout percentage of the FF, and/or a count of the number of times timer samples were recorded for a particular FF. It will be appreciated that the above example data fields are illustrative and that any other data fields associated with the FFs may also be considered relevant and could be retrieved at this step. One example query used by the regression detection system150to retrieve the FFs that have corresponding timers is depicted in Table D and an example result received from the logging system is shown in Table E. It will be appreciated that the result shows the relevant data for a single FF, but in other embodiments, the result includes relevant data for multiple FFs. 
TABLE D
Example query to retrieve FFs that have corresponding timers
search $env_filter JiraMetricsLogger message="Metrics [jira.request.metrics.work-context-metrics]" |
fields + ext.jira.request.metrics.feature_flag_metrics |
spath output=flags path=ext.jira.request.metrics.feature_flag_metrics | fields - _raw |
rex field=flags "\"(?<f>[^\"]*)\":\"(?<v>[^\"]*)\"" max_match=0 |
eval fields = mvzip(f,v) | mvexpand fields | makemv delim="," fields | eval f = mvindex(fields, 0), v = mvindex(fields, 1) |
where v in("true", "false", "OLD", "NEW", "CHECK_RETURN_NEW", "CHECK") |
eval v=if(v="OLD", "NEW", v) |
eval v=if(v="CHECK_RETURN_NEW", "CHECK", v) |
stats count as countByFV by f, v |
eventstats sum(countByFV) as countByF by f |
eval countByF = if(v="CHECK_RETURN_NEW" OR v="CHECK", countByF + countByFV, countByF) |
where countByFV!=countByF AND v!="false" AND v!="OLD" |
eval countByF=if(v="NEW", 0, countByF) |
eval v=if(v="CHECK", "NEW", v) |
stats sum(countByFV) as countByFV, sum(countByF) as countByF by f, v |
eval percent=(countByFV/countByF)*100 |
table f v countByFV countByF percent |
sort f |
join f [search $env_filter JiraMetricsLogger message="Metrics[jira.request.metrics.work-context-metrics]" | fields + ext.jira.request.metrics.work-context-metrics.timers | spath output=fields path=ext.jira.request.metrics.work-context-metrics.timers | fields - _raw | rex field=fields "\"(?<f>[^\"]*)\":\{" max_match=0 | mvexpand f | stats count as ct by f]

TABLE E
Example output
f,v,countByFV,countByF,percent,ct
gdpr-1395-user-store-use-combined-cache,true,87908,175816,50,12606

In the tables above, f is the unique feature flag identifier. In the example above, this is "gdpr-1395-user-store-use-combined-cache". v is the feature flag state, i.e., whether the flag controlling the code switch was on or off. In the example above, this is set to "true" (i.e., the output corresponds to the case where the FF was turned on and the new feature was provided). CountByFV indicates the number of times the corresponding feature was executed when the FF state was true. In the example above, this value is 87908. CountByF indicates the number of times the corresponding feature was executed in total (i.e., with the FF turned on or off). In the example above, this value is 175816. Percent indicates the rollout percentage of the FF. This value is determined based on the CountByFV and CountByF values (e.g., percent=countByFV*100/countByF). In the example above, this value is 50%. Ct indicates the number of timer samples that were produced for that feature flag. As noted previously, the product platform may not record event data for each event and may only capture and store event data for a small percentage of events (e.g., one in every 1000, one in every 100, etc.). Accordingly, timer data may not be available for each feature flag execution. This field indicates the number of available timer samples for that feature flag. In this example, the ct value is 12606. At step304, an FF from the list of FFs received at step302is selected. Next, at step306, timer log data corresponding to the selected FF is retrieved. As described previously, timers may be incorporated in the source code and wrapped around certain features—e.g., the same features that are wrapped in FFs. To link the FFs and the corresponding timers, developers may utilize the same identifier for the timers and the corresponding FFs.
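The linkage between an FF and its timer can be sketched as follows; the ff_client, metrics and user_store objects stand in for whatever feature flag, metrics and data access interfaces the product platform actually provides, and are hypothetical:

import time

FLAG_ID = "gdpr-1395-user-store-use-combined-cache"  # the same identifier is used for the FF and its timer

def load_user(user, user_store, ff_client, metrics):
    # Evaluate the feature flag for this user and time whichever version of the feature runs.
    use_new_feature = ff_client.is_enabled(FLAG_ID, user)
    start = time.monotonic()
    if use_new_feature:
        result = user_store.load_with_combined_cache(user)      # new version of the feature
    else:
        result = user_store.load_without_combined_cache(user)   # original version of the feature
    elapsed_ms = (time.monotonic() - start) * 1000.0
    # The timer event is recorded under the same identifier as the FF, together with the
    # FF state, so that execution times of the two versions can later be compared.
    metrics.record_timer(FLAG_ID, elapsed_ms, flag_state=use_new_feature)
    return result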
Each time the feature wrapped in a timer is executed (i.e., the old feature or the new feature, depending on the state of the associated FF), the time taken to provide the feature is determined by the product platform120and at least a percentage of these timer events are recorded and stored as event logs. At step306, the regression detection system150retrieves these timer event logs for the selected FF from the logging system130. In some embodiments, the regression detection system150retrieves timer event logs for events that were recorded in a predetermined period of time. It will be appreciated that many factors can affect the performance of a particular application—e.g., the operating system of the user device, the available memory at the user device, network connectivity, server bandwidth, etc., can all have an effect on the performance times of a particular application and/or a particular feature. Accordingly, it is unwise to make this determination based on a single event instance. The greater the number of event instances that can be used in the analysis, the better. Accordingly, timer event logs recorded for a particular period of time are retrieved. This period of time is configurable and can be based on the requirements of the systems involved. In one example, the period of time may be the period corresponding to the last 24 hours. One advantage of a 24-hour period is that even if an FF is flipped to provide the new feature to all users in the last 12 hours, data corresponding to the old feature can still be collected and examined. It will be appreciated that in other examples shorter or longer periods of time may be implemented, such as 12 hours, 1 hour, 30 minutes, etc. Shorter periods are preferable because performance regressions can be identified and dealt with faster; however, it may be possible that a sufficient number of timer event logs cannot be obtained for all FF states when shorter periods are used and therefore sufficient data may not be available to determine the performance of a feature and/or software application accurately. As noted previously, at step306, the regression detection system150retrieves timer data corresponding to the FF selected at step304. This can be achieved in many different ways. For example, the regression detection system150may request the logging system130to forward all timer event log data stored by the logging system130that was received from the product platform120within the predetermined period of time and that has the same unique identifier as the selected FF. In other embodiments, the regression detection system150may request the logging system to forward relevant data pertaining to the timer event logs instead of forwarding the entire timer event logs the logging system130maintains for that FF in the specified period of time. For example, for each FF, the regression detection system150may request the logging system130to forward timer data for each state of the FF. And for each state of the FF (true or false), the regression detection system150may also request the logging system to forward relevant data such as the number of timer samples recorded, the time taken to execute the corresponding feature each time it was recorded, and/or statistical data for the timers (such as mean value, average value, p90 value, p99 value, etc.). It will be appreciated that the above example data fields are illustrative and that any other data fields associated with the timers may also be considered relevant and could be retrieved at this step.
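As a simple sketch of this retrieval from the regression detection system's side (the logging_client interface and parameter names are hypothetical), aggregated timer data could be requested per FF state for a configurable time window:

def fetch_timer_summary(logging_client, flag_id, window_hours=24):
    # Ask the logging system for aggregated timer data for each state of the FF
    # recorded within the last window_hours.
    summary = {}
    for state in ("true", "false"):
        summary[state] = logging_client.query_timers(
            flag_id=flag_id,
            flag_state=state,
            since_hours=window_hours,
        )
    return summary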
An example query to retrieve timer data for a particular FF is depicted in Table F and the corresponding output data for that FF is depicted in Table G.

TABLE F
Example query to retrieve timer data
search "$key"
| fields - _raw
| rename ext.jira.request.metrics.work-context-metrics.timers.$key.mean to mean
| rename ext.jira.request.metrics.work-context-metrics.timers.$key.count to count
| rename ext.jira.request.metrics.feature_flag_metrics.$key to flag
| eval totalMs = mean * count / 1000000
| stats count p90(totalMs) as p90 p99(totalMs) as p99 by flag

TABLE G
Example output from logging system
State,count,p90,p99
false,7404,0.02121796337393542,0.04926696200000006
true,13666,0.051394735446916420,0.110053213398692980

In the example output above, the logging system130forwards timer data for all statuses of the selected FF. In particular, for a particular state of the FF, the logging system130returns a count of the number of timer results available in the predetermined period of time, the 90th percentile (P90) estimated execution time for the corresponding feature, and the 99th percentile (P99) estimated execution time for the corresponding feature. The 90th percentile means that 90% of the feature execution times for the corresponding FF status fall below the P90 estimate. Similarly, the 99th percentile means that 99% of the feature execution times for the corresponding FF status fall below the P99 estimate. Once the regression detection system150retrieves the timer event data for a particular FF, performance of the feature wrapped in each FF can be examined to determine if the feature causes a performance regression in the software application or not. At step308, the regression detection system150determines whether a threshold number of timer event logs exist for all the FF statuses. As described previously, an FF may be associated with two versions of a feature—e.g., the original version (A) and the updated version (B). It is also possible for an FF to be associated with multiple versions of a feature—e.g., the original version (A), a first new version (B) and a second new version (C). At step308a determination is made whether a sufficient number of timer event logs have been retrieved for each version of the feature associated with an FF. In one embodiment, the regression detection system150may compare the number of timer event logs corresponding to each version of the feature with a threshold number (e.g., 100 event logs/version of functionality). If the number of timer event logs matches or exceeds the threshold number, the regression detection system150determines that a sufficient number of timer event logs have been retrieved and the method proceeds to step310where the regression detection system150calculates the performance (or estimated execution time) of each version of the feature associated with the selected FF. The performance (or estimated execution time) can be calculated in many different ways. In one example, for each version of the feature, statistical analysis is performed on the timer execution times. This statistical analysis may be performed by the logging system130or the regression detection system150. For example, where the logging system130has analysis capabilities, the logging system130may perform the required statistical analysis on the timer data before or when communicating the timer data to the regression detection system150. For example, as shown in Table G, the logging system130may determine the 90th percentile (P90) estimated execution time for each version of the feature.
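Where raw timer samples are available to the regression detection system150rather than pre-aggregated statistics, the percentile estimates can be computed directly. The following is a minimal sketch using the nearest-rank method; it assumes each list of samples is non-empty (step308has already checked that a threshold number of samples exists):

import math

def percentile(samples, fraction):
    # Nearest-rank percentile over a list of execution times (in milliseconds).
    ordered = sorted(samples)
    rank = max(1, math.ceil(fraction * len(ordered)))
    return ordered[rank - 1]

def estimate_execution_times(timer_samples_by_state):
    # timer_samples_by_state maps an FF state ("true"/"false") to its list of timer samples.
    estimates = {}
    for state, samples in timer_samples_by_state.items():
        estimates[state] = {
            "count": len(samples),
            "p90": percentile(samples, 0.90),
            "p99": percentile(samples, 0.99),
        }
    return estimates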
In alternate examples, an average function execution time may be determined for each version or a different probabilistic estimate may be computed, such as the 99th percentile (P99) or the 80th percentile (P80). It will be appreciated that these are only examples, and other techniques may also be contemplated and are within the scope of the present disclosure. In other embodiments, where the logging system130is simply a database for storing event data logs and timer event logs, the regression detection system150may be configured to perform the statistical analysis on the timer execution times (identified from the retrieved timer event logs). For example, the regression detection system150may calculate the 90th percentile estimate or the average feature execution time for each version of the feature based on each instance of timer event logs. Once the performance is calculated for each version of the feature associated with the selected flag, the method proceeds to step312, where a determination is made whether regression associated with a new version of the feature exceeds a threshold amount. A new version of the feature will be considered to have regressed in performance if the estimated execution time of the new version exceeds the estimated execution time of the original version. For example, in the examples above, a functionality would be deemed to have regressed within the P90 estimate if the P90 estimate of version B is higher (by X milliseconds or Y %) than the P90 estimate of version A. Conversely, if the P90 estimate for the new version of the feature does not exceed the P90 estimate of the original version of the feature, the regression detection system150determines that the feature has not regressed in performance. The difference between the estimated execution times is the regression time. At step312, if it is determined that a feature has regressed in performance, the regression detection system150determines whether the regression time exceeds a threshold value (e.g., 20 ms) or percentage (e.g., 20%). If the regression exceeds the predetermined threshold, the method proceeds to step314where the regression detection system150identifies the party responsible for the FF. In one embodiment, the regression detection system generates and forwards an ownership request to the FF system140. The request includes the unique identifier of the selected FF. It will be appreciated that when an FF is first created, the FF system140may be configured to associate the created FF with the product platform it corresponds to, the developer that has requested the FF, the team that the developer belongs to, etc. In some embodiments, the FF system140may request the creator of the FF to provide this information at the time of creation of the FF. Alternatively, the FF system140may request developers to provide this information (i.e., developer details, team details, and product platform details) when developers first register with the FF system140such that the FF system can maintain accounts for developers, teams, and/or product platforms. Thereafter, the FF system140may request developers to log into their accounts before they can create or manage FFs. In these cases, the developer, team and product platform details corresponding to a newly created FF may be retrieved from the user account details and associated with the FF when it is created and stored. Accordingly, the FF system140maintains a record of the developer, team, and/or product platform that is responsible for each FF maintained by the FF system140.
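The comparison performed at steps310and312can be sketched as follows; the threshold values are illustrative assumptions only and would, in practice, be configurable:

# Hypothetical thresholds: flag a regression only if the new version is at least
# 20 ms or 20% slower than the original version at the chosen percentile.
ABSOLUTE_THRESHOLD_MS = 20.0
PERCENT_THRESHOLD = 20.0

def regression_exceeds_threshold(p90_original_ms, p90_new_ms):
    # Compare the estimated execution times of the original and new versions.
    regression_ms = p90_new_ms - p90_original_ms
    if regression_ms <= 0:
        return False  # the new version did not regress (it may even be an improvement)
    regression_percent = (regression_ms / p90_original_ms) * 100.0 if p90_original_ms > 0 else 100.0
    return regression_ms >= ABSOLUTE_THRESHOLD_MS or regression_percent >= PERCENT_THRESHOLD

When this check indicates a regression beyond the threshold, the ownership lookup described below is triggered.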
At step314, the FF system140queries its own internal data store144using the unique identifier of the FF to retrieve and forward the ownership information to the regression detection system150. The regression detection system150in turn sends an alert to the owner(s) of the FF as identified from the ownership information. The owner(s) may be alerted using any suitable communication method based on the information made available to the regression detection system150. For example, if email addresses are provided, the regression detection system may generate and forward an email message to the owners informing them of the regression. The email message may also include any suitable analysis data to allow the developers to identify the regression visually. Alternatively, instead of sending the analysis data, a link to an analysis dashboard may be sent to the owners. In other examples, the owners may be alerted via a chat tool (e.g., Slack®), an SMS, or any other suitable notification. In certain embodiments, the developer may be advised to turn off the feature flag until the regression is investigated and fixed. In other embodiments, if the regression is significant—i.e., it exceeds a predetermined threshold, (e.g., a 200 ms regression or a 30% regression over the original implementation), the regression detection system150may automatically request the feature flag system140to turn off the feature flag. Thereafter, the method300proceeds to step316, where the regression detection system150determines if all the FFs have been evaluated. If more FFs remain to be evaluated, the method returns to step304, where a next FF is selected. This method is then repeated until all the FFs are assessed. When no further FFs remain to be evaluated, the method300ends. Up until this point, the ‘yes’ paths following any decision blocks in flowchart300have been described. The following section describes the ‘no’ paths from the decision blocks in process300. At step308, if the regression detection system150determines that a threshold number of event logs were not retrieved for a version of the feature, the method proceeds to step318where the selected FF is discarded and the method thereafter proceeds to step316. At step312, if the regression detection system150determines that none of the new versions of the feature have resulted in a performance regression, the method directly proceeds to step316. In some embodiments, at step312, if it is determined that the new version(s) of the feature do not result in a regression, the regression detection system may determine whether the new version(s) of the feature result in an improvement in this system—e.g., an improvement above a certain threshold value or percentage. A new version of the feature will be considered to have improved in performance if estimated execution time of the new version is lower than the estimated execution time of the original version. The difference between the estimated execution times is the improvement time. If it is determined that the new version(s) of the feature result in an improvement above a certain threshold value or percentage, the regression detection system150may be configured to identify the owners and generate a message to notify the owners of the performance improvement. In method300, the analysis of the FFs is shown in a sequential manner—i.e., each FF is examined in sequence. However, this is a simplistic depiction, and in reality, two or more of the FFs may be analyzed in parallel. 
For example, the regression detection system150may retrieve timer data for multiple FFs at step306and then perform method steps308-314in parallel. Developers often deploy new features to users in an incremental fashion. For example, they may first deploy a new feature so that other developers in the team can access the functionality. Once feedback from this deployment is positive and any bugs have been identified and corrected, the feature may be deployed to all the employees of the organization. Thereafter, that is, after the functionality has been tested and used by internal users for a period of time, the feature may be deployed to all users (including external users). To manage this, some organizations maintain different deployment environments, for example, testing, staging, and production. When a feature is deployed to a particular environment, only users that are authorized to access that environment can see the feature. In such cases where there are multiple deployment environments, developers may set their FF rules such that the corresponding feature is enabled for different percentages of users in different environments. For example, the FF of a particular feature may be set such that for 100% of the users in the testing environment the new feature is executed, for 50% of the users in the staging environment the new feature is executed whereas for the other 50% of the users the original feature is executed, and for all the users in the production environment the original feature is executed. In such cases, method300may further retrieve event logs corresponding to the different environments and calculate performance regression for a particular feature flag for each environment individually. It will be appreciated that in some cases, a feature flag may encapsulate multiple code changes (or multiple corresponding features) which are scattered across the codebase. In such cases, timers may be wrapped around individual code changes or features. Whenever a particular feature associated with the feature flag is invoked, the product platform120may determine which version of the feature to provide. The timer may then record the time taken to provide that version of the feature and event log data associated with that execution may be recorded and stored by the product platform120. Subsequently, in method300, at step306when the regression detection system retrieves timer event log data from the logging system for that particular feature flag, it retrieves timer event logs for all the features encapsulated in the feature flag and may aggregate the execution times for different versions of the features encapsulated in the feature flag to determine whether the features collectively result in a regression or not. It will be appreciated that method300is one example method for utilizing FF and timer event log data for determining the performance of corresponding functionalities. In other examples, one or more method steps may be rearranged, certain steps may be omitted and other steps may be added without departing from the scope of the present disclosure. In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. For example, in the foregoing specification, the logging system130is depicted as a separate system that is configured to receive event log data from the product platform120and store this data.
However, in other embodiments, the logging system130may be omitted and the product platform120may directly save and store the FF and timer event log data in the product platform's data store124. In this case, if the regression detection system150is also executed on the product platform120, method300can be executed within the product platform120itself. Thus, the sole and exclusive indicator of what is the invention, and is intended by the applicants to be the invention, is the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. Any definitions expressly set forth herein for terms contained in such claims shall govern the meaning of such terms as used in the claims. Hence, no limitation, element, property, feature, advantage or attribute that is not expressly recited in a claim should limit the scope of such claim in any way. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. As used herein the terms “include” and “comprise” (and variations of those terms, such as “including”, “includes”, “comprising”, “comprises”, “comprised” and the like) are intended to be inclusive and are not intended to exclude further features, components, integers or steps. Various features of the disclosure have been described using process steps. The functionality/processing of a given process step could potentially be performed in various different ways and by various different systems or system modules. Furthermore, a given process step could be divided into multiple steps and/or multiple steps could be combined into a single step and/or two or more of the steps can be performed in parallel. Furthermore, the order of the steps can be changed and one or more steps can be added or deleted without departing from the scope of the present disclosure. For example, steps to retrieve timer event logs for different environments and calculate performance regression for each environment may be added. It will be understood that the embodiments disclosed and defined in this specification extends to alternative combinations of two or more of the individual features mentioned or evident from the text or drawings. These different combinations constitute various alternative aspects of the embodiments.
11860771
DETAILED DESCRIPTION
The following detailed description of certain embodiments presents various descriptions of specific embodiments of the invention. However, the invention can be embodied in a multitude of different ways as defined and covered by the claims. In this description, reference is made to the drawings where like reference numerals may indicate identical or functionally similar elements. Unless defined otherwise, all terms used herein have the same meaning as commonly understood by one of skill in the art to which this invention belongs. All patents, patent applications and publications referred to throughout the disclosure herein are incorporated by reference in their entirety. In the event that there is a plurality of definitions for a term herein, those in this section prevail. When the terms “one”, “a” or “an” are used in the disclosure, they mean “at least one” or “one or more”, unless otherwise indicated. Software developers, particularly website, web application and mobile device application developers, have a desire to manually test their products on a multitude of hardware and software platforms that their target audience may use. A variety of mobile device manufacturers provide the hardware consumers and businesses use. Examples include devices manufactured by Apple Inc., Google LLC, Samsung Electronics Co. Ltd., Huawei Technologies Co. Ltd. and others. Similarly, a variety of operating systems for consumer electronic devices exist. Examples include Apple iOS®, Android® operating system (OS), and Windows® Mobile, Windows® Phone and others. Furthermore, users have a variety of choices as far as the web browser application they can use. Examples include Safari®, Chrome®, FireFox®, Windows Explorer®, and others. This variety of choice presents a difficult challenge for a web/app developer to test products on potential target devices. Traditionally, the developer might have to acquire a test device and spend resources configuring it (for example, by installing a target OS, browser, etc.), as well as a secondary hardware device through which the test device is connected to the developer's local machine, in order to write code and conduct tests on the test device. The sheer variety of possible devices, operating systems, browsers and combinations of them can present a logistical hurdle to the developer. A testing provider can enable a remote test system (RTS), having a multitude of devices for a developer to connect to and conduct tests. The developer can connect to the test system, select a test device, select a configuration (e.g., a particular browser, etc.) and run tests using the selected remote device. The RTS can include a server powering a website or a desktop application, which the developer can use to launch a dashboard for connecting to the RTS and for conducting tests. The dashboard can include a display of the remote device presented to the developer. The RTS can capture developer inputs and input them to the remote device. The RTS mirrors the display of the remote device on the developer's local machine and simultaneously captures the developer's interactions inputted onto the mirrored display and transfers those commands to the remote device. In a typical case, the developer can use a keyboard and mouse to input interactions onto the mirrored display. When the test device is a smart phone device, the RTS translates those input interactions into commands compatible with the smart phone.
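A minimal sketch of such a translation is shown below; the event fields, command names and coordinate handling are hypothetical and do not correspond to any particular device automation API:

def translate_browser_event(event, display_size, device_size):
    # Map a mouse click captured on the mirrored display to a tap at the
    # corresponding coordinates on the remote device's screen, and map a
    # keystroke to a key input command.
    if event["type"] == "click":
        scale_x = device_size[0] / display_size[0]
        scale_y = device_size[1] / display_size[1]
        return {"command": "tap", "x": int(event["x"] * scale_x), "y": int(event["y"] * scale_y)}
    if event["type"] == "keypress":
        return {"command": "key", "text": event["key"]}
    return None  # unrecognized interactions can be ignored or handled elsewhere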
Examples of smart phone input interactions include gestures, pinches, swipes, taps, and others. The remote device display is mirrored on the developer's local machine. In this manner, the developer can experience a seamless interaction with the remote device using the developer's local machine. The RTS can be used both for development of launched and unlaunched products. FIG.1illustrates an example RTS100. Although some embodiments use the RTS100in the context of testing and software development, the RTS100can be used to enable a remote session for any purpose. Testing is merely provided as an example context of usage area of the system and infrastructure of the RTS100. A user102uses a local machine104to launch a browser106to access a dashboard application to interact with the RTS100, connect to a remote device and to conduct tests on the remote device. In some embodiments, the dashboard website/web application may be replaced by a desktop application, which the user102can install on the local machine104. The user102may be a software developer, such as a website developer, web application developer or a mobile application developer. The local machine104, in a typical case, may be a laptop or desktop computer, which the user102can use to write software code, debug, or run tests on a website/web application or mobile application. The user102can enter a uniform resource locator (URL)108in browser106to connect to the dashboard application powered by a server110. The server110can enable the browser106and a remote device114to establish a connection. The RTS100can use the connection for streaming the display of a remote device114onto the browser106in order to mirror the display of the remote device114and present it to the user102. The RTS100can also capture user inputs entered into the mirrored display and input them to the remote device114. The RTS100can include multiple datacenters112in various geographical locations. The datacenters112can include a variety of test devices for the users102to connect with and to conduct tests. In this description, the test devices in datacenters112are referred to as remote devices114, as they are remote, relative to the user102and the user's local machine104. A variety of communication networks116can be used to enable connection between the browser106, the server110and the remote device114. The remote devices114can include various hardware platforms, provided by various manufacturers, different versions of each brand (for example, old, midmarket, new) and optionally various copies of each brand, to enable availability for numerous users102to connect and conduct tests. The RTS100can use a host118connected to one or more remote devices114. In some embodiments, the browser106does not directly communicate with the remote device114. The host118enables communication between the browser106and the remote device114through one or more private and/or public communication networks. The host118can be a desktop, laptop, or other hardware connected with a wired or wireless connection to the remote device114. The hardware used for the host118can depend on the type of the remote device114that it hosts. Examples of host118hardware can include Apple Macintosh® computers for iPhone® and iOS® devices and Zotac® for Android® devices. The RTS100mirrors the display of the remote device114on the browser106, by generating a display120on the browser106. In some embodiments, the display120can be a graphical, or pictorial replica representation of the remote device114. 
For example, if an iPhone®12device is chosen, the display120can be an image of an iPhone®12. The RTS100mirrors the display of the remote device114on the display120by streaming a video feed of the display of the remote device114on the display120. In some embodiments, the video stream used to mirror the display of the remote device114is generated by capturing and encoding screenshots of the display of the remote device114into a video stream feed of high frames per second to give the user102a seamless interaction experience with the display120. Using input devices of the local machine104, the user102can interact with the display120in the same manner as if the remote device114were locally present. The RTS100captures and translates the user interactions to input commands compatible with the remote device114and inputs the translated input commands to the remote device114. The display responses of the remote device114are then streamed to the user102, via display120. In some embodiments, the user102has access to and can activate other displays and menu options, such as developer tools display122. An example usage of the RTS100, from the perspective of the user102, includes the user102opening a browser on the remote device114via menu options provided by the dashboard application. The user102can access the dashboard application via the browser106on the user's local machine104. The RTS100opens the user's selected browser on the remote device114and generates a display of the remote device114and the remotely opened browser on the browser106on the user's local machine104. The user102can then use a mouse to click on a URL field124in the display120, which corresponds to the URL field in the browser on the remote device114. The user102can subsequently enter a URL address in the URL field124. Simultaneously, the user's interactions, such as mouse clicks and keyboard inputs, are captured and translated to the input commands compatible with the remote device114at the datacenter112. For example, the mouse click in the URL field124is translated to a tap on the corresponding location on the display of the remote device114and the keyboard inputs are translated to keyboard inputs of the remote device114, causing the remote device114to open the user requested URL and download the user requested website. Simultaneously, a video stream of the display of the remote device114is sent to and generated on the display120on browser106. In this manner, the user perceives entering a URL in the URL field124and seeing the display120(a replica of the remote device114) open the requested URL. Additional interactions of the user102can continue in the same manner. The user102can use the RTS100in the manner described above to perform manual or automated testing. The display120is a pictorial and graphical representation of the remote device114. The RTS100does not open a copy of the browser opened on the remote device114or conduct simultaneous parallel processes between the remote device114and the local machine104. Instead, the RTS100streams a video feed from the remote device114to generate the display120. Consequently, the user's interactions are inputted to the display120, appearing as if a functioning browser is receiving the interactions, while the RTS100captures, transfers and translates those interactions to the remote device114, where the functioning browser is operating on the remote device114. FIG.2illustrates a diagram200of an example data flow implementation of the RTS100.
The example shown inFIG.2will be described in the context of the user102requesting to start a remote session. The remote session can be used for a variety of purposes. In one example, the remote session can be used to test a web application or a website. The user launches a dashboard application using the browser106, running on the user's local machine104. The dashboard application can provide menu options to the user102to choose initial test session parameters, including a type/brand of a test device, operating system, a browser brand, and an initial test URL to access. The browser106, running the dashboard application, can generate and send a request220for starting a remote session to the server110. The server110can be a central or a distributed server over several geographical locations, enabling access to the RTS100from various locations. The request220can include details, such as a type/brand of a test device, operating system, a browser brand, and an initial test URL to access. In response to the user's request220, the RTS100can select a datacenter112, a test device114, and can dynamically generate a test session identifier (ID). In some embodiments, a communication network is used to enable communication between the browser106and the remote device114. The RTS100can choose a communication initiation server (CIS)202and associate the test session ID with the CIS202. The selected CIS202can be communicated to both the browser106and the remote device114, using an identifier of the selected CIS202or a CIS ID. In some embodiments, the CIS202can help the browser106and the remote device114to establish a peer-to-peer (P2P) communication network to directly connect. Other communication networks can also be used. The server110can provide initial handshake data to both the remote device114and the browser106, in order to establish a communication network. For example, after choosing the CIS202and other initial parameters, the server110can issue a start session response222to the browser106. The start session response222can include details, such as the test session ID and an identifier of the CIS202to be used for establishing communication. The server110can send a session parameter message (SPM)224to the host118. The SPM224can include parameters of the test session, such as the CIS ID, selected device ID, test session ID, browser type, and the requested URL. The host118routes the SPM224via a message226to a communication module (CM)204of the remote device114. The CM204can be a hardware, software or a combination component of the remote device114, which can handle the communication with the browser106. Depending on the type of communication network and protocol used, the structure and functioning of the CM204can be accordingly configured. For example, in some embodiments, the CM204can handle WebRTC messaging, encoding of the screenshots from the remote device114, transmitting them to the browser106and handling the interactions received from the browser106. The browser106, via the start session response222receives the CIS202ID and the test session ID. The CM204, via the message226, receives the same information. The CM204can send a device connection message (DCM)228to the CIS202. The browser106can send a browser communication message (BCM)230to the CIS202. Both DCM228and BCM230use the same test session ID. Therefore, the CIS202can authenticate both and connect them. Once connected, the browser106and the remote device114can exchange communication data and the routes via which they can communicate. 
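The pairing role of the CIS202can be illustrated with the following sketch; the class and method names are hypothetical, and a real implementation would typically delegate the subsequent negotiation to a signaling protocol such as WebRTC's offer/answer exchange:

class CommunicationInitiationServer:
    def __init__(self):
        self.pending = {}  # test session ID -> first endpoint to register

    def register(self, session_id, endpoint):
        # Called once for the DCM228 (remote device) and once for the BCM230 (browser).
        if session_id not in self.pending:
            self.pending[session_id] = endpoint
            return None
        # Both endpoints have presented the same test session ID; return the pair so
        # that each can be handed the other's connection details.
        first = self.pending.pop(session_id)
        return (first, endpoint)

Once the CIS202has matched the two registrations in this manner, the browser106and the remote device114can negotiate the routes over which they will communicate.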
For example, the browser106and the remote device114can indicate one or more intermediary servers that may be used to carry their communication. In some embodiments, Web real-time communication (WebRTC) can be used to enable communication between the remote device114and the browser106, for example, when the remote device114is a smartphone device. In this scenario, the CM204can include, in part, a libjingle module, which can implement the WebRTC protocol handshake mechanisms in the remote device114. The handshake made available through the CIS202allows the remote device114and the browser106to exchange communication data routes and mechanisms, such as traversal using relays around NAT (TURN) servers, session traversal utilities for NAT (STUN) servers, interactive connectivity establishment (ICE) candidates, and other communication network needs. NAT stands for Network Address Translation. Once the communication network between the browser106and the remote device114is established, a plurality of channels can be established between the two. Each channel can in turn include a plurality of connections. For example, the communication network between the browser106and the remote device114can include a video communication channel (VCC)232. The VCC232can include a plurality of connections between the browser106and the remote device114and can be used to transmit a video stream of the display of the remote device114to the browser106. The communication network between the browser106and the remote device114can also include a data communication channel (DCC)234. The DCC234can include a plurality of connections between the browser106and the remote device114and be used to transmit the interactions the user102inputs into the mirrored display of the remote device generated on the browser106. The mirrored display can alternatively be described as a replica display of the remote device114. To generate a mirrored display of the remote device114on the browser106, the captured screenshots from a screen capturing application (SCA)208can be assembled into a video stream and transmitted to the browser106. The process of assembling the screenshots from the SCA208to a video stream may include performing video encoding, using various encoding parameters. Encoding parameters may be dynamically modifiable or may be predetermined. As an example, the available bandwidth in VCC232can vary depending on network conditions. In some embodiments, a frames-per-second encoding parameter can be adjusted based in part on the available bandwidth in the VCC232. For example, if a low bandwidth in VCC232is detected, the video stream constructed from the captured screenshots can be encoded with a downgraded frames-per-second parameter, reducing the size of the video stream, and allowing an interruption-free (or reduced interruption) transmission of the live video stream from the remote device114to the browser106. Another example of dynamically modifying the encoding parameters includes modulating an encoding parameter based on the availability of hardware resources of the remote device, or the capacity of the hardware resources of the remote device114that can be assigned to handle the encoding of the video stream. The CM204can use the hardware resources of the remote device114in order to encode and transmit the video stream to the browser106. For example, CM204can use the central processing unit (CPU) of the remote device114, a graphics processing unit (GPU) or both to encode the video stream.
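One way to modulate the frames-per-second parameter based on the measured channel bandwidth and the load on the encoding hardware is sketched below; the threshold values are purely illustrative assumptions:

def choose_frames_per_second(available_bandwidth_kbps, cpu_usage_percent, base_fps=30, min_fps=15):
    # Start from the base frame rate and back off when the video channel is
    # constrained or when the CPU used for encoding is heavily loaded.
    fps = base_fps
    if available_bandwidth_kbps < 2000:   # illustrative bandwidth threshold
        fps = min(fps, 20)
    if cpu_usage_percent > 80:            # illustrative CPU-load threshold
        fps = min(fps, 20)
    return max(fps, min_fps)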
In some cases, these hardware resources can be in high usage, reducing their efficiency in encoding. The reduction in hardware resource availability or capacity can introduce interruptions in the encoding. In some embodiments, a frame rate sampling parameter of the encoding parameters can be modulated based on the availability or capacity of hardware resources, such as the CPU and/or the GPU of the remote device114that can be assigned to handle the encoding of the video stream. For example, if a high CPU usage is detected, when the CPU is to be tasked with encoding, the CM204can reduce the sampling rate parameter of the encoding, so the CPU is not overburdened and interruptions in the video feed are reduced or minimized. The CM204can also configure the encoding parameters based on selected parameters at the browser106. The browser106receives the video stream via the VCC232, decodes the video stream and displays the video stream in a replica display of the remote device114on the browser106. In some embodiments, a predetermined threshold frames-per-second parameter of the video stream at the browser106can be selected. The predetermined threshold frames-per-second parameter can be based on a preselected level of quality of the video stream displayed on the replica display. For example, in some embodiments, the predetermined threshold frames-per-second parameter at the browser can be set to a value above 25 frames-per-second to generate a seamless and smooth mirroring of the display of the remote device114on the browser106. The CM204can configure the encoding parameters at the remote device114based on the predetermined threshold frames-per-second parameter set at the browser106. For example, the CM204can encode the video stream with a frame rate above 30 fps, so the decoded video stream at the browser106has a frames-per-second parameter above 25 fps. In some embodiments, the screen capturing application (SCA)208can be used to capture screenshots from the remote device114. The SCA208can differ from device to device and its implementation and configuration can depend on the processing power of the device and the mandates of the operating system of the device regarding usage of the CPU/GPU in capturing and generating screenshots. For example, in an Android® environment, the Android® screen capture application programming interfaces (APIs) can be used. In iOS® devices, iOS® screen capture APIs can be used. Depending on the processing power of the selected remote device114, the SCA208can be configured to capture screenshots at a predefined frames per second (fps) rate. Additionally, the SCA208can be configured to capture more screenshots at the remote device114than the screenshots that are ultimately used at the browser106. This is true in scenarios where some captured screenshots are dropped due to various conditions, such as network delays and other factors. For example, in some embodiments, the SCA208can capture more than 30 fps from the display of the remote device114, while 20 fps or more are able to make it to the browser106and be shown to the user102. In the context of packaging and assembling the captured screenshots into a video stream transmitted to the browser106, screenshots that are received out of order may need to be dropped to maintain a fluid experience of the remote device114to the user102.
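The following sketch illustrates, under assumed and purely illustrative thresholds, how the frames-per-second encoding parameter described above might be modulated from measured bandwidth and hardware load; the handling of out-of-order screenshots is discussed next.

// Sketch of how the CM 204 might modulate the frames-per-second encoding parameter
// from measured VCC bandwidth and hardware load. All thresholds are illustrative only.

interface EncodingConditions {
  bandwidthKbps: number;   // estimated available bandwidth on the VCC 232
  cpuUtilization: number;  // 0..1, load on the CPU/GPU assigned to encoding
}

function selectFramesPerSecond(c: EncodingConditions): number {
  let fps = 30;                               // target that keeps the browser above ~25 fps after decode
  if (c.bandwidthKbps < 1500) fps = 20;       // downgrade on a constrained channel
  if (c.bandwidthKbps < 500) fps = 10;
  if (c.cpuUtilization > 0.85) {
    fps = Math.min(fps, 15);                  // avoid overburdening the encoder hardware
  }
  return fps;
}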
For example, the captured screenshots are streamed over a communication network to the browser106, using various protocols, including internet protocol suite (TCP/IP), user datagram protocol, and/or others. When unreliable transmission protocols are used, it is possible that some screenshots arrive at browser106out of order. Out of order screenshots can be dropped to maintain chronology at the video stream displayed on browser106. Some captured screenshots might simply drop as a result of other processing involved. For example, some screenshots may be dropped, due to lack of encoding capacity, if heavy animation on the remote device114is streamed to the browser106. Consequently, in some embodiments, more screenshots are captured at the remote device114than are ultimately shown to the user102. The upper threshold for the number of screenshots captured at the remote device114can depend, in part, on the processing power of the remote device114. For example, newer remote devices114can capture more screenshots than older or midmarket devices. The upper threshold for the number of screenshots can also depend on an expected bandwidth of a communication network between the remote device114and the browser106. The SCA208can be a part of or make use of various hardware components of the remote device114, depending on the type of the selected remote device114, its hardware capabilities and its operating system requirements. For example, some Android® devices allow usage of the device's graphical processing unit (GPU), while some iOS® devices limit the usage of GPU. For remote devices114, where the operating system limits the use of GPU, the SCA208can utilize the central processing unit (CPU) of the remote device114, alone or in combination with the GPU to capture and process the screenshots. The SCA208can be implemented via the screen capture APIs of the remote device114or can be independently implemented. Compared to command line screen capture tools, such as screencap command in Android®, the SCA208can be configured to capture screenshots in a manner that increases efficiency and reliability of the RTS100. For example, command line screenshot tools, may capture high resolution screenshots, which can be unnecessary for the application of the RTS100, and can slow down the encoding and transmission of the video stream constructed from the screenshots. Consequently, the RTS100can be implemented via modified native screenshot applications, APIs or independently developed and configured to capture screenshots of a resolution suitable for efficient encoding and transmission. As an example, using command line screen capture tools, a frames-per-second rate of only 4-5 can be achieved, which is unsuitable for mirroring the display of the remote device114on the browser106in a seamless manner. On the other hand, the described embodiments achieve frames-per-second rates of above 20 frames per second. In some embodiments, the CM204can down-sample the video stream obtained from the captured screenshots, from for example, a 4K resolution to a 1080P resolution. Still, in older devices, the down-sampling may be unnecessary, as the original resolution may be low enough for efficient encoding and transmission. In some embodiments, the remote device114and the browser106can connect via a P2P network, powered by WebRTC. The CM204can then include a modified libjingle module. 
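A minimal sketch of the out-of-order handling described above, assuming each captured screenshot carries a monotonically increasing sequence number assigned at capture time (an illustrative assumption):

// Sketch of dropping out-of-order screenshots so the replica display stays chronological.

interface CapturedFrame {
  sequence: number;      // assumed to be assigned by the SCA 208 at capture time
  capturedAtMs: number;
  data: Uint8Array;      // encoded image payload
}

let lastRenderedSequence = -1;

function acceptFrame(frame: CapturedFrame): boolean {
  if (frame.sequence <= lastRenderedSequence) {
    return false;        // arrived late or duplicated: drop to preserve chronology
  }
  lastRenderedSequence = frame.sequence;
  return true;
}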
In the context of the RTS100, the relationship between the browser106and the remote device114is more of a client-server type relationship than a pure P2P relationship. An example of a pure P2P relationship is video teleconferencing, where both parties transmit video to one another in equal and substantial size. In the context of the RTS100, the transfer of video is from CM204to the browser106, and no video is transmitted from the browser106to the CM204. Therefore, compared to a P2P libjingle, the CM204and its libjingle module, as well as communication network parameters between the browser106and the remote device114, can be modified to optimize for the transfer of video from the remote device114to the browser106. An example modification of libjingle includes modifying the frames-per-second rate in favor of video transfer from the remote device114. Other aspects of encoding performed by the libjingle module of the CM204can include adding encryption and/or other security measures to the video stream. When WebRTC is used to implement the communication network between the remote device114and the browser106, the libjingle module of the CM204can encode the video stream in WebRTC format. WhileFIG.2illustrates messaging lines directly to the CM204, this is not necessarily the case in all embodiments. In some implementations, the DCM228, BCM230, VCC232, and DCC234can be routed through the host118. The communication network between the remote device114and the browser106, having channels VCC232and DCC234, can be implemented over the internet via a WiFi connection at the datacenter112where the remote device114is located, or via an internet-over-universal serial bus (USB) connection via the host118, or a combination of wired or wireless communication to the internet. In some cases, one or more methods of connecting to the internet are used as a backup to a primary mode of connection to the internet and establishing the communication network between the remote device114and the browser106. The CM204can receive, via the DCC234, user interactions inputted to the replica display on the browser106. The CM204can route the received user interactions to an interaction server206for translation to a format compatible with the remote device114. In a typical case, the user102runs the browser106on a laptop or desktop machine and inputs commands and interacts with the replica display on the browser106, using the input devices of the local machine104. Input devices of the local machine104generate mouse or keyboard user interactions, which are captured and transferred to the CM204. In some embodiments, JavaScript® can be used to capture user interactions inputted in the replica display on the browser106. The captured user interactions are then encoded in a format compatible with the format of the communication network established between the browser106and the remote device114. For example, if WebRTC is used, the user interactions are formatted in the WebRTC format and sent over the DCC234to the CM204. The CM204decodes and transfers the user interactions to the interaction server206. The interaction server206translates the mouse and keyboard user interactions to inputs compatible with the remote device114. For example, when the remote device114is a mobile device, such as a smartphone or tablet having a touch screen as an input device, the interaction server206can translate keyboard and mouse inputs to gestures, swipes, pinches, and other commands compatible with the remote device114.
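The following sketch illustrates both halves of the interaction path described above: browser-side capture of mouse, wheel, and keyboard events for transmission over the DCC234, and the coordinate-based translation of a click into a tap of the kind the interaction server206performs. The event payload shape and the example resolutions are assumptions for illustration only; the coordinate handling is detailed further in the following paragraph.

// Browser-side capture of user interactions on the replica display, sent over the DCC 234.
// The payload fields are an assumed format, not one defined by the embodiments.
function forwardInteractions(replica: HTMLElement, dcc: RTCDataChannel): void {
  replica.addEventListener("mousedown", (e) => {
    dcc.send(JSON.stringify({ kind: "mouse", action: "down", x: e.offsetX, y: e.offsetY }));
  });
  replica.addEventListener("wheel", (e) => {
    dcc.send(JSON.stringify({ kind: "scroll", deltaY: e.deltaY }));
  });
  window.addEventListener("keydown", (e) => {
    dcc.send(JSON.stringify({ kind: "key", action: "down", key: e.key }));
  });
}

// Translation of a mouse click into a tap on the remote device, using coordinate
// multipliers derived from the two display resolutions.
interface Resolution { width: number; height: number; }

function translateClickToTap(
  clickX: number,
  clickY: number,
  replica: Resolution,   // resolution of the replica display on the browser
  device: Resolution,    // resolution of the real display of the remote device
): { gesture: "tap"; x: number; y: number } {
  const mx = device.width / replica.width;
  const my = device.height / replica.height;
  return { gesture: "tap", x: Math.round(clickX * mx), y: Math.round(clickY * my) };
}

// Example: a click at (200, 400) on a 390x844 replica maps to roughly (615, 1232)
// on a 1200x2600 device display.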
The translation of user interactions to remote device inputs also takes advantage of the coordinates of the inputs. For example, a meta data file accompanying the user interactions can note the coordinates of the user interactions on the replica display on the browser106. The meta data can also include additional display and input device information of the user local machine104and the replica display on the browser106. The interaction server206also maintains or has access to the resolution and setup of the display of the remote device114and can make a conversion of a coordinate of an input on the replica display versus a corresponding coordinate on the real display of the remote device114. For example, in some embodiments, the interaction server206can generate coordinate multipliers to map a coordinate in the replica display on the browser106to a corresponding coordinate in the real display of the remote device114. The coordinate multipliers can be generated based on the resolutions of the replica display and the real display. The interaction server206then inputs the translated user interactions to the remote device114. The display output of the remote device114responding to the input of the translated user inputs are captured via the SCA208, sent to the CM204, encoded in a format compatible with the communication network between the remote device114and the browser106(e.g., WebRTC) and sent to the browser106. The browser106decodes the received video stream, displaying the video stream in the replica display on the browser106. The data flow over the DCC234and the VCC232happen simultaneously or near simultaneously, as far as the perception of the user102, allowing for a seamless interaction of the user102with the replica display, as if the remote device114were present at the location of the user102. FIG.3illustrates a flow chart of a method300of enabling a remote session at a first location using a remote device at a second location. The method300utilizes the RTS100as described above. The method300starts at step302. At step304, the browser106at a first location issues a request220to start a remote session at the first location, using a remote device at the second location. The request220can include a type/brand of a remote device, a browser to be opened on the remote device and a test URL to be accessed on the remote device. At step306, the request220is received at a dashboard application of the RTS100. The dashboard application may be locally installed, as a desktop application or may be a web application, accessible via a URL entered in the browser106. The dashboard application can be powered by a server110. At step308, the server110can select a remote device114from a plurality of remote devices at the second location. The selection of the remote device is based on the user choice in the request220. The selected remote device114can launch the browser type/brand, as indicated in the request220. The selected remote device114can access the test URL, as indicated in the request220. At step310, the server110selects a communication initiation server (CIS)202to allow the browser106and the selected remote device114to establish a connection. At step312, both the browser106and the remote device114connect to the CIS202, using the same test session ID. At step314, the browser106and the remote device114, via the CIS202, exchange parameters of a communication network between the two. At step316, the browser106and the remote device114establish the communication network, using the exchanged parameters. 
The exchanged parameters can include the routes, ports, gateways, and other data via which the browser106and the remote device114can connect. The communication network between the two includes a video channel, VCC232and a data channel, DCC234. At step318, a replica display of the selected remote device114is generated in the browser106. The browser106can receive, via the video channel, a video stream of the display output of the remote device114and use that to generate the replica display. At step320, user interactions with the replica display are captured and transmitted, via the data channel DCC234, to the remote device114. At step322, the SCA208captures screenshots of the display screen of the remote device114. The CM204uses the captured screenshots to generate a video stream of the screen of the remote device114. The CM204transmits, via the video channel VCC232, the video stream to the browser106, which uses the video stream to generate the replica display. The method300ends at step324. FIG.4illustrates a flowchart of a method400of an example operation of the RTS100. The method400starts at step402. At step404, a request to start a remote session using a remote device is received at a dashboard application, powered by a server110. The server110selects a CIS202, a remote device114and issues a response to the browser106. The response includes an identifier of the CIS202and an identifier of the test session. At step406, the browser106and the remote device114establish a communication network and connect to one another using the communication network. The remote device114connects to the communication network via a host118. At step408, the CM204generates a video stream from the screenshots captured by the SCA208, based on one or more encoding parameters. An example of the encoding parameters includes a frames-per-second parameter of the encoding. At step410, the CM204modulates the encoding parameters based on one or more factors, including bandwidth of the VCC232, and available capacity of hardware resources of the remote device114for encoding operations, including capacity of CPU and/or GPU of the remote device114. The CM204can also modulate the encoding parameters based on a predetermined minimum frames-per-second threshold for the video stream decoded and displayed at the browser106. At step412, the CM204transmits the video stream to the browser106to display. The method400ends at step414. FIG.5illustrates a flowchart of a method500of an example operation of the RTS100. The method starts at step502. At step504, a communication network is established between the browser106and the remote device114. In some embodiments, the communication network can be a P2P network using WebRTC. The CM204in the remote device114can handle the translation, encoding and data packaging for transmission over the communication network. At step506, a data channel DCC234is established using the communication network. The data channel can be used to transmit user interactions entered into a replica display in browser106to the remote device114. At step508, the user interactions with the replica display on browser106are captured and transmitted to the CM204. In some embodiments, the DCC234is routed through the host118and, in other embodiments, a WiFi network at datacenter112where the remote device114is located can be used to connect the CM204and the browser106. The CM204transfers the user interactions to the interaction server206.
At step510, the interaction server206translates the user interactions to user inputs compatible with the remote device114. For example, if the remote device114is a mobile computing device, such as a smartphone or smart tablet, the interaction server206translates keyboard and mouse inputs to touch screen type inputs, such as taps, swipes, pinches, double taps, etc. The interaction server206may use coordinate multipliers to translate the location of a user interaction to a location on the display of the remote device114. The coordinate multipliers are derived from the ratio of the resolution and/or size difference between the replica display on the browser106and the display screen of the remote device114. At step512, the user inputs are inputted into the remote device114at the corresponding coordinates. The remote device's display output response to the user inputs is captured via the SCA208, turned into a video stream and transmitted to the browser106. The browser106displays the video stream in the replica display. The method500ends at step514. Example Implementation Mechanism—Hardware Overview Some embodiments are implemented by a computer system or a network of computer systems. A computer system may include a processor, a memory, and a non-transitory computer-readable medium. The memory and non-transitory medium may store instructions for performing methods, steps and techniques described herein. According to one embodiment, the techniques described herein are implemented by one or more special-purpose computing devices. The special-purpose computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques. The special-purpose computing devices may be server computers, cloud computing computers, desktop computer systems, portable computer systems, handheld devices, networking devices or any other device that incorporates hard-wired and/or program logic to implement the techniques. For example,FIG.6is a block diagram that illustrates a computer system600upon which an embodiment can be implemented. Computer system600includes a bus602or other communication mechanism for communicating information, and a hardware processor604coupled with bus602for processing information. Hardware processor604may be, for example, a special-purpose microprocessor optimized for handling audio and video streams generated, transmitted or received in video conferencing architectures. Computer system600also includes a main memory606, such as a random access memory (RAM) or other dynamic storage device, coupled to bus602for storing information and instructions to be executed by processor604. Main memory606also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor604. Such instructions, when stored in non-transitory storage media accessible to processor604, render computer system600into a special-purpose machine that is customized to perform the operations specified in the instructions.
Computer system600further includes a read only memory (ROM)608or other static storage device coupled to bus602for storing static information and instructions for processor604. A storage device610, such as a magnetic disk, optical disk, or solid state disk is provided and coupled to bus602for storing information and instructions. Computer system600may be coupled via bus602to a display612, such as a cathode ray tube (CRT), liquid crystal display (LCD), organic light-emitting diode (OLED), or a touchscreen for displaying information to a computer user. An input device614, including alphanumeric and other keys (e.g., in a touch screen display) is coupled to bus602for communicating information and command selections to processor604. Another type of user input device is cursor control616, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor604and for controlling cursor movement on display612. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. In some embodiments, the user input device614and/or the cursor control616can be implemented in the display612for example, via a touch-screen interface that serves as both output display and input device. Computer system600may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system600to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system600in response to processor604executing one or more sequences of one or more instructions contained in main memory606. Such instructions may be read into main memory606from another storage medium, such as storage device610. Execution of the sequences of instructions contained in main memory606causes processor604to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operation in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical, magnetic, and/or solid-state disks, such as storage device610. Volatile media includes dynamic memory, such as main memory606. Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, and EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge. Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus602. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor604for execution. 
For example, the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system600can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus602. Bus602carries the data to main memory606, from which processor604retrieves and executes the instructions. The instructions received by main memory606may optionally be stored on storage device610either before or after execution by processor604. Computer system600also includes a communication interface618coupled to bus602. Communication interface618provides a two-way data communication coupling to a network link620that is connected to a local network622. For example, communication interface618may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface618may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface618sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information. Network link620typically provides data communication through one or more networks to other data devices. For example, network link620may provide a connection through local network622to a host computer624or to data equipment operated by an Internet Service Provider (ISP)626. ISP626in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet”628. Local network622and Internet628both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link620and through communication interface618, which carry the digital data to and from computer system600, are example forms of transmission media. Computer system600can send messages and receive data, including program code, through the network(s), network link620and communication interface618. In the Internet example, a server630might transmit a requested code for an application program through Internet628, ISP626, local network622and communication interface618. The received code may be executed by processor604as it is received, and/or stored in storage device610, or other non-volatile storage for later execution. Video Feed for Generating a Mirrored or Replica Display Some remote devices114do not provide a high-performance screenshot capturing API, suitable for efficient operations of the RTS100. On the other hand, some operating systems of the remote devices114can support a video capturing API for the purposes of recording and/or broadcasting the display of the remote device114in real time. In these scenarios, the SCA208can be implemented using a video capturing API of the operating system of the remote device114. As an example, for some iOS® devices, when the SCA208is implemented, using a native screenshot application, the FPS achieved on browser106can drop to as low as 5 FPS in some cases. 
At the same time, iOS® in some versions, provides a video capturing facility, such as ReplayKit, which can be used to implement the operations of the SCA208. When a video capturing API is used, corresponding modifications to the data flow and operations of the RTS100are also implemented as will be described below. FIG.7illustrates an example data flow diagram700of the operations of the RTS100, using a video capturing API for implementing the SCA208. The diagram700is provided as an example. Persons of ordinary skill in the art can modify the diagram700, without departing from the spirit of the disclosed technology. Some platforms and operating systems may provide an API for capturing a video stream of the remote device114. For example, iOS® provides such an API in ReplayKit. The captured video stream can be used to replicate the display of the remote device114in lieu of using static screenshots to generate the video stream. In some cases, the SCA208can be implemented using the video capturing API provided by the remote device114. For example, a launcher application can include a broadcaster extension, which can output a video stream of the display of the remote device114. In other embodiments, a broadcast extension, broadcasting the video stream, can be an extension to a launcher application, which the host118uses to control the operations of the remote device114. Various implementations are possible. Some are described below. At step702, the browser106can send a request220to start a remote session to the server110. At step704, the server110can respond by sending a response222to the browser106. At step706, the server110can send a SPM224to the host118. At step708, the host118can send a message226to the CM204. The steps702-708enable the remote device114and the browser106to login to a communication initiation server (CIS)202with the same credentials, such as a common remote session identifier, thereafter, exchange communication network parameters, and establish communication using the communication network. At step710, the CM204can signal a broadcaster712to launch and begin capturing a video stream of the display of the remote device114. As described earlier, the broadcaster712can be a stand-alone application or can be an extension to a launcher application that the host118runs on the remote device114to perform the operations of the RTS100. For example, when ReplayKit is used, the ReplayKit API provides a broadcaster extension which can run as an extension of an application and provide a video stream of the display of the remote device114to that application. At this stage, the DCM228and the BCM230have already occurred between the browser106and the CM204, allowing the browser106and the CM204to exchange network communication parameters via the CIS202. The network communication parameters can include network pathways, servers, and routes via which the two can establish one or more future communication networks. The browser106and the CM204establish a communication network and connect using these network communication parameters. At step714, the CM204can establish a DCC234with the browser106. The DCC234can be used in the future operations of the RTS100to capture user interactions on the replica display generated on the browser106and transmit them to the remote device114. 
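The broadcaster-based capture flow, the launch signal at step710together with the session-data exchange and channel setup described in the following paragraph, can be summarized with the protocol-level TypeScript sketch below. A real broadcaster would typically be a platform-native component (for example, a ReplayKit broadcast extension on iOS®); the message shapes and the queryHost/openVideoChannel helpers are assumptions used only for illustration.

// Protocol-level sketch of the broadcaster flow: the broadcaster asks the host for session
// and user data, then uses it to open the VCC and begin streaming the display.

interface SessionDataQuery { broadcasterId: string; }

interface SessionDataResponse {
  testSessionId: string;
  userId: string;
  vccEndpoint: string;     // where to establish the video communication channel (assumed field)
}

declare function queryHost(q: SessionDataQuery): Promise<SessionDataResponse>;
declare function openVideoChannel(endpoint: string, sessionId: string): Promise<void>;

async function startBroadcast(broadcasterId: string): Promise<void> {
  const session = await queryHost({ broadcasterId });                   // query/response exchange with the host
  await openVideoChannel(session.vccEndpoint, session.testSessionId);   // establish the VCC
  // From here the captured video stream of the remote device display is encoded and sent on the VCC.
}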
At step716, the host can extract a requested URL and a type of browser from the user's initial request (sent at step702) and launch the chosen browser on the remote device114, with a request for the remote device browser to access the user-requested URL. At step718, the broadcaster712can query the host118for session and user data to determine where and how to establish a video channel to broadcast the video stream feed of the display of the remote device114. At step720, the host118responds to the broadcaster712with session and user data. The session and user data can include an identifier of the session, a user identifier, network details, gateways and ports, pathways or other information related to the remote session and/or the communication network established between the CM204and the browser106. At step722, the broadcaster712can use the session and/or user data, received at step720, to establish the VCC232and begin broadcasting the video stream of the display of the remote device114to the browser106. A dashboard application, executable on and/or by the browser106, can generate a replica display of the remote device114on the browser106and use the video stream received on the VCC232to populate the replica display with a live video feed of the display of the remote device114. In some implementations, the CM204can set up or modify the encoding parameters of the video from the broadcaster712. For example, the CM204can be configured to determine the bandwidth of the VCC232and modify the FPS encoding parameter of the video stream to increase the likelihood of an efficient, stable and/or performant video stream on the browser-end. The dashboard application executable on the browser106can decode the video received on the VCC232and use the decoded video to generate the replica display on the browser106. Other examples of the CM204modifying the encoding parameters of the video sent on the VCC232are described above in relation to the previous embodiments. The CM204can apply the same techniques to the embodiments where a broadcaster712is used. As described earlier, having the VCC232consume a video stream, via the broadcaster712, can offer advantages, such as more efficient encoding and a higher, more stable FPS performance. Multisession Mode In some applications of the RTS100, the user102may desire to test and/or develop an application, for example a web application, and/or a mobile application on multiple remote devices114simultaneously, and observe the display of each remote device114on the browser106simultaneously. In this scenario, the user102can test and develop an application on multiple platforms and devices at the same time, receiving a display of each device on the user's browser106. This can substantially increase the efficiency of test and development using the RTS100. As an example, in the context of users developing financial applications, the ability to test multiple devices simultaneously offers substantial efficiencies. Many governments impose manual testing requirements on financial applications that operate in financial markets and institutions. In this scenario, the user102may be a developer tasked with certifying the compliance of an application with financial and governmental regulations on multiple devices and platforms. Such a user can benefit from receiving a display feed of multiple remote devices114and observing the behavior of the application in those devices simultaneously.
For example, as part of testing, the user may have to enter data into the application and observe the behavior of the application in response to the entered data. Testing and development would be more efficient if the user entered the test in one application and on one remote device114and the same input was synced across multiple remote devices114that the user may wish to simultaneously test. Therefore, the user102can benefit and realize efficiencies if the remote devices114can receive and respond to the interactions with the user102in a synchronized manner. Examples include the user102scrolling a webpage, clicking on an element, and entering a login credential. The ability to test multiple remote devices simultaneously in a synchronized manner can be referred to as using the RTS100in “multisession mode,” or enabling the “multisession” mode in the RTS100. FIG.8illustrates an environment800of multisession operations of the RTS100. The user102is selecting to test or develop the web application802in multisession mode. The user can select any number of remote devices114to conduct test and development activities. Choosing two or more remote devices114can trigger the multisession mode. In the example shown, the user102has selected three remote devices114. The remote devices114can be from any datacenter112. The RTS100can assign remote devices114from the same datacenter112or from different datacenters112and/or a combination of one or more datacenters112. In some cases, the remote devices assigned to user102for multisession can be from datacenters in different countries and/or even different continents. The user102can connect with the selected remote devices114, via the embodiments described above. For example, a data channel and a video channel can be established between each remote device114and the local computer of the user102. In some embodiments, an audio channel can also be established between the remote device114and the local computer of the user102. For ease of discussion, the embodiments of multisession are described where the web application802is a webpage and the user102utilizes the browser106to conduct multisession testing. However, the user can also test a mobile or other programming application in the same or similar manner as described herein with respect to the web application802. Furthermore, in some embodiments, the user102can utilize a desktop application on the user's local machine to access the RTS100and conduct multisession testing. A video feed of each selected remote device114is used to generate a mirrored display of each selected remote device on the browser106. In this manner, whatever is displayed on the screen of the remote device114is also displayed on the browser106. Furthermore, the RTS100can generate the mirrored display in a shell or skin or perimeter graphics resembling the selected remote device114to provide a visual aid to the user102as to which remote device the user102is testing and viewing on the browser106. For example, if the selected remote device114is an iPhone®, the mirrored display generated on the browser106can be presented to the user102in a shell of the same iPhone®. Other methods of identification, such as texts, graphics or other visual identifiers can also be used, in addition to or in lieu of replicating the selected remote device114, to distinguish the selected remote devices in a multisession mode. In the example shown, the user102has chosen three remote devices114. 
The display of each remote device114is mirrored on the browser106, generating mirrored displays804,806,808. For ease of illustration three remote device mirrored displays are shown, but a multisession mode can be enabled with only two remote devices114or with more than three remote devices114. The user102can interact with the mirrored displays804,806,808, via any input devices or hardware available at the local machine of the user. Example input devices can include keyboard, mouse, and touch screen. The user interactions with the mirrored displays are captured and transmitted to the remote devices114. As described earlier, the user interactions are translated to gestures and/or inputs compatible with the remote device114and inputted to the remote device114. In multisession mode, the user interactions with one mirrored display are synced with the other mirrored displays in the multisession. For example, if the user102scrolls the web application802in the mirrored display806, the web application802also scrolls in the mirrored displays804and808. If the user102enters a login credential on the mirrored display804, those credentials are also synced and displayed simultaneously on the other mirrored displays806,808, as the user102types them in an appropriate webpage field of the web application802. FIG.9illustrates a block diagram900of operations of multisession mode of the RTS100. For ease of discussion, only two remote devices114in the multisession of the diagram900are shown, but the same technology can be used to enable three or more remote devices114in a multisession mode. The user102selects a webpage902to test in the browser106, using a multisession mode with two remote devices904and906. In this scenario, a connection session between the remote devices904,906and the browser106is established. The connection session includes establishing a data channel912and a video channel914between the remote device904and the browser106. The connection session also includes a data channel916and a video channel918between the remote device906and the browser106. The video channel914broadcasts a video feed of the display of the remote device904to the browser106, generating the mirrored display908. The video channel918broadcasts a video feed of the display of the remote device906to the browser106, generating the mirrored display910. In some embodiments, a multisession mode can use coordinate mapping to perform simultaneous testing and synced interactions between multiple remote devices. In this approach, when the user102enters an interaction in a mirrored display, the coordinates of the location of the interactions on the corresponding remote device are determined. Then corresponding locations in the displays of the other remote devices in the multisession are determined and the same interaction is entered in each remote device, via their respective data channels. In other words, the coordinates and the type of interactions are transmitted to each remote device via their respective data channels. In some cases, the coordinate mapping approach can be challenging, as the remote devices in a multisession can vary in specification, display resolution, size and other characteristics that complicate coordinate mapping between multiple devices. Consequently, in some embodiments, coordinate mapping can be replaced or augmented by use of a sync server. The sync server receives or otherwise detects user interactions on one of the remote devices and serves a synced output to all remote devices in a multisession.
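A minimal sketch of the coordinate-mapping approach described above, in which one interaction is re-scaled and fanned out to every remote device over its own data channel; the session and payload shapes are illustrative assumptions. The sync-server alternative introduced above is sketched after the next passage.

// Fan an interaction entered on one mirrored display out to every remote device in the
// multisession, re-mapping coordinates per device via its own coordinate multipliers.

interface DisplaySize { width: number; height: number; }

interface RemoteDeviceSession {
  deviceId: string;
  display: DisplaySize;          // real display resolution of this remote device
  dataChannel: RTCDataChannel;   // DCC toward this remote device
}

function fanOutInteraction(
  clickX: number,
  clickY: number,
  sourceReplica: DisplaySize,          // resolution of the mirrored display the user touched
  sessions: RemoteDeviceSession[],
): void {
  for (const s of sessions) {
    const x = Math.round(clickX * (s.display.width / sourceReplica.width));
    const y = Math.round(clickY * (s.display.height / sourceReplica.height));
    s.dataChannel.send(JSON.stringify({ gesture: "tap", x, y }));
  }
}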
The user102enters user interactions to one of the mirrored displays908,910. For example, the user102can enter interactions with the mirrored display908. The user interactions entered in the mirrored display908are transmitted via the data channel912to the remote device904. The RTS100can determine the coordinates and/or other indicator of the location of user interactions with the mirrored display908. The RTS100further can determine the user interface (UI) element upon which the user interactions are entered. For example, in the case of the webpage902, the RTS100can determine an HTML element upon which the user interactions are entered. The user interactions sent to the remote device904can be detected and/or otherwise transmitted to the sync server920, along with the identity of the UI element upon which the user interactions are entered. In other embodiments, the functionality of determining the UI element corresponding to the coordinates and/or location of a user interaction can be implemented as part of the sync server920functionality and/or in other parts of the RTS100. The sync server920can generate a synced version of the output and/or the result of the user interaction on the webpage902and serve the synced version to all remote devices in the multisession (the remote device906in this example). If the user102starts interacting with the mirrored display910, the same operations are performed with the roles of the remote devices904,906reversed, where the user interactions are sent to the remote device906. The sync server920detects and/or otherwise receives these interactions, coordinates and/or UI elements upon which the user interactions are entered and serves a synced version of the output, in this case the webpage902, to all remote devices in the multisession, including the remote device904. The sync server920serves a synced version of the output to all remote devices in a multisession and all remote devices broadcast their displays via their respective video channels back to the browser106. In this manner, the user102observes a synced display in each mirrored display908,910. In the case of a webpage902as the test application, a browser local to each remote device and running on each remote device can render the output as received from the sync server920, suitable to the remote device in which the browser is running. Therefore, each remote device receiving an input from the sync server920renders the webpage902on the remote device correctly. In this manner, the need for coordinate mapping from one remote device to another is alleviated. Each remote device, receiving the same copy of the webpage902from the sync server920renders the webpage902on its display correctly. The same dynamic can be applicable when testing programming applications other than websites. The sync server can provide the same copy of the output to all remote devices, and each remote device can render the output correctly according to the internal facilities and specifications of each remote device. In some embodiments, the sync server920serving the webpage902to each remote device can perform maintaining and updating operations, where the sync server920maintains and updates the same copy of the webpage902on each remote device in real time. In this manner, changes in one remote device reflect, in real time, in the other remote devices in the multisession. 
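The following sketch illustrates, under assumed shapes and an abstracted transport, the two pieces of the sync-server approach described above: resolving the HTML element under a translated coordinate in the remote device's browser (using the standard DOM elementFromPoint API and an illustrative selector path), and a sync server that records the interaction against that element and serves the synced state to every device in the multisession. The Connection interface, the state model, and the form-post forwarding are assumptions, not a defined implementation.

// Remote-device browser side: resolve the HTML element under a translated coordinate,
// producing a simple selector path the sync server can use to identify the element.
function describeElementAt(x: number, y: number): string | null {
  const el = document.elementFromPoint(x, y);
  if (!el) return null;
  const parts: string[] = [];
  for (let node: Element | null = el; node && node !== document.body; node = node.parentElement) {
    const id = node.getAttribute("id");
    parts.unshift(id ? `#${id}` : node.tagName.toLowerCase());
    if (id) break;   // an id is specific enough to stop the walk
  }
  return parts.join(" > ");
}

// Sync-server side: record the interaction against the identified element and serve the
// synced state to every connected device.
interface Connection { deviceId: string; send(payload: string): void; }

interface InteractionEvent {
  sourceDeviceId: string;
  uiElementSelector: string;            // e.g. the selector path produced above
  action: "click" | "input" | "scroll";
  value?: string;                       // typed text, scroll offset, etc.
}

class SyncServer {
  private connections: Connection[] = [];
  private pageState: Record<string, string> = {};   // selector -> last applied action/value

  addDevice(conn: Connection): void {
    this.connections.push(conn);
  }

  handleInteraction(event: InteractionEvent): void {
    this.pageState[event.uiElementSelector] = `${event.action}:${event.value ?? ""}`;
    this.broadcast(JSON.stringify({ type: "synced-state", state: this.pageState }));
  }

  // For interactions that require the host of the webpage (e.g. submitting login credentials),
  // forward a request and serve the host's response as the synced copy of the page.
  // Uses the standard fetch API, available natively in modern runtimes.
  async forwardToHost(hostUrl: string, formData: Record<string, string>): Promise<void> {
    const response = await fetch(hostUrl, {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body: new URLSearchParams(formData).toString(),
    });
    this.broadcast(await response.text());   // each device's browser renders the updated page locally
  }

  private broadcast(payload: string): void {
    for (const conn of this.connections) conn.send(payload);
  }
}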
While the above embodiments are described in terms of testing a webpage902in a multisession environment, the same or similar technology can be applied to testing other programming applications. For example, the sync server920and/or components or processes of the RTS100can determine the UI elements upon which the user interactions are operating and serve a synced version of the programming application, with the user interactions implemented, to all remote devices in a multisession. Some user interactions can include a request that can be routed to a host924hosting the webpage902. The sync server920can route the request via a network922to the host924. The network922can be the Internet, a public or private network, and/or a combination of various networks. Similarly, the host924can be a public and/or private host. The sync server920receives a response from the host924and uses the response to serve a synced copy of the webpage902to all remote devices in a multisession. An example of a user interaction triggering a request/response type operation in the multisession is "entering login credentials" or "clicking on a link" in the webpage902. Not every user interaction triggers a request/response type of operation requiring communication with the host924. In some embodiments, a user interaction can be a command for a scrolling action. In this scenario, the sync server920can perform a corresponding scrolling action on the synced output, where the browser of each remote device performs a corresponding scrolling action on the webpage902and displays the result on its respective screen. Since the remote devices in a multisession can be of different sizes, orientations, and/or resolutions, the displayed result of the scrolling can be different on each remote device. In the case of testing a webpage using multisession, such as the webpage902, the webpage can be loaded by a browser local to each remote device. The browser in each remote device can access the sync server920via the same uniform resource locator (URL) address. As the sync server920detects, captures, or receives user interactions from a remote device, the sync server920updates the webpage902. Each browser in each remote device in a multisession, accessing the sync server920via the same URL, receives the same copy of the webpage902updated with the user interactions received from the other remote devices in the multisession. The display of each remote device is broadcast to the browser106, thereby displaying a synced version of the webpage902on the browser106. FIG.10illustrates a flowchart of a method1000of performing a multisession mode. The method starts at step1002. At step1004, a connection session between the browser106and two or more remote devices114is established. Establishing the connection session can include establishing various communication channels between the remote devices and the browser106, including data channels and video channels. The video channels can be used to broadcast or stream a video feed of the display of the remote devices114to the browser106. The data channels can be used to transmit user interactions from the browser106to the remote devices114. At step1006, mirrored displays of the remote devices114are generated on the browser106by streaming a video feed of each remote device114to the browser106. The mirrored displays can be generated in a manner to replicate an image of the remote device114from which they originate.
At step1008, user interactions with a mirrored display are captured and transmitted to the corresponding remote device114via the data channel of that remote device114. At step1010, a UI element upon which the user interactions are performed is determined. In some embodiments, determining the UI element can include determining the coordinates of where the user interactions are received on a mirrored display and mapping the mirrored display coordinates to a coordinate on the corresponding remote device114and to a UI element known to be at those coordinates in the remote device114. At step1012, a synced output of the user interactions are generated, based on the user interactions and the UI element upon which the user interactions are performed. In some embodiments, a sync server920can generate or update the synced output by performing the user interaction upon the determined UI element and triggering the operations corresponding to the determined UI element. The synced output can depend on the programming application and/or the webpage that is the subject of testing and development in the multisession. For example, a synced output can be a version of the webpage902updated with the user interactions inputted on the determined UI elements. In the case of a programming application tested and developed in a multisession using the method1000, the synced output can be the output display of the programming application during and after the user interactions are inputted via the determined UI element. At step1014, the synced output is received by the other remote devices114in the multisession. In some embodiments, the sync server920can maintain and update a synced output in each remote device114in the multisession. In some embodiments, each remote device114in the multisession can receive the synced output and display the synced output according to its specification and internal facilities. In this manner, the task of rendering the display of the synced output in each remote device114is performed by the remote device114and the sync server is relieved from having to provide a compatible and unique synced output to each remote device. At step1016, the display of each remote device114in the multisession is transmitted to the browser106via the video channel of each remote device114. The user102viewing the mirrored displays on the browser106observes a synced version of the application or webpage in each mirrored display. The method ends at step1018. FIG.11illustrates a flowchart of a method1100of multisession operations including a request/response type of operation that can occur when developing and testing a webpage using the RTS100. The method starts at step1102. At step1104, user interactions with one of the mirrored displays are received. The user interactions in this case can be a user interaction, which can be associated with generating a request to a host of a webpage that is being tested or developed using the RTS100. At step1106, the user interaction is transmitted as a request to the host which accepts and responds to requests relating to the webpage. In some embodiments, an intermediary, such as the sync server920can receive the user interactions and send the user interactions as a request to the host, such as the host924providing services for the webpage902. At step1108, a response to the request is received from the host. For example, the sync server920can receive the response. At step1110, the sync server920can generate a synced output using the response. 
For example, the sync server920can generate or update a version of the webpage based on the received response from the host. In some embodiments, the updated version of the webpage is the response received from the host. The synced output can be provided to the remote devices in the multisession, from which mirrored displays on the browser106can be generated. The method ends at step1112. EXAMPLES It will be appreciated that the present disclosure may include any one and up to all of the following examples. Example 1: A method comprising: establishing a connection session between a browser and a first device and the browser and a second device, wherein the first and second devices are in one or more locations remote relative to the browser, and the connection session between the browser and the first device comprises a first device data channel and a first device video channel, and the connection session between the browser and the second device comprises a second device data channel and a second device video channel; generating a first device display on the browser by broadcasting, via the first device video channel, a first device video feed from the first device to the browser; generating a second device display on the browser by broadcasting, via the second device video channel, a second device video feed from the second device to the browser; transmitting, via the first device data channel, one or more user interactions with a webpage in the generated first device display on the browser, to the first device; transmitting the interactions to a sync server; and serving, from the sync server, a synced version of the webpage to the first and second devices. Example 2: The method of Example 1, further comprising establishing the connection session further between a plurality of second devices, wherein the serving, from the sync server, comprises serving the synced webpage to the first device and the plurality of the second devices simultaneously. Example 3: The method of some or all of Examples 1 and 2, wherein serving, from the sync server, further comprises: transmitting the interactions from the sync server to a host server hosting the webpage; receiving a response from the host server; and serving the response to the first and second devices. Example 4: The method of some or all of Examples 1-3, wherein the interactions comprise commands for scrolling the webpage and the sync server performs a corresponding scrolling on the synced webpage served to the first and second devices. Example 5: The method of some or all of Examples 1-4, wherein the sync server performs syncing between the first and second devices by determining webpage HTML elements corresponding to the user interactions. Example 6: The method of some or all of Examples 1-5, wherein the first and second devices access the sync server via a shared URL corresponding to the webpage. Example 7: The method of some or all of Examples 1-6, wherein transmitting the interactions to the sync server further comprises determining coordinates of the interactions on the generated first device display on the browser and identifying corresponding HTML elements of the webpage and transmitting the interactions and the coordinates to the sync server. 
Example 8: A non-transitory computer storage that stores executable program instructions that, when executed by one or more computing devices, configure the one or more computing devices to perform operations comprising: establishing a connection session between a browser and a first device and the browser and a second device, wherein the first and second devices are in one or more locations remote relative to the browser, and the connection session between the browser and the first device comprises a first device data channel and a first device video channel, and the connection session between the browser and the second device comprises a second device data channel and a second device video channel; generating a first device display on the browser by broadcasting, via the first device video channel, a first device video feed from the first device to the browser; generating a second device display on the browser by broadcasting, via the second device video channel, a second device video feed from the second device to the browser; transmitting, via the first device data channel, one or more user interactions with a webpage in the generated first device display on the browser, to the first device; transmitting the interactions to a sync server; and serving, from the sync server, a synced version of the webpage to the first and second devices. Example 9: The non-transitory computer storage of Example 8, wherein the operations further comprise establishing the connection session further between a plurality of second devices, wherein the serving, from the sync server, comprises serving the synced webpage to the first device and the plurality of the second devices simultaneously. Example 10: The non-transitory computer storage of some or all of Examples 8 and 9, wherein serving, from the sync server, further comprises: transmitting the interactions from the sync server to a host server hosting the webpage; receiving a response from the host server; and serving the response to the first and second devices. Example 11: The non-transitory computer storage of some or all of Examples 8-10, wherein the interactions comprise commands for scrolling the webpage and the sync server performs a corresponding scrolling on the synced webpage served to the first and second devices. Example 12: The non-transitory computer storage of some or all of Examples 8-11, wherein the sync server performs syncing between the first and second devices by determining webpage HTML elements corresponding to the user interactions. Example 13: The non-transitory computer storage of some or all of Examples 8-12, wherein the first and second devices access the sync server via a shared URL corresponding to the webpage. Example 14: The non-transitory computer storage of some or all of Examples 8-13, wherein transmitting the interactions to the sync server further comprises determining coordinates of the interactions on the generated first device display on the browser and identifying corresponding HTML elements of the webpage and transmitting the interactions and the coordinates to the sync server. 
Example 15: A system comprising a processor, the processor configured to perform operations comprising: establishing a connection session between a browser and a first device and the browser and a second device, wherein the first and second devices are in one or more locations remote relative to the browser, and the connection session between the browser and the first device comprises a first device data channel and a first device video channel, and the connection session between the browser and the second device comprises a second device data channel and a second device video channel; generating a first device display on the browser by broadcasting, via the first device video channel, a first device video feed from the first device to the browser; generating a second device display on the browser by broadcasting, via the second device video channel, a second device video feed from the second device to the browser; transmitting, via the first device data channel, one or more user interactions with a webpage in the generated first device display on the browser, to the first device; transmitting the interactions to a sync server; and serving, from the sync server, a synced version of the webpage to the first and second devices. Example 16: The system of Example 15, wherein the operations further comprise establishing the connection session further between a plurality of second devices, wherein the serving, from the sync server, comprises serving the synced webpage to the first device and the plurality of the second devices simultaneously. Example 17: The system of some or all of Examples 15 and 16, wherein serving, from the sync server, further comprises: transmitting the interactions from the sync server to a host server hosting the webpage; receiving a response from the host server; and serving the response to the first and second devices. Example 18: The system of some or all of Examples 15-17, wherein the interactions comprise commands for scrolling the webpage and the sync server performs a corresponding scrolling on the synced webpage served to the first and second devices. Example 19: The system of some or all of Examples 15-18, wherein the sync server performs syncing between the first and second devices by determining webpage HTML elements corresponding to the user interactions. Example 20: The system of some or all of Examples 15-19, wherein the first and second devices access the sync server via a shared URL corresponding to the webpage. While the invention has been particularly shown and described with reference to specific embodiments thereof, it should be understood that changes in the form and details of the disclosed embodiments may be made without departing from the scope of the invention. Although various advantages, aspects, and objects of the present invention have been discussed herein with reference to various embodiments, it will be understood that the scope of the invention should not be limited by reference to such advantages, aspects, and objects. Rather, the scope of the invention should be determined with reference to patent claims.
79,250
11860772
DETAILED DESCRIPTION FIG.1shows an example of a system100for testing a software application102that operates in association with a database104. In particular, the system100can include a test manager106that distributes code for a set of test cases108among different test sets110associated with different parallel threads on a computing device. The test manager106can cause the test cases108in the test sets110to execute, substantially simultaneously, in the parallel threads on the computing device in association with one or more instances of the software application102. Accordingly, executing the test cases108in parallel threads can increase the speeds at which the test cases108can be executed, relative to executing the test cases108in sequence in a single thread. The software application102can be a software program comprising computer-executable instructions associated with one or more functions. As a non-limiting example, the software application102can be an insurance policy management system that manages insurance policies by enabling creation, renewal, and/or termination of insurance policies associated with an insurance company, enabling users associated with the insurance company to view and/or edit information about insurance policies, and/or performing other tasks associated with management of insurance policies. In this example, information about insurance policies can be stored in the database104, such that the software application102can access and/or edit information about the insurance policies in the database104. In other examples, the software application102can be a billing and payment system, a customer management system, an order tracking system, an electronic commerce system, a database management system, or any other type of software that operates at least in part based on data stored in the database104. The software application102can be a compiled and/or executable version of code written in a programming language such as Gosu®, Java®, C++, C#, Python®, or any other programming language. For instance, in examples in which code for the software application102is written using Gosu® or Java®, the code can be compiled into an executable and/or deployable file, such as a web application archive (WAR) file or a Java® archive (JAR) file. Over time, software developers can write new and/or updated code to create new and updated versions of the software application102. For example, software developers may write new and/or updated code to implement new features of the software application102, update or enhance existing features of the software application102, update the software application102to communicate with other software applications, or for any other reason. As a non-limiting example, if the software application102is the insurance policy management system described above, software developers may write new code to integrate the insurance policy management system with a separate billing system, a separate customer management system, and/or other separate systems or applications. The database104can be a relational database, non-relational database, object-oriented database, network database, hierarchical database, a flat file or other flat structured data storage element, or any other type of database or data storage element that stores records. In some examples, the database104may organize data into one or more tables that each have rows and columns. However, in other examples, the database104may store data in other formats without the use of tables. 
The database104can contain any number of records112A,112B,112C, etc. (referred to collectively herein as “records112”) having any number of attributes114A,114B,114C, etc. (referred to collectively herein as “attributes114”). In some examples, different records112can be represented as different rows of a table, while different attributes114of the records112can be represented as different columns of the table. In other examples, different records112can be represented in the database104without the use of rows and columns, or tables. FIG.1shows a non-limiting example in which records112in the database104, including a first record112A, a second record112B, and a third record112C, are associated with insurance policies. These example records112can each include attributes114, such as a name attribute114A, an address attribute114B, a policy number attribute114C, and/or other types of attributes114. In other examples, records112and their attributes114can be associated with any other type of data, such as customer information, order information, billing information, or any other data. Data in the database104can be accessed or changed using a query language, such as the Structured Query Language (SQL), and/or other types of commands or input. For example, the software application102may use SQL queries or other commands to add new records112, edit existing records112, delete records112, retrieve one or more attributes114of one or more records112, and/or otherwise interact with the database104. In some examples, the software application102can be tightly coupled with the database104. For example, when software application102executes, software application102can create the database104as a tightly coupled database, such that the database104can store data used by the software application102during execution of the software application102. In other examples, the software application102can be loosely coupled with the database104. For instance, the database104can be initiated and/or maintained separately from the software application102. In some examples, the database104can be an in-memory database that stores data in random-access memory (RAM) or other volatile memory of a computing device. In other examples, the database104can be stored in persistent memory such as hard disk drives, solid-state drives, or other non-volatile memory. In some examples, the database104can be an “H2mem” database, a “Postgres” database, or any other type of database. In some examples, the database104can be instantiated as an in-memory database when the software application102is loaded or instantiated in memory. For example, when the software application102is loaded into memory by a computing device for execution, the software application102may create a new instance of the database104as an in-memory database by defining one or more columns of one or more tables, or by otherwise instantiating the database104. Thereafter, the software application102can add records112to the database104, or otherwise access the database104, after the in-memory database has been instantiated. For example, records112in the database104can be accessed based on test cases108in one or more test sets110that are executed in association with the software application102and the database104, as discussed further below. Software developers can write code to test the functionality of new or existing versions of the software application102.
For example, one or more software developers can create the set of test cases108for unit testing and/or integration testing of the software application102. The test cases108can be designed to verify whether a new or existing version of the software application102passes the set of test cases108and operates as intended by the software developers. The test cases108can test various scenarios regarding how the software application102can interact with the database104. For instance, the test cases108may test whether the software application102can access the database104, can access particular tables of the database104, and/or can access particular records112in the database104. In some examples, there may be a relatively large number of test cases108in part due to regulatory requirements, state laws, business requirements, rules, and/or other factors. For instance, when the software application102is a policy management system that manages insurance policies, different states or jurisdictions may have different laws that impact how such insurance policies are to be managed. Accordingly, the set of test cases108can include one set of tests that attempt to verify that the software application102can successfully manage insurance policies according to the rules of one jurisdiction, as well as another set of tests that attempt to verify that the software application102can also successfully manage insurance policies according to the rules of another jurisdiction. Code for the test cases108can be expressed in one or more classes116A,116B, . . .116N, etc. (referred to collectively herein as “classes116”). Each of the classes116can include one or more methods118. For example, different classes116can be associated with different class files, each of which includes code for one or more methods118. Individual methods118can set up data in the database104for tests, test functionality associated with the software application102, output test results, or perform any other operation associated with testing of the software application. Each method in a class may be a function, such as a function written in Gosu®, Java®, or another programming language. As an example, a file for a class can be a .gs file containing Gosu® code for one or more methods118. The test cases108can include any number of files for classes116, such as class116A, class116B, . . . and class116N, as shown inFIG.1. Each class file can include any number of methods118as shown inFIG.1. For example, class116A can include methods118A(1),118A(2),118A(3), etc. Different class files can include the same or a different number of methods118. As will be discussed further below, some of the classes116and/or methods118may be related or dependent on one another. For example, a software developer may have written a test case that depends on, or builds on, operations the developer assumed another related test case would already have performed. As another example, a test case may be designed to use or edit data in the database104that a software developer assumed another related test case would already have created. As still another example, different test cases may be configured to access the same data in the database104. The system100for testing the software application102can include the test manager106. The test manager106can be an executable software component or script that is configured to manage the execution of the test cases108, including classes116and/or methods118, with respect to the software application102.
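For illustration, a test class of the kind described above might resemble the following JUnit-style Java sketch; JUnit and the placeholder method bodies are assumptions, since the disclosure refers to Gosu® and Java® test code generally. Each annotated method is an individually schedulable test case, and the comments show the kind of implicit ordering assumption that can break when methods are split across test sets.

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    // Illustrative only: one class whose methods could be scheduled together (class level)
    // or split across different test sets (method level).
    public class PolicyRecordTest {

        @Test
        public void createsPolicyRecord() {
            // Hypothetical placeholder: insert a policy record into the test database.
            assertTrue(true);
        }

        @Test
        public void editsAddressAttribute() {
            // A developer may have assumed this runs only after createsPolicyRecord(),
            // an assumption that no longer holds if the two methods land in different test sets.
            assertTrue(true);
        }
    }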
In some examples, the test manager106can be associated with a build automation tool, such as Gradle®, that can build new versions of the software application102, initialize an execution environment for the software application102, and/or define tasks to be executed with respect to the software application102. For example, the test manager106may create one or more Gradle® executors, or other scripts, code, configurations, or computer-executable instructions that cause one or more instances of the software application102to execute test cases108of different test sets110in association with the database104, or otherwise cause the different test sets110to be executed in association with the database104. The test manager106can be configured to distribute the test cases108among different test sets110. For example, as shown inFIG.1, the test manager106can distribute the test cases108into any number of test sets110A,110B, . . .110N, etc. (referred to collectively herein as “test sets110”). In some examples, the test manager106can obtain a list object, or other data, that identifies individual test cases108. The test manager106can then distribute the individual test cases108among the different test sets110at a class level and/or a method level, as discussed further below. In some examples, the test manager106can distribute test cases108to test sets110at a class level. In these examples, the test manager106can distribute all of the methods118of a particular class to the same test set, and may distribute methods118from other classes116to the same test set or other test sets110. As a non-limiting example, the test manager106may assign the methods118A of class116A to the first test set110A, and the methods118B of class116B to the second test set110B. As another non-limiting example, the test manager106may assign the methods118A of class116A, and the methods118B of class116B, to the first test set110A, and assign methods of other classes116to other test sets110. In some examples, the test manager106can also, or alternately, distribute test cases108to test sets110at a method level. In these examples, the test manager106can distribute subsets of the methods118of the same class among different test sets110. As a non-limiting example, the test manager106may assign methods118A(1) and118A(2) of class116A to the first test set110A, but assign method118A(3) of class116A to the second test set110B. In some examples, the test manager106may also allocate subsets of methods118from different classes116to the same test set when distributing test cases at a method level. As a non-limiting example, the test manager106may assign method118A(3) of class116A to the second test set110B, and also assign method118B(3) to the second test set110B. In some cases, the test manager106may dynamically distribute an equal number of test cases108to different test sets110. In other cases, the test manager106may dynamically distribute an unequal number of test cases108to different test sets110, for instance based on predicted execution times or other factors. As a first non-limiting example of distributing an unequal number of test cases108to different test sets110, the test manager106may determine that a set of five hundred methods from a first class is expected to take approximately five minutes to execute.
However, the test manager106may determine that a second class and a third class each have only two hundred methods, and that together the four hundred methods of the second class and the third class may also take approximately five minutes to execute. Accordingly, the test manager106may assign the five hundred methods from a first class to a first test set, and assign the four hundred methods from the second class and the third class to a second test set. This may eliminate or reduce idle processor cycles or server time, relative to assigning each of the three classes to different test sets and waiting for the five hundred methods of the first class to complete after the two hundred methods of the second class and the third class have each completed separately in other parallel threads. As a second non-limiting example of distributing an unequal number of test cases108to different test sets110, the test manager106may determine that a first subset of three hundred test cases108is expected to take approximately five minutes to execute, and may determine that a second subset of one thousand other test cases108is also expected to take approximately five minutes to execute. Accordingly, the test manager106may assign the first subset of the three hundred test cases108to a first test set, and assign the second subset of one thousand test cases108to a second test set, such that both test sets are expected to take approximately the same amount of time to execute despite differing numbers of test cases in the two test sets. The test manager106can also be configured to cause different test sets110to execute simultaneously, in parallel, in association with the software application102and the database104. For example, a computing device can initiate a first instance of the software application102in a first parallel thread, and also initiate a second instance of the software application102in a second parallel thread. The computing device can accordingly execute a first set of methods118assigned to the first test set110A in the first parallel thread in association with the database104, and simultaneously execute a second set of methods118assigned to the second test set110B in the second parallel thread in association with the database104. Methods118assigned to a test set may execute in sequence with respect to other methods118within the same test set; however, the computing device can execute different methods118associated with different test sets110in different parallel threads at substantially the same time. The test manager106may also collect test results associated with the methods and/or classes of different test sets110that execute in different parallel threads, and combine the test results into an aggregated test result report, for example as discussed below with respect toFIG.2. The computing device can use virtual machines, hyperthreading, parallel threads, and/or any other type of parallelization to execute test cases108of different test sets110in parallel at substantially the same time. The computing device can set up and use any number of parallel threads, depending on the memory, processing power, and/or other computing resources available to the computing device. For example, the computing device can be a server that has 128 GB of memory and 16 CPUs. In this example, if different instances of the software application102each use approximately 15 GB of memory when executed via virtual machines, the computing device may initialize eight parallel threads that are each allocated 16 GB of memory.
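A duration-aware distribution along the lines of the two unequal-distribution examples above could be sketched in Java as follows. The TestMethod type, the per-method time estimates, and the greedy longest-first strategy are illustrative assumptions rather than a prescribed algorithm; the resulting test sets would then be handed to parallel threads such as the eight threads just described.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;
    import java.util.PriorityQueue;

    public class TestSetBalancer {

        // Hypothetical representation of one schedulable test case (a method of a class).
        record TestMethod(String className, String methodName, long estimatedMillis) {}

        static class TestSet {
            final List<TestMethod> methods = new ArrayList<>();
            long totalMillis;
        }

        // Greedy longest-first assignment: put each method into the currently lightest
        // test set so the sets are expected to finish at roughly the same time.
        static List<TestSet> distribute(List<TestMethod> methods, int threadCount) {
            PriorityQueue<TestSet> sets =
                    new PriorityQueue<>(Comparator.comparingLong((TestSet s) -> s.totalMillis));
            for (int i = 0; i < threadCount; i++) {
                sets.add(new TestSet());
            }
            List<TestMethod> sorted = new ArrayList<>(methods);
            sorted.sort(Comparator.comparingLong(TestMethod::estimatedMillis).reversed());
            for (TestMethod m : sorted) {
                TestSet lightest = sets.poll();
                lightest.methods.add(m);
                lightest.totalMillis += m.estimatedMillis();
                sets.add(lightest);
            }
            return new ArrayList<>(sets);
        }

        public static void main(String[] args) {
            List<TestMethod> methods = List.of(
                    new TestMethod("ClassA", "test1", 400),
                    new TestMethod("ClassA", "test2", 300),
                    new TestMethod("ClassB", "test1", 900));
            for (TestSet s : distribute(methods, 2)) {
                System.out.println(s.totalMillis + " ms -> " + s.methods);
            }
        }
    }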
The test manager106can accordingly distribute the test cases108among eight test sets110that correspond to the eight parallel threads. In some examples, the test manager106can use a Java® concurrent “ExecutorService,” or other system, to initialize a fixed number of parallel threads in a thread pool, distribute the methods118of the test cases108among different test sets110associated with different initialized parallel threads, and use one or more callable objects to execute the methods118of the different test sets110in association with the parallel threads. In some examples, the test manager106can use tasks or scripts associated with a build automation tool, such as Gradle®, to distribute the methods among test sets110and to cause execution of the test sets110in parallel. For example, the test manager106can use Gradle® tasks or other scripts to dynamically set up different executors associated with the software application102in different parallel threads, distribute methods118of the test cases108among different test sets110associated with the parallel threads at a class level and/or a method level, cause the different executors to execute the methods118of the test sets110in parallel, and/or to combine corresponding sets of test results into an aggregated test result report. As a non-limiting example, if the test cases108contain seven thousand classes116, the test manager106may divide the seven thousand classes116, at a class level, into seven different test sets110that each contain methods118aggregated from a different set of approximately one thousand classes116. Each of these seven test sets110can be associated with a different thread of a pool of seven parallel threads. The test manager106can cause the seven test sets110to be executed in parallel, substantially simultaneously, via the seven different parallel threads. Accordingly, the full set of seven thousand classes116may execute in parallel more quickly than the seven thousand classes116could execute in sequence in a single thread. Executing the classes116in parallel can also reduce server time or usages of other computing resources, relative to executing the classes116in a single thread. As another non-limiting example, if a particular class includes a set of one hundred methods118, the test manager106may divide those one hundred methods118, at a method level, into multiple test sets110that can be executed simultaneously in parallel threads. Accordingly, the full set of one hundred methods118can execute in parallel more quickly than executing the full set of one hundred methods118in sequence in a single thread. Executing the methods118in parallel can also reduce server time or usages of other computing resources, relative to executing the methods118in a single thread. Although the test manager106may distribute classes116and/or methods118of the test cases108among multiple test sets110, and cause the test sets110to execute in parallel, the test cases108may not have been written by software developers with parallelization in mind. For example, the test cases108may have been written by software developers under the assumption that the classes116and/or methods118, including related or dependent classes116and/or methods118, would be executed sequentially in a particular intended order in a single thread. Accordingly, executing the test cases108in parallel as part of different test sets110may cause classes116and/or methods118to execute in a different order than the developer's intended order. 
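Consistent with the ExecutorService approach noted above, a simplified runner might look like the following Java sketch, which submits one Callable per test set to a fixed-size thread pool. The test-set contents and the runTestCase() hook are placeholders for whatever actually invokes the test methods.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class ParallelTestRunner {

        // Placeholder for the result of running one test set in one thread.
        record TestSetResult(String testSetName, int passed, int failed) {}

        public static void main(String[] args) throws Exception {
            int threadCount = 8; // e.g., sized to the memory available per application instance
            List<List<String>> testSets = List.of(
                    List.of("ClassA#test1", "ClassA#test2"),
                    List.of("ClassB#test1"));

            ExecutorService pool = Executors.newFixedThreadPool(threadCount);
            List<Future<TestSetResult>> futures = new ArrayList<>();
            int setNumber = 0;
            for (List<String> testSet : testSets) {
                String name = "testSet" + (++setNumber);
                Callable<TestSetResult> task = () -> {
                    int passed = 0, failed = 0;
                    for (String testCase : testSet) {
                        // runTestCase() is a hypothetical hook into the application under test.
                        if (runTestCase(testCase)) passed++; else failed++;
                    }
                    return new TestSetResult(name, passed, failed);
                };
                futures.add(pool.submit(task));
            }

            for (Future<TestSetResult> f : futures) {
                System.out.println(f.get()); // blocks until that test set finishes
            }
            pool.shutdown();
        }

        private static boolean runTestCase(String testCase) {
            return true; // placeholder: invoke the real test method here
        }
    }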
As an example, different methods118in a single class, and/or in different classes116, may attempt to access the same table in the database104. For instance, a first method may access the first record112A in the table shown inFIG.1to edit an address attribute114B, while a second method may access the second record112B in the same table to edit a name attribute114A. These two methods may have been written under an assumption that the two methods would execute at different times. For instance, the two methods may be functions within the same class file, and a developer may assume from the structure of the class file that the first method will execute prior to the second method. As another example, the first method may be within the first class116A, the second method may be within the second class116B, and a developer may assume that the methods of the first class116A will execute before methods of the second class116B. However, because the test manager106may distribute methods into different test sets110that execute in parallel, there is a chance that the developers' assumptions about the execution order of the first method and the second method may be incorrect. For instance, the first method and the second method may execute at substantially the same time in different parallel threads as part of different test sets110, such that the first method and the second method attempt to access the same table in the database104simultaneously. Accordingly, at least during testing of the software application102in association with parallel threads as described herein, the database104can be instantiated as a record-locking database so that different methods118of different test sets110executing in different parallel threads can access different records112in the database104simultaneously. A record-locking database can lock individual records112in the database104when such records112are accessed. The record-locking database may accordingly be different from a table-locking database that locks an entire table when any record of the table is accessed. The record-locking database may also be different from other types of databases that lock an entire database, or other data structure larger than an individual record, when any individual record is accessed. As a non-limiting example, the database104can be an “H2mem” in-memory database that uses Multi-Version Concurrency Control (MVCC) to set database locks at a record level, rather than an “H2mem” in-memory database with a default “MVStore” engine that is configured to at least briefly lock an entire database table when the table is accessed. A record-locking database may at least briefly lock individual records112when software elements access such individual records112, for instance to delete, insert, and/or update individual records112. A software element connected to the record-locking database may have access to committed data within the record-locking database, as well as to changes made by the software element that have not yet been committed to the database. For example, if a first software element updates a row of a table but has not yet committed the change, a second software element may not have access to the updated data in that row until the updated data is committed to the database. In some examples, the record-locking database may use an exclusive lock on a table when adding or removing columns of a table, or when dropping the table. 
As an example, if the software application102accesses record112A, the record-locking database can respond by locking record112A while record112A is being accessed. In the example shown inFIG.1, a lock icon120indicates that record112A is locked. However, even though record112A is locked in the table, record112B may simultaneously be unlocked in the same table. In the example shown inFIG.1, an unlock icon122indicates that record112B is unlocked. Accordingly, because record112B is unlocked, the software application102can access record112B at the same time that record112A is locked. In contrast to table-locking databases that may lock an entire table as changes are processed and committed, here the record-locking database may instead at least briefly lock individual records112of a table when the software application102accesses such individual records112. Accordingly, even if one or more individual records112of a table are locked in the record-locking database at a particular moment in time, other records112of the same table can remain accessible during that particular moment in time. As such, due to the database104being instantiated as a record-locking database during testing, one or more instances of the software application102may simultaneously access more than one record in a particular table in the database104during testing. Similarly, in contrast to other types of databases that may lock the entire database, or lock data structures larger than individual records when the database is accessed, the record-locking database can be configured to lock individual records when the individual records are accessed. Accordingly, even if one or more individual records112in the record-locking database are locked at a particular moment in time, other records112in the same record-locking database can remain accessible during that particular moment in time. As such, due to the database104being instantiated as a record-locking database during testing, one or more instances of the software application102may simultaneously access more than one record in the database104during testing. Accordingly, different test cases108in different test sets110that execute concurrently may be able to access different records112in the same database104during testing. As a non-limiting example, a first method in test set110A may execute in a first parallel thread a millisecond before a second method in test set110B executes in a second parallel thread. The first method may access the first record112A shown inFIG.1, and thereby cause the record-locking database to lock record112A as shown inFIG.1. However, if the second method attempts to access the second record112B a millisecond later, the second record112B can be unlocked as shown inFIG.1, even if record112A is still locked and is still being accessed by the first method. Accordingly, the second method can succeed in accessing the second record112B in the record-locking database, despite the first method simultaneously accessing the first record112A. It may be relatively common for methods118of different test sets110to attempt to access the same table in the database104, or to attempt to access the same database104overall, at substantially the same time when the test sets110are executed in parallel. However, it may be much less likely that methods118of different test sets110executing in parallel will simultaneously attempt to access the exact same record of the same database. 
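The record-level granularity illustrated by record112A and record112B can be modeled conceptually in plain Java as in the sketch below; this models the locking behavior only and is not the internals of any particular database engine.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.locks.ReentrantLock;

    // Conceptual model of record-level locking: each record id has its own lock, so
    // two threads touching different records of the same table never block each other.
    public class RecordLockingTable {

        private final ConcurrentMap<String, ReentrantLock> recordLocks = new ConcurrentHashMap<>();
        private final ConcurrentMap<String, String> rows = new ConcurrentHashMap<>();

        public void updateRecord(String recordId, String newValue) {
            ReentrantLock lock = recordLocks.computeIfAbsent(recordId, id -> new ReentrantLock());
            lock.lock(); // locks only this record, not the whole table
            try {
                rows.put(recordId, newValue);
            } finally {
                lock.unlock();
            }
        }

        public static void main(String[] args) throws InterruptedException {
            RecordLockingTable table = new RecordLockingTable();
            // Two "test methods" in different parallel threads editing different records
            // of the same table proceed concurrently rather than serializing on a table lock.
            Thread first = new Thread(() -> table.updateRecord("record112A", "new address"));
            Thread second = new Thread(() -> table.updateRecord("record112B", "new name"));
            first.start();
            second.start();
            first.join();
            second.join();
            System.out.println(table.rows);
        }
    }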
Accordingly, testing the software application102by executing different test sets110in parallel in association with a record-locking database that permits different records112of the same table, and/or of the entire database104, to be accessed simultaneously can reduce the number of test failures and/or database conflict errors that might otherwise occur if the database104was a table-locking database or another type of database configured to lock data structures larger than individual records112. Reducing the number of such test failures and/or database conflict errors by using a record-locking database can decrease the chances that testing of the software application102will fail, thereby reducing the chances that software developers will need to recheck code and/or re-run the testing. In turn, reducing the number of test failures and/or database conflict errors by using a record-locking database can also reduce the overall usage of server time or other computing resources associated with testing the software application102. In some examples, if a particular method of a test set does attempt to access a record that is locked in the record-locking database, for instance because the record is currently being accessed by another method executing simultaneously as part of another test set, the test manager106can cause execution of the particular method to be retried at a later point in time. As a non-limiting example, if a first method accesses record112A and thereby causes record112A to be locked in the database104for three milliseconds, a second method in another test set may attempt to also access record112A during the three-millisecond lock period. Although record112A being locked might otherwise cause the second method to fail, and in turn lead to a failed test of the software application102, the test manager106may detect a record locking error reported by the database104when the second method attempts to access record112A. In response to the record locking error, the test manager106can cause execution of the second method to be retried after the three-millisecond lock period, or after the test manager106determines that record112A has been unlocked. Accordingly, the second method can be retried after record112A is unlocked again, such that the second method can succeed and test failures due to record-locking in the database104can be avoided. In some examples, the test manager106can maintain one or more test failure logs that track errors or other test failures that occur with respect to one or more test sets110. For example, the test manager106can maintain a first test failure log associated with test set110A, a second test failure log associated with the test set110B, and other test failure logs associated with other groups of test cases108assigned to other test sets110. Accordingly, the test manager106can maintain different individual test failure logs associated with test sets110corresponding to different parallel threads. In other examples, the test manager106may maintain a single test failure log associated with all of the test sets110and parallel threads. For example, the single test failure log may identify failed test cases of test set110A, failed test cases of test set110B, and/or failed test cases of other test sets. Each of the test failure logs may identify specific test cases108, such as specific classes116and/or methods118, that failed during an initial execution attempt in a parallel thread. 
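The retry behavior described above might be sketched as follows, assuming the database surfaces a lock conflict as a hypothetical RecordLockedException; the attempt count and delay are illustrative.

    import java.util.concurrent.Callable;

    public class LockRetryHelper {

        // Hypothetical exception standing in for whatever lock-conflict error the database reports.
        static class RecordLockedException extends Exception {}

        // Re-attempt a test method a few times, waiting briefly when the record it needs is locked.
        static <T> T runWithRetry(Callable<T> testMethod, int maxAttempts, long delayMillis)
                throws Exception {
            for (int attempt = 1; ; attempt++) {
                try {
                    return testMethod.call();
                } catch (RecordLockedException e) {
                    if (attempt >= maxAttempts) {
                        throw e; // give up: report the failure in the test results
                    }
                    Thread.sleep(delayMillis); // wait for the record to be unlocked, then retry
                }
            }
        }
    }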
In some examples, a test failure log may also indicate a failure code for each failed test case that indicates a reason why the test case failed. Failure codes may be alphanumeric characters, integers, text values, or any other type of code or data that indicates a reason why a corresponding test case failed. For instance, a test failure log may indicate that a particular test case failed due to a record-locking error. Accordingly, the test manager106can determine from a test failure log that the test case failed due to a record-locking error, and cause that test case to be re-executed at a later point in time at which the record-locking error may be less likely to re-occur. Overall, in a testing environment in which different test sets110are executed in parallel against the same database104, the use of a record-locking database can reduce the risk of test cases108failing because multiple test cases108attempt to access the same database or same table in the database simultaneously. Additionally, if multiple test cases108do attempt to simultaneously access the same record in the database104, the test manager106can cause any of those test cases108that fail to be re-executed at a later time when it may be less likely that multiple test cases108are attempting to access the same record simultaneously. Accordingly, testing of the software application102may be more likely to complete successfully, without errors or test failures that may be caused by parallelization of the testing process and not by bugs or other errors in the code of the software application102. FIG.2shows a flowchart of a first example process200for generating and testing the software application102using parallel threads and the record-locking database. At block202, a computing system can generate a new version of the software application102. For example, a developer may have written new code, and/or changed existing code, associated with the software application102and/or the test cases108. The developer can submit the new code to a compiler or code management system, and request that the new and/or changed code be compiled, along with unchanged code, into a new executable version of the software application102. For example, the computing system can generate a new version of the software application102as a new executable and/or deployable WAR file or JAR file. In some examples, the computing system can use tasks of a build automation tool, such as Gradle®, to build the new version of the software application102. At block204, the computing system can initiate a set of parallel threads. For example, the test manager106can use Gradle® tasks, or tasks of another build automation tool, to dynamically set up an execution environment that includes a set of parallel threads, and different executors associated with the software application102in different parallel threads. In some examples, the test manager106can use a Java® concurrent “ExecutorService” to initialize a fixed number of parallel threads in a thread pool. At block206, the computing system can instantiate the database104as a record-locking database. For example, the test manager106can use Gradle® tasks, or tasks of another build automation tool, to instantiate the database104in memory as a record-locking database. As another example, the software application102can create the database104when the software application102is loaded or executed.
For instance, the software application102can be tightly coupled to the database104, such that the software application102creates the database104when the software application102executes. In some examples, the database104can be instantiated as an “H2mem” in-memory database that uses MVCC to set database locks at a record level. At block208, the computing system can dynamically distribute test cases108among different test sets110. For example, the test manager106can identify classes116in the test cases108, and can distribute the test cases108among the test sets110at a class level and/or at a method level. Each test set can be associated with a different parallel thread of the parallel threads initiated at block204. At block210, the computing system can cause the test sets110to execute, in the parallel threads, in association with the record-locking database. For example, the test manager106can use one or more callable objects to execute the methods118of the different test sets110in association with the parallel threads and the database104. Individual test cases108in the test sets110may succeed or fail when executed. At block212, the computing system may lock a first record in the database104. For example, a first method, executing in a first parallel thread in association with a first test set, may attempt to edit the first record in the database104. As the first method is updating data for the first record, the database104can lock the first record in the database104. At block214, a second method may access a second record in the database104. In some examples, the second record may be in the same table of the database104as the first record. However, although the computing system may have locked the first record in the table during access of the first record by the first method at block212, other records of the table may remain unlocked because the database104is a record-locking database. Accordingly, the second record can be unlocked and accessible to the second method at block214. The second method can thus successfully access the second record as part of a test case, which may lead to a successful test of the software application102associated with the second method. At block216, the computing system can unlock the first record in the database104. As discussed above, the first record may have been locked at block212due to the first method accessing the first record. However, at block216, the first method can have finished accessing the first record, and the first record can accordingly be unlocked in the database104. The first method can have successfully accessed the first record as part of a test case, which may lead to a successful test of the software application102associated with the first method. In some examples, if the second method had, at block214, instead attempted to access the first record while the first record was locked, the test manager106may delay or retry execution of the second method. In these examples, the test manager106may cause the second method to be re-executed after a predetermined period of time, or after an indication that the first record has become unlocked. Accordingly, when two methods attempt to access the same record substantially simultaneously in the record-locking database, retrying the second method after that record becomes unlocked can avoid a test failure that would have been based solely on the second method being unable to access the first record while it was locked in the database104.
At block218, the computing system can aggregate results of the tests associated with the test cases108. In some examples, the test manager106can receive different sets of test results associated with different test sets110that execute in different parallel threads. The test manager106can combine the different sets of test results into a single aggregated test result report, or otherwise aggregate the test results. The test manager106may output the single aggregated test result report, for instance in a format that is visible and/or usable by a user. In other examples, the test manager106may aggregate test results into any other format. The aggregated test results may indicate whether the new version of the software application102passed all of the test cases108. In some examples, if the new version of the software application102did not pass all of the test cases108, the aggregated test results may identify which test cases108did not pass, and/or provide corresponding diagnostic details about test failures and/or errors. In some examples, after generating and/or outputting the aggregated test results, the test manager106may discard the different individual sets of test results that were generated in association with different parallel threads. In some examples, at the conclusion of testing, the computing system can also de-allocate memory or other computing resources associated with the software application102and the database104. For example, if server memory had been allocated to execute eight test sets110in association with the database104, the allocated memory can be deallocated and/or cleaned up to prepare the server for a subsequent round of testing. Process200can be used to determine whether a version of the software application102passes all of the test cases108. If the version of the software application102does not pass all the test cases108, the version of the software application102can be re-coded or otherwise changed until it does pass all of the test cases108. However, if the version of the software application102does pass all the test cases108, passing all the test cases108may indicate that the version of the software application102is suitable to deploy in other environments, such as other testing environments, in other development environments, and/or in production environments. In some examples, process200can be used for local testing of the software application102based on changes made by a single developer, and/or for global testing of the software application based on changes made by multiple developers, as discussed further below with respect toFIG.3. In both situations, by distributing test cases108among different test sets110and executing the test sets110in parallel, overall testing times can be reduced relative to executing the test cases108in sequence. Additionally, by executing different test sets110in parallel against a record-locking database, occurrences of database errors that might occur if different test cases108execute in parallel in association with a different type of database can be reduced and/or eliminated. FIG.3shows a flowchart of a second example process300for generating and testing the software application102using parallel threads and the record-locking database. At block302, a computing device associated with a software developer can check out source code associated with the software application102and/or the test cases108from a main code branch. 
For example, the source code can be stored in a main code branch at a central repository available to a team of software developers. The computing device may receive a “check out” command from the software developer. Such a “check out” command, or other command, can cause the computing device to retrieve a copy of the source code from the central repository, and thereby check out the source code from the main code branch. Accordingly, at block302, a computing device associated with a particular software developer can transfer a copy of the source code from the central repository and thus check out a copy of the source code from the main code branch. At block304, the computing device can make changes to the source code associated with the software application102and/or the test cases108locally. The computing device can display the source code within a text editor, an integrated development environment (IDE), or other programming environment. The software developer can type new code and/or input other commands via the text editor, IDE, or other programming environment, which can cause the computing device to edit the source code checked out at block302. As an example, input provided by the software developer can cause the computing device to add new code or change existing code associated with the software application102. As another example, input provided by the software developer can cause the computing device to add or change configuration files or other settings associated with the software application102. As yet another example, input provided by the software developer can cause the computing device to add or edit code for test cases108, including adding one or more new test cases108, removing one or more test cases108, and/or changing source code for one or more test cases108. At block306, the computing device can initiate local testing of the software application102based on the local changes to the source code associated with the software application102and/or the test cases108. The local testing can include the operations of example process200shown and described with respect toFIG.2. For instance, the computing device, or a separate centralized computing device, can build a new version of the software application based on the local code changes associated with the software application102and/or the test cases108made at block304, initiate parallel threads, instantiate a record-locking database, distribute test cases108among different test sets110corresponding to different parallel threads, execute the test sets110in the parallel threads in association with the new version of the software application102, and output an aggregated test result report based on test results corresponding to the different test sets110. At block308, the system can determine whether the new version of the software application102, built and tested based on local changes to the source code, passed all of the test cases108. If the aggregated test result report indicates that the new version of the software application102did not pass all of the test cases108executed locally at block306(Block308—No), the software developer can provide input that causes the computing device to make further changes to the source code locally at block304, and the computing device can retry local testing at block306.
However, if the aggregated test result report indicates that the new version of the software application102did pass all of the test cases108executed locally (Block308—Yes), the computing device can submit a merge request at block310. For example, based on the test result report indicating that local testing has succeeded, the software developer can input a command that causes the computing device to submit a merge request. The merge request submitted at block310can be a request to combine the local code changes associated with the software application102and/or the test cases108, made at block304, with the main code branch, such that the code changes become part of the main code branch, override previously existing code in the main code branch, and can be checked out from the main code branch by computing devices associated with other software developers. Although the local code changes may have passed local testing, the main code branch may have changed between blocks302and310, for instance if other code changes made by other software developers have been merged into the main branch such that the local changes made at block304to code checked out from the main code branch at block302would conflict with more recent changes to the main code branch. To determine if the local code changes would conflict with any other changes made to the main code branch, main branch testing can be performed at block312. In some examples, the merge request submitted at block310may initiate a code review process by which one or more other software developers review the local code changes associated with the software application102and/or the test cases108made at block304. In these examples, if the other software developers do not approve local code changes made by a software developer as part of the code review process, the software developer can return to make further changes to the source code locally via the computing device at block304. However, if the other software developers do approve the local code changes as part of the code review process, the main branch testing can be performed at block312. The main branch testing performed at block312can include the operations of example process200shown and described with respect toFIG.2. For instance, a centralized computing device can build a new version of the software application based on the local code changes associated with the software application102and/or the test cases108made at block304as well as the most recent version of the main code branch. The centralized computing device can also initiate parallel threads, instantiate a record-locking database, distribute test cases108among different test sets110corresponding to different parallel threads, execute the test sets110in the parallel threads in association with the new version of the software application102, and output an aggregated test result report based on test results corresponding to the different test sets110. At block314, the system can determine whether the new version of the software application102, built based on a combination of the local changes to the source code associated with the software application102and/or the test cases108and the most recent version of the source code in the main branch, passed all of the test cases108. 
If the aggregated test result report indicates that the new version of the software application102did not pass all of the test cases108executed at block312(Block314—No), the software developer can provide input that causes the computing device to make further local changes to the source code at block304. The computing device can then cause local testing to be retried at block306and/or main branch testing to be retried at block312. However, if the aggregated test result report indicates that the new version of the software application102built during main branch testing at block312did pass all of the test cases108executed at block312(Block314—Yes), the new version of the software application102built during block312can be deployed in other environments at block316. For example, at block316, the new version of the software application102can be deployed in other testing environments, in other development environments, and/or in production environments. Process300can be used to determine whether a version of software application passes all of the test cases108at a local level and/or when merged into a main code branch. In both situations, by distributing test cases108among different test sets110and executing the test sets110in parallel, overall testing times can be reduced relative to executing the test cases108in sequence. Additionally, by executing different test sets110in parallel against a record-locking database, occurrences of database errors that might occur if different test cases108execute in parallel in association with a different type of database can be reduced and/or eliminated. Process300can be performed by a computing device associated with a single developer and/or by a central server or other computing device associated with multiple developers, an example of which is shown and described below with respect toFIG.4. FIG.4shows an example system architecture400for a computing device402associated with the test manager106and/or software application102described herein. The computing device402can be a server, computer, or other type of computing device that executes the test manager106, and/or executes instances of the software application102and/or test sets110in parallel in association with the same database104. The computing device402can include memory404. In various examples, the memory404can include system memory, which may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.) or some combination of the two. The memory404can further include non-transitory computer-readable media, such as volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. System memory, removable storage, and non-removable storage are all examples of non-transitory computer-readable media. Examples of non-transitory computer-readable media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium which can be used to store desired information and which can be accessed by the computing device402. Any such non-transitory computer-readable media may be part of the computing device402. The memory404can store modules and data406. 
The modules and data406can include data associated with the test manager106, the software application102, the test cases108, the test sets110, the database104, and/or other data. The modules and data406can also include any other modules and/or data that can be utilized by the computing device402to perform or enable performing any other actions. Such other modules and data can include a platform, operating system, and applications, and data utilized by the platform, operating system, and applications. As discussed above, the test manager106may cause portions of the memory404to be allocated to different parallel threads. For instance, the test manager106may allocate portions of an overall amount of the memory404to different parallel threads, and cause different instances of the software application102to execute in the different parallel threads. The computing device402can also have processor(s)408, communication interfaces410, displays412, output devices414, input devices416, and/or a drive unit418including a machine readable medium420. In various examples, the processor(s)408can be a central processing unit (CPU), a graphics processing unit (GPU), both a CPU and a GPU, or any other type of processing unit. Each of the one or more processor(s)408may have numerous arithmetic logic units (ALUs) that perform arithmetic and logical operations, as well as one or more control units (CUs) that extract instructions and stored content from processor cache memory, and then executes these instructions by calling on the ALUs, as necessary, during program execution. The processor(s)408may also be responsible for executing computer applications stored in the memory404, which can be associated with common types of volatile (RAM) and/or nonvolatile (ROM) memory. The communication interfaces410can include transceivers, modems, interfaces, antennas, telephone connections, and/or other components that can transmit and/or receive data over networks, telephone lines, or other connections. The display412can be a liquid crystal display or any other type of display commonly used in computing devices. For example, a display412may be a touch-sensitive display screen, and can then also act as an input device or keypad, such as for providing a soft-key keyboard, navigation buttons, or any other type of input. The output devices414can include any sort of output devices known in the art, such as a display412, speakers, a vibrating mechanism, and/or a tactile feedback mechanism. Output devices414can also include ports for one or more peripheral devices, such as headphones, peripheral speakers, and/or a peripheral display. The input devices416can include any sort of input devices known in the art. For example, input devices416can include a microphone, a keyboard/keypad, and/or a touch-sensitive display, such as the touch-sensitive display screen described above. A keyboard/keypad can be a push button numeric dialing pad, a multi-key keyboard, or one or more other types of keys or buttons, and can also include a joystick-like controller, designated navigation buttons, or any other type of input mechanism. The machine readable medium420can store one or more sets of instructions, such as software or firmware, that embodies any one or more of the methodologies or functions described herein. The instructions can also reside, completely or at least partially, within the memory404, processor(s)408, and/or communication interface(s)410during execution thereof by the computing device402. 
The memory404and the processor(s)408also can constitute machine readable media420. The computing device402can execute testing of a software application more quickly and/or with fewer computing resources than other systems. For example, by executing the test manager106to distribute test cases108among different test sets110that can be executed in parallel, the test cases108can be executed more quickly than the test cases108could be executed in sequence. As a non-limiting example, although the computing device402might take up to two hours to execute a full set of test cases in sequence, executing different test sets110simultaneously in parallel on the computing device402may allow the full set of test cases108to complete in 30 minutes or less. Developers can thus be notified whether their code changes pass the full set of test cases108more quickly, and allow code changes that have passed the full set of test case108to be merged into a main code branch more quickly. Similarly, executing test cases108in parallel can also reduce overall usage of computing resources on the computing device402when multiple new builds of a software application are tested. As a non-limiting example, although the computing device402might take up to two hours to execute a full set of test cases in sequence, and thus take up to 7200 minutes to test sixty different builds in a day, reducing the testing for each build down to 30 minutes or less by executing different test sets110simultaneously in parallel may allow the computing device402to execute the full set of test cases108for all sixty builds in only 1800 minutes. Moreover, because the database104is a record-locking database that allows different records112to be accessed simultaneously by different test cases108that are executing in parallel, database errors that could prolong testing that may otherwise occur when executing test cases108in parallel against other types of databases can be reduced and/or eliminated. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example embodiments.
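By way of a non-limiting illustration only, the following Python sketch shows one way the parallel test-set execution described above could be expressed. All names in the sketch are hypothetical, and the round-robin split and thread-based parallelism are assumptions; the test manager106and process300are not limited to this form.

from concurrent.futures import ThreadPoolExecutor

def split_into_test_sets(test_cases, num_sets):
    # Round-robin the test cases into num_sets roughly equal test sets.
    test_sets = [[] for _ in range(num_sets)]
    for index, test_case in enumerate(test_cases):
        test_sets[index % num_sets].append(test_case)
    return test_sets

def run_test_set(test_set):
    # Run one test set sequentially; each test case returns True when it passes.
    return [(test_case.__name__, test_case()) for test_case in test_set]

def run_all_in_parallel(test_cases, num_sets=4):
    # Execute the test sets in parallel threads and aggregate one report.
    test_sets = split_into_test_sets(test_cases, num_sets)
    with ThreadPoolExecutor(max_workers=num_sets) as pool:
        partial_reports = pool.map(run_test_set, test_sets)
    results = [result for report in partial_reports for result in report]
    return {"passed_all": all(passed for _, passed in results), "results": results}

# Example usage with trivial stand-in test cases.
def test_create_record(): return True
def test_read_record(): return True

if __name__ == "__main__":
    report = run_all_in_parallel([test_create_record, test_read_record], num_sets=2)
    print(report["passed_all"])  # True when every distributed test case passed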
57,515
11860773
DETAILED DESCRIPTION Systems, apparatuses, and methods related to monitoring access statistics are described. A monitoring component can be included on a hybrid memory system and can include circuitry to monitor an amount of times particular pages of memory are accessed in a particular period of time. The memory device on which the monitoring component is implemented can receive an allocation of pages of memory and the monitoring component can track a number of times the allocated pages of memory were accessed over a particular period of time. The allocated pages of memory can be a portion of the total pages of memory for an application executed on a memory device. The access statistics for other pages of memory for the program can then be determined based on the access statistics of the allocated pages of memory for the program. Hybrid memory systems can include multiple (e.g., different) types of memory devices. Some memory devices can include a controller capable of maintaining (e.g., via a monitoring component) access statistics of the pages of memory stored on the memory device. However, memory devices that include such a controller may have a higher latency as compared to memory devices that do not have a controller or that have a less complex controller (e.g., control circuitry incapable of tracking access statistics). A hybrid memory system can comprise different types of memory devices. The memory devices can be memory modules such as a DRAM DIMM or NVDIMM that may not have processing or monitoring capability (e.g., via an on-die memory controller) in order to reduce latency. Memory devices can also include other types of modules or memory sub-systems that can include a memory controller such as an SSD coupled to host via NVMe bus, for example, or such as a CXL device (i.e., a memory device coupled to the host via CXL bus) that may employ different memory technologies (e.g., DRAM, FeRAM, etc.) To monitor the access statistics of the pages of memory within and among various memory devices, a host (e.g., host CPU) can maintain only limited page access statistics to identify heavily used pages of memory. Such access statistics can be useful, for instance, for an operating system that employs page scheduling to allocate virtual pages of memory to various memory devices. Various page scheduling schemes exist for determining how to allocate pages of memory in order to achieve desired system performance by providing increased speed of application execution, for example. As an example, a page scheduling algorithm can use access statistics to predict the demand of particular pages and can move the most heavily accessed pages to faster memory when/if available in order to maximize performance (e.g., execution speed). Some memory devices include a controller having sufficient processing capability to monitor and/or maintain access statistics of the memory device's pages. Such statistics can include read and/or write access counts, which can be at a page and/or subpage granularity, among other access statistics. However, many memory devices do not include a controller capable of monitoring and/or maintaining detailed access statistics. For such memory devices (e.g., memory devices not capable of monitoring detailed access statistics) it would still be useful for a page scheduler to be able to use more detailed access statistics (e.g., access statistics that are more detailed than those maintained by a host CPU). 
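By way of a non-limiting illustration only, the following Python sketch shows one way a page scheduler could use per-page access counts to promote the most heavily accessed pages to a faster memory device, as described above. The function names, device labels, and capacity model are hypothetical assumptions rather than a required implementation.

def schedule_pages(access_counts, fast_device_capacity):
    # access_counts maps a page identifier to its observed access count; the
    # most heavily accessed pages are placed on the faster device until its
    # capacity (in pages) is exhausted.
    hottest_first = sorted(access_counts, key=access_counts.get, reverse=True)
    placement = {}
    for rank, page in enumerate(hottest_first):
        placement[page] = "fast_device" if rank < fast_device_capacity else "slow_device"
    return placement

# Example usage: four pages, with room for two of them on the faster device.
print(schedule_pages({"p0": 500, "p1": 2, "p2": 75, "p3": 9}, fast_device_capacity=2))
# {'p0': 'fast_device', 'p2': 'fast_device', 'p3': 'slow_device', 'p1': 'slow_device'}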
In some approaches, every page of memory corresponding to an application would be allocated to a “smart” memory device that includes a monitoring component so that every page of memory for the application is monitored by a monitoring component. This can lead to a decrease in performance of the computing system because monitoring every page of memory for an application during execution of the application can increase the latency associated with application execution. This approach of monitoring access statistics for an application increases the time it takes to execute a monitored application and therefore decreases the performance of the computing system implementing this approach. Various embodiments address the above deficiencies by employing a page sampling method in which virtual pages of memory corresponding to applications are allocated (e.g., remapped) to physical addresses (e.g., physical pages) on a memory device having a monitoring component such as those described herein. In this manner, access statistics corresponding to memory pages that would likely be mapped to “faster” memory devices that may be incapable of monitoring access statistics can still be obtained, in a sampled manner, by a “slower” memory device that is capable of detailed access statistic monitoring. Although such page sampling may reduce the speed of application execution, the otherwise unavailable access statistics can be accumulated and used by the page scheduler for improving system performance. In the following detailed description of the present disclosure, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration how one or more embodiments of the disclosure may be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the embodiments of this disclosure, and it is to be understood that other embodiments may be utilized and that process, electrical, and structural changes may be made without departing from the scope of the present disclosure. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” can include both singular and plural referents, unless the context clearly dictates otherwise. In addition, “a number of,” “at least one,” and “one or more” (e.g., a number of memory banks) can refer to one or more memory banks, whereas a “plurality of” is intended to refer to more than one of such things. Furthermore, the words “can” and “may” are used throughout this application in a permissive sense (i.e., having the potential to, being able to), not in a mandatory sense (i.e., must). The term “include,” and derivations thereof, means “including, but not limited to.” The terms “coupled” and “coupling” mean to be directly or indirectly connected physically or for access to and movement (transmission) of commands and/or data, as appropriate to the context. The terms “data” and “data values” are used interchangeably herein and can have the same meaning, as appropriate to the context. FIG.1illustrates an example computing system that includes a host and memory devices in accordance with some embodiments of the present disclosure. The computing system100can include a host102that includes a processor103, an operating system105, and a page scheduler111. 
The computing system100can also include one or more memory devices106-1,106-2,106-3(individually or collectively referred to as memory devices106) coupled to the host102via interfaces107-1,107-2, . . . ,107-N (individually or collectively known as interfaces107). The computing system100can be a computing device such as a desktop computer, laptop computer, server, network server, mobile device, a vehicle (e.g., airplane, drone, train, automobile, or other conveyance), Internet of Things (IoT) enabled device, embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or such computing device that includes memory and a processing device. The computing system100can include a host system102that is coupled to one or more memory devices106. In some embodiments, the host system102is coupled to different types of memory devices106. As used herein, “coupled to” or “coupled with” generally refers to a connection between components, which can be an indirect communicative connection or direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, and the like. The host102can include an operating system105. As used herein, the term “operating system” refers to system software that manages computer hardware, software resources, and provides common services for computer programs. For hardware functions such as input and output and memory allocation, the operating system105can act as an intermediary between programs and the computer hardware. Applications can make use of the operating system105by making requests for services through a defined application interface. The operating system can include a page scheduler111. The page scheduler111can execute a page scheduling operation to allocate pages of memory to a memory device106based on the access statistics113of the pages of memory. The host102can be coupled to memory devices106via a physical host interface (e.g., interfaces107-1,107-2, . . . ,107-N). Examples of a physical host interface include, but are not limited to, a CXL interface, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), Small Computer System Interface (SCSI), a double data rate (DDR) memory bus, a dual in-line memory module (DIMM) interface (e.g., DIMM socket interface that supports Double Data Rate (DDR)), Open NAND Flash Interface (ONFI), Double Data Rate (DDR), Low Power Double Data Rate (LPDDR), Gen-Z, Cache Coherent Interconnect for Accelerators (CCIX), Open Coherent Accelerator Processor Interface (CAPI), or any other interface. The physical host interface can be used to transmit data between the host system102and the memory devices106. In general, the host system102can access multiple memory devices106via a same communication connection, multiple separate communication connections, and/or a combination of communication connections. In some embodiments, the memory device can be a Compute Express Link (CXL) compliant memory device (e.g., the memory device can include a PCIe/CXL interface). CXL is a high-speed central processing unit (CPU)-to-device and CPU-to-memory interconnect designed to accelerate next-generation data center performance. 
CXL technology maintains memory coherency between the CPU memory space and memory on attached devices, which allows resource sharing for higher performance, reduced software stack complexity, and lower overall system cost. CXL is designed to be an industry open standard interface for high-speed communications, as accelerators are increasingly used to complement CPUs in support of emerging applications such as artificial intelligence and machine learning. CXL technology is built on the peripheral component interconnect express (PCIe) infrastructure, leveraging PCIe physical and electrical interfaces to provide an advanced protocol in areas such as input/output (I/O) protocol, memory protocol (e.g., initially allowing a host to share memory with an accelerator), and coherency interface. The memory devices106can include any combination of the different types of non-volatile memory devices (e.g., memory devices106) and/or volatile memory devices. The volatile memory devices can be, but are not limited to, random access memory (RAM), such as dynamic random-access memory (DRAM) and synchronous dynamic random access memory (SDRAM). Some examples of non-volatile memory devices (e.g., memory devices106) include negative-and (NAND) type flash memory and write-in-place memory, such as a three-dimensional cross-point (“3D cross-point”) memory device, which is a cross-point array of non-volatile memory cells. NAND type flash memory includes, for example, two-dimensional NAND (2D NAND) and three-dimensional NAND (3D NAND). Although non-volatile memory components such as three-dimensional cross-point arrays of non-volatile memory cells and NAND type memory (e.g., 2D NAND, 3D NAND) are described, the memory devices106can be based on various other types of non-volatile memory or storage devices, such as solid state drives (SSD), read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide based memories, ferroelectric transistor random-access memory (FeTRAM), ferroelectric random access memory (FeRAM), magneto random access memory (MRAM), Spin Transfer Torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), negative-or (NOR) flash memory, and electrically erasable programmable read-only memory (EEPROM). The host102can be configured to remap an address for at least one page of memory for an application to a first memory device106-1coupled to the host102from a second memory device106-2coupled to the host102. In some embodiments, the first memory device106-1can include a memory device controller108and a monitoring component110. In other embodiments, the memory device controller108and the monitoring component110can be on the second memory device106-2and/or the third memory device106-3instead of the first memory device106-1. In some embodiments, the memory device controller108can be configured to store the access statistics113of the at least one page of memory corresponding to the application in the monitoring component110of the first memory device106-1. The memory device controller108is configured to save the access statistics113of the at least one page of memory to a table of statistics stored in the monitoring component110of the controller (e.g., SRAM in the memory device controller108) or in memory of the memory device106-1. In some embodiments, the table of statistics can be accessible (e.g., by host102) through the memory device controller108. 
The table of statistics can be stored entirely on a controller chip, stored in a reserve portion of the underlying memory, and/or cached by the memory device controller108. The host102can extrapolate access statistics113for other pages of memory corresponding to the application executed on the second memory device106-2and/or third memory device106-3. If one page of memory out of a group of related (in terms of expected memory access pattern, e.g. from a single application memory allocation call) pages of memory is allocated to the first memory device106-1while remaining pages remain on the second memory device106-2or third memory device106-3, the access statistics113of the other pages of memory in the group can be extrapolated using the access statistics113of the one page of memory because all of the pages of memory in the group are likely to have similar access statistics113. In some embodiments, pages of memory allocated to the first memory device106-1are picked uniformly because an application may allocate its data to contiguous pages of memory. As used herein, the term “uniformly” refers to picking pages of memory to allocate to the first memory device such that there is an equal number of pages of memory between each allocated page of memory. This allows the access statistics113of the pages of memory contiguous to each allocated page of memory to be extrapolated since the contiguous pages of memory may have similar access statistics113to the allocated pages of memory. For example, if every fourth page of memory for an application is allocated to the first memory device106-1, the access statistics113of the three intermediate pages of memory for the application can be extrapolated from the access statistics113monitored by the first memory device106-1. In some embodiments, the host102can allocate the at least one page of memory corresponding to the application and the other pages of memory corresponding to the application to the first memory device106-1or the second memory device106-2based on the access statistics113of the at least one page of memory corresponding to the application and the other pages of memory corresponding to the application. The pages of memory can also be allocated to the third memory device106-3. The pages of memory can be allocated based on a page scheduling policy executed by the host102. The monitoring component110is configured to monitor access statistics113of the at least one page of memory that was mapped to the first memory device106-1. As used herein, the term “access statistics” refers to information about how often certain pages of memory are accessed and when certain pages of memory are accessed. In some embodiments, the access statistics113of the at least one page of memory for the application can include a number of times the at least one page of memory was accessed, which of the pages of memory were least recently used, an order in which the pages of memory were accessed, which cache lines in the memory were accessed, or a combination thereof. In some embodiments, the at least one page of memory for the application and other pages of memory for the application can be virtual memory. If the pages of memory for the application are virtual memory, the operating system105can maintain a map of virtual addresses to physical addresses (e.g., page table109). We define a sampling interval as a period of time for which a page remains allocated in memory device106-1. 
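By way of a non-limiting illustration only, the following Python sketch shows one way the uniform sampling and extrapolation described above could be expressed, with one page out of every group of four contiguous pages allocated to the monitored device and the intermediate pages inheriting the sampled statistics. The names and the stride of four are hypothetical assumptions.

def pick_sampled_pages(page_numbers, stride=4):
    # Uniformly pick one page out of every `stride` contiguous pages.
    return [page for index, page in enumerate(page_numbers) if index % stride == 0]

def extrapolate_statistics(page_numbers, sampled_counts, stride=4):
    # Assign each unsampled page the count observed for the sampled page of its group.
    extrapolated = {}
    for index, page in enumerate(page_numbers):
        sampled_page = page_numbers[(index // stride) * stride]
        extrapolated[page] = sampled_counts[sampled_page]
    return extrapolated

# Example usage: eight contiguous pages; only pages 0 and 4 are monitored.
pages = list(range(8))
sampled = pick_sampled_pages(pages, stride=4)            # [0, 4]
monitored_counts = {0: 120, 4: 3}                        # reported by the monitoring component
print(extrapolate_statistics(pages, monitored_counts))   # pages 1-3 inherit 120, pages 5-7 inherit 3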
After each sampling interval, the OS105can optionally move pages between memory types/devices to monitor different pages within a group. This helps to increase the accuracy of extrapolated page statistics. In some embodiments, the at least one page of memory for the application can be chosen sequentially over multiple sampling intervals (i.e., rotating through all pages in a data structure). In other embodiments, the at least one page of memory for the application can be chosen at random. In embodiments where the at least one page of memory is chosen sequentially over multiple sampling intervals, the at least one page of memory can be one page of memory out of every group of, for example, four contiguous pages of memory for an application. In this embodiment, every fourth page of memory for the application can be allocated from the second memory device106-2and/or the third memory device106-3to the first memory device106-1, and the monitoring component110can monitor the access statistics113of the at least one page of memory that was allocated to memory device106-1. Data structures in program memory can have a consistent access pattern throughout the entire structure. Since accesses across the pages of memory for a program are consistent, access statistics113for the other pages of memory for the application can be extrapolated based on the access statistics113of the at least one page of memory for the application. In some embodiments, the operating system105can be configured to execute a page scheduling policy to map the at least one page of memory for the application and the other pages of memory for the application to either the first memory device106-1or the second memory device106-2and/or third memory device106-3based on the access statistics113of the at least one page of memory for the application and the access statistics113for the other pages of memory for the application. The monitored access statistics can be used to determine, for example, which pages of memory have been accessed more frequently than the other pages of memory. To improve the performance of the computing device and increase the speed at which the computing system executes applications, the operating system105can execute a page scheduling policy to allocate the most frequently used pages of memory to faster memory devices106and allocate the least frequently used pages of memory to slower memory devices106. The faster memory devices106may be memory devices106that do not include a monitoring component (e.g., memory device106-2and memory device106-3) and the slower memory device106can be a memory device that includes a monitoring component (e.g., memory device106-1). In some embodiments, the first memory device106-1can be as fast as or faster than the second memory device106-2and/or the third memory device106-3. The speed of a memory device106can be determined by, at least, the capacity of the memory device106, the power consumption of the memory device106, and the endurance of the memory device106. FIG.2Aillustrates an example of page table mappings to a number of memory devices in accordance with various embodiments of the present disclosure. The page table209can include page table entries219-1,219-2,219-3, . . . ,219-N (individually or collectively referred to as page table entries219) and memory devices206-1and206-2. The memory devices206-1and206-2can be analogous to respective memory devices106-1and106-2shown inFIG.1. The page table209structure is not limited to the one illustrated. 
For example, a multi-level page table may be used, as in modern operating systems. Each page table entry219can include a virtual address (VA) and a corresponding physical address (PA). As shown inFIG.2A, in this example, each page table entry is allocated to the second memory device206-2. This indicates that, inFIG.2A, none of the pages of memory in the page table entries219are being monitored by the monitoring component210on the first memory device206-1. In some embodiments, every page table entry219can be allocated to the second memory device206-2because the page table entries219have not yet been allocated to the first memory device206-1so that the first memory device206-1can monitor the access statistics of the page table entries219. In other embodiments, every page table entry219has been allocated to the second memory device206-2because the corresponding pages were previously allocated to the first memory device206-1and were then allocated to the second memory device206-2because the page table entries219include more frequently accessed pages of memory. FIG.2Billustrates an example of page table mappings to a number of memory devices in accordance with various embodiments of the present disclosure. Similar toFIG.2A,FIG.2Bincludes a page table209that includes page table entries219that are allocated to memory devices206-1and206-2. In the embodiment shown inFIG.2B, the first page table entry219-1and the third page table entry219-3are allocated to the first memory device206-1. The first page table entry219-1includes a first virtual address (VA1) and corresponding physical address (PA1) and the third page table entry219-3includes a third virtual address (VA3) and corresponding physical address (PA3). By being allocated, by the page scheduling policy, to the first memory device206-1, the pages of data in the first page table entry219-1and third page table entry219-3can be monitored by the monitoring component210. The page scheduling policy can use the access statistics of the first page table entry219-1and the third page table entry219-3to allocate pages logically grouped with the first page table entry219-1and/or the third page table entry219-3to either the first memory device206-1or the second memory device206-2. In some embodiments,FIG.2Billustrates an embodiment in which the monitoring component210has already monitored the access statistics of the page table entries219and allocated the pages of memory in the page table entries219to either the first memory device206-1or the second memory device206-2. In these embodiments, the pages of memory in the first page table entry219-1and the pages of memory of the third page table entry219-3may have been allocated to the first memory device206-1because the pages of data in the first page table entry219-1and the third page table entry219-3have been determined to be less frequently accessed pages of data. Therefore, the pages of data in the first page table entry219-1and the third page table entry219-3may have been allocated to the first memory device206-1that has a higher latency than the second memory device206-2. Further, in these embodiments, access statistics of the pages of data in the second page table entry219-2and the Nth page table entry219-N may have been monitored by the monitoring component210and may have been determined to be more frequently accessed pages of data. 
Therefore, the pages of memory in the second page table entry219-2and the Nth page table entry219-N may have been allocated to the second memory device206-2that has a lower latency than the first memory device206-1. In some embodiments, the access statistics can include the least recently used pages of memory, an order in which the pages of memory were accessed, whether a cache line was accessed, and separate statistics for read applications and write applications performed on the pages of memory. When monitoring whether a cache line was accessed, a sub-page bit map that tracks which individual cache lines were accessed within a page of memory can be used. In some embodiments, a bit can be set each time a cache line is accessed to signify that the location was accessed. Due to monitoring the access statistics of pages of memory, a latency of the first memory device206-1may be greater than or equal to a latency of the second memory device206-2. Further, due to monitoring the access statistics of pages of memory, a bandwidth of the first memory device206-1can be less than or equal to a bandwidth of the second memory device. The first memory device206-1can have a higher latency and a lower bandwidth because the monitoring component210monitors the access statistics of the pages of memory allocated to the first memory device206-1. Monitoring the access statistics of the pages of memory allocated to the first memory device206-1consumes time and resources; therefore, the latency of the first memory device206-1can be greater than or equal to the latency of the second memory device206-2, and the bandwidth of the first memory device206-1can be less than or equal to the bandwidth of the second memory device, which does not include a monitoring component210. In some embodiments, the first memory device206-1can also be configured to receive at least one page of memory corresponding to an application executed on a third memory device (e.g., memory device106-3inFIG.1). Similar to the second memory device206-2, the third memory device may not include a memory device controller or monitoring component. Therefore, the latency of the first memory device206-1can be greater than or equal to a latency of the third memory device. Further, the bandwidth of the first memory device206-1can be less than or equal to a bandwidth of the third memory device. FIG.3illustrates an example memory device306that includes a monitoring component310in accordance with some embodiments of the present disclosure. The monitoring component310can include counters312-1,312-2, . . . ,312-N (individually or collectively referred to as counters312). The memory device306can also include a memory array314that includes pages of memory316-1,316-2, . . . ,316-N (individually or collectively referred to as pages of memory316). The counters312can include hardware that can store one or more values (e.g., logical values, numerical values, etc.). For example, the counters312can be a cache (e.g., an SRAM cache), register/registers, latches, or the like. The values written to, or stored by, the counters312can correspond to access statistics of the pages of memory that are collected by the monitoring component310. In some embodiments, the counters312can be stored in the memory array314. As shown inFIG.3, the pages of memory316are coupled to respective counters312within the monitoring component310. In some embodiments, the monitoring component310can monitor the pages of memory316on a page-by-page basis. 
That is, the monitoring component310can monitor each (or at least some) of the pages of memory316individually to determine access statistics associated with the pages of memory316during execution of the application on the pages of memory. As stated earlier, the access statistics can include, but are not limited to, the least recently used pages of memory, an order in which the pages of memory are accessed, whether cache lines are accessed, and a number of reads and writes executed on the pages of memory. The counters312can be incremented in response to a determination that one or more of the above enumerated access statistics, among others, has been detected by the monitoring component310. The monitoring component310can analyze information stored by the counters312to determine the access statistics of the different pages of memory316on a page-by-page basis. In some embodiments, the count of the counters312can be reset when the at least one page of memory for the application is removed from the first memory device. Further, the count of the counters312can be reset after the host reads the access statistics of the at least one page of memory for the application. This can result in a reduced decrease in performance of the computing system while the monitoring component310is monitoring access statistics of pages of memory relative to previous approaches in which the monitoring component310monitors the access statistics of every page of an application. In some embodiments, the memory device306can be configured to receive at least one page of memory316corresponding to an application page that was previously allocated on a different memory device (e.g., memory device106-2inFIG.1). As shown inFIG.1the monitoring component310can be embedded in the memory device controller (e.g., memory device controller108inFIG.1). The memory device306can receive at least one page of memory316corresponding to an application whose other pages may reside on second memory device and/or third memory device (e.g., memory device106-3inFIG.1). Further, the monitoring component310can be configured to monitor access statistics of the at least one page of memory316. In some embodiments, the access statistics can include the least recently used pages of memory, an order in which the pages of memory were accessed, whether a cache line was accessed, and separate statistics for read applications and write applications performed on the pages of memory316. When monitoring whether a cache line was accessed, a sub-page bit map that tracks which individual cache lines were accessed within a page of memory316can be used. In some embodiments, a bit can set for every time a cache line is accessed to signify that the location was accessed. Due to monitoring the access statistics of pages of memory316, a latency of the memory device306may be greater than or equal to a latency of the second memory device. Further, due to monitoring the access statistics of pages of memory316, a bandwidth of the memory device306can be less than or equal to a bandwidth of the second memory device. The memory device306can have a higher latency and a lower bandwidth because the monitoring component310monitors the access statistics of the pages of memory316allocated to the memory device306. 
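By way of a non-limiting illustration only, the following Python sketch models a per-page counter together with a sub-page bit map of accessed cache lines, with the counts cleared when the host reads them, as described above. The class name, the assumed cache line granularity, and the reset policy shown are hypothetical assumptions rather than a required implementation of the counters312.

CACHE_LINES_PER_PAGE = 64  # assumes a 4 KiB page divided into 64-byte cache lines

class PageAccessCounter:
    def __init__(self):
        self.access_count = 0
        self.cache_line_bitmap = 0  # one bit per cache line within the page

    def record_access(self, cache_line_index):
        # Increment the per-page count and mark the touched cache line.
        self.access_count += 1
        self.cache_line_bitmap |= 1 << cache_line_index

    def read_and_reset(self):
        # Return the current statistics and clear them, e.g., after the host
        # reads them or when the page is removed from the monitored device.
        statistics = (self.access_count, self.cache_line_bitmap)
        self.access_count = 0
        self.cache_line_bitmap = 0
        return statistics

# Example usage for one monitored page.
counter = PageAccessCounter()
counter.record_access(cache_line_index=3)
counter.record_access(cache_line_index=3)
counter.record_access(cache_line_index=17)
print(counter.read_and_reset())  # (3, bitmap with bits 3 and 17 set)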
Monitoring the access statistics of the pages of memory316allocated to the memory device306consumes time and resources, therefore, the latency of the memory device306can be greater than or equal to the latency of the second memory device and the bandwidth of the memory device306can be less than or equal to the bandwidth of the second memory device, which does not include a monitoring component310. In some embodiments, the memory device306can also be configured to receive at least one page of memory316that may have previously been allocated on a third memory device (e.g., memory device106-3inFIG.1). Similar to the second memory device, the third memory device may not include a memory device controller or monitoring component. Therefore, the latency of the memory device306can be greater than or equal to a latency of the third memory device. Further, the bandwidth of the memory device306can be less than or equal to a bandwidth of the third memory device. In some embodiments, the memory device controller can be configured to store the access statistics of the at least one page of memory corresponding to the application in the memory device306. The memory device controller is configured to save the access statistics of the at least one page of memory to a table in the memory device306. In some embodiments, the table of statistics can be accessible through the memory device controller. The table of statistics can be stored entirely on a controller chip, stored in a reserve portion of the underlying memory, and/or cached by the memory device controller if it is not in the memory device306. FIG.4is a flow diagram corresponding to a method418for a monitoring component for monitoring access statistics in accordance with some embodiments of the present disclosure. The method418can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. At block420, the method418can include mapping, by a host, pages of memory for applications to a number of memory devices coupled to the host. In some embodiments, an address for the at least one page of memory for the application and the other pages of memory for the application can be permanently mapped to the first memory device. If the addresses for the pages of memory are permanently mapped to the first memory device, the mapped addresses can continue to be allocated to the first memory device after the access statistics for the mapped pages of memory have been monitored. In other embodiments, the address for the at least one page of memory for the application and the other pages of memory for the application can be temporarily mapped to the first memory device. 
If the addresses for the pages of memory are temporarily mapped to the first memory device, the mapped addresses for the pages of memory can be mapped to the second memory device and/or the third memory device after the access statistics of the pages of memory have been monitored. At block422, the method418can include monitoring, by a first memory device comprising a monitoring component, access statistics of pages of memory mapped to the first memory device. In some embodiments, the operating system can decide which pages of memory are monitored by the monitoring component and the interval of time over which the memory pages are monitored. For example, the operating system can decide how long each period of access statistics monitoring lasts and how long the period of time between successive periods of access statistics monitoring lasts. At block424, the method418can include mapping, by the host, a portion of pages of memory for an application to the first memory device in order to obtain access statistics corresponding to the portion of pages of memory upon execution of the application, wherein the portion of pages of memory for the application are mapped to the first memory device despite there being space available on the second memory device. In some embodiments, the portion of pages of memory for the application are mapped to the first memory device despite there being space available on the third memory device. The host can include a table of statistics to store access statistics of pages of memory. In some embodiments, the table of statistics can be accessible through the memory device controller. Further, the table of statistics can be stored entirely on a controller chip, stored in a reserve portion of the underlying memory, and/or cached by the memory device controller if it is not in that memory. At block426, the method418can include adjusting, by the host, mappings of the pages of memory for the application based on the obtained access statistics corresponding to the portion of pages. In some embodiments, at least one page of memory for the application and the other pages of memory of the application can be mapped to the first memory device when the at least one page of memory for the application is accessed less than a threshold number of times within a certain period of time. In some embodiments, the at least one page of memory for the application and the other pages of memory of the application can be mapped to the second memory device when the at least one page of memory for the application and the other pages of memory for the application are accessed more than a threshold number of times within a certain time period. FIG.5illustrates an example machine of a computer system500within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed. In some embodiments, the computer system500includes, is coupled to, or utilizes a memory sub-system or can be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to the monitoring component110ofFIG.1). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. 
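Returning to the mapping adjustment at block426, and by way of a non-limiting illustration only, the following Python sketch shows one way a threshold-based adjustment of page-group mappings could be expressed. The names, the device labels, and the single-threshold policy are hypothetical assumptions.

def adjust_mappings(group_access_counts, threshold):
    # group_access_counts maps a page-group identifier to the access count of
    # its sampled page; groups below the threshold stay on the slower monitored
    # device, while groups at or above it move to the faster device.
    new_mapping = {}
    for group_id, access_count in group_access_counts.items():
        if access_count < threshold:
            new_mapping[group_id] = "first_device"   # slower device with monitoring
        else:
            new_mapping[group_id] = "second_device"  # faster device without monitoring
    return new_mapping

# Example usage: two sampled groups and a threshold of 50 accesses per interval.
print(adjust_mappings({"group_a": 120, "group_b": 3}, threshold=50))
# {'group_a': 'second_device', 'group_b': 'first_device'}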
The machine can operate in the capacity of a server or a client machine in client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The example computer system500includes a processing device534, a main memory (e.g., memory device)506(e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory528(e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system532, which communicate with each other via a bus530. The processing device534represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. The processing device534can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device534is configured to execute instructions538for performing the operations and steps discussed herein. The computer system500can further include a network interface device536to communicate over the network540. The data storage system532can include a machine-readable storage medium542(also known as a computer-readable medium) on which is stored one or more sets of instructions538or software embodying any one or more of the methodologies or functions described herein. The instructions538can also reside, completely or at least partially, within the main memory506and/or within the processing device534during execution thereof by the computer system500, the main memory506and the processing device534also constituting machine-readable storage media. In one embodiment, the instructions534include instructions to implement functionality corresponding to a monitoring component510(e.g., the monitoring component110ofFIG.1) for monitoring access statistics of pages of data. While the machine-readable storage medium542is shown in an example embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. 
The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common access, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computing system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computing system's registers and memories into other data similarly represented as physical quantities within the computing system memories or registers or other such information storage systems. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein. The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). 
In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc. In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
45,353
11860774
DETAILED DESCRIPTION OF THE EMBODIMENTS Hereinafter, exemplary embodiments of the inventive concept will be described with reference to the accompanying drawings. However, this inventive concept should not be construed as limited to the embodiments set forth herein. Below, a NAND flash memory device is used as an example for illustrating characteristics and functions of the inventive concept. However, other memory devices can be used in accordance with exemplary embodiments of the inventive concept. For example, phase-change random access memory (PRAM), magnetoresistive random access memory (MRAM), resistive random access memory (ReRAM), ferroelectric random access memory (FRAM), and/or NOR flash memories may be used. FIG.1is a block diagram illustrating a layer structure of software for driving a nonvolatile memory device according to an exemplary embodiment of the inventive concept. In a software layer of a user device (e.g., mobile device) according to an exemplary embodiment of the inventive concept, a write mode WMi or a read mode RMj is provided from a host side to a memory system side. Referring toFIG.1, application software10and a file system/device driver20are software driven at the host side. In particular, the application software10or software associated with a user interface or an operating system (OS) will hereinafter be referred to as a “platform”. A write mode or a read mode based on a situation of the host may be decided by upper layer software of the host. In an exemplary embodiment of the inventive concept, the write mode WMi or the read mode RMj for the nonvolatile memory device40may be decided by lower layer software of the host such as fixed file system device driver20. The write mode WMi or the read mode RMj corresponds to a program mode considering write speed or integrity of data to be written into the nonvolatile memory device40or retention characteristics of the data. For example, a write bias for providing improved integrity of data when the data is written into the nonvolatile memory device40may be referred to as one write mode. Alternatively, a write bias for improving a data program speed may be referred to as another write mode. An example of the write mode WMi will be explained in detail later with reference to accompanying drawings. Additionally, a read bias for providing improved integrity or read speed of data when data is read from the nonvolatile memory device40may be referred to as one read mode. A flash translation layer30provides interfacing to conceal an erase operation of the nonvolatile memory device40between the host and the nonvolatile memory device40. Aspects of the nonvolatile memory device40, such as erase-before-write and a mismatch between erase and write units, may be complemented by the flash translation layer30. In addition, the flash translation layer30maps a logical address LA generated by the host onto a physical address PA of the nonvolatile memory device40during a write operation of the nonvolatile memory device40. The flash translation layer30may establish an address mapping table to map a physical address PA of the nonvolatile memory device40to a corresponding logical address LA. Various address mapping methods, which depend on mapping units, may be used by the flash translation layer30. Exemplary address mapping methods include a page mapping method, a block mapping method, a hybrid mapping method and the like. 
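By way of a non-limiting illustration only, the following Python sketch shows a simplified page mapping method of the kind listed above, in which the flash translation layer maps a logical page onto a physical page on a write and resolves the mapping on a read. The class and variable names are hypothetical, and details such as garbage collection of invalidated pages are omitted.

class PageMappingFTL:
    def __init__(self, num_physical_pages):
        self.mapping_table = {}                      # logical page -> physical page
        self.free_pages = list(range(num_physical_pages))

    def write(self, logical_page):
        # Map (or remap) the logical page onto a free physical page; in a full
        # implementation the old physical page, if any, would be invalidated
        # and reclaimed later by garbage collection.
        physical_page = self.free_pages.pop(0)
        self.mapping_table[logical_page] = physical_page
        return physical_page

    def read(self, logical_page):
        # Resolve the logical address to the physical address recorded at write time.
        return self.mapping_table[logical_page]

# Example usage.
ftl = PageMappingFTL(num_physical_pages=1024)
physical_address = ftl.write(logical_page=7)
assert ftl.read(logical_page=7) == physical_address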
The flash translation layer30may control the nonvolatile memory device40according to various erase and write modes based on a request of the host or its decision algorithm. For example, the flash translation layer30may control the nonvolatile memory device40to erase a selected memory block according to one of various erase conditions. In other words, the flash translation layer30may vary the level of an erase voltage applied during an erase operation of the nonvolatile memory device40. A conventional erase operation is performed under the same condition with respect to all memory blocks. According to a conventional technique for extending lifetime, the nonvolatile memory device40is accessed by applying wear leveling based on an erase count. The flash translation layer30applies effective wearing EW of a relatively low value to a memory block managed by a relatively low erase voltage. Thus, wear leveling may not be performed based on a uniformly applied erase count, but may instead be performed based on the degree of substantial exposure to stress. The flash translation layer30may manage a write mode and a read mode according to an erase mode applied to respective memory blocks. In other words, the flash translation layer30may generate a command CMD or a mode setup signal to set up a bias for a selected memory block according to the write mode WMi or the read mode RMj provided from the host. The nonvolatile memory device40may vary various driving conditions according to an access mode decided by the flash translation layer30. A read voltage, an erase voltage, a program voltage, and the like may be selectively generated such that the nonvolatile memory device40performs the access mode selected by the flash translation layer30. For this reason, the nonvolatile memory device40may further include an interface to receive a separate command or control signal for the mode setup. The nonvolatile memory device40may include elements to adjust a direct current (DC) voltage in response to the received mode setup command or control signal. The user device's read and write access performance should be very high to satisfy the performance required by the many applications that are executed in parallel. However, maximum operation speed is not always needed. If the host can provide speed information, such as the write mode WMi or the read mode RMj, to indicate to the storage device how much performance is needed, the storage device can utilize the speed information to reduce power consumption, to increase the device lifetime by using a lower erase start voltage for erase operations and a lower ISPP voltage for program operations, or to increase system throughput by preparing free blocks or performing read reclaim in advance, and so on. In other words, the host may indicate to the storage device that a slower access mode is acceptable by setting the write mode WMi or the read mode RMj. Depending on those set values, the storage device can operate in a lower access speed mode, and in this slower mode the device may execute its internal operations mentioned above. When new access requests are received from the host during the slower access mode, such requests can be delayed until those internal operations have been completed, depending on the storage device implementation. One useful situation for utilizing this feature could be when the display screen of a smart phone is off. When the owner of the smart phone sleeps during the night, requests to the storage device by background applications are usually not urgent. 
In such a case, the host may set a slower access mode by configuring the write mode WMi or the read mode RMj, and the storage device can operate internally in a more efficient way, for example, by relaxing the response time to the host and instead performing the device's internal operations in parallel to improve throughput, and so on. FIG.2is a block diagram of a user device according to an exemplary embodiment of the inventive concept. As illustrated, the user device includes a host100and a memory system200. The memory system may include a memory controller210and a nonvolatile memory device230. When a write request WR occurs, the host100may determine a write mode WMi for the write request WR. The host100includes a write mode manager150to decide the write mode WMi. The write mode manager150may decide a write mode WMi depending on an attribute of write-requested data, a kind of application software issuing a write request, a user input, and a state of the host100such as a queue size of a write buffer. The write mode manager150may be provided as hardware including any one of the control units of the host100. The write mode manager150may be provided as at least one software module incorporated in an application, an OS, a file system or a device driver. The host100may include, for example, a personal/handheld computer, a handheld electronic device such as a personal digital assistant (PDA), a portable multimedia player (PMP), or an MP3 player, a high definition television (HDTV), and the like. The memory controller210interfaces the host100with the nonvolatile memory device230. The memory controller210writes data provided from the host100into the nonvolatile memory device230in response to a write command of the host100. The memory controller210controls a read operation of the nonvolatile memory device230in response to a read command from the host100. The memory controller210may variously set up an operation bias for a selected memory area in response to the write mode WMi provided from the host100. The memory controller210may provide an access mode AMi for setting up an operation bias of the nonvolatile memory device230through a command or a control signal. The memory controller210may maintain erase mode information on respective memory blocks of the nonvolatile memory device230. The memory controller210decides the access mode AMi with reference to an erase mode EM of a memory block corresponding to a logical address LA when a write request WR and a write mode WMi are input from the host100. For example, the memory controller210may control a write mode WMi for a memory block using a relatively low erase voltage and a relatively high erase verify voltage at a low speed during an erase operation. In addition, the memory controller210may control a write mode WMi for a memory block using a relatively high erase voltage and a relatively low erase verify voltage at a high speed during an erase operation. In addition, the memory controller210may first erase a selected memory area (e.g., memory block, superblock, chip, etc.) for a write operation. The memory controller210may decide or adjust read and write modes according to the effective wearings EW of all memory blocks. The memory controller210may perform wear leveling or garbage collection based on cumulative effective wearing. Although selection of the access mode AMi for a selected memory block has been explained herein, the inventive concept is not limited to the above explanation. 
In other words, the memory controller210may select a physical address of a memory block, which may be accessed according to the access mode AMi, depending on its use. In the situation where data is written at a high speed, the memory controller210may map a physical address of an erase-state memory block, which may be programmed at a high speed, onto a logical address. The memory controller210may decide an access mode AMi including a write mode, a read mode, and an erase mode with reference to the number of free blocks, a size of write-requested data, a time interval between requests, and so on. These operations will be described in detail later. The nonvolatile memory device230is provided as a storage medium of a memory system. For example, the nonvolatile memory device230may include a NAND-type flash memory having a large storage capacity. Alternatively, the nonvolatile memory device230may include next-generation nonvolatile memories such as PRAM, MRAM, ReRAM, and FRAM or a NOR-type flash memory. In particular, the nonvolatile memory device230may adjust the level of a bias such as a DC voltage in response to a command or control signal corresponding to the access mode AMi provided from the memory controller210. In the above-described user device according to an exemplary embodiment of the inventive concept, a write mode WMi of write-requested data is decided according to the situation of the host100. The decided write mode WMi may be provided to the side of the memory system200simultaneously with a write request or through a separate command. The memory system200includes a nonvolatile memory device230where the level of an erase voltage and a threshold voltage in an erase state may be variously set up. The memory controller210may assign different effective wearings EW according to the level of an erase voltage applied to respective memory blocks and a stress received during an erase operation. The memory controller210may manage a memory with reference to cumulative effective wearing CEW for respective memory blocks. When a write mode is set up by the host100, a state of a user, an input/output system, or the user device may be applied more effectively to manage the memory. Thus, lifetime of the memory system200may be extended while minimizing performance degradation that the user experiences. FIG.3is a block diagram of a software structure of the host100inFIG.2, according to an exemplary embodiment of the inventive concept. Referring toFIG.3, software of the host100may include a platform110and a file system/device driver140. A write mode manager150may be included in the file system/device driver140as a function module. The platform110includes application software120and a user context monitor130. The platform110is a common name for an OS of a user device, application software120driven on the OS, and basic software for various services. According to an exemplary embodiment of the inventive concept, a first write mode set signal WMS1may be decided by the application software120that is currently driven on the platform110. The user context monitor130may monitor whether there is a user input of a user interface135or monitor a period of the user input. The user context monitor130may monitor an ON/OFF state of a liquid crystal display (LCD)136. Although an LCD is shown, other displays, such as an organic light emitting diode display, may be used with the inventive concept.
The user context monitor130may generate a second write mode set signal WMS2with reference to the state of the user interface135or the LCD136. The first write mode set signal WMS1and the second write mode set signal WMS2may be independently generated. These signals WMS1and WMS2are collected by the write mode manager150. The file system/device driver140may include the write mode manager150and a write queue monitor160. The write queue monitor160monitors a state of a write buffer166managed by an input/output scheduler165. In other words, the write queue monitor160may monitor a queue size of data accumulated on the write buffer166to generate a third write mode set signal WMS3. The third write mode set signal WMS3may be provided to the write mode manager150to act as a reference for mode decision. The write mode manager150transfers a write mode WMi to the memory system200with reference to the first to third write mode set signals WMS1˜WMS3. The write mode WMi may be decided by the respective first to third write mode set signals WMS1˜WMS3or through a combination of the first to third write mode set signals WMS1˜WMS3. A flash translation layer (FTL) of the memory system200accesses a selected memory area according to the provided write mode WMi. In other words, the FTL may write data into a selected memory block or write data into a selected memory block after the selected memory block is erased according to a specific erase mode. Heretofore, a software structure of the host100according to an exemplary embodiment of the inventive concept has been described. However, a position on a software layer of the write mode manager150is not limited to the example shown. FIG.4is a block diagram illustrating hardware elements of the memory controller210inFIG.2, according to an exemplary embodiment of the inventive concept. As illustrated, the memory controller210includes a processing unit211, a working memory212, a host interface213, an error correction unit (ECC)214, and a memory interface215. However, it will be understood that elements of the memory controller210are not limited to the above elements. For example, the memory controller210may further include a read only memory (ROM) configured to store code data required for initial booting. The processing unit211includes a central processing unit (CPU) or a microprocessor. The processing unit211controls the overall operation of the memory controller210. The processing unit211is configured to drive firmware for controlling the memory controller210. The firmware is loaded on the working memory212to be driven. Software (or firmware) and data are loaded on the working memory212to control the memory controller210. Stored software or data is driven or processed by the processing unit211. According to an exemplary embodiment of the inventive concept, an FTL (not shown inFIG.4) and a per-block mode table216may be loaded on the working memory212. Information on effective wearing EW, cumulative effective wearing CEW, a write mode WMi or the like for respective memory blocks may be stored and maintained in the per-block mode table216. The host interface213provides an interface between the host100and the memory controller210. The host100and the memory controller210may be connected through one of various interfaces. Alternatively, the host100and the memory controller210may be connected through a plurality of various interfaces. 
The interfaces include advanced technology attachment (ATA), serial-ATA (SATA), external SATA (e-SATA), small computer system interface (SCSI), serial attached SCSI (SAS), peripheral component interconnection (PCI), PCI express (PCI-E), universal serial bus (USB), IEEE 1394, and card interface, for example. The ECC214may correct an error of data corrupted by various causes. For example, the ECC214may perform an operation to detect and correct an error of data read from the nonvolatile memory device230. The memory interface215provides an interface between the memory controller210and the nonvolatile memory device230. For example, data processed by the processing unit211is stored in the nonvolatile memory device230through the memory interface215. As another example, data stored in the nonvolatile memory device230is provided to the processing unit211through the memory interface215. The memory interface215may perform a setup operation on the nonvolatile memory device230for executing a decided access mode AMi. A bias of the nonvolatile memory device230may be set up through a command or a control signal. FIG.5is a block diagram of a nonvolatile memory device230according to an exemplary embodiment of the inventive concept. As illustrated, the nonvolatile memory device230includes a cell array231, a decoder232, a page buffer233, a control logic234, and a voltage generator235. The cell array231is connected to the decoder232through wordlines WLs or selection lines SSL (e.g., string selection lines) and GSL (e.g., ground selection lines). The cell array231is connected to the page buffer233through bitlines BLs. The cell array231includes a plurality of memory blocks BLK1˜BLKz. Each of the memory blocks BLK1˜BLKz includes a plurality of NAND cell strings. An erase operation is performed in units of the memory blocks BLK1˜BLKz. During the erase operation, an erase voltage Vers_i (with i varying depending on a mode) generated by the voltage generator235is supplied to a selected memory block of the cell array231. After the erase voltage Vers_i is supplied, an erase verify voltage Vevf_i may be supplied to wordlines of the memory block. The erase voltage Vers_i may vary depending on an erase mode EMi. For example, a level of the erase voltage Vers_i may be lowered in an erase mode where a threshold voltage of memory cells corresponding to an erase state is relatively high. Alternatively, when the erase voltage Vers_i is provided in the form of an incremental step pulse, an erase start voltage may be relatively low. A level of the erase verify voltage Vevf_i may be decided depending on the erase mode EMi. The decoder232may select any one of the memory blocks BLK1˜BLKz of the cell array231in response to an address PA. The decoder232may provide a wordline voltage VWLcorresponding to an operation mode to a wordline of the selected memory block. For example, during a program operation, the decoder232transfers a program voltage Vpgm_i and a verify voltage Vvf_i to a selected wordline and transfers a pass voltage Vpass to an unselected wordline. The decoder232may provide a selection signal to the selection lines SSL and GSL to select a memory block, a sub-block or the like. During a read operation, a read voltage Vrd_i is supplied to a selected wordline of a memory block. During the read operation, a pass read voltage Vreadi may be supplied to unselected wordlines of the memory block. The page buffer233acts as a write driver or a sense amplifier according to an operation mode.
During a program operation, the page buffer233transfers a bitline voltage corresponding to data to be programmed to a bitline of the cell array231. During a read operation, the page buffer233senses data stored in a selected memory cell through a bitline. The page buffer233latches the sensed data and transfers the latched data to an external entity. The control logic234controls the page buffer233and the decoder232in response to an externally transmitted command CMD. In particular, the control logic234may control the page buffer233and the voltage generator235to perform an access operation on a selected memory block according to an externally provided access mode AMi. For example, the control logic234may control the voltage generator235to generate a program voltage and a verify voltage to be provided to a selected memory block according to a write mode WMi. The control logic234may control the voltage generator235to generate various sets of read and pass voltages Vrd_i and Vreadi according to a read mode. The voltage generator235generates various kinds of wordline voltages to be supplied to respective wordlines and a voltage to be supplied to a bulk (e.g., well region) where memory cells are formed, according to the control of the control logic234. The wordline voltages to be supplied to the respective wordlines include a program voltage Vpgm_i, a pass voltage Vpass, a read voltage Vrd_i, a pass read voltage Vreadi, and the like. The voltage generator235may generate selection line voltages VSSLand VGSLsupplied to the selection lines SSL and GSL during the read and program operations. In particular, the voltage generator235may generate an erase voltage Vers_i with various levels. The voltage generator235may adjust a start pulse level of the erase voltage Vers_i to be supplied to a bulk region of a selected memory block to various values according to the erase mode EMi. In addition, the voltage generator235may generate an erase verify voltage Vevf_i of a level corresponding to the erase voltage Vers_i. The voltage generator235may generate the overall DC voltage corresponding to various access modes AMi such as read, write, and erase modes. The nonvolatile memory device230according to an exemplary embodiment of the inventive concept may vary erase, write, and read biases for a selected memory block in response to an access mode AMi provided by the memory controller210. Thus, an erase voltage stress applied to a memory block of the nonvolatile memory device230may be minimized. As the erase voltage stress is alleviated, the lifetime of the nonvolatile memory device230may be extended. FIG.6is a graph showing effective wearing EW according to an exemplary embodiment of the inventive concept. InFIG.6, a level of effective wearing EW based on an erase voltage Vers_i is shown as a linear function. When a memory block is erased by a maximum erase voltage Vers_Max, a size of the effective wearing EW may be decided to be “1”. When the memory block is erased by a lowest erase voltage Vers_m, the effective wearing EW of the memory block may be set to “0.4”. Effective wearing EW of a memory block erased by the erase voltage Vers_1may be mapped to “0.9”. The setting of the effective wearing EW is merely exemplary. It will be understood that a corresponding relationship between the erase voltage Vers_i and corresponding effective wearing EW may be variously set using test values. 
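Purely as an illustrative sketch of the linear relationship ofFIG.6, the following C fragment maps an erase start voltage to an effective wearing value and accumulates cumulative effective wearing CEW. The numeric voltage values are placeholders chosen for the sketch and are not taken from the disclosure.

    /* Linear mapping of FIG. 6: the maximum erase voltage Vers_Max maps to an
     * effective wearing of 1.0 and the lowest erase voltage Vers_m maps to 0.4.
     * The voltage values below are placeholders, not device data. */
    #define VERS_MAX_MV  20000.0   /* assumed maximum erase start voltage (mV) */
    #define VERS_MIN_MV  14000.0   /* assumed lowest  erase start voltage (mV) */
    #define EW_AT_MAX        1.0
    #define EW_AT_MIN        0.4

    /* Effective wearing EW contributed by one erase performed with voltage vers_mv. */
    static double effective_wearing(double vers_mv)
    {
        double t = (vers_mv - VERS_MIN_MV) / (VERS_MAX_MV - VERS_MIN_MV);
        if (t < 0.0) t = 0.0;
        if (t > 1.0) t = 1.0;
        return EW_AT_MIN + t * (EW_AT_MAX - EW_AT_MIN);
    }

    /* Cumulative effective wearing CEW is the running sum of EW over erase operations. */
    static double accumulate_cew(double cew, double vers_mv)
    {
        return cew + effective_wearing(vers_mv);
    }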
In addition, a corresponding relationship between the erase voltage Vers_i and effective wearing EW may be set not in the form of a linear function but in the form of a parabolic function, an exponential function or a logarithmic function. The erase voltage Vers_i may be an erase start voltage applied to a bulk region of a memory block during each erase operation. However, it will be understood that the definition of the erase voltage Vers_i is not limited to the above. FIG.7illustrates an erase mode EMi according to an exemplary embodiment of the inventive concept. Referring toFIG.7, an erase mode EMi may be divided depending on a level of an erase voltage Vers_i or a level of an erase verify voltage Vevf_i. First, a default erase mode EM0will now be described. When there is no assignment to an erase mode, a selected memory block may be erased according to the default erase mode EM0. Alternatively, when a selected memory block is set to be erased according to the default erase mode EM0, the selected memory block may be erased based on an erase start voltage Vers_Max and an erase verify voltage Vevf_0. When a memory block is erased according to an incremental step pulse scheme, the erase start voltage Vers_Max may be supplied to a bulk region of a selected memory block. Then, the erase verify voltage Vevf_0may be supplied to wordlines of the selected memory block. When there are memory cells whose threshold voltage is made higher than the erase verify voltage Vevf_0by an erase verify operation, an erase operation based on an erase voltage Vers_Max+ΔV and the erase verify voltage Vevf_0follows. A further erase operation based on an erase voltage Vers_Max+2ΔV and the erase verify voltage Vevf_0may follow. When an erase voltage gradually increases in level and a threshold voltage of all memory cells is made lower than the erase verify voltage Vevf_0, the erasure is determined to be completed. A threshold voltage of memory cells of a memory block selected by an erase operation corresponding to the default erase mode EM0is shifted to a level corresponding to an erase state E0. As the default erase mode EM0is executed, a threshold voltage of memory cells in the erase state E0and program states P1, P2, and P3may be made lower than the erase verify voltage Vevf_0. Next, a first erase mode EM1will now be described. When a selected memory block is decided to be erased according to the first erase mode EM1, the selected memory block is erased based on an erase start voltage Vers_1and an erase verify voltage Vevf_1. An erase start voltage Vers_1may be supplied to a memory block selected for erasure. An erase verify voltage Vevf_1may be supplied to wordlines of the selected memory block. When there are memory cells whose threshold voltage is made higher than the erase verify voltage Vevf_1by an erase verify operation, an erase operation based on an erase voltage Vers_1+ΔV and the erase verify voltage Vevf_1follows. A further erase operation based on an erase voltage Vers_1+2ΔV and the erase verify voltage Vevf_1may follow. When an erase voltage gradually increases in level and a threshold voltage of all memory cells is made lower than the erase verify voltage Vevf_1, the erasure is determined to be completed. A threshold voltage of memory cells of a memory block selected by an erase operation corresponding to the first erase mode EM1is shifted to a level corresponding to an erase state E1.
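A minimal C sketch of this incremental step pulse erase sequence is given below for illustration only; the hook functions apply_erase_pulse and erase_verify, the step ΔV, and the loop bound are assumptions of the sketch rather than parameters of the disclosed device.

    #include <stdbool.h>

    /* Per-mode erase parameters as in FIG. 7: each erase mode EMi has its own erase
     * start voltage Vers_i and erase verify voltage Vevf_i (values are placeholders). */
    typedef struct {
        double vers_start_mv;    /* erase start voltage Vers_i  */
        double vevf_mv;          /* erase verify voltage Vevf_i */
    } erase_params_t;

    #define DELTA_V_MV  500.0    /* incremental step dV between erase pulses (assumed) */
    #define MAX_LOOPS   8        /* give-up bound for the erase loop (assumed)         */

    /* Assumed low-level hooks; a real device exposes these through commands. */
    void apply_erase_pulse(unsigned block, double vers_mv);
    bool erase_verify(unsigned block, double vevf_mv);       /* true = all cells below Vevf_i */

    /* Incremental step pulse erase: apply Vers_i, verify against Vevf_i, and raise the
     * pulse by dV until every cell is below the verify level or the loop bound is hit. */
    static bool erase_block(unsigned block, const erase_params_t *p)
    {
        double v = p->vers_start_mv;
        for (int i = 0; i < MAX_LOOPS; i++, v += DELTA_V_MV) {
            apply_erase_pulse(block, v);
            if (erase_verify(block, p->vevf_mv))
                return true;             /* erasure is determined to be completed */
        }
        return false;                    /* erase did not complete within the bound */
    }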
After the first erase mode EM1is executed, the threshold voltage of the memory cells in the erase state E1and program states P1, P2, and P3may be made lower than the erase verify voltage Vevf_1. When a second erase mode EM2is executed, a selected memory block is erased based on an erase start voltage Vers_2and an erase verify voltage Vevf_2. The erase start voltage Vers_2may be supplied to a selected memory block, and the erase verify voltage Vevf_2may be supplied to wordlines of the selected memory block. When the erasure is determined to be incomplete, a subsequent erase operation based on an erase voltage Vers_2+ΔV and an erase verify voltage Vevf_2follows. A further erase operation based on an erase voltage Vers_2+2ΔV and the erase verify voltage Vevf_2may follow. When an erase voltage gradually increases in level and a threshold voltage of all memory cells is made lower than the erase verify voltage Vevf_2, the erasure is determined to be completed. A threshold voltage of memory cells selected by an erase operation corresponding to the second erase mode EM2is shifted to a level corresponding to an erase state E2. After the second erase mode EM2is executed, the threshold voltages of the memory cells in the erase state E2and program states P1, P2, and P3may be made lower than the erase verify voltage Vevf_2. When a third erase mode EM3is executed, a selected memory block is erased based on an erase start voltage Vers_3and an erase verify voltage Vevf_3. The erase start voltage Vers_3may be supplied to a selected memory block, and the erase verify voltage Vevf_3may be supplied to wordlines of the selected memory block. When the erasure is determined to be incomplete, a subsequent erase operation based on an erase voltage Vers_3+ΔV and an erase verify voltage Vevf_3follows. A further erase operation based on an erase voltage Vers_3+2ΔV and the erase verify voltage Vevf_3may follow. When an erase voltage gradually increases in level and a threshold voltage of all memory cells is made lower than the erase verify voltage Vevf_3, the erasure is determined to be completed. A threshold voltage of memory cells selected by an erase operation corresponding to a third erase mode EM3is shifted to a level corresponding to an erase state E3. A threshold voltage of memory cells in the erase state E3and program states P1, P2, and P3may be made lower than the erase verify voltage Vevf_3after execution of the third erase mode EM3. Heretofore, voltage waveforms and threshold voltage distributions for erase operations of plural erase modes have been described. Although the erase modes include four modes, the number of the erase modes may be set to be greater or smaller than four. Any one of the default erase mode EM0to the third erase mode EM3may be selected on the basis of a size of the cumulative effective wearing CEW. In other words, the default erase mode EM0may be assigned to a memory block of cumulative effective wearing CEW where oxide deterioration of a memory cell is negligible, and the third erase mode EM3with relatively less stress may be assigned to a memory block of cumulative effective wearing CEW where oxide deterioration of a memory cell is significant. In addition, a threshold voltage distribution corresponding to an erase mode is not limited to the above-described distributions. Voltage width of each of the erase states E0, E1, E2, and E3is not limited to that described above. Distribution width of erase states may be extended as the level of an erase mode increases.
For example, distribution width of the erase state E3may be made greater than that of the erase states E0, E1, and E2. FIG.8is a table showing the per-block mode table216inFIG.4, according to an exemplary embodiment of the inventive concept. Referring toFIG.8, the per-block mode table216may include an average erase count (Average EC)216a,an average cumulative effective wearing (Average CEW)216b,and a per-block state table216c. The average erase count216aprovides a reference value for a wear leveling operation. For example, use frequency of a memory block with a relatively great erase count may decrease with reference to the average erase count216a,and use frequency of a memory block with a relatively small erase count may increase with reference to the average erase count216a.This operation may be performed at an FTL. The average cumulative effective wearing216bis a reference for wear leveling according to an exemplary embodiment of the inventive concept. The average cumulative effective wearing216bcorresponds to an average of cumulative effective wearing CEW of respective memory blocks. The FTL may perform wear leveling with reference to the average cumulative effective wearing216b.For example, the FTL may assign a low selection priority to a memory block with a relatively greater cumulative effective wearing CEW than the average cumulative effective wearing216b.The FTL may grant a high selection priority to a memory block with a relatively smaller cumulative effective wearing CEW than the average cumulative effective wearing216b.A deviation in the cumulative effective wearing CEW between memory blocks may be reduced by the operation of the FTL. The per-block state table216cstores erase states of all the memory blocks. In other words, the per-block state table216cmay store an erase count EC, effective wearing EW, and cumulative effective wearing CEW with respect to each memory block. In addition, the per-block state table216cmay store a write mode WM or a read enhanced write mode REWM. The write mode WM may be decided according to the effective wearing EW. Alternatively, it will be understood that the write mode WM may be forcibly set up irrespective of the effective wearing EW. The per-block mode table216may hold and update the above-mentioned information. The per-block mode table216may provide state information of a selected memory block according to the request of the FTL. FIG.9illustrates write modes according to an exemplary embodiment of the inventive concept. InFIG.9, a threshold voltage distribution formed by memory cells selected according to an application of four types of write modes WM0˜WM3is briefly shown. However, it will be understood that a write mode WM may be divided into more than or less than four types. The write mode WM may depend on erase states E0, E1, E2, and E3. However, the write mode WM may be assigned by the host100irrespective of an erase state or may be forcibly assigned in an urgent case. First, a default write mode WM0will now be described. In the default write mode WM0, a write may be performed on a memory block whose memory cells have a threshold voltage corresponding to the erase state E0formed by an erase operation. For example, the default write mode WM0may be applied to a free block prepared according to the default erase mode EM0. A verify voltage set (Vvf0_1, Vvf0_2, Vvf0_3) may be supplied to memory cells selected for a program operation corresponding to the default write mode WM0.
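By way of illustration only, the per-block mode table216ofFIG.8and a CEW-based selection priority for wear leveling may be sketched in C as follows; the block count and the linear priority formula are assumptions of the sketch rather than parts of the disclosure.

    #define NUM_BLOCKS 4096           /* placeholder block count */

    /* One entry of the per-block state table 216c: erase count EC, effective wearing EW
     * of the most recent erase, cumulative effective wearing CEW, and the write mode
     * (or read enhanced write mode) currently assigned to the block. */
    typedef struct {
        unsigned erase_count;         /* EC  */
        double   eff_wearing;         /* EW  */
        double   cum_eff_wearing;     /* CEW */
        int      write_mode;          /* WM or REWM index */
    } block_entry_t;

    /* The per-block mode table 216 also keeps the averages 216a and 216b. */
    typedef struct {
        block_entry_t blocks[NUM_BLOCKS];
        double        avg_erase_count;       /* Average EC  (216a) */
        double        avg_cum_eff_wearing;   /* Average CEW (216b) */
    } per_block_mode_table_t;

    /* Wear-leveling sketch: a block whose CEW is below the average gets a higher
     * selection priority (larger return value); a block above it gets a lower one.
     * The linear form is only illustrative. */
    static double selection_priority(const per_block_mode_table_t *t, unsigned blk)
    {
        return t->avg_cum_eff_wearing - t->blocks[blk].cum_eff_wearing;
    }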
A level of a program voltage Vpgm for incremental step pulse programming (ISPP) may also vary depending on a write mode. For example, an incremental step of a program voltage applied in the default write mode WM0may be a relatively great value. Thus, write speed in the default write mode WM0may be higher than that in the other write modes WM1˜WM3. As the first write mode WM1is applied, a threshold voltage of memory cells may be formed with an erase state E1and program states P1, P2, and P3. A program voltage Vpgm and verify voltage set (Vvf1_1, Vvf1_2, Vvf1_3) may be supplied to a wordline of memory cells selected for a program operation corresponding to the first write mode WM1. Program speed depending on the first write mode WM1may be lower than that depending on the default write mode WM0. A memory block erased by the first write mode WM1may be selected to execute the first write mode WM1. A threshold voltage of memory cells to which the second write mode WM2is applied may be formed with an erase state E2and the program states P1, P2, and P3. A program voltage Vpgm and verify voltage set (Vvf2_1, Vvf2_2, Vvf2_3) may be applied to a wordline of memory cells selected for a program operation corresponding to the second write mode WM2. Program speed depending on the second write mode WM2may be lower than that depending on the first write mode WM1. A memory block erased by the second write mode WM2may be selected to execute the second write mode WM2. Accordingly, the speed of programming memory cells from the erase state E2to the program states P1, P2, and P3may be improved. A threshold voltage of memory cells to which the third write mode WM3is applied may be formed with an erase state E3and the program states P1, P2, and P3. A program voltage Vpgm and verify voltage set (Vvf3_1, Vvf3_2, Vvf3_3) may be applied to a wordline of memory cells selected for a program operation corresponding to the third write mode WM3. Program speed depending on the third write mode WM3may be lower than that depending on the second write mode WM2. A memory block erased by the third write mode WM3may be selected to execute the third write mode WM3. Heretofore, the write modes WM0˜WM3have been described in brief. The write modes WM0˜WM3may be decided by the erase modes EM0˜EM3, respectively. However, although a write mode is set up irrespective of the erase modes EM0˜EM3, there is no problem in performing a program operation. Moreover, it will be understood that a level of a verify voltage set corresponding to the respective write modes WM0, WM1, WM2, and WM3is merely exemplary and can vary. InFIG.9, Vread0is a default pass read voltage. FIG.10illustrates a read enhanced write mode according to an exemplary embodiment of the inventive concept. InFIG.10, a threshold voltage distribution formed by memory cells selected according to application of a default write mode WM0and read enhanced write modes REWM1˜REWM3is briefly shown. It will be understood that the number of write modes may be variously changed. First, a default write mode WM0will now be described. The default write mode WM0may be considered to be identical to the default write mode WM0described inFIG.9. Therefore, a default pass read voltage Vread0is applied to read memory cells programmed by the default write mode WM0. The pass read voltage Vread0corresponds to a highest level among a plurality of pass read voltages Vread0, Vread1, Vread2, and Vread3.
A threshold voltage of memory cells may be generated with an erase state E0and program states P1, P2, and P3according to application of a first read enhanced write mode REWM1. A program voltage Vpgm and verify voltage set (Vvf1′_1, Vvf1′_2, Vvf1′_3) may be supplied to a wordline of memory cells selected for a program operation corresponding to the read enhanced write mode REWM1. Levels of the verify voltage set (Vvf1′_1, Vvf1′_2, Vvf1′_3) may be set such that locations of the program states P1, P2, and P3move closer to the side of the erase state E0as compared to the default write mode WM0. According to the application of the first read enhanced write mode REWM1, memory cells may be supplied with a first pass read voltage Vread1lower than the default pass read voltage Vread0during a subsequent read operation. Thus, a size of a read disturbance caused by the supply of the default pass read voltage Vread0may be reduced. The program voltage Vpgm and verify voltage set (Vvf2′_1, Vvf2′_2, Vvf2′_3) may be supplied to a wordline of memory cells selected for a program operation corresponding to a second read enhanced write mode REWM2. Levels of the verify voltage set (Vvf2′_1, Vvf2′_2, Vvf2′_3) may be set such that locations of the program states P1, P2, and P3move closer to the side of the erase state E0as compared to the first read enhanced write mode REWM1. According to the application of the second read enhanced write mode REWM2, memory cells may be supplied with a second pass read voltage Vread2lower than the first pass read voltage Vread1during a subsequent read operation. The program voltage Vpgm and verify voltage set (Vvf3′_1, Vvf3′_2, Vvf3′_3) may be supplied to a wordline of memory cells selected for a program operation corresponding to a third read enhanced write mode REWM3. Levels of the verify voltage set (Vvf3′_1, Vvf3′_2, Vvf3′_3) may be set such that locations of the program states P1, P2, and P3move closer to the side of the erase state E0as compared to the second read enhanced write mode REWM2. According to the application of the third read enhanced write mode REWM3, memory cells may be supplied with a third pass read voltage Vread3lower than the second pass read voltage Vread2during a subsequent read operation. It will be understood that levels of verify voltage sets respectively corresponding to the read enhanced write modes REWM1, REWM2, and REWM3are merely exemplary and may be variously changed. FIG.11is a block diagram illustrating a write mode decision method according to an exemplary embodiment of the inventive concept. Referring toFIG.11, a write mode manager150may decide a write mode WMi depending on a kind of application software120issuing a write request WR. The write mode manager150classifies application software120in units of groups to decide the write mode WMi. The write mode manager150classifies applications App_1and App_5being executed into a first execution group121. The write mode manager150manages a write request issued from the applications App_1and App_5corresponding to the first execution group in a default write mode WM0. The write mode manager150classifies applications App_3, App_8, and App_12being executed into a second execution group122. The write mode manager150manages a write request issued from the applications App_3, App_8, and App_12corresponding to the second execution group122in the first write mode WM1. The write mode manager150classifies applications App_6and App_9being executed into a third execution group123.
The write mode manager150manages a write request issued from the applications App_6and App_9corresponding to the third execution group123in the second write mode WM2. The write mode manager150classifies an application App_16being executed into a fourth execution group124. The write mode manager150manages a write request issued from the application App_16corresponding to the fourth execution group124in the third write mode WM3. If each of applications App_0, App_2, App_4, App_7, App_10, App_11, . . . of a non-execution group125is executed at any time, the write mode manager150may classify the executed application into any one of the execution groups121˜124. FIG.12is a flowchart illustrating a write mode decision method according to an exemplary embodiment of the inventive concept. A method of deciding a write mode WMi depending on a kind of executed application will now be described with reference toFIG.12. At operation S110, the write mode manager150detects and receives a write request from applications that are under execution. The write request may be issued in various situations, such as a procedure in which each application processes data or receives data from a network. At operation S120, the write mode manager150detects which execution group includes an application issuing a write request. When the application issuing a write request belongs to a first execution group (Group1), the flow proceeds to operation S130. When the application issuing a write request belongs to a second execution group (Group2), the flow proceeds to operation S140. When the application issuing a write request belongs to a third execution group (Group3), the flow proceeds to operation S150. When the application issuing a write request belongs to a fourth execution group (Group4), the flow proceeds to operation S160. At operation S130, the write mode manager150decides to write write-requested data into the nonvolatile memory device230(seeFIG.2) from an application according to a default write mode WM0. In other words, the write-requested data issued by an application corresponding to the first execution group (Group1) may be written into the nonvolatile memory device230at a high speed. At operation S140, the write mode manager150decides to write write-requested data into the nonvolatile memory device230from an application according to a first write mode WM1. In other words, the write-requested data issued by an application corresponding to the second execution group (Group2) may be written into the nonvolatile memory device230at a lower speed than that in the default write mode WM0. However, the write-requested data issued by the application corresponding to the second execution group (Group2) may be written into the nonvolatile memory device230according to a write bias where data integrity is higher than that in the first execution group (Group1) and a voltage stress is less than that in the first execution group (Group1). At operation S150, the write mode manager150decides to write write-requested data issued from an application into the nonvolatile memory device230according to a second write mode WM2. In other words, the write-requested data issued by an application corresponding to the third execution group (Group3) may be written into the nonvolatile memory device230at a lower speed than that in the first write mode WM1.
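As an illustrative sketch only, the grouping-based decision of operations S120 to S160 inFIG.12may be expressed in C as follows; the enumeration names and the fallback behavior for an unclassified application are assumptions of the sketch, and each lower-speed mode trades write speed for data integrity and reduced voltage stress.

    /* Execution groups of FIG. 11 mapped to the write modes of FIG. 12. How an
     * application is assigned to a group is a host policy and is represented here
     * only by an integer tag. */
    typedef enum { GROUP1 = 1, GROUP2, GROUP3, GROUP4 } exec_group_t;
    typedef enum { WM0 = 0, WM1, WM2, WM3 } write_mode_t;

    /* Operations S120 to S160: pick the write mode from the issuing application's group. */
    static write_mode_t write_mode_for_group(exec_group_t group)
    {
        switch (group) {
        case GROUP1: return WM0;   /* S130: highest speed, default mode            */
        case GROUP2: return WM1;   /* S140: slower, higher integrity, less stress  */
        case GROUP3: return WM2;   /* S150                                         */
        case GROUP4: return WM3;   /* S160: slowest, least voltage stress          */
        default:     return WM0;   /* unclassified application: assumed default    */
        }
    }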
However, the write-requested data issued by the application corresponding to the third execution group (Group3) may be written into the nonvolatile memory device230according to a write bias where data integrity is higher than that in the second execution group (Group2) and a voltage stress is less than that in the second execution group (Group2). At operation S160, the write mode manager150decides to write write-requested data issued from an application into the nonvolatile memory device230according to a third write mode WM3. In other words, the write-requested data issued by an application corresponding to the fourth execution group (Group4) may be written into the nonvolatile memory device230at a lower speed than that in the second write mode WM2. However, the write-requested data issued by the application corresponding to the fourth execution group (Group4) may be written into the nonvolatile memory device230according to a write bias where data integrity is higher than that in the third execution group (Group3) and a voltage stress is less than that in the third execution group (Group3). At operation S170, the write mode manager150requests the memory system200to write data into a selected memory region according to a decided write mode. At this point, information on a write mode WMi may be provided to the memory system200together with the write request WR. Heretofore, a write mode decision method according to an exemplary embodiment of the inventive concept has been described. According to the above-described write mode decision method, a write mode WMi is decided depending on a kind of application. FIG.13is a flowchart illustrating a write mode decision method according to an exemplary embodiment of the inventive concept. According to the write mode decision method described with reference toFIG.13, a write mode WMi is decided considering a state of an input/output device (e.g., an LCD) and an input timing detected by a user interface from a user. At operation S210, the write mode manager150receives a write request set signal WMS2from a user context monitor130(seeFIG.3). The user context monitor130may detect an ON/OFF state of an LCD136(seeFIG.3) of a user device100(seeFIG.3) and an input state of a user input of a user interface135(seeFIG.3). In addition, the user context monitor130may detect a time elapse from an ON/OFF switching timing of the LCD136and whether a screen save mode is executed. The user context monitor130may monitor a lock mode state of the user device100, whether a power saving mode is activated, whether a user's eye gazes at the LCD136, foreground application information, lifetime of the memory system200, and the total amount of data written into the memory system200up to the present time. At operation S220, the write mode manager150detects the ON/OFF state of the LCD136. The ON state of the LCD136means a state where an image is displayed on the LCD136, while the OFF state of the LCD136means a state where information or an image is not displayed on the LCD136. When the LCD136is in the ON state, the flow proceeds to operation S230. When the LCD136is in the OFF state, the flow proceeds to operation S225. At operation S225, the write mode manager150monitors a time elapse from the time when the LCD136is turned off. If the time elapse exceeds a reference time Tth (th>0) (Yes direction), the flow proceeds to operation S270. If the time elapse does not reach the reference time Tth (No direction), the flow returns to operation S220to monitor the state of the LCD136. 
If the reference time Tth increases in length, it may be difficult for the user to sense write mode WM switching which occurs following the OFF state of the LCD136. An exemplary embodiment of the inventive concept has been described with reference to the ON/OFF state of the LCD136as an example. However, the write mode manager150may perform an operation with reference to not only the ON/OFF state of the LCD136but also a screen save mode, a screen lock mode, and a power saving mode of the user device100, whether the user's eye gazes at the LCD136, and whether a foreground application being executed is associated with a response with respect to the user. The write mode manager150may forcibly decide a write mode with reference to lifetime information of the memory system200and the total amount of data written into the memory system200up to the present time. For example, when the lifetime of the memory system200cannot be guaranteed because a write operation is relatively frequently performed in the memory system200, the write mode manager150may select a write mode capable of extending the lifetime of the memory system200in preference to the other conditions. In this case, an extended lifetime may be guaranteed although the write speed is made low. To achieve this, an interface is added for receiving lifetime information and the total amount of written data from the memory system200. At operation S230, the write mode manager150detects a time interval between a write request and a user input provided from the user interface135. When the write request is issued by the user input, a rapid response may need to be provided to the user, or, in certain cases, a relatively slow response to the user may be acceptable. A high-speed write mode is supported to provide a rapid response to the user. When a time interval TI between a user input time and a write request is shortest (e.g., TI≤T1), the flow proceeds to operation S240. When the time interval TI is longer than T1and less than T2(e.g., T1<TI≤T2), the flow proceeds to operation S250. When the time interval TI is longer than T2(e.g., T2<TI), the flow proceeds to operation S260. At operation S240, the write mode manager150decides to program write-requested data into the nonvolatile memory device230(seeFIG.2) depending on a default write mode WM0. At operation S250, the write mode manager150decides to program the write-requested data into the nonvolatile memory device230depending on a first write mode WM1. At operation S260, the write mode manager150decides to program the write-requested data into the nonvolatile memory device230depending on a second write mode WM2. At operation S270, the write mode manager150decides to program the write-requested data into the nonvolatile memory device230depending on a third write mode WM3. At operation S280, the write mode manager150may request the memory system200to write data into a selected memory region depending on a decided write mode. At this point, information on the write mode WMi may be provided to the memory system200together with the write request WR. Heretofore, a write mode decision method for deciding a write mode with reference to information provided by the user context monitor130(seeFIG.3) has been described. FIG.14is a block diagram illustrating a write mode decision method according to an exemplary embodiment of the inventive concept. Referring toFIG.14, the write mode manager150may decide a write mode for a current write request depending on a state of a write buffer166provided from a write queue monitor160.
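A minimal C sketch of the decision flow ofFIG.13is given below for illustration only; the threshold values T1, T2, and Tth are placeholders, and the case where the LCD has been off for less than Tth is simplified to a single pass rather than the repeated monitoring of operation S225.

    #include <stdbool.h>

    typedef enum { WM0 = 0, WM1, WM2, WM3 } write_mode_t;

    /* Thresholds of FIG. 13; the concrete values are placeholders. T1 < T2 bound the
     * interval TI between the last user input and the write request, and TTH is how
     * long the LCD must have been off before the slowest mode is chosen. */
    #define T1_MS    200u
    #define T2_MS   2000u
    #define TTH_MS 30000u

    /* Operations S220 to S270: decide the write mode from the LCD state, the time the
     * LCD has been off, and the time interval TI since the last user input. */
    static write_mode_t decide_write_mode(bool lcd_on,
                                          unsigned lcd_off_elapsed_ms,
                                          unsigned input_interval_ms)
    {
        if (!lcd_on) {
            if (lcd_off_elapsed_ms > TTH_MS)
                return WM3;                       /* S270: slowest mode */
            /* S225 "No" branch: the flowchart keeps monitoring the LCD; as a one-shot
             * simplification this sketch falls through to the input-interval test. */
        }
        if (input_interval_ms <= T1_MS) return WM0;   /* S240 */
        if (input_interval_ms <= T2_MS) return WM1;   /* S250 */
        return WM2;                                   /* S260 */
    }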
As shown inFIG.14, the write buffer166may be managed by dividing it into a synchronous queue166aand an asynchronous queue166b. When a write request to the write buffer166occurs, data is classified as part of the synchronous queue166aor the asynchronous queue166baccording to characteristics of the write request by an input/output scheduler165. If a high-speed write operation is not performed, write data that would cause a user to feel performance deterioration may be stored in the synchronous queue166a.For example, write data WD1, WD2, WD4, WD7, and WD8are stored in the synchronous queue166a.On the other hand, write data that would not cause a user to feel performance deterioration even if a high-speed write operation is not performed may be stored in the asynchronous queue166b.For example, write data WD3, WD5, and WD6are stored in the asynchronous queue166b. The write queue monitor160according to an exemplary embodiment of the inventive concept receives information on a size of accumulated write data (e.g., queue size (QS)) from the input/output scheduler165or the write buffer166. The write queue monitor160transfers the information on the queue size QS to the write mode manager150. The write queue monitor160may transfer a queue size QS of data accumulated in the synchronous queue166aand a queue size QS′ of data accumulated in the asynchronous queue166bto the write mode manager150. The write mode manager150may decide a write mode WMi of current write-requested data with reference to the queue sizes QS and QS′. In other words, the write mode manager150may select a high-speed write mode when a queue size to be written into the memory system200is large. In addition, the write mode manager150may select a low-speed write mode when a queue size to be written into the memory system200is relatively small. The write mode manager150may decide a write mode with reference to the queue size QS of the synchronous queue166a.Alternatively, the write mode manager150may assign write data of the synchronous queue166ato a high-speed write mode and assign write data of the asynchronous queue166bto a low-speed write mode. FIG.15is a flowchart illustrating a write mode decision method according to an exemplary embodiment of the inventive concept. A write mode decision method for deciding a write mode WMi depending on a queue size accumulated in a write buffer166will now be described with reference toFIG.15. At operation S310, the write mode manager150receives a write request and information on a queue size QS of data currently stored in the write buffer166from the write queue monitor160. The write mode manager150may detect a queue size of write data accumulated in the write buffer166. At operation S320, the write mode manager150decides a write mode WMi depending on a queue size QS. When the queue size QS exceeds a third threshold TH3, at operation S330, the write mode manager150may assign a default write mode WM0to write currently write-requested data at a high speed. When the queue size QS is greater than a second threshold TH2and less than the third threshold TH3, at operation S340, the write mode manager150may assign a first write mode WM1with respect to the currently write-requested data. When the queue size QS is greater than the first threshold TH1and less than the second threshold TH2, at operation S350, the write mode manager150may assign a second write mode WM2with respect to the currently write-requested data.
When the queue size QS is less than the first threshold TH1, at operation S360, the write mode manager150may assign a third write mode WM3with respect to the currently write-requested data. At operation S370, the write mode manager150requests the memory system200to write data into a selected memory region depending on the decided write mode WMi. At this point, information on the write mode WMi may be provided to the memory system200together with the write request WR. FIG.16is a flowchart illustrating a write mode decision method according to an exemplary embodiment of the inventive concept. A method of deciding a write mode WMi according to an attribute of write-requested data will be described with reference toFIG.16. At operation S410, the write mode manager150detects an attribute of write-requested data. The urgency of writing the data will be explained herein as an example of the attribute of the write-requested data. However, the attribute of the write-requested data may include a significance of data or a size of data. At operation S420, the write mode WMi is decided according to the attribute of the write-requested data. When the attribute of the write-requested data requires a most urgent write (Most Urgent), the flow proceeds to operation S430at which a default write mode WM0is assigned to write the currently write-requested data at a high speed. When the attribute of the write-requested data requires a relatively urgent write (Urgent), the flow proceeds to operation S440at which a first write mode WM1is assigned to write the currently write-requested data. When the attribute of the write-requested data requires a relatively less urgent write (Less Urgent), the flow proceeds to operation S450at which a second write mode WM2is assigned to the currently write-requested data. When the attribute of the write-requested data requires a non-urgent write (Not Urgent), the flow proceeds to operation S460at which a third write mode WM3is assigned to write the currently write-requested data. At operation S470, the write mode manager150may request the memory system200to write data into a selected memory region depending on the decided write mode. At this point, information on the write mode WMi may be provided to the memory system200together with the write request WR. FIG.17is a block diagram showing a software structure of the host100inFIG.2, according to an exemplary embodiment of the inventive concept. As illustrated, software100′ of the host100may include a platform310and a file system/device driver350. An access mode manager340may be included in a layer of the platform310. The platform310may include application software320, a user context monitor330, and the access mode manager340. The access mode manager340may monitor a kind of the application software320issuing a write request or an attribute of the write request. The user context monitor330may monitor whether there is a user input or an input period. The user context monitor330may monitor an ON/OFF state of an LCD336. The user context monitor330may detect an ON/OFF state of a user interface335or the LCD336and an input state of a user input of the user interface335. In addition, the user context monitor330may detect a time elapse from an ON/OFF switching timing of the LCD336and whether a screen save mode is executed.
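For illustration, the queue-size-based decision ofFIG.15and the attribute-based decision ofFIG.16may be sketched together in C as follows; the byte thresholds and enumeration values are placeholders rather than values taken from the disclosure.

    typedef enum { WM0 = 0, WM1, WM2, WM3 } write_mode_t;

    /* Queue-size thresholds TH1 < TH2 < TH3 of FIG. 15; the byte values are placeholders. */
    #define TH1_BYTES    (64u * 1024u)
    #define TH2_BYTES   (512u * 1024u)
    #define TH3_BYTES  (4096u * 1024u)

    /* Operations S320 to S360: a large backlog in the write buffer favors a fast mode,
     * while a small backlog allows a slow, low-stress mode. */
    static write_mode_t write_mode_for_queue_size(unsigned queue_size_bytes)
    {
        if (queue_size_bytes > TH3_BYTES) return WM0;   /* S330 */
        if (queue_size_bytes > TH2_BYTES) return WM1;   /* S340 */
        if (queue_size_bytes > TH1_BYTES) return WM2;   /* S350 */
        return WM3;                                     /* S360 */
    }

    /* Urgency attribute of FIG. 16 mapped to a write mode (operations S420 to S460);
     * other attributes such as significance or size could be handled the same way. */
    typedef enum { MOST_URGENT, URGENT, LESS_URGENT, NOT_URGENT } urgency_t;

    static write_mode_t write_mode_for_urgency(urgency_t u)
    {
        switch (u) {
        case MOST_URGENT: return WM0;   /* S430 */
        case URGENT:      return WM1;   /* S440 */
        case LESS_URGENT: return WM2;   /* S450 */
        case NOT_URGENT:  return WM3;   /* S460 */
        default:          return WM0;
        }
    }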
The user context monitor330may monitor a lock mode state of a user device, whether a power saving mode is activated, whether a user's eye gazes at the LCD336, foreground application information, lifetime of a memory system410, and the total amount of data written into the memory system410up to the present time, and provide the monitored information to the access mode manager340. Thus, any one of a plurality of write modes may be selected by the access mode manager340. The selected write mode WMi or read mode RMj may be transferred to the side of the memory system410via the file system/device driver350. Heretofore, a structure of the software100′ of a host according to an exemplary embodiment of the inventive concept has been described. However, a location of the access mode manager340on a software layer is not limited to the example shown above. FIGS.18A and18Bare block diagrams illustrating an interface method in a software layer of a host for applying exemplary embodiments of the inventive concept. InFIG.18A, an example of an access mode manager525included in a layer of a file system/device driver520is shown. The access mode manager525may receive an LCD state and an input/output state of a user interface from a platform510. The access mode manager525may decide a write mode or read mode based on information from the platform510and transfer information on a write request WR and an access mode WMi/RMj to a memory system530. The platform510may include a user context monitor515. InFIG.18B, an example of an access mode manager612included in a platform610is shown. The access mode manager612may receive an LCD state and an input/output state of a user interface from the platform610and decide an access mode WMi/RMj within the platform610. The access mode WMi/RMj decided by the access mode manager612in the platform610may be bypassed without being corrected by a file system/device driver620. A bypassed write request WR and the bypassed access mode WMi/RMj may be transferred to a memory system630. The platform610may include a user context monitor615. FIG.19is a table showing write mode WMi according to an embodiment of the inventive concept. Referring toFIG.19, each of the write modes WMi defines a write speed of the nonvolatile memory device. In the embodiment, a write mode WMi is classified into eight types WM0˜WM7as shown. However, it will be understood that a write mode WMi may be divided into more than or less than eight types. The write mode WM0corresponds to a default mode. Thus, if the write mode information is not received from the host100, the memory controller210writes the received data into the nonvolatile memory device230with the default write mode WM0. In the default write mode WM0, an incremental step of a program voltage for incremental step pulse programming (ISPP) may be a relatively great value. Thus, write speed in the default write mode WM0may be higher than that in the other write modes WM1˜WM5. In some embodiments, the write speed in the default write mode WM0may be the highest of all the write modes. As the write mode WM1is applied, a write bias may be supplied to a wordline of memory cells selected for a program operation corresponding to the write mode WM1. Thus, program speed of the write mode WM1may be 75% of the default write mode WM0. As the write mode WM2is applied, a write bias may be supplied to a wordline of memory cells selected for a program operation corresponding to the write mode WM2. Thus, program speed of the write mode WM2may be 50% of the default write mode WM0.
As the write mode WM3is applied, a write bias may be supplied to a wordline of memory cells selected for a program operation corresponding to the write mode WM3. Thus, the program speed of the write mode WM3may be 25% of the default write mode WM0. In this way, the program speeds of write modes WM4and WM5correspond to 10% and 5% of the default write mode WM0, respectively. In other words, the write modes WM1˜WM5indicate slow modes supplied from the host100. On the contrary, fast write modes WM6and WM7can be applied from the host100. The program speed of the write mode WM6may be 125% of the default write mode WM0. The program speed of the write mode WM7may be 150% of the default write mode WM0. The write mode WMi is determined in the host100and then transmitted to the memory system200. The host100writes the write mode WMi into a register of the memory system200. The memory system200writes the requested data into the nonvolatile memory device at a write speed according to the write mode written in the register. FIG.20is a table showing read mode RMj according to an embodiment of the inventive concept. Referring toFIG.20, each of the read modes RMj defines a read speed of the nonvolatile memory device. In the embodiment, read mode RMj is classified into eight types RM0˜RM7as shown. However, it will be understood that a read mode RMj may be divided into more than or less than eight types. The read mode RM0corresponds to a default mode. Thus, if the read mode information is not received from the host100, the memory controller210reads data from the nonvolatile memory device230with the default read mode RM0. Thus, the read speed in the default read mode RM0may be higher than that in the other read modes RM1˜RM5. In another embodiment, the read speed in the default read mode RM0may be the highest of all the read modes. As the read mode RM1is received, a read bias may be supplied to a wordline of memory cells selected for a read operation corresponding to the read mode RM1. Thus, read speed of the read mode RM1may be 75% of the default read mode RM0. As the read mode RM2is applied, a read bias may be supplied to a wordline of memory cells selected for a read operation corresponding to the read mode RM2. Thus, the read speed of the read mode RM2may be 50% of the default read mode RM0. As the read mode RM3is applied, a read bias may be supplied to a wordline of memory cells selected for a read operation corresponding to the read mode RM3. Thus, the read speed of the read mode RM3may be 25% of the default read mode RM0. In this way, the read speeds of read modes RM4and RM5correspond to 10% and 5% of the default read mode RM0, respectively. In other words, the read modes RM1˜RM5indicate slow read modes supplied from the host100. The fast read modes RM6and RM7can be applied from the host100. The read speed of the read mode RM6may be 125% of the default read mode RM0. The read speed of the read mode RM7may be 150% of the default read mode RM0. The read mode RMj is determined in the host100and then transmitted to the memory system200. The host100writes the read mode RMj into a register of the memory system200. The memory system200accesses the nonvolatile memory device at a read speed according to the read mode RMj written in the register. FIG.21is a block diagram of a user device1000including a solid state disk (hereinafter referred to as "SSD") according to an exemplary embodiment of the inventive concept. As illustrated, the user device1000includes a host1100and an SSD1200. The SSD1200includes an SSD controller1210, a buffer memory1220, and a nonvolatile memory1230.
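Before the system-level example ofFIG.21is described further, the following C sketch illustrates, for illustration only, how a host might program the write mode WMi and read mode RMj ofFIGS.19and20into registers of the memory system; the register offsets and the register-write helper are hypothetical, since the disclosure states only that the host writes the modes into a register of the memory system200.

    #include <stdint.h>
    #include <stdio.h>

    /* Relative program/read speeds of FIGS. 19 and 20, as a percentage of the default
     * modes WM0/RM0. */
    static const unsigned write_speed_pct[8] = { 100, 75, 50, 25, 10, 5, 125, 150 };
    static const unsigned read_speed_pct[8]  = { 100, 75, 50, 25, 10, 5, 125, 150 };

    #define REG_WRITE_MODE 0x00u     /* assumed register offset for WMi */
    #define REG_READ_MODE  0x04u     /* assumed register offset for RMj */

    /* Stand-in for the host's register write path (e.g., a command over the host bus). */
    static void reg_write(uint32_t offset, uint32_t value)
    {
        printf("reg[0x%02x] <= %u\n", (unsigned)offset, (unsigned)value);
    }

    /* The host programs WMi and RMj; the memory system then accesses the nonvolatile
     * memory at the corresponding fraction of the default speed. */
    static void set_access_modes(unsigned wmi, unsigned rmj)
    {
        if (wmi < 8) {
            reg_write(REG_WRITE_MODE, wmi);
            printf("write speed: %u%% of WM0\n", write_speed_pct[wmi]);
        }
        if (rmj < 8) {
            reg_write(REG_READ_MODE, rmj);
            printf("read speed: %u%% of RM0\n", read_speed_pct[rmj]);
        }
    }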
As further illustrated inFIG.21, the host1100includes a write mode manager1150. The write mode manager1150may decide an access mode WMi/RMj depending on a kind of application that is being driven, an ON/OFF state of an LCD, a user input time interval for input to a user interface, a size of data accumulated in a write buffer, and an attribute of write data. The SSD controller1210provides a physical connection between the host1100and the SSD1200. In other words, the SSD controller1210provides interfacing with the SSD1200, which corresponds to a bus format of the host1100. In particular, the SSD controller1210may vary an access speed to the nonvolatile memory device1230with reference to an access mode WMi/RMj provided from the host1100. In other words, the SSD controller1210may adjust an access bias of the nonvolatile memory device1230depending on the access mode WMi/RMj. In other words, the SSD controller1210may perform various memory management operations according to the level of an erase voltage. The bus format of the host1100may include a USB, an SCSI, a PCI express, an ATA, a PATA, a SATA, and a SAS. Write data provided from the host1100or data read from the nonvolatile memory device1230is temporarily stored in the buffer memory1220. When data stored in the nonvolatile memory device1230is cached, the buffer memory1220supports a cache function to directly provide the cached data to the host1100in response to a read request of the host1100. In general, data transfer speed using a bus format (e.g., SATA or SAS) of the host1100is much higher than transfer speed of a memory channel of the SSD1200. In other words, when the interface speed of the host1100is significantly high, a high-capacity buffer memory1220may be provided to minimize performance degradation caused by a speed difference. The nonvolatile memory device1230is provided as a storage medium of the SSD1200. For example, the nonvolatile memory device1230may be provided as a NAND-type flash memory having a mass storage capacity. The nonvolatile memory device1230may include a plurality of memory devices. In this case, each of the memory devices is connected to the SSD controller1210in units of channels. While a NAND-type flash memory has been described as an example of the nonvolatile memory device1230, the nonvolatile memory device1230may include other nonvolatile memory devices. For example, PRAM, MRAM, ReRAM, or NOR flash memory may be used as a storage medium and the flash memory may be applied to a memory system in which different kinds of memory devices are mixed. The nonvolatile memory device1230includes a buffer region for a buffer program operation and a main region for a main program operation. FIG.22is a block diagram of a computing system2000according to an exemplary embodiment of the inventive concept. As illustrated, the computing system2000includes a network adaptor2100, a CPU2200, a mass storage2300, a RAM2400, a ROM2500, and a user interface2600, which are electrically connected to a system bus2700. The network adaptor2100provides interfacing between the computing system2000and an external network2800. The CPU2200may control an overall operation to drive an operating system and an application program which are resident on the RAM2400. The mass storage2300stores data required for the computing system2000. For example, the mass storage2300may store an operating system used to drive the computing system2000, an application program, various program modules, program data, user data, and the like. The RAM2400is used as a working memory of the computing system2000.
Upon booting, the operating system, the application program, the various program modules, and program data used to drive programs and various program modules read out from the mass storage2300are loaded on the RAM2400. The ROM2500stores a basic input/output system (BIOS) which is activated before the operating system is driven upon booting. Information exchange between the computer system2000and a user is made via the user interface2600. In addition, the computing system2000may further include a battery, a modem, and the like. Although not shown, the computer system2000may further include an application chipset, a camera image processor (CIS), a mobile dynamic random access memory (DRAM), and the like. The mass storage2300may include a nonvolatile memory device employing a memory management method according to an exemplary embodiment of the inventive concept. In other words, the mass storage2300may perform wear leveling depending on effective wearing EW or cumulative effective wearing CEW. The mass storage2300may vary a write mode or an erase mode according to a request of a host or operating conditions. The mass storage2300may include an SSD, a multimedia card (MMC), a secure digital card (SD card), a micro SD card, a memory stick, and ID card, a personal computer memory card international association (PCMCIA) card, a chip card, a USB card, a smart card, a compact flash card (CF card), an embedded MMC (eMMC) or the like. FIG.23is a block diagram of a handheld terminal3000according to an exemplary embodiment of the inventive concept. As illustrated, the handheld terminal3000includes an image processing unit3100, a wireless transceiver unit3200, an audio processing unit3300, a DRAM3400, an eMMC3500, a user interface3600, and an application processor3700. The image processing unit3100includes a lens3110, an image sensor3120, an image processor3130, and a display unit3140. The wireless transceiver unit3210includes an antenna3210, a transceiver3220, and a modem3230. The audio processing unit3300includes an audio processor3310, a microphone3320, and a speaker3330. The eMMC3500may be provided as a nonvolatile memory device driven according to an exemplary embodiment of the inventive concept. In this case, the eMMC3500may be erased by an erase voltage of various levels. The eMMC3500may vary a subsequent access mode depending on the level of the erase voltage. In addition, the application processor3700may decide a data access mode WMi/RMj to the eMMC3500according to states of the image processing unit3100, the wireless transceiver unit3200, the audio processing unit3300, the DRAM3400, and the user interface3600. A nonvolatile memory device and/or a memory controller according to an exemplary embodiment of the inventive concept may be packaged as one of various types to he subsequently embedded. For example, a flash memory device and/or a memory controller according to an exemplary embodiment of the inventive concept may be packaged by one of Package on Package (PoP), Ball grid arrays (BGAs), Chip scale packages (CSPs), Plastic Leaded Chip Carrier (PLCC), Plastic Dual In-Line Package (PDIP), Die in Waffle Pack, Die in Wafer Form, Chip On Board (COB), Ceramic Dual In-Line Package (CERDIP), Plastic Metric Quad Flat Pack (MQFP), Thin Quad Flatpack (TQFP), Small Outline Integrated Circuit (SOIC), Shrink Small Outline Package (SSOP), Thin Small OutlinePackage (TSOP), System In Package (SIP), Multi Chip Package (MCP), Wafer-level Fabricated Package (WFP), and Wafer-Level Processed Stack Package (WSP). 
As described above, in accordance with an exemplary embodiment of the inventive concept, a write mode for a nonvolatile memory device is decided at the host level to minimize deterioration of the nonvolatile memory device that results from a driving environment. Thus, the lifetime of the nonvolatile memory device can be extended while minimizing a burden on a memory system including the nonvolatile memory device. While the inventive concept has been particularly shown and described with reference to exemplary embodiments thereof, it will be apparent to those of ordinary skill in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the inventive concept as defined by the following claims.
73,855
11860775
DETAILED DESCRIPTION Reference is made in detail to embodiments of the invention, which are illustrated in the accompanying drawings. The same reference numbers may be used throughout the drawings to refer to the same or like parts, components, or operations. The present invention will be described with respect to particular embodiments and with reference to certain drawings, but the invention is not limited thereto and is only limited by the claims. It will be further understood that the terms “comprises,” “comprising,” “includes” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Use of ordinal terms such as “first”, “second”, “third”, etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed, but is used merely as a label to distinguish one claim element having a certain name from another element having the same name (but for use of the ordinal term). It will be understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., “between” versus “directly between,” “adjacent” versus “directly adjacent,” etc.). In the flash controller, the entire data flow for programming data may be divided into three processing stages: front-end; mid-end; and back-end. The front-end processing stage is responsible for obtaining data to be programmed, which includes information about the source address of the data, the data length, the location temporarily storing the data in the Static Random Access Memory (SRAM), etc. The mid-end processing stage involves data security, including data reordering, and coordination with the RAID engine to perform operations such as data encryption, parity-page generation, etc. The back-end processing stage includes obtaining data from the SRAM, post-operations (such as data scrambling, appending low-density parity-check (LDPC) code to the data, etc.), control of physical data-programming, and so on. It is to be understood that the system may ignore any one or two of the above three stages depending on the different characteristics of data programming. In some implementations, the flash controller, when executing a host write command, usually uses firmware (also referred to as the Firmware Translation Layer, FTL) to activate, control and supervise the data flow, so that it consumes a large amount of processor load and computing resources on such tasks. Specifically, the firmware would consume excessive time and computing resources to check if relevant data has been stored in the designated location in the SRAM, query relevant hardware (such as the RAID engine, the flash I/F, etc.), wait for the replies to know the operating statuses, and so on.
To address the problems described above, an embodiment of the invention modifies the current architecture to set dedicated hardware circuits that can be matched with the firmware to speed up the overall processing of data programming. Refer toFIG.1. The electronic apparatus10includes a host side110, a flash controller130and a flash module150, and the flash controller130and the flash module150may be collectively referred to as a device side. The electronic apparatus10may be equipped with a Personal Computer (PC), a laptop PC, a tablet PC, a mobile phone, a digital camera, a digital recorder, or other consumer electronic products. The host side110and a host interface (I/F)131of the flash controller130may communicate with each other by Universal Serial Bus (USB), Advanced Technology Attachment (ATA), Serial Advanced Technology Attachment (SATA), Peripheral Component Interconnect Express (PCI-E), Universal Flash Storage (UFS), Embedded Multi-Media Card (eMMC) protocol, or others. A flash I/F139of the flash controller130and the flash module150may communicate with each other by a Double Data Rate (DDR) protocol, such as Open NAND Flash Interface (ONFI), DDR Toggle, or others. The flash controller130includes a first processing unit134and the first processing unit134(also referred to as the primary processing unit) may be implemented in numerous ways, such as with general-purpose hardware (e.g., a single processor, multiple processors or graphics processing units capable of parallel computations, or others) that is programmed using firmware and/or software instructions to perform the functions recited herein. The first processing unit134receives host commands, such as host read, write, trim, erase commands, through the host I/F131, schedules and executes these commands. The flash controller130includes a Random Access Memory (RAM)136and the RAM136may be implemented in a Dynamic Random Access Memory (DRAM), a Static Random Access Memory (SRAM), or the combination thereof, for allocating space as a data buffer storing user data (also referred to as host data) that is to be programmed into the flash module150, and has been read from the flash module150and is to be output to the host side110. The RAM136stores necessary data in execution, such as variables, data tables, data abstracts, host-to-flash (H2F) tables, flash-to-host (F2H) tables, and so on. A shared bus architecture may be configured in the flash controller130for coupling between components to transfer data, addresses, control signals, etc., which include the host I/F131, the first processing unit134, the redundant array of independent disks (RAID) engine135, the RAM136, the data access engine137, and so on. The bus includes a set of parallel physical-wires connected to two or more components of the flash controller130. The bus is a shared transmission medium so that only two devices can access to the wires to communicate with each other for transmitting data at any one time. Data and control signals travel in both directions between the components along data and control lines, respectively. Addresses on the other hand travel only one way along address lines. For example, when the processing unit134wishes to read data from a particular address of the RAM136, the processing unit134sends this address to the RAM136on the address lines. The data of that address is then returned to the processing unit134on the data lines. To complete the data read operation, control signals are sent along the control lines. 
A dedicated bus, which is independent from the shared bus architecture, may be further configured in the flash controller130for coupling between components to transfer data, addresses, control signals, etc., which include the first processing unit134, the routing engine132and the accelerator133. The routing engine132is employed to complete the tasks of front-end processing stage, and the accelerator133is employed to complete the tasks of mid-end and back-end processing stages. The routing engine132and the accelerator133may not be coupled to the shared bus architecture, so as to avoid occupying the bandwidth of the shared bus architecture, which reduces the overall system performance. The flash module150provides huge storage space typically in hundred Gigabytes (GB), or even several Terabytes (TB), for storing a wide range of user data, such as high-resolution images, video files, etc. The flash module150includes control circuits and memory arrays containing memory cells that can be configured as Single Level Cells (SLCs), Multi-Level Cells (MLCs), Triple Level Cells (TLCs), Quad-Level Cells (QLCs), or any combinations thereof. The first processing unit134programs user data into a designated address (a destination address) of the flash module150and reads user data from a designated address (a source address) thereof through the flash I/F139. The flash I/F139may use several electronic signals run on physical wires including data lines, a clock signal line and control signal lines for coordinating the command, address and data transfer with the flash module150. The data lines may be used to transfer commands, addresses, read data and data to be programmed; and the control signal lines may be used to transfer control signals, such as Chip Enable (CE), Address Latch Enable (ALE), Command Latch Enable (CLE), Write Enable (WE), etc. Refer toFIG.2. The flash I/F151may include four I/O channels (hereinafter referred to as channels) CH #0 to CH #3 and each is connected to four NAND flash units, for example, the channel CH #0 is connected to the NAND flash units153#0,153#4,153#8 and153#12. Each NAND flash unit can be packaged in an independent die. The flash I/F139may issue one of the CE signals CE #0 to CE #3 through the I/F151to activate the NAND flash units153#0 to153#3, the NAND flash units153#4 to153#7, the NAND flash units153#8 to153#11, or the NAND flash units153#12 to153#15, and read data from or program data into the activated NAND flash units in parallel. Refer toFIG.3showing a flowchart for programming data. In the front-end processing stage, the operation settings are checked to determine whether there is any task to be executed, which is associated with the host I/F131(step S310). If so (the “Yes” path of step S310), the host I/F131is driven to obtain data from the host side110and store the obtained data in the designated address in the RAM136(step S320). Otherwise (the “No” path of step S310), the process proceeds to the next stage (that is, the mid-end processing stage) (step S330). In the mid-end processing stage, the operation settings are checked to determine whether there is any task to be executed, which is associated with the RAID engine135(step S330). 
If so (the “Yes” path of step S330), the RAID engine135is driven to read data from the designated address in the RAM136, encrypt the re-ordered data of a data group (or generate data of a parity page according to the re-ordered data of a data group), and store the encrypted data (or the parity-page data) in the designated address in the RAM136(step S340). Otherwise (the “No” path of step S330), the process proceeds to the next stage (that is, the back-end processing stage) (step S350). In the back-end processing stage, the operation settings are checked to determine whether there is any task to be executed, which is associated with the data access engine137(step S350). If so (the “Yes” path of step S350), the data access engine137is driven to read data from the designated address in the RAM136, which may be the data obtained from the host side110, the encrypted data or parity-page data generated by the RAID engine135, etc. Moreover, the data access engine137is driven to perform post-operations on the read data, such as scramble the read data, append LDPC code to the read data, etc. (step S360). Otherwise (the “No” path of step S350), the process ends. In some implementations, the first processing unit134is normally used to execute firmware to activate, control and supervise the whole data flow of data programming. In order to reduce the occupation of the time and computing resources of the first processing unit134, in an embodiment of the invention, the flash controller130is equipped with the routing engine132and the accelerator133implemented by dedicated circuits, so that the first processing unit134would selectively activate the routing engine132, the accelerator133and the second processing unit138through a control protocol, and the execution of the whole data flow would be chained by the routing engine132, the accelerator133and the second processing unit138themselves without further instruction by the first processing unit134. Moreover, the control protocol would selectively ignore one or two processing stages in the whole data flow in terms of the characteristics of different data programming processes. An embodiment of the invention proposes to handle the whole data flow of data programming in a transaction-by-transaction manner, so that the data to be programmed flows through certain designated hardware for processing. In order to let the routing engine132, the accelerator133and the second processing unit138know the transaction profile of data programming, an embodiment of the invention allows the first processing unit134to generate leading information and cargo flags and transmit them to the routing engine132and the accelerator133to inform the routing engine132, the accelerator133and the second processing unit138such as which carrier the data in each transaction (also referred to as a data-programming transaction) to be programmed belongs to, the readiness status of each cargo in this carrier, which processing stages the carrier needs to go through, etc., so that the execution between the routing engine132, the accelerator133and the second processing unit138for each transaction is coordinated. Refer toFIG.4showing a schematic diagram of the transaction profile including the two-byte leading information (Byte0-Byte1)410, the four-byte cargo flags (Byte2-Byte5)420. 
Assuming that programming 128 KB of data to the flash module150at a time would yield better performance: the flash controller130drives the data access engine to program 128 KB of data into multiple NAND flash units in the flash module150in the multi-channel interleave manner. According to the above example, the 0thbyte (Byte0) of the leading information410stores the carrier identification (ID), which is used to indicate the specific 128 KB data. The 1stbyte (Byte1) of the leading information410stores information about operation settings, in which the least three significant bits store information indicating which processing stage/stages is/are activated. For example, the least three significant bits being “Ob111” indicate that all the front-end, the mid-end and the back-end processing stages are activated. By providing the carrier ID, the 128K data with the same carrier ID seems to be loaded on the same virtual carrier, cooperating with each belonging transaction to be processed between the routing engine132and the accelerator133. It is to be noted that a virtual carrier may load data of a specific length according to a particular type of flash module, such as 16 KB, 32 KB, 64 KB, or others. Since one transaction may not be used to supervise the data programming for the whole 128 KB of data, each bit in the cargo flags420is employed to indicate whether a specific data fragment (also referred to as cargo) in the 128 KB data is ready, “1” means ready, and “0” means not yet. For example, the least two significant bits in the 2ndbyte (Byte2) being set to “Ob11” indicates that the 0thand the 1st4 KB of data in the whole 128 KB data are ready. The least two significant bits in the 3rdbyte (Byte3) being set to “Ob11” indicates that the 8thand the 9th4 KB of data in the whole 128 KB data are ready. It is to be understood that, in some system configurations, 4 KB data is referred to as one host page (including eight continuous logical block addresses, LBAs) of data. In an example, when receiving a host write command instructing to write 128 KB of data from the host side110through the host I/F131, the firmware executed by the first processing unit134generates the transaction profile: the carrier ID is “0x00”; the operating settings are “0x07”, which indicates that the front-end, the mid-end and the back-end processing stages need to be activated for this transaction; and the cargo flags are “0x0000000” (which may be called the initial cargo flags), which indicate that no data is ready. Next, the first processing unit134transmits the transaction profile, the host write command, and the designated address (also referred to as the destination address) in the RAM136for storing the 128 KB data to the routing engine131. The host write command may contain the following information: the operation code (Opcode), the start LBA number, the LBA length, etc. The host write command and the destination address may be collectively referred to as a front-end parameter set. Typically, one LBA indicates 512 B of data and one host page holds eight continuous LBAs of data. Although the embodiments of the invention describe the length of one LBA is 512 B and one host page contains eight LBAs of data, those artisans may modify the length of one LBA to other length (such as 256 B, 1 KB, 2 KB etc.), and/or modify a host page to hold a greater or smaller number of LBAs of data according to different system requirements. 
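The six-byte transaction profile of FIG.4 lends itself to a compact data structure. The following C sketch only mirrors the layout described above (carrier ID in Byte0, operation settings in Byte1 with the three least significant bits selecting the stages, cargo flags in Byte2-Byte5 with one bit per 4 KB cargo); the struct and helper names are assumptions introduced for illustration.

#include <stdbool.h>
#include <stdint.h>

/* Stage-activation bits kept in the least significant bits of the operation settings. */
#define STAGE_FRONT_END  0x01u
#define STAGE_MID_END    0x02u
#define STAGE_BACK_END   0x04u

/* Transaction profile: Byte0 carrier ID, Byte1 operation settings, Byte2-Byte5 cargo flags. */
struct transaction_profile {
    uint8_t  carrier_id;     /* identifies one 128 KB virtual carrier            */
    uint8_t  op_settings;    /* 0b111 means all three stages are activated       */
    uint32_t cargo_flags;    /* one bit per 4 KB cargo, 1 = ready, 0 = not yet   */
};

static bool stage_enabled(const struct transaction_profile *t, uint8_t stage_bit)
{
    return (t->op_settings & stage_bit) != 0u;
}

static bool carrier_ready(const struct transaction_profile *t)
{
    return t->cargo_flags == 0xFFFFFFFFu;   /* all 32 cargos ready or ignorable */
}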
In another example, when receiving a host write command instructing to write 64 KB of data from the host side110through the host I/F131, the firmware executed by the first processing unit134generates the transaction profile: the carrier ID is “0x01”; the operating settings are “0x07”; and the cargo flags are “0xFFFF0000” (which may be called the initial cargo flags), which indicate that data related to the 0thto 15thcargos is not ready, and data related to the 16thto 31thcargos is ready (it is also implied that the data can be ignored and do not need to be processed). Next, the first processing unit134transmits the transaction profile, the host write command, and the designated address in the RAM136for storing the 64 KB data to the routing engine131. In still another example, after 128 KB data has been collected in a Garbage Collection (GC) process, the firmware executed by the first processing unit134generates the transaction profile: the carrier ID is “0x02”; the operating settings are “0x04” to indicate that only the back-end processing stage needs to be activated; and the cargo flags are “0xFFFFFFFF” (which may be called the initial cargo flags), which indicate that all data is ready. The first processing unit134transmits the initial cargo flags for each transaction to the routing engine132and the accelerator133to inform the routing engine132and the accelerator133about which portions of data related to each transaction need to be prepared in the front-end processing stage. Before actually pushing the leading information and the front-end parameter set of a transaction into the routing engine132, the first processing unit134needs to prepare the mid-end parameter set and the back-end parameter set associated with the transaction. The firmware executed by the first processing unit134stores the operation details of the mid-end and the back-end processing stages of up to a maximum number of transactions (e.g. 64 transactions) in the SRAM of the accelerator133. The mid-end parameter set indicates the details of how to drive the RAID engine135to complete the mid-end processing stage, and may include a source address allocated in the RAM136for storing the raw data, the encryption or encoding parameters used to set the RAID engine135, a destination address allocated in the RAM136for storing the encrypted or encoded results. The back-end parameter set indicates the details of how to drive the data access engine137to complete the back-end processing stage, and may include a programming table and an index of the programming table. The programming table includes an address (may be referred to as a source address) allocated in the RAM136for storing source data, a series of flash commands and their programming parameters (such as a command type, a programming mode, a physical address to be programed into for each flash command, etc.). The physical address (may be referred to as a destination address) may include information about a channel number, a physical-block number, a physical-page number, a section number, etc. For the executions of host write commands or the performance of background procedures, the first processing unit134generates leading information, initial cargo flags, front-end parameter sets, mid-end parameter sets and back-end parameter sets for multiple transactions. 
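Following the three examples above, the initial cargo flags simply pre-mark the cargos that will never arrive (or, for garbage collection, mark everything as ready). The sketch below is a hedged illustration of that calculation; the helper name initial_cargo_flags and the exact signature are invented here.

#include <stdint.h>

#define CARGO_SIZE  (4u * 1024u)   /* one cargo corresponds to one 4 KB host page */

/* Cargos beyond the requested length are pre-set to 1 so they are treated as ready and
 * ignored: 128 KB -> 0x00000000, 64 KB -> 0xFFFF0000, GC data already collected -> 0xFFFFFFFF. */
static uint32_t initial_cargo_flags(uint32_t data_len, int data_already_ready)
{
    if (data_already_ready)
        return 0xFFFFFFFFu;                 /* e.g. 128 KB gathered by garbage collection */
    uint32_t used = (data_len + CARGO_SIZE - 1u) / CARGO_SIZE;  /* cargos that must arrive */
    if (used >= 32u)
        return 0x00000000u;
    return ~((1u << used) - 1u);            /* set the bits of the unused cargos */
}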
After the first processing unit134transmits the leading information, the initial cargo flags and the front-end parameter sets for these transactions to the routing engine132and transmits the leading information, the initial cargo flags, the mid-end parameter sets and the back-end parameter sets for these transactions to the accelerator133, the routing engine132, the accelerator133and the data access engine137completes a variety of data programming processes accordingly, without the first processing unit134to supervise the whole data flow of the data programming processes, and wait for the status replies from the host I/F131, the RAID engine135and the data access engine137during the data programming processes. In other words, the first processing unit134does not directly drive the host I/F131, the RAID engine135and the data access engine137, but employs the routing engine132and the accelerator133to drive the host I/F131, the RAID engine135and the data access engine137to complete the operations of front-end, mid-end and back-end processing stages during the data programming processes. The saved time and computing resources allows the first processing unit134to perform other tasks, thereby improving the overall system performance. After that, for each transaction, the first processing unit134may read the execution status from the designated address in the RAM136or query the routing engine132and/or the accelerator133to obtain the execution status at regular time intervals. The routing engine132receives the operation settings and the front-end parameter set of a transaction from the first processing unit134, and the operation settings indicate whether each of the front-end, the mid-end and the back-end processing stages is required to be activated. When determining that the front-end processing stage is required to be activated according to the operation settings, the routing engine132drives the host I/F131to obtain data from the host side110and store the obtained data in the designated address of the RAM136through the shared bus architecture according to the front-end parameter set. Refer toFIG.5showing a block diagram of the routing engine132. The routing engine132includes the status queue510, the controller520and the activation queue530. The controller520may be practiced in a general-purpose processor or a dedicated circuit, and the status queue510and the activation queue530may be practiced in pre-allocated space in the SRAM. The routing engine132may perform a series of signal and data interchanges with the first processing unit134through the Advanced High-Performance (AHB) bus. If there is any transaction (i.e. virtual carrier) needs to obtain data from the host side110through the host I/F131, the firmware executed by the first processing unit134pushes the transaction profile (including the initial cargo flags) and the front-end parameter set into the status queue510for instructing the routing engine132how to drive the host I/F131to obtain the designated data and store the data in the designated address in the RAM136. The front-end parameter set indicates the logical address range of the host data, which may be represented by a start LBA number and an LBA length, as well as the designated location in the RAM136for storing the host data. Refer also toFIG.6showing a flowchart of the method for performing the front-end processing stage by the controller520. The method iteratively executes an outer loop (from steps S610to S670) and an inner loop (from steps S630to S660). 
Each iteration of the outer loop starts with the controller520popping out a transaction from the status queue510(step S610), and then determining whether the data related to the transaction needs to go through the front-end processing stage according to the operation settings of the transaction (step S620). If so (the “Yes” path of step S620), the inner loop is started for driving (or activating) the host I/F131to obtain the host data of the designated address from the host side110and storing the obtained host data in the designated address in the RAM136according to the content of transaction (step S630). It is to be understood that, for better performance, the processing order of the queued transactions may not be consistent with the time order in which they arrive to the statues queue510. That is, a transaction that arrives in the status queue510earlier is not necessarily processed by the controller520earlier. In other words, while the controller520drives the host I/F131to complete the operation indicated by a transaction's front-end parameter set, the status queue510may store an earlier arriving transaction that has not yet been processed. Since the controller520may complete the acquisition of the host data related to one transaction in multiple batches, each time after any host page (or any LBA range) of host data has been successfully stored in the designated address in the RAM136(step S630), the controller520updates the cargo flags to reflect the execution status of the host I/F131(step S640), and pushing the leading information and the updated cargo flags into the activation queue530, so that the accelerator133determines whether to activate the subsequent processing stage accordingly (step S650). For example, the popped transaction records the following transaction profile: the carrier ID is “0x01”; the operation settings are “0x07”; and the cargo flags are “0xFFFF0000”. The controller520uses two batches to drive the host I/F131to complete the reading of 64 KB data. After successfully performing the first batch for the 32 KB data, the controller520updates the cargo flags with “0xFFFF00FF”, and pushes the updated transaction profile (including the carrier ID “0x01”; the operation settings “0x07”; and the cargo flags “0xFFFF00FF”) into the activation queue530. After successfully performing the second batch for the 32 KB data, the controller520updates the cargo flags with “0xFFFFFF00”, and pushes the updated transaction profile (including the carrier ID “0x01”; the operation settings “0x07”; and the cargo flags “0xFFFF FF00”) into the activation queue530. If the operation settings indicate that the data related to this transaction does not go through the front-end processing stage (the “No” path of step S620), the controller520pushes the original transaction profile into the activation queue directly (step S670). Each time the controller520pushes the original or updated transaction profile into the activation queue530, it may mean that the controller520notifies the accelerator133of the activation message for the corresponding transaction. The accelerator133receives the operation settings, the mid-end parameter set and the back-end parameter set of a transaction from the first processing unit134, and the operation settings indicate whether every of the front-end, the mid-end and the back-end processing stages is required to be activated. 
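Before turning to the accelerator, the FIG.6 flow just described can be condensed into a small loop. This is an illustrative reconstruction rather than the embodiment's code; the queue type, the front_end_job fields and host_if_fetch_batch stand in for the status queue510, the activation queue530 and the host I/F131, and the transaction_profile helpers reuse the sketch given earlier.

#include <stdint.h>

struct front_end_job {
    struct transaction_profile profile;  /* carrier ID, operation settings, cargo flags  */
    uint64_t dest_addr;                  /* destination address in the RAM 136           */
    uint32_t start_lba, lba_count;       /* logical range requested by the host command  */
};

struct queue;                                                   /* placeholder queue type */
extern int  queue_pop(struct queue *q, struct front_end_job *job);
extern void queue_push(struct queue *q, const struct transaction_profile *t);
extern uint32_t host_if_fetch_batch(const struct front_end_job *job, int batch);

static void routing_engine_run(struct queue *status_q, struct queue *activation_q)
{
    struct front_end_job job;
    while (queue_pop(status_q, &job)) {                       /* step S610 */
        if (!stage_enabled(&job.profile, STAGE_FRONT_END)) {  /* step S620 */
            queue_push(activation_q, &job.profile);           /* step S670 */
            continue;
        }
        for (int batch = 0; !carrier_ready(&job.profile); batch++) {
            uint32_t done = host_if_fetch_batch(&job, batch); /* step S630 */
            job.profile.cargo_flags |= done;                  /* step S640 */
            queue_push(activation_q, &job.profile);           /* step S650 */
        }
    }
}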
When receiving the activation message for the transaction from the routing engine132and determining that the mid-end processing stage needs to be activated according to the operation settings, the accelerator133drives the RAID engine135to obtain data from a designated address of the RAM136through the shared bus, and encrypt the obtained data or generate parity-page data in terms of multiple pages of the obtained data according to the mid-end parameter set. Subsequently, when determining that the mid-end processing stage for the transaction does not need to be activated according to the operation settings or has been completed, and the back-end processing stage needs to be activated for the transaction according to the operation settings, the accelerator133drives the data access engine137according to the back-end parameter set to obtain data from a designated address of the RAM136through the shared bus and program the obtained data into a designated address of the flash module150. Refer toFIG.7showing a block diagram of the accelerator133. The accelerator133includes the controller710, the execution table720, the mid-end parameter set730, the back-end parameter set740and the programming table750. The controller710may be practiced in a general-purpose processor or a dedicated circuit, and the execution table720, the mid-end parameter set730, the back-end parameter set740and the programming table750may be practiced in pre-allocated space in the SRAM. The accelerator133may perform a series of signal and data interchanges with the first processing unit134through the AHB bus. The execution table720stores transaction profiles for multiple transactions (i.e. virtual carriers), and the content of the execution table720is filled by the first processing unit134. An example of the execution table720is shown in Table 1:
TABLE 1
Entry No.   Leading Information   Cargo Flags
entry#0     leadInfo#10           cargoFlag#10
entry#1     leadInfo#11           cargoFlag#11
entry#2     leadInfo#12           cargoFlag#12
entry#3     leadInfo#13           cargoFlag#13
...         ...                   ...
entry#62    N/A                   N/A
entry#63    N/A                   N/A
The first processing unit134sequentially fills in the transaction profiles (including the leading information and the cargo flags) according to the execution sequence of the transactions. For example, the first processing unit134sequentially fills the 10thto 13thtransaction profiles into the 0thto 3rdentries (entry #0 to entry #3) in the execution table720, respectively. The transaction profile of the 10thtransaction includes the corresponding leading information (leadInfo #10) and the corresponding cargo flags (cargoFlag #10), the transaction profile of the 11thtransaction includes the corresponding leading information (leadInfo #11) and the corresponding cargo flags (cargoFlag #11), and so on. Although the order in which the transactions are pushed into the activation queue530is not necessarily the order in which the first processing unit134originally pushes them into the status queue510, the controller710executes the transactions in the order arranged in the execution table720. That is, the controller710cannot drive the RAID engine135and the data access engine137for any of the 11thto 13thtransactions if the mid-end processing stage and/or the back-end processing stage required for the 10thtransaction has not been completed.
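The execution table720and the in-order gating just described (and elaborated in the FIG.8 description below) can be sketched as follows. Activation messages may arrive out of order, so their cargo flags are OR-merged into the matching entry, and the head entry is only released to the mid-end/back-end stages once its flags reach 0xFFFFFFFF. The structure and function names below are assumptions for illustration.

#include <stdint.h>

#define EXEC_ENTRIES 64

struct exec_entry {
    uint8_t  carrier_id;
    uint8_t  op_settings;
    uint32_t cargo_flags;
    int      valid;
};

static struct exec_entry exec_table[EXEC_ENTRIES];

/* Merge an activation message popped from the activation queue into its entry (step S820). */
static void merge_activation(uint8_t carrier_id, uint32_t cargo_flags)
{
    for (int i = 0; i < EXEC_ENTRIES; i++) {
        if (exec_table[i].valid && exec_table[i].carrier_id == carrier_id) {
            exec_table[i].cargo_flags |= cargo_flags;
            return;
        }
    }
}

/* Entry 0 is always the oldest transaction; later entries must wait for it (step S830). */
static int head_entry_ready(void)
{
    return exec_table[0].valid && exec_table[0].cargo_flags == 0xFFFFFFFFu;
}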
If there is any transaction that needs to be processed by the RAID engine135, the first processing unit134stores the corresponding mid-end parameter set730in a designated address in the SRAM of the accelerator133in advance, so that the controller710sets the RAID engine135accordingly to complete the mid-end processing stage for this transaction. If there is any transaction that needs to be processed by the data access engine137, the first processing unit134stores the corresponding back-end parameter set740and the corresponding programming table750in a designated address in the SRAM of the accelerator133in advance, so that the second processing unit138in the data access engine137drives the flash I/F139accordingly to complete the back-end processing stage for this transaction. Refer also toFIG.8showing a flowchart of the method for performing the front-end processing stage by the controller710. The method iteratively executes a loop (from steps S810to S880). Each iteration of the loop starts with the controller710popping out a transaction from the activation queue530(step S810), performing logic OR operation on the cargo flags of the popped transaction and the corresponding cargo flags in the execution table720and updating the calculation results with the corresponding cargo flags in the execution table720(step S820), and determining whether the cargo flags of the 0thentry equal “0xFFFFFFFF” (step S830). If so (the “Yes” path of step S830), it means that the front-end processing stage for the 0thentry has completed or there is no need to perform the front-end processing stage for the 0thentry, and the 0thentry in the execution table720goes through the mid-end processing stage (steps S840to S860). Otherwise (the “No” path of step S830), it means that the front-end processing stage for the 0thentry has not completed, the controller710pops the next transaction out of the activation queue530to process (step S810). For example, assume that the execution table stores two transactions. At the time point t0, the 0thentry includes the transaction profile: the carrier ID is “0x10”; operation settings are “0x07”; and the cargo flags are “0x00000000”. The 1stentry includes the transaction profile: the carrier ID is “0x11”; operation settings are “0x07”; and the cargo flags are “0x00000000”. At the time point t1, the controller710pops one transaction out of the activation queue530, which includes the following transaction profile: the carrier ID “0x10”; the operation settings “0x07”; and the cargo flags “0x0000FFFF” (step S810). The controller710performs the logical OR operation on the cargo flags “0x0000FFFF” of the popped transaction and the corresponding cargo flags “0x00000000” (i.e. the cargo flags of the 0thentry) in the execution table720, and updates the corresponding cargo flags in the execution table720with the calculation result “0x0000FFFF” (step S820). Since the cargo flags “0x0000FFFF” of the 0thentry in the execution table720does not equal 0xFFFFFFFF (the “No” path of step S830), the process cannot be executed downward. At the time point t2, the controller710pops one transaction out of the activation queue530, which includes the following transaction profile: the carrier ID “0x11”; the operation settings “0x07”; and the cargo flags “0xFFFFFFFF” (step S810). The controller710performs the logical OR operation on the cargo flags “0xFFFFFFFF” of the popped transaction and the corresponding cargo flags “0x00000000” (i.e. 
the cargo flags of the 1stentry) in the execution table720, and updates the corresponding cargo flags in the execution table720with the calculation result “0xFFFFFFFF” (step S820). Since the cargo flags “0x0000FFFF” of the 0thentry in the execution table720is still not equal to 0xFFFFFFFF (the “No” path of step S830), even if the 1stentry is ready, the process cannot be executed downward. At the time point t3, the controller710pops one transaction out of the activation queue530, which includes the following transaction profile: the carrier ID “0x10”; the operation settings “0x07”; and the cargo flags “0xFFFF0000” (step S810). The controller710performs the logical OR operation on the cargo flags “0xFFFF0000” of the popped transaction and the corresponding cargo flags “0x0000FFFF” (i.e. the cargo flags of the 0thentry) in the execution table720, and updates the corresponding cargo flags in the execution table720with the calculation result “0xFFFFFFFF” (step S820). Since the cargo flags “0xFFFFFFFF” of the 0thentry in the execution table720equals 0xFFFFFFFF (the “Yes” path of step S830), the process proceeds to the mid-end processing stage for the 0thentry (steps S840to S860). It is to be noted that, after completing the back-end processing stage for 0thentry, the controller710deletes the data of the 0thentry in the execution table720and moves the data of the 1stentry and the subsequent entries in the execution table720forward by one entry. That is, the 0thentry of the updated execution table720includes the following transaction profile: the carrier ID “0x11”; the operation settings “0x07”; and the cargo flags “0xFFFFFFFF”. At the beginning of mid-end processing stage, the controller710determines whether the data corresponding to the 0thentry in the execution table720needs to go through the mid-end processing stage according to the operations settings of the 0thentry (step S840). If so (the “Yes” path of step S840), the controller710sets the RAID engine135according to the mid-end parameter set of the 0thentry for driving the RAID engine135to complete a designated data encryption or encoding operation for the data corresponding to the 0thentry (step S850). Since the encoding by the RAID engine135takes a period of time, the controller710may send polls to the RAID engine135at regular time intervals, and determine whether the mid-end processing stage is completed according to the replied statuses (step S860). If the mid-end processing stage hasn't been completed (the “No” path of step S860), the controller710continues to wait and poll. If the mid-end processing stage has been completed (the “Yes” path of step S860), the process proceeds to the next stage (i.e. the back-end processing stage) (steps S870and S880). Moreover, if the data corresponding to the 0thentry in the execution table720does not need to go through the mid-end processing stage (the “No” path of step S840), the process proceeds to the next stage directly (steps S870and S880). The RAID engine135may perform a variety of procedures, such as clear and encode, encode, terminate encode, terminate, resume, etc., according to the instructions issued by the accelerator133. When receiving the clear and encode instruction, the controller in the RAID engine135reads data of multiple host pages (such as 32 host pages) from a designated address (also called source address) in the RAM136through the shared bus, and overwrites the data stored in the SRAM of the RAID engine135with the read data. 
When receiving the encode instruction, the controller in the RAID engine135reads data of multiple host pages from a designated address in the RAM136through the shared bus, performs the logical Exclusive-OR (XOR) operation on the read data and the data stored in the SRAM of the RAID engine135, and overwrites the data stored in the SRAM of the RAID engine135with the calculated result. When receiving the terminate encode instruction, the controller in the RAID engine135reads data of multiple host pages from a designated address in the RAM136through the shared bus, performs the logical XOR operation on the read data and the data stored in the SRAM of the RAID engine135, overwrites the data stored in the SRAM of the RAID engine135with the calculated result, and stores the calculated result in a designated address (also called the destination address) in the RAM136through the shared bus. For example, the first processing unit134may store 64 transactions (the carrier IDs are “0x20” to “0x5F”) in the execution table. The mid-end parameter set730of the 0thentry includes the clear and encode instruction, the mid-end parameter sets730of the 1stto 62ndentries include the encode instructions, and the mid-end parameter set730of the 63rdentry includes the terminate encode instruction. Thus, the first processing unit134may drive the RAID engine135to execute the instructions in these 64 entries to obtain parity-page data corresponding to the host data. At the beginning of the back-end processing stage, the controller710determines whether the data corresponding to the 0thentry in the execution table720needs to go through the back-end processing stage according to the operation settings of the 0thentry (step S870). If so (the “Yes” path of step S870), the controller710transmits a message to the second processing unit138for completing a designated data-programming operation according to the back-end parameter set associated with the 0thentry (step S880). If the data corresponding to the 0thentry in the execution table720does not need to go through the back-end processing stage (the “No” path of step S870), the process continues to pop the next transaction out of the activation queue530to process (step S810). The message sent from the controller710to the second processing unit138includes a programming index and a source address, the programming index indicates a designated address in the SRAM of the accelerator133, and the source address indicates data stored in the RAM136, which is to be programmed into the flash module150. The second processing unit138reads data from the source address in the RAM136through the shared bus, reads the programming table750corresponding to the 0thentry from the SRAM of the accelerator133according to the programming index, and drives the flash I/F139according to flash commands with programming parameters in the read programming table750for programming the read data into a designated physical address in the flash module150.
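The clear-and-encode / encode / terminate-encode sequence above is essentially a running XOR accumulation over groups of host pages. The C sketch below illustrates only that behavior, using a single 4 KB page per call for brevity (the description uses groups of, e.g., 32 host pages); the buffer and function names are assumptions for illustration.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define HOST_PAGE 4096u                /* one host page of 4 KB                          */

static uint8_t raid_sram[HOST_PAGE];   /* stands in for the SRAM of the RAID engine 135 */

/* clear-and-encode: overwrite the SRAM with the first group of pages. */
static void raid_clear_and_encode(const uint8_t *src)
{
    memcpy(raid_sram, src, HOST_PAGE);
}

/* encode: XOR the next group of pages into the running parity. */
static void raid_encode(const uint8_t *src)
{
    for (size_t i = 0; i < HOST_PAGE; i++)
        raid_sram[i] ^= src[i];
}

/* terminate-encode: fold in the last group and store the parity page at the
 * destination address in the RAM 136. */
static void raid_terminate_encode(const uint8_t *src, uint8_t *dst)
{
    raid_encode(src);
    memcpy(dst, raid_sram, HOST_PAGE);
}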
It is to be noted that the first processing unit134may be configured to handle the critical operations of the flash controller130, such as system booting, system off, execution scheduling for a variety of host commands, sudden power-off recovery (SPOR), etc., and the second processing unit138may be configured to interact with the flash module150, which includes driving the flash I/F139to read data from a designated address in the flash module150, program data into a designated address in the flash module150, erase a designated physical block in the flash module150, etc. The aforementioned design makes the whole system flexibly configure the data flow. For example, Table 2 shows that the data programming of the four transactions needs to go through the front-end, the mid-end and the back-end processing stages, which are arranged into a pipeline of parallel execution.
TABLE 2
Time Point   Carrier#0   Carrier#1   Carrier#2   Carrier#3
t0           Front-end
t1           Mid-end     Front-end
t2           Back-end    Mid-end     Front-end
t3                       Back-end    Mid-end     Front-end
t4                                   Back-end    Mid-end
t5                                               Back-end
Table 3 shows that the data programming for the data corresponding to the 0thto 2ndentries needs to go through the front-end and the mid-end processing stages, and the data programming for the data corresponding to the 3rdentry needs to go through the front-end, the mid-end and the back-end processing stages, which are arranged into a pipeline of parallel execution.
TABLE 3
Time Point   Carrier#0   Carrier#1   Carrier#2   Carrier#3
t0           Front-end
t1           Mid-end     Front-end
t2                       Mid-end     Front-end
t3                                   Mid-end     Front-end
t4                                               Mid-end
t5                                               Back-end
Table 4 shows that the data programming for the data corresponding to the 0thto 1stentries needs to go through the front-end and the mid-end processing stages, the data programming for the data corresponding to the 2ndentry needs to go through the mid-end processing stage, and the data programming for the data corresponding to the 3rdentry needs to go through the mid-end and the back-end processing stages, which are arranged into a pipeline of parallel execution.
TABLE 4
Time Point   Carrier#0   Carrier#1   Carrier#2   Carrier#3
t0           Front-end
t1           Mid-end     Front-end
t2                       Mid-end
t3                                   Mid-end
t4                                               Mid-end
t5                                               Back-end
Table 5 shows that the data programming for the data corresponding to the 0thto 2ndentries needs to go through the front-end processing stage, and the data programming for the data corresponding to the 3rdentry needs to go through the front-end and the mid-end processing stages, which are arranged into a pipeline of parallel execution.
TABLE 5
Time Point   Carrier#0   Carrier#1   Carrier#2   Carrier#3
t0           Front-end
t1                       Front-end
t2                                   Front-end
t3                                               Front-end
t4                                               Mid-end
Some or all of the aforementioned embodiments of the method of the invention may be implemented in a computer program such as a driver for dedicated hardware, a firmware translation layer (FTL) of a storage device, or others. Other types of programs may also be suitable, as previously explained. Since the implementation of the various embodiments of the present invention into a computer program can be achieved by the skilled person using his routine skills, such an implementation will not be discussed for reasons of brevity. The computer program implementing one or more embodiments of the method of the present invention may be stored on a suitable computer-readable data carrier such as a DVD, a CD-ROM, a USB stick, or a hard disk, which may be located in a network server accessible via a network such as the Internet, or any other suitable carrier.
Although the embodiment has been described as having specific elements inFIGS.1,2,5, and7, it should be noted that additional elements may be included to achieve better performance without departing from the spirit of the invention. Each element ofFIGS.1,2,5, and7is composed of various circuits and arranged operably to perform the aforementioned operations. While the process flows described inFIGS.3,6, and8include a number of operations that appear to occur in a specific order, it should be apparent that these processes can include more or fewer operations, which can be executed serially or in parallel (e.g., using parallel processors or a multi-threading environment). While the invention has been described by way of example and in terms of the preferred embodiments, it should be understood that the invention is not limited to the disclosed embodiments. On the contrary, it is intended to cover various modifications and similar arrangements (as would be apparent to those skilled in the art). Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.
45,070
11860776
DETAILED DESCRIPTION OF EMBODIMENTS The present memory restoration system enables a collection of computing systems to prepare inactive rewritable memory for reserve and for future replacement of other memory. The preparation occurs while the other memory is active and available for access by a user of the computing system. The preparation of the reserved memory part is performed off-line at the computing system in a manner that is isolated from the current user of the active memory part. Preparation of memory includes erasure of data, reconfiguration, etc. The memory restoration system allows for simple exchange of the reserved memory part, once the active memory part is returned. In some implementations, the previously active memory may be concurrently recycled for future reuse in this same manner to become a reserved memory. In some implementations, a collection of computing devices, e.g. servers, may incorporate the memory restoration system in a cloud computing environment. The computing devices used in a cloud computing system include more memory and more powerful components, such as a robust operating system, than an individual client device, e.g. a workstation. The cloud computing system may include various types of servers, such as bare metal instances without a hypervisor. A service processor is configured to switch between using one of two memory parts, e.g. two flash memories. While the service processor is connected to and using a current memory part, a root of trust processing unit (ROT) can configure the “inactive” memory part in the background concurrent with a user resident on a server, for example, in a cloud service application. The inactive memory part is disabled from connecting with the service processor during the preparation phase. The cloud infrastructure may “swap” to what was previously the inactive memory part when a user vacates the server. The concurrent recycling process of the present memory restoration system may result in a substantial speeding up of the server preparation process. Exchange of memory parts occurs by use of electrical connections, taking milliseconds to a few seconds to complete the swap. The computing device accessed by a user is provisioned with clean firmware. Preparation of the reserved memory part running in the background may include erasing data from a previous user of the reserved memory part, to wipe clean the reserved memory part, e.g. restoring it to factory settings. In some instances, this preparation phase may also include reading data stored in the reserved memory part to confirm that no bits have changed while the previous user was resident on the system. The preparation phase may also include reconfiguring the reserved memory part for use by a subsequent user. Reconfiguration may require that a startup or boot memory be written with appropriate data to initiate an environment or task of the subsequent user. For example, the root of trust may prepare a fresh software image that is booted onto the cleaned flash memory before the prepared memory is handed over to the service processor for a next user. The fresh image may be from a previously prepared memory device. Thus, the preparation process may be continuously flip-flopping memory parts. A user, e.g., end user, as referred to in this description, may be any person or entity designated to have available access to a given service processor and an active memory device, in which the active memory device is preconfigured for the user's needs.
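The preparation phase described above (optional integrity check, erase, then a fresh firmware image) can be sketched in a few lines of C. This is a minimal illustration under assumed interfaces: the flash_part type and the flash_* and rot_prepare_reserved functions are placeholders, not part of the described embodiment.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct flash_part;                                        /* the inactive (reserved) memory part */
extern bool flash_verify_expected(struct flash_part *p);  /* confirm no bits were changed        */
extern bool flash_erase_all(struct flash_part *p);        /* restore to factory settings          */
extern bool flash_write_image(struct flash_part *p, const uint8_t *image, size_t len);

/* Background preparation run by the ROT while the service processor keeps using
 * the other, active memory part. */
static bool rot_prepare_reserved(struct flash_part *reserved,
                                 const uint8_t *fresh_image, size_t image_len,
                                 bool do_forensic_check)
{
    if (do_forensic_check && !flash_verify_expected(reserved))
        return false;          /* suspected tampering: keep the part aside for forensics */
    if (!flash_erase_all(reserved))
        return false;          /* wipe data left by the previous user                     */
    return flash_write_image(reserved, fresh_image, image_len);  /* boot a clean image    */
}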
For example, the user may be a tenant of a server space in a multi-tenant cloud architecture, where the tenant is isolated and invisible to other tenants of other cloud resources. In some implementations, a user may also be a tenant of a single-tenant cloud system. For example, a user may be a customer who is currently using, e.g., assigned to, a server. The user may be an authorized group of persons, one or more entities, e.g. an enterprise or groups within an enterprise (including a business, university, government, military, etc.), or an individual person. The service processor and active memory may be dedicated to the assigned user during a usage period of time in which a user is designated as having available access to allocated computer resources. For example, a usage period may initiate with a user being granted access to assigned computer resources, e.g. a given service processor and associated memory, and terminate when the user relinquishes the allocated computer resources. The usage period may be defined by a customer agreement, the duration of a particular project, an employment or contract term, etc. Although features may be described with respect to specific types of resources or operations, e.g., system flash memory, the features described herein may be applicable to other cloud computing resources and operations. Furthermore, while cloud computing is one example of a computing system described, where the memory restoration system may be implemented by a motherboard, the present memory restoration system may be employed in other computing environments in which a memory device or other electronic hardware is updated in the background. For example, network cards, hard drives, etc. may be updated without interfering with currently executing software. The memory restoration system may be employed for an update of a memory, such as a memory image that would typically take a long time to load because it includes much data. The update may be run in the background with a second memory while a first memory is being used by a user of the computer device. The device may then be rebooted to load the updated second memory, where the two memories are of the same type, e.g. flash memory. Thus, it is not the case that, for example, one memory is persistent memory and the other memory is random access memory. Additionally, the memory restoration system may also enable forensics to be performed on an inactive memory part, for example, where it is suspected that the memory part had been tampered with, such as by an authorized user. It is customary to configure a single flash memory device to attach to service processors for a user. Erasing of the flash memory can consume much downtime during which the system is inoperable. To compensate for the slow erase/write cycles, some systems have attempted to speed up the physical communication bus between the service processor and the flash memory part so that communication to the flash memory part is sped up. Another attempted solution has been to logically split the memory that is being used by the service processor into portions, e.g. two halves, and to update one portion, e.g. one half, while executing off of the other portion(s). However, this split memory approach may impose security risks since the currently executing service processor software may be considered untrusted once a user is resident on the server.
The present memory restoration system provides a security benefit by isolating the preparation of the reserve memory part through a switch restricting the service processor and the resident user access to the reserve memory until the reserve memory is ready to be deployed. With the present memory restoration system, the recycle time of a computer resource is greatly reduced. A next user for the computer resource need not wait for the computer resource to be prepared, e.g. data erased, after a previous user is done with the computer resource. The wipe time experienced by a user switchover can be reduced by configuring time-consuming portions of a memory wipe for the next user, concurrent with the service processor's accessing a separate active memory for a current resident user. The memory restoration system adds to the elasticity of a fleet of servers to accommodate increasing customer demands and avoid potentially thousands of offline servers at any given time waiting for recycling. In some implementations, in order to reduce the amount of time before a next user can begin using a service processor, two or more physical sections of memory are maintained. While the service processor is being used by a present user, a second section of memory (e.g., flash memory, or other) is accessed by a root of trust processing unit (ROT). The ROT may be dedicated to the jobs of preparation of reserved memory and replacement, e.g. reinstalling, of the prepared reserved memory. In some implementations, the ROT can include a separate piece of hardware and firmware that is trusted by not being accessible to a user. Between user tenancies the ROT is used to securely wipe any memory to be used by a service processor. In some implementations, the ROT or any other type of processing facility may be used to provide other functions. For example, the ROT may sanitize, load and/or verify data for a next user. Then, when it is time for a new user to begin operations on the cloud resources, the service processor is put into communication with the prepared memory section and can begin processing without waiting for the most recently used memory section to be wiped or otherwise configured. In illustration of a usage example shown inFIG.1a, a memory preparation stage is shown in the context of a cloud computing system100, in which a reserve memory, memory-1146, is prepared for reuse. The cloud computing system100includes with a memory restoration system110having a service front end122of computing device-A120and a restoration back end142of computing device-B140. According to one implementation, the memory restoration system110may be employed to prepare memory-1146, while memory-2126is accessed through I/O port128by a client device102of a user104across network164via a router (not shown) of the cloud computing system100. FIG.1ashows an instance of the memory restoration system110that includes service front end122accessible to client device102, with a service processor124and memory-2126. The memory restoration system110includes restoration back end140inaccessible to client device102, with a ROT processing unit144and memory-1146. The client device may be a variety of heterogeneous devices, such as a desktop, laptop, tablet, smartphone, thin client, etc. In some implementations, the computing device-A120and computing device-B140may be server devices. For example, computing device-A120may be a bare metal type server in which the user has access to much of the computing device-A120except for particular locked down components. 
In other implementations, computing device-A120and computing device-B140are not distinct devices but virtual machines, for example, managed by hypervisor software. In some implementations, the server is a large scale blade server. The service processor124may be any microprocessor that the user device102may connect to in order to perform functions. For example, in some implementations, the service processor may include an Oracle Integrated Lights Out Manager (ILOM) embedded in the computing device-A120. Servers may include various types of service processors such as high performance processors, e.g., Intel XEON and AMD EPYC processors, etc. In some implementations, the service processor may include a baseboard management controller (BMC). Other types of service processors (including general purpose, custom, bitslice or other processors) or processing systems or resources may be used. The BMC may monitor the physical state of a server using sensors and communicate with a system administrator through a special management connection.

A switch160selectively connects the memory-1146and memory-2126to the ROT and service processor, respectively. The switch may be of various types, such as a multiplexer, a collection of switches, e.g. a crossbar switch having multiple input and output lines, etc. A computing system controller162may control various aspects of the memory restoration system110. The controller may send signals to the ROT to prepare memory-1146for deployment. The control signals may trigger the ROT to prepare and configure memory-1146while the service processor124is in communication with memory-2126. In some implementations, one or more switch signals are sent from the ROT to the switch to instruct the switch160to selectively connect or disconnect the ROT and/or service processor to or from the memory-2126or memory-1146. The switch signal may be a single signal to apply a voltage or not to apply a voltage. In some implementations, the switch signal may be sent from the ROT through a virtual wire, a physical wire, or other signal transmission media. In some implementations, the switch signal may also be sent by other sources, such as controller162. In still other implementations, the switch signal may be a physical component to flip the switch160.

For simplicity, a single client device102and computing devices120,140are shown inFIG.1a. The cloud computing system100may include a vast collection of computing devices120,140and offers the ability to scale to concurrently serve many client devices102of numerous users104. In addition, althoughFIG.1adepicts one memory part being prepared by the ROT, in some implementations, the ROT may simultaneously or sequentially prepare multiple reserve memories for future deployment. Cloud computing system100may be a public, private, virtual private, multi-cloud, or personal cloud system, or combinations thereof, running a variety of services, such as platform as a service (PaaS), infrastructure as a service (IaaS), etc. Although the memory restoration system110is shown inFIG.1ain the context of the cloud computing system100, the memory restoration system110may be employed with substantially the same components in other computing systems and applications that may benefit from swapping of memory parts with reduced downtime. Memory-1146and memory-2126may be any rewritable memory suitable for storage and communication with the service processor124and ROT processing unit144, such as flash memory.
Memory-1146and memory-2126are typically the same type of rewritable memory, e.g. flash memory. Although memory-1146and memory-2126are shown as members of computing devices120,140, the memory may also be located remote from the computing devices120,140. For example, memory-1146and memory-2126may be virtualized memory decoupled from the servers (computing devices120,140) by a virtualization manager, e.g. hypervisor software.

In further illustration of the usage example ofFIG.1a,FIG.1bshows a replacement stage of the cloud computing system100, in which a fully prepared memory-1146is swapped with memory-2126for reuse of memory-1146with the service processor124. During the replacement stage, the switch160may receive one or more signals, such as from ROT144, triggering the switch160to change connections. Through the change of switch connections, ROT144gains communication access to the memory-2126and the service processor124loses access to memory-2126. Likewise, ROT144loses connection capabilities with memory-1146and service processor124gains communication access to memory-1146. A new user device106of a subsequent user108, different from a most recent prior user, may connect to memory-1146while memory-2126is prepared for reuse in the background. During the replacement phase, the ROT may trigger a power cycle of the hardware host, e.g. the server to be used by a tenant user, and prompt for the installation of known firmware. The ROT may receive confirmation from the hardware host that the process has been performed as expected. The preparation and replacement phases of firmware installation reduce the risk from firmware-based attacks, such as a permanent denial of service (PDoS) attack or attempts to embed backdoors in the firmware to steal data or make it otherwise unavailable.

Some implementations of the memory restoration system200, as shown in an example inFIG.2, may include a multiplexer switch202. A service processor (SP)204is selectively coupled through a serial peripheral interface (SPI)206via the multiplexer switch202to an operational serial peripheral interface (OSPI)208through SPI210. An ROT214is selectively coupled through a serial peripheral interface (SPI)216via the multiplexer switch202to OSPI208. The OSPI208enables communication with a flash memory system including memory-1220accessed by ROT214and memory-2222accessed by SP204. In some implementations, only certain steps or operations in a preparation phase may need to be pre-configured in order to provide time savings or other beneficial results. A connector218may provide a dedicated path for one or more signals generated by the ROT214to be transmitted to the switch. The connector218may be a virtual wire to bind the ROT214with the multiplexer switch202.

FIG.3shows a diagram illustrating an implementation of the memory restoration system300for configuring a reserved memory, e.g. memory-1320, while concurrently allowing a service processor304to access an active memory, e.g. memory-2322. Once the reserved memory is prepared, the memory restoration system switches, via crossbar switch302, the service processor304to the reserved memory, e.g. memory-1320. A crossbar switch302allows ROT314or SP304to selectively be connected to either a memory-1320or memory-2322via the SPIs and OSPI1308or OSPI2324. Through crossbar switch302, ROT314is connected to a reserve memory (memory-1320through OSPI1308or memory-2322through OSPI2324) and SP304is connected to an active memory (memory-1320through OSPI1308or memory-2322through OSPI2324).
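As an illustration only, the routing performed by the crossbar switch302may be modeled with the following Python sketch; the labels "sp", "rot", "ospi1", and "ospi2" are illustrative stand-ins for the connections ofFIG.3and are not actual signal names.

class CrossbarSwitch:
    def __init__(self):
        # Initial state of FIG. 3: the SP reaches the active memory-2 through
        # OSPI2, while the ROT reaches the reserve memory-1 through OSPI1.
        self.routes = {"sp": "ospi2", "rot": "ospi1"}

    def connection_for(self, requester):
        return self.routes[requester]

    def swap(self):
        # Models the switch signal sent over the connector (a physical or
        # virtual wire): the OSPI devices routed to the ROT and SP exchange.
        self.routes["sp"], self.routes["rot"] = self.routes["rot"], self.routes["sp"]

switch = CrossbarSwitch()
switch.swap()                                     # replacement stage
assert switch.connection_for("sp") == "ospi1"     # SP now reaches the prepared memory-1
assert switch.connection_for("rot") == "ospi2"    # ROT can now recycle memory-2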
In some implementations, one or more ROTs and one or more SPs can be connected among two or more memory parts with two or more associated OSPIs. In some implementations, a high bandwidth bus switch may be employed. Connector318may be a physical wire or virtual wire to bind the ROT314with the crossbar switch302. ROT314may send switch signals through connector318to the crossbar switch302, for example, signals to trigger crossbar switch302to selectively swap access to OSPI1308for memory-1320, and access to OSPI2324for memory-2322. In particular implementations, each memory part may be a separate flash memory component or system. In other implementations, the memory parts or partitions may be in the same physical system, or may be organized across more than two different memory components (or memory devices if there are multiple) in the background. In some implementations, the ROT314may prepare and configure the reserve memory, which may further include loading and measuring the OSPI1308and/or OSPI2324. For example, the contents of OSPI1308and/or OSPI2324may be read and compared to a known acceptable value. In some implementations, the ROT can be given alternating access to both memory-1320and memory-2322so that it can measure and update the OSPI1308and OSPI2324at a given time. In one implementation, the crossbar switch302permits the ROT314to control, e.g. through signals sent from ROT314through connector318, which of OSPI1308or OSPI2324the ROT314is connected to, as well as which of OSPI1308or OSPI2324devices is connected to the SP304.

FIG.4is a block diagram of exemplary interconnections of possible components in the memory restoration system configuration400shown inFIG.3. For example, a high bandwidth bus switch402may be used, such as a QS3VH16212 bus exchange hotswitch by Renesas/Integrated Device Technology. A connector418may be an input/output expander, such as a 16-bit I/O expander PCA9555 by Texas Instruments, to provide more I/O. In some implementations, the signal expander device may be beneficial if there is a shortage of physical pins in a particular configuration of the memory restoration system400. For example, a high bandwidth bus switch may require added control bits for additional signals. Other numbers and types of signal expanders or signal connector configurations may be used. As described above with regard toFIG.3, interconnections include the switch402with multiple pins selectively connecting either the ROT414through multiple input lines416or the SP404through multiple output lines406, to either of two OSPI devices408,424through multiple lines410,412. The OSPI devices408,424enable coupling to respective memory parts (not shown).

FIG.5is a flowchart of an exemplary concurrent recycling process500to automatically prepare and configure the used memory for reuse. The concurrent recycling process is performed by at least some of the components of the memory restoration system (for example,110ofFIGS.1aand1b,200ofFIG.2, or300ofFIG.3). This concurrent recycling may include discovering the physical media, e.g. flash memory returned by a previous user, connected to the host. In block502, an indication is received that a returned memory part, which had previously been in use by a collection of computing resources, e.g. a cloud computing system, is inactive. The indication may be an internally-generated notification, for example, by the cloud computing system, that a parameter has been reached to terminate a particular user's access to the returned memory.
Termination of use may be expiration of a usage period for a particular user, warning of prohibited use of the returned memory, a problem with the memory hardware or software, etc. In some implementations, the indication may be generated by a user device, e.g., a submission to the system that the user is finished utilizing the returned memory.

In block504, a service processor (such as124ofFIGS.1a,1b;204ofFIG.2and304ofFIG.3) is temporarily disabled from accessing the returned inactive memory. The service processor may be disconnected from the inactive memory by a switch (such as160ofFIGS.1a,1b,202ofFIG.2, or302ofFIG.3). The service processor is used by a current resident user (such as104ofFIGS.1a,1b). In block506, a user connection is provided for the SP to access an active memory part for a client device of the current resident user. The active memory part had already been prepared and configured for use. The ROT (such as144ofFIGS.1a,1b;214ofFIG.2and314ofFIG.3) may send a switch signal to the switch, triggering the switch to change connections to the memory parts. In block508, a connection is provided for the ROT to access the returned inactive memory part. The ROT may initiate secure erasure by executing the applicable erasure command for the media type.

In block510, data on the returned inactive memory part is erased by the ROT during a preparation stage. During the preparation phase, physical destruction and logical data erasure processes are employed so that data does not persist in restored memory. In some implementations, when the erasure process is complete, the ROT may start a process to return the used memory to its initial factory state to restore the memory to factory settings prior to a first deployment for a user. The ROT may further test the used memory for faults. If a fault is detected, the used memory may be flagged for further investigation. In block512, the ROT configures the inactive memory for a subsequent user of the service processor. When a computing resource, e.g. a bare metal compute server instance, is released by a user or service, the hardware goes through the provisioning process before the returned memory is released to inventory for reassignment. Configuring may include installing and configuring software, including the operating system and applications.

In decision block514, it is determined whether a usage period for the active memory is still current, or whether the period has ended. In block516, if the usage period is still current such that a user is still permitted to use the active memory, the prepared active memory part remains in use and the inactive memory part is maintained as pending for a subsequent user in a memory swap. Otherwise, in some circumstances the process may end, for example if there are no further users, if the memory is found to be too worn for subsequent use, etc. If the usage period is no longer current such that the user is not permitted to continue use of the active memory part, the process returns to block502to swap memories and prepare/configure the recently used memory part.

FIG.6is a block diagram of an exemplary computer device600, e.g. a server (such as120or140ofFIG.1aor1b) for use with implementations of the memory restoration system described herein. The computer device600may be included in any of the above described computer devices of a collection of computer devices. Computer device600is merely illustrative and not intended to limit the scope of the claims.
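Before continuing with the description of the computer device600, the concurrent recycling flow ofFIG.5described above may be summarized, for illustration only, in the following Python sketch. The rot and switch objects and their methods (disconnect_service_processor, connect_rot, secure_erase, detect_faults, configure) are hypothetical placeholders for the hardware operations performed at blocks502-516.

def concurrent_recycle(rot, switch, returned_part, usage_period_is_current):
    # Blocks 502/504: the returned part is reported inactive and the service
    # processor is disconnected from it; the SP keeps using the already
    # prepared active memory part (block 506).
    switch.disconnect_service_processor(returned_part)
    # Block 508: the ROT is connected to the returned, inactive part.
    switch.connect_rot(returned_part)
    # Block 510: secure erasure appropriate to the media type.
    rot.secure_erase(returned_part)
    if rot.detect_faults(returned_part):
        return "flagged-for-investigation"
    # Block 512: restore factory state and install the operating system and applications.
    rot.configure(returned_part)
    # Blocks 514/516: the prepared part stays pending while the active memory
    # remains in use; otherwise the memories are swapped and the flow repeats.
    return "pending-swap" if usage_period_is_current() else "swap-now"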
One of ordinary skill in the art would recognize other variations, modifications, and alternatives. In one exemplary implementation, computer device600includes an I/O interface610, which may represent a combination of a variety of communication interfaces (such as128ofFIGS.1a,1b). I/O interface610may include a network interface. A network interface typically includes a network interface card, an Ethernet card, a modem (telephone, satellite, cable, ISDN), an (asynchronous) digital subscriber line (DSL) unit, and the like. Further, a network interface may be physically integrated on a motherboard, may be a software program, such as soft DSL, or the like. In some implementations, the computer device600may use a virtual device to substitute for a physical I/O component. Computer device600may also include software that enables communications of I/O interface610over a network670such as the HTTP, TCP/IP, and RTP/RTSP protocols, wireless application protocol (WAP), IEEE 802.11 protocols, and the like. Additionally and/or alternatively, other communications software and transfer protocols may also be used, for example IPX, UDP, or the like. Communication network670may include a local area network, a wide area network, a wireless network, an Intranet, the Internet, a private network, a public network, a switched network, or any other suitable communication network, such as, for example, cloud networks. Network670may include many interconnected computer systems and any suitable communication links such as hardwire links, optical links, satellite or other wireless communications links such as BLUETOOTH, WIFI, wave propagation links, or any other suitable mechanisms for communication of information. For example, network670may communicate with one or more mobile wireless devices956A-N, such as mobile phones, tablets, and the like, via a base station such as a wireless transceiver.

Computer device600typically includes computer components such as a processor650as described above (such as service processor124and ROT144inFIGS.1a,1b), and memory storage devices, such as a memory620, e.g., flash memory as described above, and storage media640. A bus may interconnect computer components. In some implementations, computer device600is a server having hard drive(s) (e.g. SCSI) and a controller card, server supported processors, a network interface, memory, and the like. While a computer is shown, it will be readily apparent to one of ordinary skill in the art that many other hardware and software configurations are suitable for use with the present invention.

Memory620and storage media640are examples of tangible non-transitory computer readable media for storage of data, files, computer programs, and the like. Other types of tangible media include disk drives, solid-state drives, floppy disks, optical storage media and bar codes, semiconductor memories such as flash drives, flash memories, random-access or read-only types of memories, battery-backed volatile memories, networked storage devices, cloud storage, and the like. A data store632may be employed to store various data such as data saved by a user. One or more computer programs, such as applications634, also referred to as programs, software, software applications or code, may also contain instructions that, when executed, perform one or more methods, such as those described herein. The computer program may be tangibly embodied in an information carrier such as a computer- or machine-readable medium, for example, the memory620, a storage device, or memory on processor650.
A machine readable medium is any computer program product, apparatus or device used to provide machine instructions or data to a programmable processor. Computer device600further includes operating system628. Any operating system628, e.g. a server OS, that supports the fail-over cluster may be employed, e.g. Linux, Windows Server, Mac OS, etc.

Although the description has been described with respect to particular implementations thereof, these particular implementations are merely illustrative, and not restrictive. For example, circuits or systems to implement the functionality described herein may vary widely from the specific embodiments illustrated herein. Any suitable programming language can be used to implement the routines of particular implementations including C, C++, Java, assembly language, etc. Different programming techniques can be employed such as procedural or object-oriented. The routines can execute on a single processing device or multiple processors. Although the steps, operations, or computations may be presented in a specific order, this order may be changed in different particular implementations. In some particular implementations, multiple steps shown as sequential in this specification can be performed at the same time.

Particular embodiments may be implemented in a computer-readable storage medium for use by or in connection with the instruction execution system, apparatus, system, or device. Particular embodiments can be implemented in the form of control logic in software or hardware or a combination of both. The control logic, when executed by one or more processors, may be operable to perform that which is described in particular embodiments. Particular embodiments may be implemented by using a programmed general purpose digital computer, application specific integrated circuits, programmable logic devices, field programmable gate arrays, or optical, chemical, biological, quantum or nanoengineered systems; other components and mechanisms may also be used. In general, the functions of particular embodiments can be achieved by any means as is known in the art. Distributed, networked systems, components, and/or circuits can be used. Communication, or transfer, of data may be wired, wireless, or by any other means.

It will also be appreciated that one or more of the elements depicted in the drawings/figures can also be implemented in a more separated or integrated manner, or even removed or rendered as inoperable in certain cases, as is useful in accordance with a particular application. It is also within the spirit and scope to implement a program or code that can be stored in a machine-readable medium to permit a computer to perform any of the methods described above. As used in the description herein and throughout the claims that follow, "a", "an", and "the" include plural references unless the context clearly dictates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise. Thus, while particular embodiments have been described herein, latitudes of modification, various changes, and substitutions are intended in the foregoing disclosures, and it will be appreciated that in some instances some features of particular embodiments will be employed without a corresponding use of other features without departing from the scope and spirit as set forth.
Therefore, many modifications may be made to adapt a particular situation or material to the essential scope and spirit.
DETAILED DESCRIPTION OF THE EMBODIMENTS Hereinafter, a storage device using a flash memory device will be used to describe exemplary embodiments of the inventive concept. However, one skilled in the art will understand that the inventive concept is not limited thereto and that the inventive concept may be implemented or applied through other embodiments. It is to be further understood that in the drawings, the same reference numerals may refer to the same or similar elements. FIG.1is a block diagram illustrating a storage device according to an exemplary embodiment of the inventive concept. Referring toFIG.1, a storage device100may include a storage controller110and a nonvolatile memory device120. In an exemplary embodiment of the inventive concept, each of the storage controller110and the nonvolatile memory device120may be implemented with one chip, one package, or one module. Alternatively, the storage controller110and the nonvolatile memory device120may be implemented with one chip, one package, or one module to constitute a memory system such as a memory card, a memory stick, or a solid state drive (SSD). The storage controller110may be configured to control the nonvolatile memory device120. For example, depending on a request of a host, the storage controller110may write data in the nonvolatile memory device120or may read data stored in the nonvolatile memory device120. To access the nonvolatile memory device120, the storage controller110may provide a command, an address, and a control signal to the nonvolatile memory device120. In particular, the storage controller110includes a flash translation layer (FTL)114that performs garbage collection according to an exemplary embodiment of the inventive concept. The flash translation layer114provides an interfacing between a file system of the host and the nonvolatile memory device120, to hide an erase operation of the nonvolatile memory device120. In the nonvolatile memory device120, there may be mismatch between an erase unit and a write unit and thus an erase-before-write characteristic may be redeemed through the flash translation layer114. Further, the flash translation layer114may map a logical address that a file system of the host generates, onto a physical address of the nonvolatile memory device120. In addition, the flash translation layer114may perform a wear leveling for managing a lifetime of the nonvolatile memory device120or a garbage collection for managing a data capacity of the nonvolatile memory device120. In a data write operation, in the case where an empty page (hereinafter referred to as a “clean page”) is present in a selected memory block, the storage controller110according to an exemplary embodiment of the inventive concept counts a time (hereinafter referred to as an “elapse time ET”) that elapses from a time at which programming is terminated. The elapse time ET is counted from a time when a last page of the selected memory block is programmed. Here, the last page may refer to a page of the selected memory block, at which data are finally programmed. The last page may not refer to a physical page at an edge of the selected memory block. Hereinafter, a memory block that is selected to program data and on which programming is not terminated is called an “active block”. When the counted elapse time ET reaches a threshold value TH, the storage controller110designates an active block corresponding to the counted elapse time ET as a destination block of the garbage collection GC. 
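For illustration only, the elapse-time bookkeeping described above may be sketched in Python as follows. The ActiveBlock class, its fields, and the threshold parameter name are assumptions made for the example and do not reflect the actual firmware data structures of the storage controller110.

import time

class ActiveBlock:
    def __init__(self, block_id, page_count):
        self.block_id = block_id
        self.pages = [None] * page_count        # None marks a clean (erased) page
        self.last_program_time = None

    def program_page(self, index, data):
        self.pages[index] = data
        self.last_program_time = time.monotonic()   # ET is counted from the last programmed page

    def elapse_time(self):
        if self.last_program_time is None:
            return 0.0
        return time.monotonic() - self.last_program_time

    def has_clean_page(self):
        return any(page is None for page in self.pages)

def should_become_gc_destination(block, threshold_th):
    # The block is designated a destination of the garbage collection GC once
    # clean pages remain and the counted elapse time ET reaches the threshold TH.
    return block.has_clean_page() and block.elapse_time() >= threshold_th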
In another exemplary embodiment of the inventive concept, the storage controller110may designate an active block, of which the counted elapse time ET reaches the threshold value TH, as a free block and may then designate the active block as a destination block of the garbage collection. Valid data collected for the garbage collection are programmed at clean pages of the active block designated as the destination block of the garbage collection. Accordingly, a time during which clean pages of an active block are left alone in an erase state may be minimized. Under control of the storage controller110, the nonvolatile memory device120may store data received from the storage controller110or may transmit data stored therein to the storage controller110. The nonvolatile memory device120may include a plurality of memory blocks BLK1to BLKi. Each of the plurality of memory blocks BLK1to BLKi has a three-dimensional memory structure in which word line layers are stacked in a direction perpendicular to a substrate. Each of the plurality of memory blocks BLK1to BLKi may be managed by the storage controller110by using information for wear leveling, such as an “erase count EC”. According to an exemplary embodiment of the inventive concept, the storage device100may utilize clean pages included in an active block in a garbage collection operation. Accordingly, the EPI error occurring at a clean page may be prevented, and an additional memory management operation for programming dummy data at the clean page is unnecessary. According to an exemplary embodiment of the inventive concept, cost reduction and reliability improvement may be imparted to the storage device100in which the number of active blocks increases. FIG.2is a block diagram illustrating a configuration of a storage controller ofFIG.1. Referring toFIG.2, the storage controller110according to an exemplary embodiment of the inventive concept includes a processing unit111, a working memory113, a host interface115, an error correction code block117, and a memory interface119. However, it is to be understood that components of the storage controller110are not limited to the aforementioned components. For example, the storage controller110may further include a read only memory (ROM) that stores code data necessary for an initial booting operation. The components of the storage controller110may be communicably coupled via a bus. The processing unit111may include a central processing unit (CPU) or a micro-processor. The processing unit111may manage overall operations of the storage controller110. The processing unit111is configured to drive firmware for driving the storage controller110. Software (or firmware) for controlling the storage controller110or data may be loaded onto the working memory113. The stored software and data may be driven or processed by the processing unit111. In particular, according to an exemplary embodiment of the inventive concept, the flash translation layer114that utilizes a clean page of an active block as a destination area of garbage collection may be loaded onto the working memory113. The flash translation layer114that is driven by the processing unit111performs functions such as an address managing function, a garbage collection function, and a wear-leveling function. The flash translation layer114designates a clean page of an active block as a destination area of the garbage collection with reference to the elapse time ET. 
Here, the expression "destination" may mean a memory area in which valid data collected in the garbage collection operation are programmed. For example, a destination page may mean a page area in which data collected in the garbage collection operation are programmed. In a data write operation, when a clean page is present in an active block, the flash translation layer114counts the elapse time ET from a program time of a last page. When the counted elapse time ET reaches the threshold value TH, the flash translation layer114may trigger the garbage collection GC and may program valid data at the clean page of the active block. Accordingly, a time during which clean pages of an active block are left alone in an erase state may be reduced.

The host interface115provides an interface between a host and the storage controller110. The host and the storage controller110may be connected through one of various standardized interfaces. Here, the standardized interfaces may include an advanced technology attachment (ATA) interface, a serial ATA (SATA) interface, an external SATA (e-SATA) interface, a small computer system interface (SCSI), a serial attached SCSI (SAS), a peripheral component interconnect (PCI) interface, a PCI Express (PCI-E) interface, a universal serial bus (USB) interface, an IEEE 1394 interface, a universal flash storage (UFS) interface, a card interface, and the like.

The error correction code block117may correct an error of data damaged due to various causes. For example, the error correction code block117may perform an operation for detecting or correcting an error of data read from the nonvolatile memory device120. In particular, the error correction code block117may detect the number of error bits or a bit error rate BER of data read from memory cells in units of a word line, depending on a request of the flash translation layer114. When using the garbage collection scheme according to an exemplary embodiment of the inventive concept, the number of physical pages that are left alone as clean pages is markedly reduced. Accordingly, a bit error rate BER of data written in memory blocks may be markedly improved.

The memory interface119may provide an interface between the storage controller110and the nonvolatile memory device120. For example, data processed by the processing unit111may be stored in the nonvolatile memory device120through the memory interface119. As another example, data stored in the nonvolatile memory device120are provided to the processing unit111through the memory interface119. The components of the storage controller110are described above as an example. According to the function of the storage controller110of an exemplary embodiment of the inventive concept, even though dummy data are not programmed in an active block, the number of pages that are left alone in a clean page state may be markedly reduced.

FIG.3is a block diagram illustrating a nonvolatile memory device according to an exemplary embodiment of the inventive concept. Referring toFIG.3, the nonvolatile memory device120includes a cell array121, a decoder122, a page buffer123, an input/output buffer124, and a control logic circuit125. The cell array121is connected to the decoder122through word lines WL and selection lines SSL and GSL. The cell array121is connected to the page buffer123through bit lines BL. The cell array121includes the plurality of memory blocks BLK1to BLKi. Each of the memory blocks BLK1to BLKi includes a plurality of NAND cell strings.
Data may be written in the cell array121in units of a page. An erase operation may be performed in units of a memory block. According to an exemplary embodiment of the inventive concept, the cell array121may be a three-dimensional (3D) memory array. The 3D memory array may be monolithically formed in one or more physical level(s) of a memory cell array having an active area disposed above a silicon substrate and circuitry associated with the operation of memory cells. In an exemplary embodiment of the inventive concept, the 3D memory array includes vertical NAND strings that are vertically oriented such that at least one memory cell is located over another memory cell. The at least one memory cell includes a charge trap layer. Each vertical NAND string may include at least one selection transistor located over memory cells. At least one selection transistor may have the same structure as the memory cells and may be monolithically formed together with memory cells. The decoder122may select one of the memory blocks BLK1to BLKi of the cell array121in response to an address ADD. The decoder122may provide a word line voltage corresponding to an operating mode to a word line of a selected memory block. The decoder122may provide selection signals to the selection lines SSL and GSL to select a memory block. In the read operation, a read voltage Vrd may be applied to a selected word line of a memory block and may provide a pass read voltage Vread to unselected word lines. The page buffer123may operate as a write driver or a sense amplifier depending on an operating mode. In a program operation, the page buffer123supplies a bit line voltage corresponding to data to be programmed to a bit line of the cell array121. In the read operation, the page buffer123senses data stored in a selected memory cell through a bit line. The page buffer123latches the sensed data and outputs the latched data to the outside. The input/output buffer124provides write data received in the program operation to the page buffer123. The input/output buffer124outputs data provided from the page buffer123to the outside in the read operation. The input/output buffer124may transmit the received address or the received command to the control logic circuit125or the decoder122. The control logic circuit125controls the decoder122and the page buffer123in response to a command CMD or a control signal CTRL. The control logic circuit125may control the decoder122such that various bias voltages are generated depending on a program command. In particular, the control logic circuit125may output program result information according to a request from the storage controller110. The number of word lines stacked in each of the memory blocks BLK1to BLKi increases to implement a high-capacity memory device. In addition, the number of bits of data to be stored in each of the memory cells increases. For management, programming dummy data at a clean page left alone in an erase state after programming may not be appropriate for a high-capacity memory block in terms of complexity and overload. FIG.4illustrates a circuit diagram of the memory block BLKi. Referring toFIG.4cell strings are formed between bit lines BL0, BL1, BL2, and BL3and a common source line CSL. Cell strings NS10and NS20are formed between the bit line BL0and the common source line CSL. In a similar manner, the plurality of cell strings NS11, NS21, NS12, NS22, NS13, and NS23are formed between the bit lines BL1, BL2, and BL3and the common source line CSL. 
In each cell string, a string selection transistor SST is connected with a corresponding bit line BL. In each cell string, a ground selection transistor GST is connected with the common source line CSL. In each cell string, memory cells are provided between the string selection transistor SST and the ground selection transistor GST. The memory cells may be connected to word lines WL0, WL1, WL2, WL3, WL4and WL5. Each cell string includes the ground selection transistor GST. The ground selection transistors GST of the cell strings may be controlled by a ground selection line GSL. Alternatively, cell strings of respective rows may be controlled by different ground selection lines. The string selection transistors SST may be controlled by different string selection lines SSL1and SSL2. A circuit structure of memory cells included in one memory block is briefly described above. However, the circuit structure illustrated inFIG.4is merely illustrated for convenience, and thus, an actual memory block is not limited to the example illustrated inFIG.4. In other words, it is to be understood that one memory block can include more semiconductor layers, more bit lines, and more string selection lines. FIG.5is a flowchart illustrating a memory management operation of a storage controller or a flash translation layer ofFIG.1. Referring toFIG.5, the storage controller110may utilize clean pages as a destination area of the garbage collection depending on the elapse time ET after a last page of an active block is programmed. In operation S110, the storage controller110receives a write request from a host. For example, the storage controller110receives an address and data associated with the write request from the host. Here, the write request is provided from the host. However, the inventive concept is not limited to the case where the write request is generated from the host. A write request may be generated by a memory management operation (e.g., a garbage collection operation or a meta data update operation) of the storage controller110. In operation S120, based on the address, the storage controller110selects a memory block in which write-requested data are to be written. In this case, the storage controller110may select one of free blocks in an erase state. In operation S130, the storage controller110may program the write-requested data in a selected memory block. In other words, the data received with the write request may be written into the selected memory block. In this case, however, a capacity of the write-requested data may be greater or smaller than a capacity of the selected memory block. In operation S140, the storage controller110checks the memory block in which the write-requested data are programmed. In other words, the storage controller110determines whether all physical pages of the memory block in which the write-requested data are programmed are in a programmed state. When all of the physical pages of the memory block are programmed (e.g., full page programmed) (Yes), the procedure proceeds to operation S150. When at least one clean page is present in the memory block (No), the procedure proceeds to operation S160. In other words, when there is at least one unprogrammed page in the memory block, the procedure proceeds to operation S160. In operation S150, the storage controller110determines whether the write-requested data are completely processed. In other words, when it is determined that all of the write-requested data are completely programmed (Yes), the method may be terminated. 
However, when the size of the write-requested data exceeds a capacity of one memory block or an additional write request exists, the method may be continuously performed. Accordingly, the procedure proceeds to operation S155. In operation S155, the storage controller110selects a free block to be written with additionally write-requested data. Afterwards, the procedure proceeds to operation S130to write data in the selected free memory block.

In operation S160, the storage controller110counts the elapse time ET. A time from which to count the elapse time ET may be a time at which data are completely written in an active block. In other words, in the active block in which a clean page(s) is present, the elapse time ET may be counted from a time when data are completely written at a physical page before the clean page. However, it is to be understood that a count start point of the elapse time ET can be a time when an active block is erased. In operation S170, the storage controller110determines whether the elapse time ET reaches the threshold value TH. The threshold value TH may be set to a time for which a decrease in reliability does not occur even though clean pages remain in an erase state. When it is determined that the elapse time ET does not exceed the threshold value TH (No), the storage controller110waits until the elapse time ET reaches the threshold value TH. After the elapse time ET reaches the threshold value TH (Yes), the procedure proceeds to operation S180.

In operation S180, the storage controller110triggers the garbage collection GC. In other words, the storage controller110starts the garbage collection operation for collecting valid data of memory blocks where data are stored and programming the collected valid data in a destination area. In this case, the collected valid data may be programmed at clean pages of an active block. An exemplary embodiment of the inventive concept is described above in which clean pages of an active block are allocated to a destination area of the garbage collection depending on the elapse time ET. Here, the elapse time ET may be variously adjusted depending on a process or a design rule of a nonvolatile memory device or depending on the degree of reliability required.

FIG.6is a diagram illustrating an active block processing method according to an exemplary embodiment of the inventive concept. Referring toFIG.6, when the elapse time ET reaches the threshold value TH after a last page (corresponding to WL1) of an active block121ais completely programmed, the remaining pages WL2to WLn are used as a destination area of the garbage collection. When a write request is received from a host, the storage controller110may select one of the free blocks in an erase state. For example, the storage controller110may select the active block121a. Write-requested data are programmed in the active block121aselected for programming. As described above, because not all the pages of the active block121aare programmed, a clean page(s) may be present in the active block121a. For example, data may be programmed at pages corresponding to word lines WL0and WL1, and the programming of the active block121amay then be terminated, thereby leaving clean pages corresponding to the word lines WL2to WLn. If an additional write request is not received after the last page (corresponding to WL1) of the active block121ais programmed, the storage controller110may count the elapse time ET.
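For illustration only, operations S110 to S180 ofFIG.5may be sketched in Python as follows, reusing the hypothetical ActiveBlock helper from the earlier sketch; free_blocks, pending_active_blocks, and trigger_garbage_collection are illustrative names rather than elements of the disclosed storage controller.

def handle_write_request(data_pages, free_blocks, pending_active_blocks):
    # Operations S110-S155: program the write-requested data, selecting a new
    # free block whenever the previous one is full-page programmed.
    while data_pages:
        block = free_blocks.pop()                          # S120/S155: select a free (erased) block
        for index in range(len(block.pages)):
            if not data_pages:
                break
            block.program_page(index, data_pages.pop(0))   # S130: program write-requested data
        if block.has_clean_page():                         # S140: at least one clean page remains
            pending_active_blocks.append(block)            # S160: start counting the elapse time ET

def monitor_active_blocks(pending_active_blocks, threshold_th, trigger_garbage_collection):
    # Operations S170/S180: once ET reaches TH, the clean pages of the block
    # become the destination area of the garbage collection.
    for block in list(pending_active_blocks):
        if block.elapse_time() >= threshold_th:
            pending_active_blocks.remove(block)
            trigger_garbage_collection(destination=block)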
Here, the additional write request may be a write request depending on a state of a write buffer included in the storage controller110, a write request from the host, or a write request that is performed as part of a memory management operation. The elapse time ET may be counted by utilizing a time stamp for managing the active block121a. A separate count algorithm or circuit may be used to start a count from a time when the last page (corresponding to WL1) of the active block121ais completely programmed.

The storage controller110monitors the elapse time ET. When the elapse time ET reaches the threshold value TH, the storage controller110manages an active block121bin which a clean page is present, like one of the free blocks. For example, the active block121bin which a clean page is present is used as a destination block121bfor copying data collected in the garbage collection (GC) operation. Here, in the destination block121b, only the remaining clean pages WL2to WLn other than the already programmed pages (corresponding to WL0and WL1) may be utilized as destination pages in which valid data collected by the garbage collection operation are to be programmed. When data collected through the garbage collection operation are programmed at the clean pages (corresponding to WL2to WLn), the clean pages WL2to WLn are maintained and managed in a program state, not an erase state. As such, the EPI error that occurs in the case where clean pages are maintained in the erase state may be prevented. In addition, the burden of the storage controller110for programming dummy data at a clean page of an active block may be reduced.

FIG.7is a diagram illustrating a garbage collection method using a clean page of an active block, according to an exemplary embodiment of the inventive concept. Referring toFIG.7, the storage device100according to an exemplary embodiment of the inventive concept may allocate an active block240to a free block and may use the active block240as a destination area in the garbage collection operation. Here, an example in which the active block240is managed as a destination area of the garbage collection after being first designated as a free block is provided. However, the active block240may be immediately used as a destination area of the garbage collection depending on the elapse time ET without allocation to a free block list.

An erased memory block is managed as a free block for writing data. A free block includes a memory block (Erased BLK) erased through a block erase operation. Each of the free blocks210,220, and230included in a free block list200illustrated inFIG.7corresponds to an erased memory block. In addition, according to the memory management technique of the inventive concept, the active block240may be included in the free block list200. In other words, the active block240in which some of the pages are programmed and a clean page(s) is present may be included in the free block list200when the elapse time ET after programming exceeds the threshold value TH. When the elapse time ET after programming of the active block240exceeds the threshold value TH, the storage controller110triggers the garbage collection GC. In this case, the flash translation layer114selects data blocks in which only invalid data INV are stored or data blocks in which valid data VALID and invalid data INV are mixed, for the garbage collection. For example, data blocks250and260in which valid data VALID and invalid data INV are mixed may be selected for the garbage collection.
Valid data VALID stored in each of the data blocks250and260may be collected for the garbage collection. The collected valid data VALID are copied to clean pages of an active block240a. As such, the clean pages of the active block240amay be programmed by the garbage collection. When clean pages are programmed by the garbage collection, afterwards, the active block240amay be managed as a data block240bwhere data are stored. In addition, the data blocks250and260that are targeted for the garbage collection and of which valid data VALID are copied to the active block240amay be managed as a free block after being erased. In other words, after the data blocks250and260are erased they may be placed in the free block list. A characteristic of the garbage collection of the inventive concept is briefly described above. According to the garbage collection method of the inventive concept, with regard to the active block240, clean pages of the active block240may be used as a destination area of the garbage collection when the elapse time ET after programming passes. In the case of applying this memory management technique, clean pages of an active block may be prevented from being left alone in an erase state. FIG.8is a block diagram illustrating a storage device according to another exemplary embodiment of the inventive concept. Referring toFIG.8, a host310may transmit multi-stream data to a storage device320depending on attributes of data. As such, the storage device320may allocate an active block depending on a stream identifier (ID) SID. In this case, a storage controller322according to an exemplary embodiment of the inventive concept may apply active blocks, in which a clean page is present after programming, to the garbage collection with reference to an elapse time. This will be described in detail below. Depending on attributes of write data, the host310may allocate different stream identifiers SID to write data and may transmit the write data to the storage device320. This data management technique may be referred to as a “multi-stream technique or manner”. A data generating block312of the host310may classify write data into different streams depending on attributes. The data generating block312may be, for example, a kernel or an application. The data generating block312may classify meta data, which are frequently updated, as a first stream Stream_1and may allocate a stream ID SID_1to the meta data. The data generating block312may classify user data as a second stream Stream_2and may allocate a stream ID SID_2to the user data. The data generating block312may classify temporary data, which have low importance, as a third stream Stream_3and may allocate a stream ID SID_3to the temporary data. Here, the way to classify data into streams may be variously changed depending on data attributes. For example, different stream identifiers may be allocated to user data for each type of media. An interface circuit314may transmit multi-stream data write-requested from the data generating block312to the storage device320through a data channel. In this case, stream data may be randomly transmitted. However, each data transmission unit (e.g., a packet) may have a stream ID. Accordingly, the storage device320may identify data attributes of received packets by using stream identifiers SID. The storage device320includes the storage controller322and a nonvolatile memory device324. The storage controller322manages multi-stream data in units of a stream. 
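Before continuing with the multi-stream example ofFIG.8, the copy step ofFIG.7just described may be sketched, for illustration only, as follows. The representation of a page as a small dictionary with a "valid" flag, and the function name garbage_collect, are assumptions made for the example.

def garbage_collect(victim_blocks, destination_block, free_block_list):
    # Collect valid pages from the victim data blocks (e.g., blocks 250 and 260)
    # and program them into the clean pages of the destination active block.
    clean_indices = [i for i, page in enumerate(destination_block.pages) if page is None]
    valid_pages = [page for blk in victim_blocks
                   for page in blk.pages
                   if page is not None and page.get("valid")]
    # zip stops at the shorter sequence; any surplus valid pages would go to a
    # further destination block.
    for index, page in zip(clean_indices, valid_pages):
        destination_block.program_page(index, page)     # clean pages leave the erase state
    # Erase the victims and return them to the free block list.
    for blk in victim_blocks:
        blk.pages = [None] * len(blk.pages)              # block erase
        free_block_list.append(blk)

In this sketch the destination block thereafter holds only programmed pages, mirroring the data block240bofFIG.7, while the erased victims re-enter the free block list.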
For example, the storage controller322may select and allocate a memory block, in which write data are to be written, in units of a stream ID. The storage controller322may select the memory block BLK3for storing data of the stream ID SID_1. Afterwards, in the case of receiving the data of the stream ID SID_1, the storage controller322may program the data of the stream ID SID_1in the activated (or selected) memory block BLK3. In addition, the storage controller322may select the memory block BLK9for storing data of the stream ID SID_2. In the case of receiving a write request for the data of the stream ID SID_2, the storage controller322may program the data of the stream ID SID_2in the activated (or selected) memory block BLK9. As in the above description, the storage controller322may respond to a write request for data of the stream ID SID_3by programming the data of the stream ID SID_3in the memory block BLK13.

In the above way of allocating, for each stream ID, a memory block in which data are to be written, a number of memory blocks equal to or greater than the number of stream identifiers are used as active blocks. In this case, when the stream data of the stream ID SID_1are completely written in the active block BLK3, a clean page may be present in the active block BLK3. In addition, when the stream data of the stream ID SID_2are completely written in the active block BLK9, a plurality of clean pages may be present in the active block BLK9. Likewise, when the stream data of the stream ID SID_3are completely written in the active block BLK13, a plurality of clean pages may be present in the active block BLK13. As the number of multi-streams increases, the number of active blocks in which clean pages are present may increase.

The storage controller322according to an exemplary embodiment of the inventive concept may utilize the active blocks BLK3, BLK9, and BLK13, in which programming is terminated and which have a clean page, as a destination area of the garbage collection. In other words, when writing of the active block BLK3is completed, the storage controller322counts the elapse time ET. When the elapse time ET exceeds the threshold value TH, the storage controller322may utilize the clean pages of the active block BLK3as a destination area of the garbage collection. The storage controller322may count the elapse time ET in the same manner with respect to the active blocks BLK9and BLK13and may program data collected by the garbage collection at the clean pages of the active blocks BLK9and BLK13depending on the counting result.

With regard to a plurality of active blocks, to count the elapse time ET and apply the garbage collection, the storage controller322may include a flash translation layer321and an active block management (ABM) table323. The flash translation layer321may classify randomly transmitted data depending on a stream ID SID. The flash translation layer321may program the classified data having the same stream ID SID in the same active block. The flash translation layer321may generate and update the active block management table323for the purpose of managing active blocks, in which a clean page is present, from among programmed active blocks. Active blocks in which a clean page is present are registered at the active block management table323.
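The per-stream allocation just described may be illustrated with the following Python sketch; the dictionary active_by_stream and the helper names are hypothetical and stand in for the flash translation layer321's internal bookkeeping.

def select_active_block(stream_id, active_by_stream, free_blocks):
    # One active block per stream ID, e.g. SID_1 -> BLK3, SID_2 -> BLK9, SID_3 -> BLK13.
    if stream_id not in active_by_stream:
        active_by_stream[stream_id] = free_blocks.pop()
    return active_by_stream[stream_id]

def register_active_block(block, abm_table):
    # Once a stream finishes writing and clean pages remain, the block is
    # registered at the active block management table so that its elapse
    # time ET can be monitored.
    if block.has_clean_page():
        abm_table[block.block_id] = {
            "clean_pages": sum(page is None for page in block.pages),
            "elapse_start": block.last_program_time,
        }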
With regard to a registered active block, a block address, the elapse time ET after programming of a last page, the number of clean pages, an erase count EC, and any other relevant characteristic information may be registered at the active block management table323. The flash translation layer321may perform the garbage collection on the registered active blocks with reference to parameters of the active block management table323. In other words, the flash translation layer321may monitor the active block management table323and may utilize a clean page of an active block as a destination area of the garbage collection.

FIG.9is a table illustrating an active block management (ABM) table ofFIG.8. Referring toFIGS.8and9, active blocks in which a clean page is present are listed in the active block management table323. Various parameters corresponding to a listed memory block BLK3may be added and updated in the active block management table323. For example, the elapse time ET (=T1) passing from a time when a last page of the active block BLK3is completely programmed may be written and updated. In addition, the number of clean pages (e.g.,19) included in the active block BLK3, the erase count EC (e.g.,450) of the active block BLK3, and characteristic information (e.g., whether to identify as a weak block) of the active block BLK3may be stored in the active block management table323, and thus, the active block management table323may be updated. All active blocks (e.g., BLK3, BLK9, BLK13, BLK20, . . . ) generated at the storage device320and parameters of the active blocks may be registered and managed by using the active block management table323.

FIG.10is a flowchart illustrating a garbage collection method of a storage device including a plurality of active blocks each having a clean page. Referring toFIGS.8and10, the storage controller322(refer toFIG.8) may utilize a plurality of active blocks, in which clean pages are present, as a destination area of the garbage collection. In operation S210, the storage controller322receives a write request from the host310. For example, the host310may manage write data in a multi-stream manner, may allocate a stream ID SID to the write data, and may transmit the write data to the storage device320. In operation S220, the storage controller322selects a memory block in which the write-requested data are to be stored. For example, the storage controller322may select one of free blocks in an erase state. In operation S230, the storage controller322may program the write-requested stream data in the selected memory block. For example, the storage controller322may write multi-stream data in a memory block depending on a stream ID. As in the example ofFIG.8, the storage controller322writes write data of the stream ID SID_1in the memory block BLK3. In addition, the storage controller322may program data of the stream ID SID_2in the memory block BLK9.

In operation S240, the storage controller322checks a state of an active block in which the write-requested data are programmed. In other words, the storage controller322determines whether all physical pages of an active block in which the write-requested data are programmed are in a programmed state. When all of the physical pages of the active block are programmed (full page programmed) (Yes), the procedure proceeds to operation S250. When at least one clean page is present in the active block (No), the procedure proceeds to operation S260.
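Referring back to the active block management table ofFIG.9, a single registered entry may be represented, for illustration only, by the following Python sketch; the field names are illustrative and merely mirror the parameters listed above.

from dataclasses import dataclass

@dataclass
class AbmEntry:
    block_address: str     # e.g., "BLK3"
    elapse_time: float     # ET counted from programming of the last page (e.g., T1)
    clean_pages: int       # e.g., 19
    erase_count: int       # e.g., 450
    weak_block: bool       # characteristic information, e.g., identified as a weak block

def refresh_entry(entry, block):
    # Update the ET and clean-page count of a registered active block.
    entry.elapse_time = block.elapse_time()
    entry.clean_pages = sum(page is None for page in block.pages)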
In operation S250, the storage controller322determines whether the write-requested data are completely processed. In other words, when it is determined that the write-requested data are completely programmed (Yes), the method may be terminated. However, when the size of the write-requested data exceeds a capacity of one memory block, the write operation may fail to be completed. Accordingly, the procedure proceeds to operation S255. In operation S255, the storage controller322selects one of the free blocks for additionally writing data. Afterwards, the procedure proceeds to operation S230for writing data in the selected memory block. In operation S260, the storage controller322registers, at the active block management table323, an active block in which at least one clean page is present after a last page is programmed. In operation S270, the storage controller322may check the elapse time ET of the active block registered at the active block management table323. In operation S272, the storage controller322monitors whether the elapse time ET reaches the threshold value TH. When there exists an active block, of which an elapse time reaches the threshold value TH, from among a plurality of active blocks registered at the active block management table323, the procedure proceeds to operation S274. When an active block, of which an elapse time reaches the threshold value TH, is absent (or not detected) from the plurality of active blocks registered at the active block management table323, the procedure proceeds to operation S272to continue monitoring. In operation S274, when the number of active blocks, of which the elapse times ET reach the threshold value TH, is 2 or more, the storage controller322may first select an active block in which the number of clean pages is great. In other words, the active block having the largest number of clean pages may be selected. In operation S280, the storage controller322triggers the garbage collection GC. In other words, the storage controller322starts the garbage collection GC for collecting valid data of memory blocks where data are stored and programming the collected valid data in a destination area. In this case, the collected valid data may be programmed at clean pages of an active block. In addition, when the number of active blocks, of which the elapse times ET reaches the threshold value TH, is 2 or more, the garbage collection may be performed on an active block in which the number of clean pages is relatively small. The method for deciding a selection priority when the number of active blocks, of which the elapse times ET reaches the threshold value TH, is 2 or more, is briefly described above. However, it is to be understood that the selection priorities of active blocks to which the garbage collection GC is applied when the number of active blocks, of which the elapse times ET reaches the threshold value TH, is 2 or more, can be changed depending on various conditions. FIG.11is a flowchart illustrating a garbage collection method of a storage device including a plurality of active blocks each having a clean page, according to another exemplary embodiment of the inventive concept. Referring toFIGS.8and11, the storage controller322(refer toFIG.8) may utilize a plurality of active blocks, in which clean pages are present, as a destination area of the garbage collection based on priorities. Here, operation S310to operation S370are substantially identical to operation S210to operation S272ofFIG.10, and thus, additional description will be omitted. 
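The selection of operation S274 described above may be sketched, for illustration only, as follows, using the hypothetical AbmEntry records from the earlier sketch.

def select_gc_destination(abm_entries, threshold_th):
    # Operations S272/S274: among the registered active blocks whose elapse
    # time ET has reached the threshold value TH, choose the one having the
    # largest number of clean pages; otherwise keep monitoring.
    candidates = [entry for entry in abm_entries if entry.elapse_time >= threshold_th]
    if not candidates:
        return None
    return max(candidates, key=lambda entry: entry.clean_pages)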
InFIG.11, an operation corresponding to operation S270ofFIG.10is not shown, but it may be present between operations S360and S370. In operation S375, when the number of active blocks, of which the elapse times ET reach the threshold value TH, is 2 or more, the storage controller322selects an active block as a destination area of the garbage collection, depending on a priority determined in advance. For example, an active block of which the elapse time ET is the longest from among a plurality of active blocks may have a higher priority. Then, following the elapse time ET, an active block in which the number of clean pages is great and an active block of which the erase count EC is small may have a higher priority. Additionally, a priority may be determined depending on whether an active block is a weak block having a weak characteristic. In operation S380, the storage controller322triggers the garbage collection GC. In other words, the storage controller322starts the garbage collection GC for collecting valid data of memory blocks where data are stored and programming the collected valid data in a destination area. In this case, the collected valid data may be programmed at clean pages of an active block. The method for deciding a selection priority when the number of active blocks, of which the elapse times ET reach the threshold value TH, is 2 or more, is briefly described above. FIG.12is a block diagram illustrating another reference for selecting an active block according to an exemplary embodiment of the inventive concept. Referring toFIG.12, a storage device400may include a plurality of nonvolatile memory devices NVM1to NVM8and a storage controller410, which are stacked on a printed circuit board (PCB) substrate. Priorities of the plurality of nonvolatile memory devices NVM1to NVM8may be determined depending on relative distances L0, L1, L2, and L3to the storage controller410. For example, the plurality of nonvolatile memory devices NVM1to NVM8may be classified into a plurality of groups420,430,440, and450depending on the relative distances L0, L1, L2, and L3to the storage controller410. In memory blocks included in the plurality of nonvolatile memory devices NVM1to NVM8, a write request may be completed in a program operation in an active state where a clean page exists. In this case, write-completed active blocks, each of which includes a clean page, are allocated to a destination area of the garbage collection depending on the elapse time ET. In the case where a plurality of active blocks are allocated to a destination area of the garbage collection, the storage controller410may preferentially select an active block included in a memory device relatively close to the storage controller410. For example, it is assumed that one of two active blocks having the same elapse time ET is included in the nonvolatile memory device NVM1belonging to the first group420and the other thereof is included in the nonvolatile memory device NVM4belonging to the third group440. In this case, the storage controller410may preferentially allocate an active block included in the nonvolatile memory device NVM1belonging to the first group420, which is closer in distance to the storage controller410than the nonvolatile memory device NVM4belonging to the third group440, to a destination area of the garbage collection. This priority is based on the fact that a driving temperature of a memory controller410is relatively high. 
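For the priority-based selection ofFIG.11(longest elapse time first, then more clean pages, then lower erase count, with weak blocks and, perFIG.12, devices closer to the controller favored), one way to express the ordering is a single composite sort key. The sketch below continues the Python illustration above; the tie-break ordering beyond the elapse time, the direction chosen for weak blocks, and the distance_group attribute are assumptions used only to show the idea.

def priority_key(entry, now=None, distance_group=0):
    """Sort key for choosing a GC destination among qualifying active blocks.
    Python sorts ascending, so attributes we want to maximize are negated."""
    return (
        -entry.elapsed_time(now),          # longer elapse time ET -> higher priority
        -entry.clean_page_count,           # more clean pages      -> higher priority
        entry.erase_count,                 # lower erase count EC  -> higher priority
        0 if entry.is_weak_block else 1,   # weak blocks first (assumption)
        distance_group,                    # FIG.12: closer device group (e.g., 420) first
    )

def select_by_priority(candidates, distance_of):
    """candidates: entries whose ET reached TH; distance_of(entry) -> group index."""
    return min(candidates, key=lambda e: priority_key(e, distance_group=distance_of(e)))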
The probability that a driving temperature of the nonvolatile memory device NVM1close to the memory controller410is higher than a driving temperature of the nonvolatile memory device NVM4relatively distant from the memory controller410is high. The EPI characteristic or bit error rate (BER) of a nonvolatile memory device is weaker at a high temperature than at a low temperature. Accordingly, the reliability may be improved by first selecting an active block of the nonvolatile memory device NVM1closer to the memory controller410and programming a clean page(s) of the active block. FIG.13is a block diagram illustrating another reference for selecting an active block according to an exemplary embodiment of the inventive concept. Referring toFIG.13, a priority for selecting a target to be designated as a destination area of the garbage collection may be decided according to a position of a finally programmed page of each active block. For example, a priority may be decided depending on whether a finally program page of an active block is a page selected by a first string selection line SSL1or is a page selected by a second string selection line SSL2. For example, in a program operation, a program order of one memory block is decided in units of a string selection line SSL. If a plurality of cell strings (e.g., SSLP1) connected to the first string selection line SSL1are completely programmed, then a plurality of cell strings (e.g., SSLP2) connected to the second string selection line SSL2may be programmed. According to an exemplary embodiment of the inventive concept, a priority to select an active block may be decided depending on a position of the string selection line SSL, at which a last page is included (or a position of a cell string SSLP in which a last page is included). FIG.14is a diagram illustrating a garbage collection method using a clean page of an active block, according to another exemplary embodiment of the inventive concept. Referring toFIG.14, a storage device according to an exemplary embodiment of the inventive concept may designate an active block510including clean pages as a free block and may use the clean pages of the active block510as a destination area in the garbage collection operation. A memory block that is completely erased through a block erase operation is included in a free block list500. Free blocks520,530, and540included in the free block list500correspond to the completely erased memory blocks. In addition, according to an exemplary embodiment of the inventive concept, the active block510in which at least one clean page is present may be included in the free block list500. In other words, the active block510in which a clean page is present may be included in the free block list500when the elapse time ET after programming of a last page exceeds the threshold value TH. Here, a free block list is mentioned for convenience. However, in another exemplary embodiment of the inventive concept, clean pages of the active block510may be utilized as a destination area of the garbage collection immediately when the elapse time ET exceeds the threshold value TH, without a procedure of designating the active block510as a free block. When the elapse time ET after programming of the active block510exceeds the threshold value TH, the storage controller110(refer toFIG.1) triggers the garbage collection GC. As such, valid data VALID stored in the plurality of data blocks550and560are collected by the flash translation layer114. 
The collected valid data VALID are copied to clean pages513of an active block510aselected as the destination area, while programmed page511remains untouched. In this case, the size of the collected valid data VALID may be smaller than the size of the clean pages513of the active block510a. Accordingly, only some pages512of the active block510amay be programmed with the collected valid data VALID. There may be the probability that some clean pages514are present even after the valid data VALID are programmed by the garbage collection. The storage controller322according to an exemplary embodiment of the inventive concept may program dummy data in an active block510bincluding the clean pages514which exist even after the garbage collection operation. The dummy data may be programmed at the clean pages514. In the case when the dummy data are programmed at the clean pages514, the active block510bmay be managed as a data block510cin which a clean page no longer exists. Another exemplary embodiment of the garbage collection of the inventive concept is briefly described above. A clean page of an active block, which is still present even after the active block is used as a destination area of the garbage collection may be reconfigured by additionally programming dummy data at the clean page. In the case of applying this memory management technique, clean pages of an active block may be more efficiently prevented from being left alone in an erase state. FIG.15is a flowchart illustrating a garbage collection method ofFIG.14. Referring toFIGS.14and15, the storage controller110(refer toFIG.1) may utilize an active block, in which clean pages are present, as a destination area of the garbage collection. Here, it is assumed that the elapse time ET of an active block is counted and the garbage collection operation is triggered. In operation S410, the storage controller110may select an active block to be used as a destination area of garbage collection. The active block in which clean pages are present is in a state where the elapse time ET after programming of a last page exceeds the threshold value TH. In operation S420, valid data VALID collected for the garbage collection are programmed at the clean pages of the active block. When the collected valid data VALID are completely programmed, the procedure proceeds to operation S430. In operation S430, whether all of the pages of the active block are programmed is checked. In other words, whether a clean page is present in the active block may be checked. When it is determined that a clean page does not exist in the active block, the procedure may proceed to operation S450. When at least one clean page is present in the active block (No), the procedure may proceed to operation S460. In operation S450, whether the garbage collection operation is completed is checked. When it is determined that the garbage collection operation is completed, the method may be terminated. However, when the garbage collection operation is not complete, the procedure proceeds to operation S455. In operation S455, a free block or an active block in which a clean page is present may be additionally selected for the garbage collection operation. When this block selection is completed, the procedure returns to operation S420. In operation S460, dummy data may be programmed at the remaining clean page(s) of the active block. When dummy data are programmed at the remaining clean pages, the active block may be considered to be in a full-page program state. 
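The copy-then-pad behavior ofFIGS.14and15(program the collected valid data at the clean pages of the selected active block, and if clean pages still remain, fill them with dummy data so the block can be managed as a fully programmed data block) can be summarized in a short sketch. Page-level details are abstracted away; program_page, fill_with_dummy, and the page-count bookkeeping are illustrative assumptions, not the disclosed implementation.

def garbage_collect_into_active_block(entry, valid_pages: list,
                                      program_page, fill_with_dummy):
    """Sketch of operations S410-S460: copy collected valid data into the clean
    pages of the selected active block, then pad any leftover clean pages."""
    usable = min(len(valid_pages), entry.clean_page_count)

    # S420: program collected valid data at the clean pages of the active block.
    for page_data in valid_pages[:usable]:
        program_page(entry.block_address, page_data)
    entry.clean_page_count -= usable

    # S430/S460: if clean pages remain after GC, program dummy data so that the
    # block reaches a full-page-programmed state and no page is left erased.
    if entry.clean_page_count > 0:
        fill_with_dummy(entry.block_address, entry.clean_page_count)
        entry.clean_page_count = 0

    # Remaining valid pages (if any) must go to another free or active block (S455).
    return valid_pages[usable:]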
When the dummy data are completely programmed, the procedure may proceed to operation S450. A method for programming dummy data at the remaining clean page(s) when a clean page still exists even after data collected for the garbage collection are programmed is described above. In combination with the dummy data programming method, clean pages that are not processed through the garbage collection operation may be processed to a program state. FIG.16is a block diagram for describing another exemplary embodiment of the inventive concept. Referring toFIG.16, the garbage collection according to the present embodiment may be applied in units of a sub-block. A storage controller610may manage memory blocks BLK1to BLKi of a nonvolatile memory device620in units of a sub-block SB. In other words, the storage controller610may erase and select a memory block in units of a sub-block and may utilize the selected sub-block of the memory block as a destination area of the garbage collection. For example, the storage controller610may regard a sub-block SB10of the memory block BLK1as one block and may allocate the block to an erase and free block. The way to manage a block in units of a sub-block may be applied to sub-blocks of each of the memory blocks BLK1to BLKi. For example, a sub-block SB20of the memory block BLK2may be allocated to an erase and free block. A physical, logical size of a memory block is gradually increasing. Accordingly, in the case where the size of one memory block becomes excessively large, there is an increasing demand on dividing and managing one block into a plurality of sub-blocks. The storage controller610includes a flash translation layer615that performs the garbage collection according to an exemplary embodiment of the inventive concept. As the flash translation layer615is executed, in a data write operation, the storage controller610according to an exemplary embodiment of the inventive concept counts the elapse time ET from a time when the programming of an active block in which a clean page is present is terminated. The elapse time ET is counted from a time when a last page of an active block is completely programmed. In this case, an active block may be selected in units of a sub-block, which corresponds to one of sub-blocks divided from one physical block. In the case where active blocks are managed in units of a sub-block, when a clean page exists after programming of one sub-block, the storage controller610counts the elapse time ET after a last page in the sub-block is completely programmed. When the elapse time ET reaches the threshold value TH, the storage controller610may program valid data collected for the garbage collection at clean pages of the sub-block, of which the last page is completely programmed. Accordingly, in a storage device600, there may be minimized a time during which clean pages of an active block managed in units of a sub-block are left alone in an erase state. According to an exemplary embodiment of the inventive concept, the storage device600may utilize clean pages included in an active block corresponding to a sub-block, in the garbage collection operation. Accordingly, the EPI error occurring at a clean page may be prevented, and an additional operation for programming dummy data at the clean page is unnecessary. 
The storage device600according to an exemplary embodiment of the inventive concept may perform memory management in units of a sub-block and may provide great cost reduction and reliability improvement under a storage policy in which the number of active blocks increases. FIG.17is a diagram illustrating a method for managing a physical block ofFIG.16by using a plurality of sub-blocks. Referring toFIGS.16and17, a physical block BLK2may be managed in units of a plurality of sub-blocks. Here, a description will be given in which the one physical block BLK2is divided and managed as two sub-blocks SB20and SB21. However, it is to be understood that the one physical block BLK2may be divided and managed as three or more sub-blocks. Each of the sub-block SB20and the sub-block SB21may be managed as one memory block. For example, the flash translation layer615of the storage controller610may select and manage a target for address management, garbage collection, and wear leveling in units of a sub-block. In addition, in the case where a clean page is present in a completely programmed sub-block, the storage controller610counts the elapse time ET after a last page is completely programmed. When the elapse time ET reaches the threshold value TH, the storage controller610may trigger the garbage collection and may designate a clean page of the sub-block as a destination area of the garbage collection. For example, in the case where only pages corresponding to word lines WL0to WL2of the sub-block SB20are programmed and an additional write request is not received, the storage controller610counts the elapse time ET from a time when the word line WL2, which is a last page, is completely programmed. When the elapse time ET reaches the threshold value TH, the storage controller610may utilize clean pages corresponding to word lines WL3to WL7as a destination area of the garbage collection. Like the sub-block SB20, the sub-block SB21may be utilized as a management unit of the garbage collection. For example, pages corresponding to word line WL8may be programmed, and clean pages corresponding to word lines WL9to WL15may be assigned as destination areas for garbage collection. FIG.18is a diagram illustrating a garbage collection method performed in units of a sub-block at a storage device ofFIG.16. Referring toFIG.18, the storage device600according to an exemplary embodiment of the inventive concept may allocate an active sub-block710having a clean page to a free block and may use the active sub-block710as a destination area. Here, a description is given in which the active sub-block710is first managed as a free block, but the management method ofFIG.18is merely an example. When the elapse time ET reaches the threshold value TH, the active sub-block710may be immediately used as a destination area of the garbage collection without allocation to a free block. A free block list700may include a plurality of free blocks720,730, and740. Each of the free blocks720,730, and740corresponds to the above-described sub-block. A sub-block may be erased and may then be included in the free block list700. In addition, according to the memory management technique of the inventive concept, the active sub-block710may also be included in the free block list700. In other words, the active sub-block710in which some pages are programmed and a clean page(s) is present may be included in the free block list700when the elapse time ET after programming exceeds the threshold value TH. 
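The sub-block management ofFIGS.16and17(treat each sub-block as its own management unit, count the elapse time ET from the last programmed page of the sub-block, and use its remaining word lines as a GC destination) follows the same pattern as the block-level sketches above, only keyed by block and sub-block. The SubBlockEntry name and field layout below are assumptions; the word-line numbers mirror the WL0-WL7 / WL8-WL15 example.

import time
from dataclasses import dataclass

@dataclass
class SubBlockEntry:
    """A sub-block (e.g., SB20 covering WL0-WL7) managed as one memory block."""
    block_address: int
    sub_block_index: int
    first_wordline: int
    last_wordline: int
    next_unprogrammed_wordline: int   # first clean word line in the sub-block
    last_program_time: float

    def clean_wordlines(self) -> range:
        # e.g., SB20 with WL0-WL2 programmed -> clean destination pages WL3-WL7
        return range(self.next_unprogrammed_wordline, self.last_wordline + 1)

    def elapse_time_reached(self, threshold: float, now: float | None = None) -> bool:
        now = time.time() if now is None else now
        return (now - self.last_program_time) >= threshold

# Usage sketch: when SB20 has WL0-WL2 programmed and no further write arrives,
# its clean pages WL3-WL7 become a GC destination once the threshold is reached.
sb20 = SubBlockEntry(block_address=2, sub_block_index=0, first_wordline=0,
                     last_wordline=7, next_unprogrammed_wordline=3,
                     last_program_time=time.time())
if sb20.elapse_time_reached(threshold=0.0):
    destination_pages = list(sb20.clean_wordlines())   # [3, 4, 5, 6, 7]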
When the elapse time ET after programming of the active sub-block710exceeds the threshold value TH, the storage controller610triggers the garbage collection GC. In this case, the flash translation layer615selects data blocks in which only invalid data INV are stored or data blocks in which valid data VALID and invalid data INV are mixed. For example, data blocks750and760in which valid data VALID and invalid data INV are mixed may be targeted for the garbage collection. Each of the data blocks750and760may correspond to a sub-block. Only valid data VALID stored in each of the data blocks750and760are collected. The collected valid data VALID are copied to clean pages of an active sub-block710a. As such, the clean pages of the active sub-block710amay be programmed by the garbage collection. When the clean pages are programmed by the garbage collection, afterwards, the active sub-block710amay be managed as a data sub-block710bwhere data are stored. A characteristic of the garbage collection according to an exemplary embodiment of the inventive concept is briefly described above. The storage controller610according to an exemplary embodiment of the inventive concept may use the active sub-block710, in which a clean page is present after programming, as a destination area of the garbage collection. In the case of applying this memory management technique, clean pages of an active sub-block may be prevented from being left alone in an erase state. According to an exemplary embodiment of the inventive concept, it is possible to implement a storage device that improve the EPI characteristic in a high-capacity flash memory device and that has a high reliability. While the inventive concept has been described with reference to exemplary embodiments thereof, it will be apparent to those of ordinary skill in the art that various changes and modifications may be made thereto without departing from the spirit and scope of the inventive concept as set forth in the following claims.
11860778
DETAILED DESCRIPTION OF SOME EXAMPLE EMBODIMENTS Embodiments of the present invention generally relate to garbage collection in data storage systems. More particularly, at least some embodiments of the invention relate to systems, hardware, software, computer-readable media, and methods for efficient, and cost-effective, garbage collection in cloud tier environments where copy forward processes are employed, and a minimum storage duration is enforced. Embodiments may involve performing the disclosed processes on millions, or more, objects, in a data storage environment. As well, the disclosed methods may be performed on a continual basis as data segments of objects expire and/or data segments are created/modified. In general, example embodiments of the invention may operate to implement a cost filter phase for cloud GC (Garbage Collection) processes. The cost filter phase may be employed in conjunction with otherwise conventional GC processes. In an example cost filter phase, cloud GC may iterate all selected objects for copy-forwarding and, for each object, the cloud GC may fetch the object creation time from metadata. The cost filter phase may then check, for each of one or more objects, whether the object is still within the minimum storage duration or not. One example of pseudocode or an algorithm for this process may be: If Current Time < (Creation Time + Min Storage Duration), Then Object is STILL UNDER Min Storage Duration; Else, Object is now OUT OF Min Storage Duration purview. The cost filter phase may then deselect, from a copy forwarding process, all such objects which are found to be still within the minimum storage duration. The cloud GC process may then perform the copy-forward only for the final list of objects, that is, the objects which are beyond their respective minimum storage duration. By using this selective approach to copy-forward, embodiments avoid copying objects that will, in any case, continue to be billed for storage as dictated by their minimum storage duration. Embodiments of the invention, such as the examples disclosed herein, may be beneficial in a variety of respects. For example, and as will be apparent from the present disclosure, one or more embodiments of the invention may provide one or more advantageous and unexpected effects, in any combination, some examples of which are set forth below. It should be noted that such effects are neither intended, nor should be construed, to limit the scope of the claimed invention in any way. It should further be noted that nothing herein should be construed as constituting an essential or indispensable element of any invention or embodiment. Rather, various aspects of the disclosed embodiments may be combined in a variety of ways so as to define yet further embodiments. Such further embodiments are considered as being within the scope of this disclosure. As well, none of the embodiments embraced within the scope of this disclosure should be construed as resolving, or being limited to the resolution of, any particular problem(s). Nor should any such embodiments be construed to implement, or be limited to implementation of, any particular technical effect(s) or solution(s). Finally, it is not required that any embodiment implement any of the advantageous and unexpected effects disclosed herein. 
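As a rough Python rendering of the cost filter check quoted above, the predicate below compares the current time against the object creation time plus the minimum storage duration. The ObjectMeta container and its field names are assumptions made for the sketch, not names used by any particular backup or cloud product.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class ObjectMeta:
    object_id: str
    creation_time: datetime          # fetched from the object metadata
    min_storage_days: int            # 0 means no minimum storage duration enforced

def within_min_storage_duration(obj: ObjectMeta,
                                now: datetime | None = None) -> bool:
    """If Current Time < (Creation Time + Min Storage Duration),
    the object is still under its minimum storage duration."""
    now = datetime.now(timezone.utc) if now is None else now
    return now < obj.creation_time + timedelta(days=obj.min_storage_days)

# The cost filter phase deselects, from copy-forwarding, every selected object
# for which within_min_storage_duration(...) returns True.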
In particular, one advantageous aspect of at least some embodiments of the invention is that objects and, thus, the segments that make up those objects, are not copied or stored more often than necessary, thus reducing the storage cost to the customer relative to the storage cost associated with approaches in which an object is copied forward and stored even when that object is still within its minimum storage duration. An embodiment of the invention may reduce, relative to conventional approaches, the amount of time needed for a GC process to run, since the number of copy forwards and new object creations may be reduced by only copying forward objects that have exceeded their minimum storage duration. Various other possible advantages of example embodiments will be apparent from this disclosure. It is noted that embodiments of the invention, whether claimed or not, cannot be performed, practically or otherwise, in the mind of a human. Accordingly, nothing herein should be construed as teaching or suggesting that any aspect of any embodiment of the invention could or would be performed, practically or otherwise, in the mind of a human. Further, and unless explicitly indicated otherwise herein, the disclosed methods, processes, and operations, are contemplated as being implemented by computing systems that may comprise hardware and/or software. That is, such methods processes, and operations, are defined as being computer-implemented. A. Aspects of an Example Architecture and Environment The following is a discussion of aspects of example operating environments for various embodiments of the invention. This discussion is not intended to limit the scope of the invention, or the applicability of the embodiments, in any way. In general, embodiments of the invention may be implemented in connection with systems, software, and components, that individually and/or collectively implement, and/or cause the implementation of, data protection operations which may include, but are not limited to, data replication operations, IO replication operations, data read/write/delete operations, data deduplication operations, data backup operations, data restore operations, data cloning operations, data archiving operations, and disaster recovery operations. More generally, the scope of the invention embraces any operating environment in which the disclosed concepts may be useful. At least some embodiments of the invention provide for the implementation of the disclosed functionality in existing backup platforms, examples of which include the Dell-EMC NetWorker and Avamar platforms and associated backup software, and storage environments such as the Dell-EMC DataDomain storage environment. In general however, the scope of the invention is not limited to any particular data backup platform or data storage environment. New and/or modified data collected and/or generated in connection with some embodiments, may be stored in a data protection environment that may take the form of a public or private cloud storage environment, an on-premises storage environment, and hybrid storage environments that include public and private elements. The new and/or modified data may be deduplicated before, or after, storage in a storage environment. Any of these example storage environments, may be partly, or completely, virtualized. 
The storage environment may comprise, or consist of, a datacenter which is operable to service read, write, delete, backup, restore, and/or cloning, operations initiated by one or more clients or other elements of the operating environment. Where a backup comprises groups of data with different respective characteristics, that data may be allocated, and stored, to different respective targets in the storage environment, where the targets each correspond to a data group having one or more particular characteristics. Example cloud computing environments, which may or may not be public, include storage environments that may provide data protection functionality for one or more clients. Another example of a cloud computing environment is one in which processing, data protection, and other, services may be performed on behalf of one or more clients. Some example cloud computing environments in connection with which embodiments of the invention may be employed include, but are not limited to, Microsoft Azure, Amazon AWS, Dell EMC Cloud Storage Services, and Google Cloud. More generally however, the scope of the invention is not limited to employment of any particular type or implementation of cloud computing environment. In addition to the cloud environment, the operating environment may also include one or more clients that are capable of collecting, modifying, and creating, data. As such, a particular client may employ, or otherwise be associated with, one or more instances of each of one or more applications that perform such operations with respect to data. Such clients may comprise physical machines, or virtual machines (VM) Particularly, devices in the operating environment may take the form of software, physical machines, or VMs, or any combination of these, though no particular device implementation or configuration is required for any embodiment. Similarly, data protection system components such as databases, storage servers, storage volumes (LUNs), storage disks, replication services, backup servers, restore servers, backup clients, and restore clients, for example, may likewise take the form of software, physical machines or virtual machines (VM), though no particular component implementation is required for any embodiment. Where VMs are employed, a hypervisor or other virtual machine monitor (VMM) may be employed to create and control the VMs. The term VM embraces, but is not limited to, any virtualization, emulation, or other representation, of one or more computing system elements, such as computing system hardware. A VM may be based on one or more computer architectures, and provides the functionality of a physical computer. A VM implementation may comprise, or at least involve the use of, hardware and/or software. An image of a VM may take the form of a .VMX file and one or more .VMDK files (VM hard disks) for example. New and/or modified data, as well as dead or expired data, may be stored in one or more containers. The containers may be located at various sites, including a local enterprise site, and at a remote cloud storage site. Containers may include only live segments, only dead segments, or combinations of live segments and dead segments. As used herein, the term ‘data’ is intended to be broad in scope. 
Thus, that term embraces, by way of example and not limitation, data segments such as may be produced by data stream segmentation processes, data chunks, data blocks, atomic data, emails, objects of any type, files of any type including media files, word processing files, spreadsheet files, and database files, as well as contacts, directories, sub-directories, volumes, and any group of one or more of the foregoing. Example embodiments of the invention are applicable to any system capable of storing and handling various types of objects, in analog, digital, or other form. Although terms such as document, file, segment, block, or object may be used by way of example, the principles of the disclosure are not limited to any particular form of representing and storing data or other information. Rather, such principles are equally applicable to any object capable of representing information. As used herein, the term ‘backup’ is intended to be broad in scope. As such, example backups in connection with which embodiments of the invention may be employed include, but are not limited to, full backups, partial backups, clones, snapshots, and incremental or differential backups. With particular attention now toFIG.1, one example of an operating environment for embodiments of the invention is denoted generally at100. In general, the operating environment100may include one or more clients102that each host a respective group of one or more applications104that may operate to generate new and/or modified data106. The applications104may also operate to delete data. The clients102may communicate with a backup and/or dedup server108. In general, the backup and/or dedup server108may operate to deduplicate the data106prior to storage of the data106at a cloud storage site110. In some embodiments, the dedup functionality of the backup and/or dedup server108may be implemented at the cloud storage site110, rather than at the backup and/or dedup server108. In some embodiments, a separate backup server and dedup server may be provided that may communicate with each other and with one or more clients, and with a cloud storage site. The backup and/or dedup server108may also cooperate with one or more of the clients102to create backups of the data106. These backups may be communicated by the backup and/or dedup server108to the cloud storage site110. The backups may be deduplicated by the backup and/or dedup server108, or by the cloud storage site110. The cloud storage site110and/or the backup and/or dedup server108may generate and/or gather metadata concerning any data stored, or to be stored, at the cloud storage site110. Such metadata, which may be stored at the cloud storage site110and/or may be stored at the backup and/or dedup server108, may include, but is not limited to, minimum storage information for objects stored at the cloud storage site110and/or minimum storage information for objects to be stored at the cloud storage site110, and/or for new objects created at the cloud storage site110. The cloud storage site110may also include a cloud provider price list API (Application Program Interface) that indicates costs for different storage tiers or storage classes of the cloud storage site. 
Such tiers or classes may include, for example, (i) standard/hot (data more frequently accessed/modified), (ii) infrequently accessed/cold (data less frequently accessed/modified), (iii) archive (data not accessed for long periods of time, which may be measured in months or years for example), and (iv) staging/deep archive (data not accessed for many years, or never). With continued reference to the example ofFIG.1, the cloud storage site110may comprise a GC (garbage collection) module112which runs a GC process on part or all of storage114. The GC process may, or may not, run on a regularly scheduled basis. Finally, the cloud storage site110may comprise a billing module116that may cooperate with the storage114and/or with the GC module112to generate and transmit bills that reflect data storage costs incurred by one or more of the clients102. The bills may be transmitted to the clients102and/or to other entities. B. Overview In some dedup systems, a file may be split into segments and these segments may be deduped across all files in the system. The segments may be packed in regions and containers which are represented as objects in the active tier or local tier, that is, on premises at the customer site. The segment sizes may generally vary between 4 KB-64 KB and the container objects may vary in size between 1 MB-4 MB, and sometimes 8 MB, or 16 MB, depending on the dedup app that is used. With greater adoption of cloud storage, dedup servers/apps allow moving deduped container objects to the cloud for long term retention (LTR). The dedup ratio, object sizes, and other parameters, may vary in the cloud tier though. For example, DellEMC PowerProtect based DellEMC DataDomain systems support object sizes of 1 MB in public cloud providers, and 4.5 MB in private cloud providers. The data/objects are moved by the dedup application based on various policies and configurations. One example of such a policy is one which specifies “move all the data older than 2 weeks to the cloud.” Public cloud providers such as AWS, GCP, and Azure, for example, provide S3 storage under a variety of storage classes/tiers/categories, such as storage classes based on access times, cost, and minimum storage durations. The following storage classes are illustrative: (i) standard/hot—data more frequently accessed/modified; (ii) infrequently accessed/cold—data less frequently accessed/modified; (iii) archive—data not accessed for many months, or years; and (iv) staging/deep archive—data not accessed/modified for many years, or never. These are just a few example storage classes, but cloud providers typically have their own hierarchy and definitions along these, or similar, lines. A number of items are worth noting about storage classes. For example, the respective costs associated with storage classes tend to decrease as the frequency of data access decreases. Thus, it may be relatively less expensive to store data that is only rarely accessed, and relatively more expensive to store data that is frequently accessed. Accordingly, in the foregoing illustrative list, the costs may tend to be highest for the first type (i) of storage, decreasing to lowest costs for type (iv) of storage. The cost variation may be due to the processing overhead involved in fulfilling data access requests, and/or may be due to the type of storage used to store the data. As well, the minimum storage duration for billing typically increases with a decrease in the frequency of data access. 
For example, if the minimum storage duration of a storage class is 30 days and an object is moved to that storage class and then retrieved back or deleted before 30 days, then storage of the object may still be billed for 30 days. By way of comparison, archival/staging storage classes may have a minimum storage duration measured in months, or years. A final item to note is that data can be moved to one of these classes either directly by a backup server, indirectly via an automatic lifecycle configuration policy—for example, move data from standard to archive class after 90 days, or by using intelligent tiers. In general then, each data storage class may suit specific data access/usage patterns in, and/or by way of, a backup server or other computing entity. C. Aspects of Some Example Embodiments Attention is directed now to aspects of various example embodiments of the invention. Such embodiments may be implemented in an operating environment such as the operating environment100discussed above in connection withFIG.1. However, embodiments of the invention are not required to be implemented in connection with the example operating environment100. C.1 Cloud Storage Tier/Class and Metadata If a backup server uses a cloud storage tier/class in association with respective minimum storage durations, then the cloud storage tier/class information, or simply ‘tier/class information,’ may be stored, on an object basis, file basis, segment basis, or other basis, in the system, that is, in the backup server and/or at a cloud storage site. The cloud storage tier/class information may comprise, and/or take the form of, tier/class metadata. The cloud storage tier/class metadata may be created at various times such as, for example, when a backup is created, before/while/after a dedup process is performed, or before/while/after data is stored at a cloud storage site. The tier/class metadata may be stored persistently and/or in memory. In some embodiments, the tier/class metadata may be stored as part of one or more cloud metadata structures. The tier/class metadata may include, for example, the minimum storage duration value, which may be expressed in any suitable terms, such as number of days for example. If this minimum storage duration value is 0, that indicates that the cloud tier is not enforcing any minimum storage duration for the data to which that minimum storage duration value has been assigned. In some embodiments, the minimum storage duration value for data may be assigned when the cloud tier that will be used to store that data, is first attached to the system, that is, the system that originated the data to be stored at the cloud storage site. More specifically, when data, such as data106for example (seeFIG.1) has been identified that is to be stored at a cloud storage site, such as the cloud storage site110for example (seeFIG.1), a cloud storage tier of the cloud storage site110may be attached to the system that includes the client102(seeFIG.1) or other system that created, and/or modified, and/or stored, the data. C.2 Cloud Storage Provider Price List API While some embodiments employ tier/class metadata that defines minimum storage duration values, other embodiments may employ a different, or complementary, approach. For example, in some embodiments, when a cloud tier is attached to the system, embodiments of the invention may query, such as by using a backup server, dedup server, or backup/dedup server, the cloud provider price list APIs. 
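A minimal illustration of the tier/class metadata discussed in section C.1 might look like the snippet below, where a minimum storage duration of 0 days signals that the attached cloud tier does not enforce one. The dictionary layout and the example durations (30 and 180 days) are purely hypothetical and would, in practice, come from the provider price list API or from locally stored metadata.

# Hypothetical tier/class metadata recorded when a cloud tier is attached.
tier_class_metadata = {
    "standard":   {"min_storage_days": 0},    # no minimum storage duration enforced
    "infrequent": {"min_storage_days": 30},   # example value only
    "archive":    {"min_storage_days": 180},  # example value only
}

def min_storage_days_for(storage_class: str) -> int:
    """Look up the minimum storage duration for a tier/class; default to 0."""
    return tier_class_metadata.get(storage_class, {}).get("min_storage_days", 0)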
Example cloud provider APIs that may be queried include, but are not limited to, the AWS Price List Service API, GCP Cloud Billing Catalog API, and Azure Retail Rates Prices API. The response to the query, which response may be provided by a cloud storage site, may include the price list API, as well as the minimum storage duration for the tier/class that is being attached to the system. The querying entity may then locally store the price list API and/or minimum storage duration information, such as minimum storage duration metadata. In some embodiments, the price list API and/or the minimum storage duration information may change only when all the objects are migrated to some other storage class. The migration of these objects may be performed automatically, or manually. This approach to changing the price list API and/or the minimum storage duration metadata may be particularly beneficial in auto-tiering or intelligent tiering environments where the minimum storage duration may change as the data changes tiers/storage classes. C.3 Copy Forwarding Objects According to some example embodiments of a cloud GC process, when the cloud GC process starts, the method may then scan the metadata of the objects, with respect to which the GC process is being run, and select any partially filled/fragmented objects, as well as objects marked for deletion, such as unused objects and dead objects. If local metadata for the objects is available in the dedupe system, then the local object metadata, rather than the object metadata stored at the cloud, may be scanned. Example embodiments of the cloud GC process may further provide a cost filter phase. In the cost filter phase, the cloud GC process may iterate all selected objects for copy-forwarding, and for each object, the cloud GC process may fetch the object creation time from the object metadata. The cloud GC process may then check to see if the object is still within the minimum storage duration or not. One example of such a check may take the form: If Current Time < (Creation Time + Min Storage Duration), Then Object is STILL UNDER Min Storage Duration; Else, Object is now OUT OF Min Storage Duration purview. Based on this query, or check, the cloud GC process may then deselect, for a copy forward process, all such objects which are found to be still within the ‘Min Storage Duration.’ That is, such objects may be deselected in the sense that those objects will not be copied forward. After deselection has been performed, the cloud GC process may then perform the copy-forward process only for the objects that appear in the final list of objects, that is, the objects that have been determined to be out of their minimum storage duration. The GC process may also mark these objects for deletion once the copy-forward process has been performed. Note that deleting these objects that are out of their minimum storage duration may not be problematic with regard to data retention requirements, as those objects have already satisfied their respective min storage duration(s). C.4 Deletion of Dead Objects Example embodiments of a GC process may provide for deletion of dead objects. Dead objects may include objects that do not contain any live segments. As such, dead objects may be deleted without the need for any other processing, except for identification of the objects as being dead ones. In example embodiments of the invention, even if a dead object is within its minimum storage period, an example GC method may proceed ahead and delete that object. 
Note that deletion of a dead object in this way may not incur any extra/additional cost, only the cost associated with storing that object for the minimum storage duration. Thus, example embodiments may provide that a cloud GC may delete all dead objects, irrespective of whether those dead objects are within, or outside, the minimum storage duration that has been assigned to them. The decision, by some example embodiments, to de-select the dead objects still within the minimum storage duration, that is, the decision not to copy forward the dead object, may not have any negative effect on any process. Particularly, those dead objects will still be billed to the customer for storage for the applicable minimum storage duration, regardless of whether those dead objects are kept, or copied forward and deleted. However, keeping dead objects until their minimum storage duration has run may be advantageous in preventing additional costs from being incurred since, according to some embodiments, once the minimum storage duration has run for an object, and that object is not copied forward, no further charges should be incurred for storage of that object. Note that retaining the dead objects in the system until their minimum storage duration has run might help in cases where newly created files might dedupe to these dead objects, thus avoiding the need for creating new objects in dedupe systems. D. Example Methods It is noted with respect to the example method ofFIG.2that any of the disclosed processes, operations, methods, and/or any portion of any of these, may be performed in response to, as a result of, and/or, based upon, the performance of any preceding process(es), methods, and/or, operations. Correspondingly, performance of one or more processes, for example, may be a predicate or trigger to subsequent performance of one or more additional processes, operations, and/or methods. Thus, for example, the various processes that may make up a method may be linked together or otherwise associated with each other by way of relations such as the examples just noted. Finally, and while it is not required, the individual processes that make up the various example methods disclosed herein are, in some embodiments, performed in the specific sequence recited in those examples. In other embodiments, the individual processes that make up a disclosed method may be performed in a sequence other than the specific sequence recited. Directing attention now toFIG.2, an example method200is disclosed. The method200may be part of a cloud GC method or process, or may take the form of a stand-alone process. The method200may begin at202where object metadata is scanned. The object metadata may concern objects that have been stored at a cloud storage site, for example. The scan202of the object metadata may reveal a respective object creation time for one or more objects stored at the cloud storage site. Accordingly, the object creation time(s) may then be fetched204from the object metadata. After the object creation time is known for a stored object, a check206may then be performed to determine if that object is out of its corresponding minimum storage duration, that is, whether that object has been stored for at least the minimum storage duration assigned to that object. If the check206indicates that the object is not out of its minimum storage duration, the method200may advance to207where that object is de-selected, or removed, from a list of objects that have been slated for a copy forward process. 
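Putting the pieces of sections C.3 and C.4 and the method200together, a single cloud GC pass over the selected objects might look roughly like the following, reusing the ObjectMeta and within_min_storage_duration sketch from above. The copy_forward and delete_object callbacks, and the is_dead flag, are placeholders for whatever the surrounding system actually provides, and the dead-object path shows only one of the two alternatives discussed in section C.4.

def cost_filtered_gc_pass(selected_objects, copy_forward, delete_object, now=None):
    """One pass of the cost filter phase plus copy-forward (sketch of method 200).

    selected_objects: iterable of (ObjectMeta, is_dead) pairs chosen by the
    earlier metadata scan (fragmented objects and objects marked for deletion)."""
    copy_forward_list = []
    for obj, is_dead in selected_objects:
        if is_dead:
            # C.4: a dead object (no live segments) may simply be deleted;
            # alternatively it could be retained until its duration runs out.
            delete_object(obj)
            continue
        if within_min_storage_duration(obj, now):
            # 206/207: still under its minimum storage duration -> deselect,
            # and re-check on a later GC run instead of copying it forward now.
            continue
        copy_forward_list.append(obj)

    # 208/210: copy forward only the objects out of their minimum storage
    # duration, then mark the originals for deletion.
    for obj in copy_forward_list:
        copy_forward(obj)
        delete_object(obj)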
After de-selection207, the method200may re-iterate, as shown. That is, one or more subsequent checks206may be performed over time until a determination is made that the object is out of its minimum storage duration. When the check206reveals that the object is out of its minimum storage duration, the method200may advance to208where the object is copied forward. After the new copy has been created at208, the original object, which was determined at206to be out of its minimum storage duration, may then be marked for deletion210. Objects marked for deletion210may then be deleted from storage, enabling reclamation of the storage space for use in storing other objects. Reclamation may, or may not, be part of the method200. E. Further Discussion As will be apparent from this disclosure, example embodiments may include various useful aspects. For example, disclosed embodiments may ensure that a copy forward process of a cloud GC process does not result in creation of new objects with segments from older objects which are still within the minimum storage duration and, in turn, such embodiments may thus prevent additional storage costs from being incurred unnecessarily. As another example, some embodiments may also handle any dead objects which need no copy-forward but are within the minimum storage duration. This handling may involve simply deleting those dead objects after expiration of their minimum storage duration. Example embodiments may also operate, in scenarios where a storage class with a minimum storage duration is used, to reduce the overall GC processing time. That is, embodiments may operate to reduce, possibly substantially, both the number of copy-forwards performed, and the creation of new objects for copy forward operations. As a final example, embodiments within the scope of the invention may be particularly beneficial for cloud GC processes operating on cloud storage classes that use minimum storage durations. This is particularly true in view of the current popularity of storage classes due to their ability to help reduce customer data storage costs. F. Further Example Embodiments Following are some further example embodiments of the invention. These are presented only by way of example and are not intended to limit the scope of the invention in any way. Embodiment 1. A method, comprising: scanning, at a cloud storage site, metadata associated with an object stored at the cloud storage site; fetching, from the metadata, an object creation time for the object; and determining whether the object is out of a minimum storage duration, and: when the object is out of the minimum storage duration, copy-forwarding the object, and then marking the object for deletion; and when the object is not out of the minimum storage duration, deselecting the object from a list of objects to be copy-forwarded. Embodiment 2. The method as recited in embodiment 1, wherein the minimum storage duration is part of a policy set by the cloud storage site, and the minimum storage duration corresponds to a specified storage class. Embodiment 3. The method as recited in any of embodiments 1-2, wherein the metadata resides either at a dedup server, or at the cloud storage site. Embodiment 4. The method as recited in any of embodiments 1-3, wherein the minimum storage duration associated with a storage class is obtained by way of a price list API associated with the cloud storage site. Embodiment 5. 
The method as recited in any of embodiments 1-4, wherein the method is performed as part of a garbage collection process at the cloud storage site. Embodiment 6. The method as recited in any of embodiments 1-5, wherein the object is only copy-forwarded when: (i) the object is out of the minimum storage duration; and (ii) the object is not a dead object. Embodiment 7. The method as recited in any of embodiments 1-6, wherein when the object is not out of the minimum storage duration, and the object is a dead object that does not include any live segments, the dead object is deleted —additionally, or alternatively, the dead object is not deleted until minimum storage duration completes, so that new incoming data can refer, for deduplication, to the segments within these objects. Embodiment 8. The method as recited in any of embodiments 1-7, wherein after the object is deselected, the object is retained until the minimum storage duration ends. Embodiment 9. The method as recited in any of embodiments 1-8, wherein deselecting the object reduces a storage cost for the object relative to a storage cost that would be incurred if the object were not deselected. Embodiment 10. The method as recited in any of embodiments 1-9, wherein determining whether the object is out of a minimum storage duration comprises running the algorithm:If Current Time<(Creation Time+Min Storage Duration)Then Object is STILL UNDER Min Storage DurationElseObject is now OUT OF Min Storage Duration purview. Embodiment 11. A method for performing any of the operations, methods, or processes, or any portion of any of these, disclosed herein. Embodiment 12. A computer readable storage medium having stored therein instructions that are executable by one or more hardware processors to perform operations comprising the operations of any one or more of embodiments 1-11. G. Example Computing Devices and Associated Media The embodiments disclosed herein may include the use of a special purpose or general-purpose computer including various computer hardware or software modules, as discussed in greater detail below. A computer may include a processor and computer storage media carrying instructions that, when executed by the processor and/or caused to be executed by the processor, perform any one or more of the methods disclosed herein, or any part(s) of any method disclosed. As indicated above, embodiments within the scope of the present invention also include computer storage media, which are physical media for carrying or having computer-executable instructions or data structures stored thereon. Such computer storage media may be any available physical media that may be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such computer storage media may comprise hardware storage such as solid state disk/device (SSD), RAM, ROM, EEPROM, CD-ROM, flash memory, phase-change memory (“PCM”), or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other hardware storage devices which may be used to store program code in the form of computer-executable instructions or data structures, which may be accessed and executed by a general-purpose or special-purpose computer system to implement the disclosed functionality of the invention. Combinations of the above should also be included within the scope of computer storage media. 
Such media are also examples of non-transitory storage media, and non-transitory storage media also embraces cloud-based storage systems and structures, although the scope of the invention is not limited to these examples of non-transitory storage media. Computer-executable instructions comprise, for example, instructions and data which, when executed, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. As such, some embodiments of the invention may be downloadable to one or more systems or devices, for example, from a website, mesh topology, or other source. As well, the scope of the invention embraces any hardware system or device that comprises an instance of an application that comprises the disclosed executable instructions. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts disclosed herein are disclosed as example forms of implementing the claims. As used herein, the term ‘module’ or ‘component’ may refer to software objects or routines that execute on the computing system. The different components, modules, engines, and services described herein may be implemented as objects or processes that execute on the computing system, for example, as separate threads. While the system and methods described herein may be implemented in software, implementations in hardware or a combination of software and hardware are also possible and contemplated. In the present disclosure, a ‘computing entity’ may be any computing system as previously defined herein, or any module or combination of modules running on a computing system. In at least some instances, a hardware processor is provided that is operable to carry out executable instructions for performing a method or process, such as the methods and processes disclosed herein. The hardware processor may or may not comprise an element of other hardware, such as the computing devices and systems disclosed herein. In terms of computing environments, embodiments of the invention may be performed in client-server environments, whether network or local environments, or in any other suitable environment. Suitable operating environments for at least some embodiments of the invention include cloud computing environments where one or more of a client, server, or other machine may reside and operate in a cloud environment. With reference briefly now toFIG.3, any one or more of the entities disclosed, or implied, byFIGS.1-2and/or elsewhere herein, may take the form of, or include, or be implemented on, or hosted by, a physical computing device, one example of which is denoted at300. As well, where any of the aforementioned elements comprise or consist of a virtual machine (VM), that VM may constitute a virtualization of any combination of the physical components disclosed inFIG.3. In the example ofFIG.3, the physical computing device300includes a memory302which may include one, some, or all, of random access memory (RAM), non-volatile memory (NVM)304such as NVRAM for example, read-only memory (ROM), and persistent memory, one or more hardware processors306, non-transitory storage media308, UI device310, and data storage312. 
One or more of the memory components302of the physical computing device300may take the form of solid state device (SSD) storage. As well, one or more applications314may be provided that comprise instructions executable by one or more hardware processors306to perform any of the operations, or portions thereof, disclosed herein. Such executable instructions may take various forms including, for example, instructions executable to perform any method or portion thereof disclosed herein, and/or executable by/at any of a storage site, whether on-premises at an enterprise, or a cloud computing site, client, datacenter, data protection site including a cloud storage site, or backup server, to perform any of the functions disclosed herein. As well, such instructions may be executable to perform any of the other operations and methods, and any portions thereof, disclosed herein. The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
DETAILED DESCRIPTION First Exemplary Embodiment Explanation follows regarding an exemplary embodiment of an onboard relay device10, an onboard relay method, and a non-transitory recording medium according to the present disclosure, with reference to the drawings. FIG.1illustrates a vehicle12provided with the onboard relay device10of the present exemplary embodiment. The vehicle12includes the onboard relay device10, an in-vehicle network14, a first electronic control unit (ECU)16A, a second ECU16B, a third ECU16C, and a fourth ECU16D. The onboard relay device10includes a gateway function, and is connected to the first ECU16A, the second ECU16B, the third ECU16C, and the fourth ECU16D via the in-vehicle network14. Specifically, the onboard relay device10and the first ECU16A are connected together through a first bus14A. The onboard relay device10and the second ECU16B are connected together through a second bus14B. The onboard relay device10and the third ECU16C are connected together through a third bus14C. The onboard relay device10and the fourth ECU16D are connected together through a fourth bus14D. The in-vehicle network14is, for example, configured by Ethernet (registered trademark), a controller area network (CAN), or FlexRay (registered trademark). As illustrated inFIG.1, a vehicle wheel speed sensor20is connected to the first ECU16A of the present exemplary embodiment. The vehicle wheel speed sensor20transmits information relating to the acquired vehicle wheel speed to the first ECU16A at a predetermined cycle. Cameras21are connected to the second ECU16B. The cameras21include a peripheral camera that captures an imaging subject positioned at the periphery of (outside) the vehicle12and an in-vehicle camera that captures an imaging subject inside the vehicle. The respective cameras21transmit acquired captured data to the second ECU16B at a predetermined cycle. A shift position sensor22is connected to the third ECU16C. The shift position sensor22acquires a shift position of a shift lever (not illustrated in the drawings) provided in the vehicle, and transmits acquired information regarding the shift position to the third ECU16C at a predetermined cycle. The shift lever is, for example, capable of moving to respective shift positions corresponding to a D (drive) range, a first gear range, a second gear range, an R (reverse) range, a P (parking) range, and an N (neutral) range. Namely, the vehicle12is an automatic transmission vehicle (AT vehicle). A GPS receiver23, a display24(touch panel), and a wireless communication device25are connected to the fourth ECU16D. The GPS receiver23acquires position information (for example latitude and longitude) corresponding to the travel location of the vehicle12at a predetermined cycle based on GPS signals transmitted from artificial satellites, and transmits the acquired position information to the fourth ECU16D at a predetermined cycle. The display24includes a touch panel. The display24transmits information input through the touch panel to the fourth ECU16D. For example, the touch panel is employed by a driver to input their personal information. The input personal information of the driver is recorded in the ROM of the fourth ECU16D. The wireless communication device25communicates wirelessly with external communication devices over the internet or the like. The wireless communication device25transmits information acquired by wireless communication to the fourth ECU16D at a predetermined cycle. 
For example, the wireless communication device25is capable of communicating wirelessly with a smartphone40(seeFIG.1) and a roadside device45(seeFIG.1). For example, the wireless communication device25communicates wirelessly with the roadside device45in response to operation of the touch panel of the display24or the smartphone40by an occupant of the vehicle12. As illustrated inFIG.2, the onboard relay device10is an ECU. The onboard relay device10will accordingly be referred to hereafter as the relay ECU10. The relay ECU10is configured to include a central processing unit (CPU; processor) (computer)10A, a read only memory (ROM)10B serving as a non-transitory recording medium, a random access memory (RAM)10C serving as a non-transitory recording medium, a storage10D serving as a non-transitory recording medium, a communication interface (I/F)10E, and an input/output I/F10F. The CPU10A, the ROM10B, the RAM10C, the storage10D, the communication I/F10E, and the input/output I/F10F are connected together through a bus10Z so as to be capable of communicating with each other. The relay ECU10is capable of acquiring time-related information from a timer (not illustrated in the drawings). Note that although not illustrated in the drawings, hardware configurations of the first ECU16A, the second ECU16B, the third ECU16C, and the fourth ECU16D are the same as that of the relay ECU10. A driving support switch26is provided to a dashboard (not illustrated in the drawings) of the vehicle12. The driving support switch26is connected to the relay ECU10. Various driving support control is executed when the driving support switch26has been switched on by an occupant of the vehicle12. The occupant is also able to cause the vehicle12to execute specific driving support control by operating the driving support switch26. When the driving support switch26has been switched off by the occupant, the relay ECU10does not execute driving support control. For example, the relay ECU10executes various driving support control by operating a steering wheel, a brake pedal, an accelerator pedal, and a lever of indicator lights (none of these are illustrated in the drawings). The driving support control for example includes adaptive cruise control (ACC), lane tracing assistance (LTA) control, and lane change assistance (LCA) control. The CPU10A is a central processing unit that executes various programs and controls various sections. Namely, the CPU10A reads a program from the ROM10B or the storage10D and executes the program using the RAM10C as a workspace. The CPU10A controls various configurations and performs various arithmetic processing according to the program recorded in the memory configured by the ROM10B or the storage10D. Namely, for example, the CPU10A performs various arithmetic processing and controls the steering wheel, brake pedal, accelerator pedal, and the lever of indicator lights in order to execute driving support control. The ROM10B holds various programs and various data. For example, an operating system (OS) is installed in the ROM10B. As illustrated inFIG.1, various applications AP1, AP2, AP3, and AP4are installed in the ROM10B. The applications AP1, AP2, AP3, and AP4are, for example, downloaded onto the ROM10B from an external server (not illustrated in the drawings) over the internet by the wireless communication device25after the vehicle12is manufactured. The RAM10C serves as a workspace that temporarily stores programs and data. 
The storage10D is configured by a storage device such as a hard disk drive (HDD) or a solid state drive (SSD), and holds various programs and various data. The communication I/F10E is an interface through which the relay ECU10communicates with other devices. The communication I/F10E is connected to the in-vehicle network14. The input/output I/F10F is an interface for communicating with various devices installed in the vehicle12. For example, the driving support switch26is connected to the input/output I/F10F of the relay ECU10. FIG.3is a block diagram illustrating an example of functional configuration of the relay ECU10. The functional configuration of the relay ECU10includes a transmission request section101, a transmission section102, a reception section103, a receiving section104, a data identification section105, a cache memory section106, a cache determination section107, and a vehicle determination section108. The transmission request section101, the transmission section102, the reception section103, the receiving section104, the data identification section105, the cache memory section106, the cache determination section107, and the vehicle determination section108are implemented by the CPU10A reading and executing a program stored in the ROM10B or the storage10D. During execution of at least one of the application AP1, AP2, AP3, or AP4, the transmission request section101is capable of generating a transmission request under the control of at least one of the application AP1, AP2, AP3, or AP4. The transmission request generated by the transmission request section101is sent to the receiving section104. The transmission request is a request to transmit a frame27(described later) generated by data generation sections17(described later) of the first ECU16A, the second ECU16B, the third ECU16C, and the fourth ECU16D, or a frame27recorded in the cache memory section106, to the application AP1, AP2, AP3, or AP4(RAM10C). The transmission section102transmits the transmission request generated by the transmission request section101and other information generated in the relay ECU10(for example information relating to driving support control) through at least one of the first bus14A, the second bus14B, the third bus14C, or the fourth bus14D. The transmission section102is also capable of transmitting the frame27, received by the reception section103through at least one of the first bus14A, the second bus14B, the third bus14C, or the fourth bus14D, through a different bus (for example the second bus14B) from the bus (for example the first bus14A) through which the frame27was sent, or to the applications AP1, AP2, AP3, and AP4(RAM10C). The reception section103is capable of receiving the frame27generated by at least one of the first ECU16A, the second ECU16B, the third ECU16C, or the fourth ECU16D and transmitted through at least one of the first bus14A, the second bus14B, the third bus14C, or the fourth bus14D. The receiving section104is capable of receiving the transmission request transmitted from at least one of the transmission request section101, the first ECU16A, the second ECU16B, the third ECU16C, or the fourth ECU16D. The cache memory section106is capable of recording a frame27that satisfies a predetermined condition described later. The cache memory section106is configured by volatile memory. 
Namely, when an ignition switch of the vehicle12is switched from off to on, the cache memory section106becomes capable of recording data, and when the ignition switch is switched off, the data recorded in the cache memory section106is erased. The cache determination section107is capable of determining whether or not transmission request data that is the frame27subject to the transmission request is recorded in the cache memory section106. The vehicle determination section108determines a “vehicle state” and a “vehicle peripheral environment” at the current time. The “vehicle state” includes, for example, a vehicle speed, a shift position, a driving support state, a battery status, an air conditioner status, a steering angle of the steering wheel, and an occupant state of the vehicle12. The “occupant state” includes, for example, a total number of occupants and a driving skill level of a driver. The “vehicle peripheral environment” includes, for example, the name of the country in which the vehicle12is traveling, the road category (general road or expressway), a road congestion level, an inter-vehicle distance between the vehicle12and another vehicle positioned directly ahead of the vehicle12, the weather, and the time period. FIG.4is a block diagram illustrating an example of functional configuration of the first ECU16A, the second ECU16B, the third ECU16C, and the fourth ECU16D. The respective functional configurations of the first ECU16A, the second ECU16B, the third ECU16C, and the fourth ECU16D includes the data generation section17, a transmission section18, and a reception section19. The data generation section17, the transmission section18, and the reception section19are functions which are realized by processing executed by the CPUs of the first ECU16A, the second ECU16B, the third ECU16C, and the fourth ECU16D. One or more applications installed in each of the first ECU16A, the second ECU16B, the third ECU16C, and the fourth ECU16D causes these CPUs to execute this processing. The ROM of the fourth ECU16D is recorded with a navigation system application and information relating to traffic laws and regulations of several countries (referred to hereafter as traffic code information). The traffic code information includes, for example, information relating to speed limits. Content relating to speed limits is differentiated by country and by road category (general road or expressway). The respective data generation sections17of the first ECU16A, the second ECU16B, the third ECU16C, and the fourth ECU16D are capable of generating messages using information received from at least one of the vehicle wheel speed sensor20, the cameras21, the shift position sensor22, the GPS receiver23, the display24, or the wireless communication device25. Each of the data generation sections17divides the generated message into plural segments of a predetermined size. The data generation section17then generates a frame27(seeFIG.5) including each of the divided segments so as to conform to an Ethernet protocol or the like. Each data generation section17is capable of generating various types of frame27(data). As illustrated inFIG.5, each frame27includes an ID, this being an identifier indicating the type of frame27. The frame27also includes information relating to the generation date and time of the frame27and information expressing the data content of the frame27. The ROM10B of the relay ECU10is recorded with a frame list28(seeFIG.6). 
The frame list28represents frame27IDs (frame types), content and attributes of the data (information) recorded in the respective frames27, and the entity that generated each of the frames27. As illustrated by the frame list28, a frame27generated by the data generation section17of the first ECU16A based on information transmitted from the vehicle wheel speed sensor20has an ID of100, and includes vehicle speed information. A frame27with an ID110is generated by the data generation section17of the second ECU16B based on image information transmitted from the cameras21, and includes image information (for example an image of an imaging subject peripheral to the vehicle12). A frame27with an ID120is generated by the data generation section17of the second ECU16B based on image information transmitted from the camera21(peripheral camera), and includes information relating to the inter-vehicle distance between the vehicle12and another vehicle (not illustrated in the drawings) positioned directly ahead of the vehicle12. A frame27with an ID130is generated by the data generation section17of the second ECU16B based on image information transmitted from the camera21(in-vehicle camera), and includes information relating to the number of occupants present in the vehicle12. A frame27with an ID140is generated by the data generation section17of the third ECU16C based on information transmitted from the shift position sensor22, and includes information relating to the shift position. A frame27with an ID150is generated by the data generation section17of the fourth ECU16D based on information transmitted from the GPS receiver23, and includes position information of the vehicle12. A frame27with an ID160is generated by the data generation section17of the fourth ECU16D based on information recorded in the ROM of the fourth ECU16D, and includes personal information of a driver. This personal information includes information such as information on “driver's name”, “driver's date of birth”, and “driver's driving skill level”. A frame27with an ID170is generated by the data generation section17of the fourth ECU16D based on information input to the fourth ECU16D via the wireless communication device25, and includes congestion information for the road on which the vehicle12is traveling. A frame27with an ID180is generated by the data generation section17of the fourth ECU16D based on traffic code information recorded in the ROM of the fourth ECU16D, and includes information relating to a road speed limit. The frame list28lists “vehicle speed information”, “image information”, “shift position information”, “position information”, “inter-vehicle distance information”, “number of occupant information”, “congestion information”, and “speed limit information” as variable data, and lists “(driver) personal information” as fixed data. Variable data is data expressing a variable value having content that changes as time changes. Fixed data is data expressing a fixed value not changing in content as time changes. Note that data which has content that may change as time changes is treated as fixed data as long as this data is intentionally input by an occupant (human). For example, data input by the occupant using the display (touch panel)24or the smartphone40and having content that may change as time changes is fixed data. Such fixed data may include the driving skill level of the occupant, the address of the occupant, the body weight of the occupant, the telephone number of the occupant, and the email address of the occupant. 
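The frame 27 structure of FIG. 5 and the frame list 28 of FIG. 6 can be pictured roughly as in the following sketch. This is only an illustration; the class, constant, and helper names (Frame, FRAME_LIST, is_fixed_data, and so on) are assumptions chosen for readability and are not taken from the present disclosure.

# A minimal sketch (assumed names) of the frame structure of FIG. 5 and the
# frame list 28 of FIG. 6, mapping each frame ID to its data content, its
# attribute (fixed or variable), and the ECU that generates it.
from dataclasses import dataclass
from datetime import datetime

FIXED = "fixed"        # content does not change as time changes
VARIABLE = "variable"  # content changes as time changes

@dataclass
class Frame:
    frame_id: int           # identifier indicating the type of frame (100, 110, ...)
    generated_at: datetime   # generation date and time of the frame
    payload: object          # data content of the frame (one divided segment of a message)

# Frame list 28: frame ID -> (data content, attribute, generating entity)
FRAME_LIST = {
    100: ("vehicle speed information",          VARIABLE, "first ECU 16A"),
    110: ("image information",                  VARIABLE, "second ECU 16B"),
    120: ("inter-vehicle distance information", VARIABLE, "second ECU 16B"),
    130: ("number of occupants information",    VARIABLE, "second ECU 16B"),
    140: ("shift position information",         VARIABLE, "third ECU 16C"),
    150: ("position information",               VARIABLE, "fourth ECU 16D"),
    160: ("driver personal information",        FIXED,    "fourth ECU 16D"),
    170: ("congestion information",             VARIABLE, "fourth ECU 16D"),
    180: ("speed limit information",            VARIABLE, "fourth ECU 16D"),
}

def is_fixed_data(frame_id: int) -> bool:
    """Return True if the frame list marks this frame type as fixed data."""
    _, attribute, _ = FRAME_LIST[frame_id]
    return attribute == FIXED

In such a table-driven form, the data identification section 105 can determine the content, attribute, and generating entity of a requested frame simply by looking up the ID contained in the transmission request.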
When the receiving section104of the relay ECU10has received a transmission request, the data identification section105of the relay ECU10identifies the data content, the data attribute, and the generating entity of the frame27subject to the transmission request based on the frame list28and on the ID information (of the frame27) included in the transmission request. The transmission section18of the first ECU16A transmits the frame27with the ID100through the first bus14A at a predetermined cycle. The transmission section18of the second ECU16B transmits the frames27with the IDs110,120, and130through the second bus14B when the second ECU16B receives image data from the cameras21. The transmission section18of the third ECU16C transmits the frame27with the ID140through the third bus14C whenever a signal has been received from the shift position sensor22. The transmission section18of the fourth ECU16D transmits the frames27with the IDs150,160,170, and180through the fourth bus14D at a predetermined cycle. The respective transmission sections18of the first ECU16A, the second ECU16B, the third ECU16C, and the fourth ECU16D are capable of generating a transmission request, and are capable of transmitting the transmission request to the receiving section104of the relay ECU10. A transmission request generated by the transmission section18is a request for the receiving section104of the relay ECU10, and is made by one ECU (for example the first ECU16A). Further, a transmission request requests the receiving section104to forward a frame27generated by another ECU (for example the second ECU16B) to the one ECU (for example the first ECU16A). Each of the reception sections19receives information flowing through the bus connected thereto. For example, the reception section19of the first ECU16A receives signals from the vehicle wheel speed sensor20, and communication content (such as transmission requests, and frames27generated by other ECUs) from the relay ECU10. Next, explanation follows regarding a flow of processing performed by the relay ECU10of the present exemplary embodiment, with reference to the flowchart illustrated inFIG.9. The relay ECU10executes the processing in the flowchart ofFIG.9each time a predetermined duration elapses. First, at step S10, the relay ECU10determines whether or not the receiving section104has received a transmission request from a request originator, namely at least one of the applications AP1, AP2, AP3, AP4, the first ECU16A, the second ECU16B, the third ECU16C, or the fourth ECU16D. This transmission request includes ID information of the frame27for which transmission is being requested. In cases in which determination is affirmative at step S10, the relay ECU10proceeds to step S11. On proceeding to step S11, the data identification section105of the relay ECU10determines whether or not the frame27subject to the transmission request (data subject to the transmission request) is fixed data, based on the ID information included in the transmission request and the frame list28illustrated inFIG.6. In cases in which determination is affirmative at step S11, the relay ECU10proceeds to step S12. 
On proceeding to step S12, the cache determination section107of the relay ECU10refers to the ID information included in the transmission request and the ID information of the data (frames) recorded in the cache memory section106in order to determine whether or not the frame27subject to the transmission request (data subject to the transmission request) is recorded in the cache memory section106. In cases in which determination is affirmative at step S12, the relay ECU10proceeds to step S13. On proceeding to step S13, the transmission section102of the relay ECU10transmits the frame27subject to the transmission request (transmission request data) and recorded in the cache memory section106to the request originator. On the other hand, in cases in which determination is negative at step S11, the relay ECU10proceeds to step S14, and the vehicle determination section108determines whether or not a predetermined specific condition relating to target data is satisfied, based on the target data configuring the transmission request data that is variable data, and on at least one of the vehicle state or vehicle peripheral environment. At this time, for example, the vehicle determination section108identifies a vehicle state based on the vehicle speed, the shift position, the driving support state, and an occupant state of the vehicle12. The vehicle determination section108also identifies the vehicle peripheral environment based on, for example, the name of the country in which the vehicle12is traveling, the road category, the road congestion level, the inter-vehicle distance between the vehicle12and another vehicle positioned directly ahead of the vehicle12, the weather, and the time period. The vehicle determination section108also refers to a two-dimensional map32recorded in the ROM10B and illustrated inFIG.8to determine whether or not the specific condition is satisfied. As can be seen from the two-dimensional map32, the decision as to whether or not the specific condition is satisfied is made using a combination of the target data and at least one of the vehicle state or the vehicle peripheral environment. For example, in a case in which the vehicle12is traveling on an expressway, a specific condition is satisfied for target data A, this being predetermined target data. On the other hand, in a case in which the vehicle12is traveling on an expressway, a specific condition may not be satisfied for target data B, this also being predetermined target data. In cases in which the specific condition is satisfied, the frame27(target data) that is variable data has a utility value irrespective of the length of elapsed duration, namely a duration from a generation time of the frame27to the current time. In other words, in such cases, it is considered that the data content of the frame27that is variable data is substantively unchanged during this elapsed duration. Namely, in such cases, the data content of the frame27that is variable data either does not change at all, or only changes within a permissible range for processing by the request originator. On the other hand, in cases in which the specific condition is not satisfied, the frame27(target data) that is variable data with a long elapsed duration does not have a utility value. In other words, in such cases, it is considered that the data content of the frame27that is variable data substantively changes during this elapsed duration. 
Namely, in such cases, the data content of the frame27that is variable data has changed beyond the permissible range for processing by the request originator. In cases in which the specific condition is satisfied, the relay ECU10makes affirmative determination at step S14and proceeds to step S12. On the other hand, in cases in which determination is negative at step S14, the relay ECU10proceeds to step S15. On proceeding to step S15, the data identification section105of the relay ECU10refers to a one-dimensional map30recorded in the ROM10B and illustrated inFIG.7. Thresholds relating to the elapsed duration, namely the duration from the generation time of the frame27to the current time, are set in the one-dimensional map30for each frame27type (ID). The data identification section105compares the elapsed duration of the frame27(transmission request data) that is the variable data subject to the transmission request against the threshold, based on the one-dimensional map30and the time when the reception section103received a frame27of the same type after the ignition switch was switched from off to on and before the current time (this time is referred to hereafter as the prior reception time). Note that the prior reception time and the generation time of the frame27received by the reception section103at the prior reception time are substantially the same. In cases in which the data identification section105determines the elapsed duration of the frame27to be the threshold or lower (step S15: YES), the relay ECU10proceeds to step S12. Note that in cases in which the reception section103has not received a frame27of the same type after the ignition switch has been switched from off to on, the data identification section105makes negative determination at step S15. In cases in which the cache determination section107makes affirmative determination at step S12, at step S13, the transmission section102transmits the frame27(transmission request data) that is recorded in the cache memory section106and subject to the transmission request to the request originator. On the other hand, in cases in which determination is negative at step S15or cases in which determination is negative at step S12, the relay ECU10proceeds to step S16. On proceeding to step S16, the data identification section105of the relay ECU10identifies a transmission originator which is at least one of the first ECU16A, the second ECU16B, the third ECU16C, or the fourth ECU16D and is capable of generating the frame27(transmission request data) that is variable data subject to the transmission request, based on the frame list28. The data identification section105then issues a request to the thus identified transmission originator to transmit this frame27to the reception section103of the relay ECU10. On completion of the processing of step S16, the relay ECU10proceeds to step S17. At step S17, the reception section103of the relay ECU10determines whether or not the frame27(transmission request data) subject to the transmission request has been received from the transmission originator. In cases in which determination is affirmative at step S17, the relay ECU10proceeds to step S18. On proceeding to step S18, the transmission section102of the relay ECU10transmits the frame27(received data) received from the reception section103to the request originator. On the other hand, in cases in which determination is negative at step S17, the relay ECU10repeats the processing of step S17. 
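The request-handling flow of FIG. 9 (steps S10 to S18) described above can be summarized in a short sketch. The function and parameter names below (handle_transmission_request, specific_condition_satisfied, prior_reception_time, and so on) are assumptions for illustration only; the one-dimensional map 30 and the two-dimensional map 32 are reduced to simple lookups, and the is_fixed_data helper and frame list are the ones sketched earlier. This is not an implementation of the present disclosure, merely one way to read the flowchart.

# A condensed sketch (assumed names) of the flow of FIG. 9, steps S10 to S18.
# `cache` stands for the cache memory section 106 (frame ID -> Frame),
# `thresholds` for the one-dimensional map 30 (frame ID -> permitted elapsed
# duration as a timedelta), and `specific_condition_satisfied` for the lookup
# in the two-dimensional map 32 based on vehicle state / peripheral environment.
from datetime import datetime

def handle_transmission_request(frame_id, cache, thresholds,
                                specific_condition_satisfied,
                                prior_reception_time, request_from_ecu,
                                send_to_requestor):
    # S11: is the requested frame type fixed data?
    use_cache_path = is_fixed_data(frame_id)

    if not use_cache_path:
        # S14: variable data -> does the specific condition for this target data hold?
        if specific_condition_satisfied(frame_id):
            use_cache_path = True
        else:
            # S15: otherwise the cached copy is usable only while the elapsed
            # duration stays at or below the per-frame-type threshold.
            last_seen = prior_reception_time.get(frame_id)
            if last_seen is not None:
                elapsed = datetime.now() - last_seen
                use_cache_path = elapsed <= thresholds[frame_id]

    if use_cache_path and frame_id in cache:
        # S12 affirmative -> S13: answer the request from the cache memory section.
        send_to_requestor(cache[frame_id])
        return

    # S16 to S18: otherwise ask the generating ECU for a fresh frame and forward it.
    fresh_frame = request_from_ecu(frame_id)  # waits until the frame arrives (S17)
    send_to_requestor(fresh_frame)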
On completion of the processing of step S13or step S18, or in cases in which determination is negative at step S10, the relay ECU10ends the current round of processing of the flowchart. Furthermore, the relay ECU10executes the processing of the flowchart illustrated inFIG.10each time a predetermined duration elapses. At step S20, the relay ECU10determines whether or not the reception section103has received any data (frame27) from the transmission originator. In cases in which determination is affirmative at step S20, the relay ECU10proceeds to step S21. On proceeding to step S21, the data identification section105of the relay ECU10refers to the frame list28to determine whether or not the received frame27is fixed data. In cases in which determination is affirmative at step S21, the relay ECU10proceeds to step S22. At step S22, the relay ECU10determines whether or not this frame27(fixed data) has already been recorded in the cache memory section106. In cases in which determination is negative at step S22, the relay ECU10proceeds to step S23. At step S23, the cache memory section106of the relay ECU10records this frame27(fixed data). On the other hand, in cases in which determination is negative at step S21, the relay ECU10proceeds to step S24. The processing of step S24is the same as that of step S14. In cases in which determination is negative at step S24, the relay ECU10proceeds to step S25. On proceeding to step S25, the data identification section105of the relay ECU10performs the same processing as that of step S15. In cases in which the data identification section105determines the elapsed duration of the frame27to be the threshold or lower (step S25: YES), the relay ECU10proceeds to step S23. On proceeding to step S23, the cache memory section106of the relay ECU10records this frame27that is variable data. Note that in cases in which a frame27(variable data) having the same ID has already been recorded in the cache memory section106, the content relating to this frame27recorded in the cache memory section106is updated. On the other hand, in cases in which determination is affirmative at step S24, the relay ECU10proceeds to step S22. On completion of the processing of step S23, in cases in which determination is negative at step S20or step S25, or in cases in which determination is affirmative at step S22, the relay ECU10ends the current round of processing of the flowchart. Operation and Advantageous Effects Next, explanation follows regarding operation and advantageous effects of the present exemplary embodiment in various different cases. Case 1A A case is envisaged in which the transmission request section101issues a transmission request to the receiving section104under the control of the application AP1as the request originator, and the transmission request includes ID information for the frame27with the ID140and the frame27with the ID130. In this case, at step S14inFIG.9, the vehicle determination section108of the relay ECU10determines whether or not specific conditions relating to the frame27with the ID140and the frame27with the ID130that are the target data are satisfied, based on the two-dimensional map32. In this example, a case is envisaged in which the vehicle12is traveling on a general road. When an application of the navigation system is being executed, the vehicle determination section108identifies that the vehicle12is traveling on a general road based on the frame27with the ID150received from the fourth ECU16D. 
In such a case, the vehicle determination section108determines the specific condition not to be satisfied based on the two-dimensional map32. Namely, the vehicle determination section108makes negative determination at step S14. In this case, there is a possibility that the position of the shift lever of the vehicle12changes within a short duration. Moreover, there is a possibility that at least one occupant of the vehicle12leaves the vehicle12at a particular location on the general road. Namely, in this case, there is a possibility that the data content of the frame27with the ID140which expresses information relating to the shift position, and that of the frame27with the ID130which expresses information relating to the number of occupants, change. Next, at step S15inFIG.9, the data identification section105of the relay ECU10determines whether or not the elapsed duration of the frame27with the ID140is a threshold E or lower. The data identification section105also determines whether or not the elapsed duration of the frame27with the ID130is a threshold D or lower. In cases in which determination is negative at step S15, these frames27are sent from their transmission originators, namely the second ECU16B and the third ECU16C, to the relay ECU10(step S16, step S17: YES), and the frames27are then transmitted to the application AP1that is the request originator by the transmission section102(step S18). Note that in the present specification, explanation such as “a frame is transmitted to an application that is the request originator by the transmission section” or the like is understood as meaning “the transmission section transmits a frame to RAM and notifies the application of this, and the application then accesses the RAM on receipt of the notification”. After an affirmative determination at step S17, the relay ECU10makes affirmative determination at step S20. The relay ECU10then makes negative determinations at steps S21, S24, and S25, such that these frames27that correspond to variable data are not recorded in the cache memory section106. In this case 1A, the freshly generated frames27that are variable data are transmitted from the second ECU16B and the third ECU16C to the application AP1that is the request originator via the relay ECU10. This enables the application AP1to execute processing using frames27storing data that has a utility value. Case 1B A case is envisaged in which the transmission request section101issues a transmission request to the receiving section104at a predetermined first timing under the control of the application AP1as the request originator, and this transmission request includes ID information for the frame27with the ID140and the frame27with the ID130. A case is envisaged in which the vehicle12is traveling at a high speed (for example 80 km/h or greater) on an uncongested expressway. Moreover, a case is envisaged in which the transmission request section101has issued a transmission request to the receiving section104at a second timing earlier than the first timing under the control of the application AP2as the request originator, and this transmission request also included ID information for the frame27with the ID140and the frame27with the ID130. Furthermore, the frame27with the ID140transmitted from the third ECU16C to the relay ECU10is envisaged to have been transmitted to the application AP2and also recorded in the cache memory section106. 
Moreover, it is assumed that the frame27with the ID130which was transmitted from the second ECU16B to the relay ECU10is transmitted to the application AP2and this frame27is recorded in the cache memory section106. In this case, at step S14inFIG.9, the vehicle determination section108of the relay ECU10determines whether or not the specific conditions relating to the frame27with the ID140and the frame27with the ID130that are the target data are satisfied based on the two-dimensional map32. When an application of the navigation system is being executed, the vehicle determination section108identifies that the vehicle12is traveling at a high speed on an uncongested expressway based on the frame27with the ID100received from the first ECU16A and the frame27with the ID150and the frame27with the ID170received from the fourth ECU16D. In such a case, the vehicle determination section108determines the specific conditions to be satisfied based on the two-dimensional map32. Namely, the vehicle determination section108makes affirmative determination at step S14. In this case, there is a high possibility that the position of the shift lever of the vehicle12does not move from the D range. Moreover, there is a high possibility that no occupant leaves the vehicle12. Namely, in this case, there is a low possibility that the data content of the frame27with the ID140which expresses information relating to the shift position and the data content of the frame27with the ID130which expresses information relating to the number of occupants change. Next, at step S12inFIG.9, the data identification section105of the relay ECU10determines whether or not these frames27have been recorded in the cache memory section106. In this case, since these frames27have been recorded in the cache memory section106(step S12: YES), at step S13the transmission section102transmits the frame27with the ID140and the frame27with the ID130recorded in the cache memory section106to the application AP1that is the request originator. In case 1B, since the application AP1that is the request originator acquires the frame27with the ID140and the frame27with the ID130from the cache memory section106, there is no need for communication between the second ECU16B and the third ECU16C and the application AP1. Accordingly, an increase in the communication traffic volume over the in-vehicle network14is suppressed. Moreover, considering the vehicle state and the vehicle peripheral environment, the frames27(variable data) recorded in the cache memory section106are data that has a utility value for the application AP1. The application AP1is thus able to execute processing using frames27recorded with data that has a utility value. Case 2A A case is envisaged in which the transmission request section101issues a transmission request to the receiving section104under the control of the application AP2as the request originator, and the transmission request includes ID information for the frame27with the ID170. In this case, at step S14inFIG.9, the vehicle determination section108of the relay ECU10determines whether or not the specific condition relating to the frame27with the ID170that is the target data is satisfied, based on the two-dimensional map32. In this example, a case is envisaged in which the vehicle determination section108identifies the current time period to be daytime based on information relating to the current time acquired from the timer. 
In such a case, the vehicle determination section108determines the specific condition not to be satisfied based on the two-dimensional map32. Namely, the vehicle determination section108makes negative determination at step S14. In this case, there is a possibility that the congestion level of the road on which the vehicle12is traveling changes by a large degree that exceeds a permissible range for processing by the request originator. Namely, in this case, there is a possibility that the data content of the frame27with the ID170which expresses congestion information changes. Next, at step S15inFIG.9, the data identification section105of the relay ECU10determines whether or not the elapsed duration of the frame27with the ID170is a threshold G or lower. In cases in which determination is negative at step S15, the frame27that is variable data is sent from the transmission section18of the fourth ECU16D that is the transmission originator to the reception section103(step S16, step S17: YES), and this frame27is then transmitted from the transmission section102to the application AP2that is the request originator (step S18). Moreover, after affirmative determination has been made at step S17, the relay ECU10makes affirmative determination at step S20. The relay ECU10then makes negative determinations at steps S21, S24, and S25, such that this frame27that is variable data is not recorded in the cache memory section106. In this case 2A, the freshly generated frame27that is variable data is transmitted from the fourth ECU16D to the application AP2that is the request originator. This enables the application AP2to execute processing using the frame27recorded with data that has a utility value. Case 2B A case is envisaged in which the transmission request section101issues a transmission request to the receiving section104at a predetermined first timing under the control of the application AP2as the request originator, and the transmission request includes ID information for the frame27with the ID170. In this case, at step S14inFIG.9, the vehicle determination section108of the relay ECU10determines whether or not the specific condition relating to the frame27with the ID170that is the target data is satisfied, based on the two-dimensional map32. In this example, a case is envisaged in which the vehicle determination section108identifies the current time period to be nighttime based on the information relating to the current time acquired from the timer. In this case, the vehicle determination section108determines the specific condition to be satisfied based on the two-dimensional map32. Namely, the vehicle determination section108makes affirmative determination at step S14. In this case, it is inferred that the congestion level of the road on which the vehicle12is traveling is highly unlikely to change by a large degree that exceeds the permissible range for processing by the request originator. Namely, it is inferred that the road will remain in an uncongested state. Namely, in this case, there is a low possibility that the data content of the frame27with the ID170which expresses information relating to congestion changes. Next, at step S12inFIG.9, the data identification section105of the relay ECU10determines whether or not this frame27is recorded in the cache memory section106. In this example, a case is envisaged in which this frame27is not recorded in the cache memory section106(step S12: NO). 
In this case, this frame27is sent from the fourth ECU16D that is the transmission originator to the reception section103of the relay ECU10(step S16, step S17: YES), and this frame27is then transmitted to the application AP2that is the request originator by the transmission section102(step S18). At a third timing later than the first timing, the relay ECU10makes affirmative determination at step S20, makes negative determination at step S21, makes affirmative determination at step S24, and makes negative determination at step S22. Accordingly, at step S23, this frame27that is variable data is recorded in the cache memory section106. In this case 2B, the application AP2that is the request originator acquires the frame27with the ID170from the fourth ECU16D. This enables the application AP2to execute processing using the frame27recorded with variable data that is freshly generated and has a utility value. Case 3A A case is envisaged in which the transmission request section101issues a transmission request to the receiving section104under the control of the application AP3as the request originator, and the transmission request includes ID information for the frame27with the ID120. In this case, at step S14inFIG.9, the vehicle determination section108of the relay ECU10determines whether or not the specific condition relating to the frame27with the ID120that is the target data is satisfied, based on the two-dimensional map32. In this example, a case is envisaged in which the vehicle determination section108determines that driving support control is not being performed based on a signal transmitted from the driving support switch26. In other words, a case is envisaged in which the vehicle determination section108determines the vehicle12to be in a normal travel mode. In this case, the vehicle determination section108determines the specific condition not to be satisfied based on the two-dimensional map32. Namely, the vehicle determination section108makes negative determination at step S14. In this case, there is a possibility that the inter-vehicle distance between the vehicle12and another vehicle positioned directly ahead of the vehicle12changes by a large degree that exceeds a permissible range for processing by the request originator. Namely, in this case, there is a possibility that the data content of the frame27with the ID120which expresses information regarding the inter-vehicle distance changes. Next, at step S15inFIG.9, the data identification section105of the relay ECU10determines whether or not the elapsed duration of the frame27with the ID120is a threshold C or lower. In this example, a case is envisaged in which determination is affirmative at step S15and determination is negative at step S12. In such a case, the frame27that is variable data is sent from the second ECU16B that is the transmission originator to the relay ECU10(step S16, step S17: YES), and this frame27is then transmitted from the transmission section102to the application AP3that is the request originator (step S18). Moreover, after making affirmative determination at step S17, the relay ECU10makes affirmative determination at step S20. The relay ECU10then makes negative determination at steps S21and S24, and makes affirmative determination at step S25. Accordingly, at step S23, this frame27that is variable data is recorded in the cache memory section106. In this case 3A, the freshly generated frame27that is variable data is transmitted from the second ECU16B to the application AP3that is the request originator. 
This enables the application AP3to execute processing using the frame27recorded with data that has a utility value. Case 3B A case is envisaged in which the transmission request section101issues a transmission request to the receiving section104at a predetermined first timing under the control of the application AP3as the request originator, and the transmission request includes ID information for the frame27with the ID120. Moreover, a case is envisaged in which the transmission request section101has issued a transmission request to the receiving section104at a second timing earlier than the first timing under the control of the application AP4as the request originator, and this transmission request also included ID information for the frame27with the ID120. Furthermore, it is assumed that the frame27with the ID120which was transmitted from the second ECU16B to the relay ECU10is transmitted from the transmission section102to the application AP4and this frame27is recorded in the cache memory section106. In this case, at step S14inFIG.9, the vehicle determination section108of the relay ECU10determines whether or not the specific condition relating to the frame27with the ID120that is the target data is satisfied, based on the two-dimensional map32. In this example, a case is envisaged in which the vehicle determination section108identifies that adaptive cruise control (ACC), this being one form of driving support control, is being executed based on a signal transmitted from the driving support switch26. In this case, the vehicle determination section108determines the specific condition to be satisfied based on the two-dimensional map32. Namely, the vehicle determination section108makes affirmative determination at step S14. In this case, since a substantially constant inter-vehicle distance is maintained between the vehicle12and the other vehicle, it is highly unlikely that this inter-vehicle distance will change to an extent that exceeds the permissible range for processing by the request originator. Namely, in this case, there is a low possibility that the data content of the frame27with the ID120which expresses information relating to the inter-vehicle distance changes. Next, at step S12inFIG.9, the data identification section105of the relay ECU10determines whether or not this frame27is recorded in the cache memory section106. In this case, since the frame27is recorded in the cache memory section106(step S12: YES), at step S13, the transmission section102transmits the frame27with the ID120recorded in the cache memory section106to the application AP3that is the request originator. In this case 3B, the application AP3that is the request originator acquires the frame27with the ID120from the cache memory section106. Accordingly, an increase in the communication traffic volume over the in-vehicle network14is suppressed. Moreover, considering the vehicle state and the vehicle peripheral environment, the frame27(variable data) recorded in the cache memory section106is data that has a utility value to the application AP3. The application AP3is thus able to execute processing using the frame27that is recorded with data that has a utility value. Case 4 The application AP3compares inter-vehicle distance information obtained from the frame27with the ID120against a predetermined inter-vehicle distance threshold, and displays a warning on the display24if the inter-vehicle distance is an inter-vehicle distance threshold or lower. 
Moreover, plural inter-vehicle distance thresholds are provided according to the driving skill levels of the drivers. The application AP3selects the inter-vehicle distance threshold corresponding to the driving skill level, and compares the selected inter-vehicle distance threshold against the inter-vehicle distance. A case is envisaged in which the transmission request section101issues a transmission request to the receiving section104at a predetermined first timing under the control of the application AP3as the request originator, and this transmission request includes ID information for the frame27with the ID160. Moreover, a case is envisaged in which the transmission request section101has issued a transmission request to the receiving section104at a second timing earlier than the first timing under the control of the application AP4as the request originator, and the transmission request also included ID information for the frame27with the ID160. Moreover, it is assumed that the frame27with the ID160which was transmitted from the fourth ECU16D to the reception section103is transmitted from the transmission section102to the application AP4and this frame27is recorded in the cache memory section106. Note that for example after the ignition switch has been switched from off to on, the in-vehicle camera of the camera21may capture the face of the driver each time a predetermined duration elapses, and the second ECU16B may transmit the captured data to the relay ECU10each time a predetermined duration elapses. In such cases, for example, the relay ECU10identifies the driver based on the captured data received from the second ECU16B at the second timing. Also at the second timing, the relay ECU10issues a transmission request for the frame27with the ID160which corresponds to the identified driver to the fourth ECU16D, and causes the received frame27with the ID160to be recorded in the cache memory section106. Typically, the driver of the vehicle12does not change between a time at which the ignition switch of the vehicle12is switched from off to on and a time at which the ignition switch is switched from on to off. Accordingly, as illustrated inFIG.6, the frame27with the ID160is fixed data. In such a case, the data identification section105of the relay ECU10makes affirmative determination at step S11and step S12inFIG.9at the first timing. Then, at step S13, the transmission section102transmits the frame27with the ID160recorded in the cache memory section106to the application AP3that is the request originator. In this case 4, the application AP3that is the request originator acquires the frame27with the ID160from the cache memory section106. Accordingly, an increase in the communication traffic volume over the in-vehicle network14is suppressed. The corresponding frame27recorded in the cache memory section106is fixed data having content that does not change as time changes. The application AP3is thus able to execute processing using the frame27that stores data having a utility value. Case 5A A case is envisaged in which the transmission request section101issues a transmission request to the receiving section104under the control of the application AP4as the request originator, and the transmission request includes ID information for the frame27with the ID180. 
In this case, at step S14inFIG.9, the vehicle determination section108of the relay ECU10determines whether or not the specific condition relating to the frame27with the ID180that is the target data is satisfied, based on the two-dimensional map32. In this example, a case is envisaged in which the vehicle12is traveling on a road in a country that has a land border with another country and includes both general roads and expressways (this country is referred to hereafter as a specific country). When an application of the navigation system is being executed, the vehicle determination section108identifies that the vehicle12is traveling on a road in the specific country based on the frame27with the ID150received from the fourth ECU16D. In this case, the vehicle determination section108determines the specific condition not to be satisfied based on the two-dimensional map32. Namely, the vehicle determination section108makes negative determination at step S14. In this case, there is a possibility that the vehicle12moves between a general road and an expressway in the specific country, or that the vehicle12travels on a road in another country that borders the specific country. Namely, in this case, there is a possibility that the data content of the frame27with the ID180which expresses information relating to the speed limit changes. Next, at step S15inFIG.9, the data identification section105of the relay ECU10determines whether or not the elapsed duration of the frame27with the ID180is a threshold H or lower. In cases in which determination is negative at step S15, the frame27that is variable data is sent from the fourth ECU16D that is the transmission originator to the reception section103(step S16, step S17: YES), and this frame27is then transmitted from the transmission section102to the application AP4that is the request originator (step S18). Moreover, after affirmative determination has been made at step S17, the relay ECU10makes affirmative determination at step S20. The relay ECU10then makes negative determination at steps S21, S24, and S25, such that the frame27that is variable data is not recorded in the cache memory section106. In this case 5A, the frame27that is freshly generated variable data is transmitted from the fourth ECU16D to the application AP4that is the request originator via the relay ECU10. The application AP4is thus able to execute processing using the frame27that is recorded with data having a utility value. Case 5B A case is envisaged in which the transmission request section101issues a transmission request to the receiving section104at a predetermined first timing under the control of the application AP4as the request originator, and the transmission request includes ID information for the frame27with the ID180. Moreover, a case is envisaged in which the transmission request section101has issued a transmission request to the receiving section104at a second timing earlier than the first timing under the control of the application AP1as the request originator, and this transmission request also included ID information for the frame27with the ID180. Moreover, it is assumed that the frame27with the ID180which was transmitted from the fourth ECU16D to the reception section103is transmitted from the transmission section102to the application AP1and this frame27is recorded in the cache memory section106. 
In this case, at step S14inFIG.9, the vehicle determination section108of the relay ECU10determines whether or not the specific condition relating to the frame27with the ID180that is the target data is satisfied, based on the two-dimensional map32. In this example, the vehicle12is envisaged to be traveling on a general road on an island surrounded by sea in a particular country. There are no expressways on this island. When an application of the navigation system is being executed, the vehicle determination section108identifies that the vehicle12is traveling on the general road on the island based on the frame27with the ID150received from the fourth ECU16D. In such a case, the vehicle determination section108determines the specific condition to be satisfied based on the two-dimensional map32. Namely, the vehicle determination section108makes affirmative determination at step S14. In this case, there is no possibility of the vehicle12moving to a road outside of the island or of the vehicle12traveling on an expressway. Namely, in this case, there is no possibility of the data content of the frame27with the ID180which expresses information relating to the speed limit changing. Next, at step S12inFIG.9, the data identification section105of the relay ECU10determines whether or not this frame27has been recorded in the cache memory section106. In this case, since the frame27has been recorded in the cache memory section106(step S12: YES), at step S13, the transmission section102transmits the frame27with the ID180recorded in the cache memory section106to the application AP4that is the request originator. In this case 5B, the application AP4that is the request originator acquires the frame27with the ID180from the cache memory section106. Accordingly, an increase in the communication traffic volume over the in-vehicle network14is suppressed. Moreover, considering the vehicle state and the vehicle peripheral environment, the frame27(variable data) recorded in the cache memory section106is data that has a utility value to the application AP4. The application AP4is thus able to execute processing using the frame27that stores data having a utility value. As described above based on the respective cases, the cache memory section106of the present exemplary embodiment temporarily records transmission request data that is fixed data. The cache memory section106also temporarily records target data (transmission request data that is variable data) when the specific condition is satisfied. Accordingly, the request originator does not acquire, from the cache memory section106, variable data that has a long elapsed duration and no utility value. This enables the chance of the request originator acquiring data that does not have a utility value from the cache memory section106to be eliminated. Moreover, in cases in which the specific condition is satisfied, the transmission section102transmits the target data recorded in the cache memory section106to the request originator as transmission request data. In cases in which the specific condition is satisfied, the target data has a utility value, irrespective of the length of elapsed duration. This enables the chance of the request originator acquiring data that does not have a utility value from the cache memory section106to be eliminated. 
In cases in which the specific condition is not satisfied, the cache memory section106temporarily records the corresponding frame27(target data) that is variable data when the data identification section105has determined the elapsed duration to be the corresponding threshold or lower. This target data has a utility value due to the short elapsed duration. This enables the chance of the request originator acquiring data that does not have a utility value from the cache memory section to be eliminated. Moreover, in cases in which the data identification section105determines that the elapsed duration of the transmission request data has exceeded the corresponding threshold, the transmission section102transmits transmission request data that has been transmitted from the transmission originator to the reception section103to the request originator. This enables the request originator to receive transmission request data that is variable data that has a short elapsed duration and still has a utility value. In cases in which the transmission request data is any out of fixed data, variable data that satisfies the specific condition, or variable data that does not satisfy the specific condition and has an elapsed duration of the corresponding threshold or lower, the fixed data or variable data recorded in the cache memory section106is transmitted to the request originator. Each request originator may, for example, issue transmission requests several dozen times or more per second. Accordingly, were all the transmission request data to be transmitted to the relay ECU10by the first ECU16A, the second ECU16B, the third ECU16C, and the fourth ECU16D, the communication traffic volume over the in-vehicle network14would become very high. However, by transmitting fixed data and variable data recorded in the cache memory section106to the request originator as in the present exemplary embodiment, such an increase in the communication traffic volume over the in-vehicle network14is suppressed. Although explanation has been given regarding the relay ECU10, the onboard relay method, and non-transitory recording medium according to the present exemplary embodiment, appropriate design modifications may be made to the relay ECU10, the onboard relay method, and non-transitory recording medium within a range that does not depart from the spirit of the present disclosure. The number of ECUs connected to the relay ECU10and applications is not limited as long as there is one or more of each. The present disclosure may be configured such that information is received from satellites of a global navigation satellite system other than GPS (for example, Galileo) to acquire position information of the vehicle12. The above-described combinations of at least one of the vehicle state or the vehicle peripheral environment combined with target data that satisfy the specific condition are merely examples, and specific conditions may be satisfied when other combinations are made. For example, a specific condition may be satisfied for predetermined target data when driving support control other than ACC is being executed. Moreover, whether or not such specific conditions are satisfied may be decided based on combinations of a vehicle state and target data. Similarly, whether or not such specific conditions are satisfied may be decided based on combinations of the vehicle peripheral environment and target data. The fixed data is not limited to the data described above. 
For example, the fixed data may include information regarding part numbers of vehicle components. The variable data is not limited to the data described above. For example, the variable data may include information on steering angle of a steering wheel, operation state of an engine (revolution speed, cooling water temperature, and the like), actuation state of a wiper, tire pressure, door open/closed state, control parameter employed in driving support control, and currency. The cache memory section106may be configured by non-volatile memory.
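The caching policy described in the cases above can be summarized in a short sketch. The following Python fragment is illustrative only, a minimal sketch rather than the claimed implementation: the class name, the fetch callback, and the predicate arguments are hypothetical stand-ins for the transmission request section101, the transmission section102, the reception section103, the data identification section105, the cache memory section106, and the vehicle determination section108.

import time

class RelayCache:
    # Illustrative sketch only; names and structure are hypothetical, not the
    # claimed implementation of sections 101 to 108.
    def __init__(self, thresholds):
        self.cache = {}               # frame ID -> (frame, time of recording)
        self.thresholds = thresholds  # frame ID -> elapsed-duration threshold in seconds

    def handle_request(self, frame_id, is_fixed, condition_satisfied, fetch_from_originator):
        entry = self.cache.get(frame_id)
        if entry is not None:
            frame, recorded_at = entry
            elapsed = time.time() - recorded_at
            # Fixed data, and variable data for which the specific condition is
            # satisfied, are served from the cache regardless of elapsed duration.
            if is_fixed or condition_satisfied:
                return frame
            # Otherwise cached variable data is used only while its elapsed
            # duration is at or below the corresponding threshold.
            if elapsed <= self.thresholds.get(frame_id, 0.0):
                return frame
        # No usable cached copy: fetch fresh data from the transmission
        # originator, forward it to the request originator, and record it in
        # the cache when the recording conditions are met.
        frame = fetch_from_originator(frame_id)
        if is_fixed or condition_satisfied:
            self.cache[frame_id] = (frame, time.time())
        return frame

In this simplification, serving cacheable requests locally stands in for the suppression of communication traffic volume over the in-vehicle network14described above.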
11860780
DESCRIPTION OF EMBODIMENTS Example methods, apparatus, and products for storage cache management in accordance with embodiments of the present disclosure are described with reference to the accompanying drawings, beginning withFIG.1A.FIG.1Aillustrates an example system for data storage, in accordance with some implementations. System100(also referred to as “storage system” herein) includes numerous elements for purposes of illustration rather than limitation. It may be noted that system100may include the same, more, or fewer elements configured in the same or different manner in other implementations. System100includes a number of computing devices164A-B. Computing devices (also referred to as “client devices” herein) may be embodied, for example, a server in a data center, a workstation, a personal computer, a notebook, or the like. Computing devices164A-B may be coupled for data communications to one or more storage arrays102A-B through a storage area network (‘SAN’)158or a local area network (‘LAN’)160. The SAN158may be implemented with a variety of data communications fabrics, devices, and protocols. For example, the fabrics for SAN158may include Fibre Channel, Ethernet, Infiniband, Serial Attached Small Computer System Interface (‘SAS’), or the like. Data communications protocols for use with SAN158may include Advanced Technology Attachment (‘ATA’), Fibre Channel Protocol, Small Computer System Interface (‘SCSI’), Internet Small Computer System Interface (‘iSCSI’), HyperSCSI, Non-Volatile Memory Express (‘NVMe’) over Fabrics, or the like. It may be noted that SAN158is provided for illustration, rather than limitation. Other data communication couplings may be implemented between computing devices164A-B and storage arrays102A-B. The LAN160may also be implemented with a variety of fabrics, devices, and protocols. For example, the fabrics for LAN160may include Ethernet (802.3), wireless (802.11), or the like. Data communication protocols for use in LAN160may include Transmission Control Protocol (‘TCP’), User Datagram Protocol (‘UDP’), Internet Protocol (‘IP’), HyperText Transfer Protocol (‘HTTP’), Wireless Access Protocol (‘WAP’), Handheld Device Transport Protocol (‘HDTP’), Session Initiation Protocol (‘SIP’), Real Time Protocol (‘RTP’), or the like. The LAN160may also connect to the Internet162. Storage arrays102A-B may provide persistent data storage for the computing devices164A-B. Storage array102A may be contained in a chassis (not shown), and storage array102B may be contained in another chassis (not shown), in implementations. Storage array102A and102B may include one or more storage array controllers110A-D (also referred to as “controller” herein). A storage array controller110A-D may be embodied as a module of automated computing machinery comprising computer hardware, computer software, or a combination of computer hardware and software. In some implementations, the storage array controllers110A-D may be configured to carry out various storage tasks. Storage tasks may include writing data received from the computing devices164A-B to storage array102A-B, erasing data from storage array102A-B, retrieving data from storage array102A-B and providing data to computing devices164A-B, monitoring and reporting of disk utilization and performance, performing redundancy operations, such as Redundant Array of Independent Drives (‘RAID’) or RAID-like data redundancy operations, compressing data, encrypting data, and so forth. 
Storage array controller110A-D may be implemented in a variety of ways, including as a Field Programmable Gate Array (‘FPGA’), a Programmable Logic Chip (‘PLC’), an Application Specific Integrated Circuit (‘ASIC’), System-on-Chip (‘SOC’), or any computing device that includes discrete components such as a processing device, central processing unit, computer memory, or various adapters. Storage array controller110A-D may include, for example, a data communications adapter configured to support communications via the SAN158or LAN160. In some implementations, storage array controller110A-D may be independently coupled to the LAN160. In implementations, storage array controller110A-D may include an I/O controller or the like that couples the storage array controller110A-D for data communications, through a midplane (not shown), to a persistent storage resource170A-B (also referred to as a “storage resource” herein). The persistent storage resource170A-B may include any number of storage drives171A-F (also referred to as “storage devices” herein) and any number of non-volatile Random Access Memory (‘NVRAM’) devices (not shown). In some implementations, the NVRAM devices of a persistent storage resource170A-B may be configured to receive, from the storage array controller110A-D, data to be stored in the storage drives171A-F. In some examples, the data may originate from computing devices164A-B. In some examples, writing data to the NVRAM device may be carried out more quickly than directly writing data to the storage drive171A-F. In implementations, the storage array controller110A-D may be configured to utilize the NVRAM devices as a quickly accessible buffer for data destined to be written to the storage drives171A-F. Latency for write requests using NVRAM devices as a buffer may be improved relative to a system in which a storage array controller110A-D writes data directly to the storage drives171A-F. In some implementations, the NVRAM devices may be implemented with computer memory in the form of high bandwidth, low latency RAM. The NVRAM device is referred to as “non-volatile” because the NVRAM device may receive or include a unique power source that maintains the state of the RAM after main power loss to the NVRAM device. Such a power source may be a battery, one or more capacitors, or the like. In response to a power loss, the NVRAM device may be configured to write the contents of the RAM to a persistent storage, such as the storage drives171A-F. In implementations, storage drive171A-F may refer to any device configured to record data persistently, where “persistently” or “persistent” refers to a device's ability to maintain recorded data after loss of power. In some implementations, storage drive171A-F may correspond to non-disk storage media. For example, the storage drive171A-F may be one or more solid-state drives (‘SSDs’), flash memory based storage, any type of solid-state non-volatile memory, or any other type of non-mechanical storage device. In other implementations, storage drive171A-F may include mechanical or spinning hard disk, such as hard-disk drives (‘HDD’). In some implementations, the storage array controllers110A-D may be configured for offloading device management responsibilities from storage drive171A-F in storage array102A-B. For example, storage array controllers110A-D may manage control information that may describe the state of one or more memory blocks in the storage drives171A-F.
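The use of the NVRAM devices as a fast write buffer, as described above, can be sketched roughly as follows. This Python fragment is a simplified illustration only; the class, its flush policy, and the backing-store interface are assumptions made for clarity rather than the controller's actual design.

from collections import deque

class NvramWriteBuffer:
    # Simplified illustration of staging writes in NVRAM before destaging them
    # to the storage drives; names and policies are hypothetical.
    def __init__(self, backing_store, capacity_bytes):
        self.backing_store = backing_store
        self.capacity_bytes = capacity_bytes
        self.used_bytes = 0
        self.pending = deque()   # (address, data) entries staged in NVRAM

    def write(self, address, data):
        # Acknowledge the write as soon as it is staged in NVRAM, which is
        # typically much faster than writing directly to the storage drives.
        if self.used_bytes + len(data) > self.capacity_bytes:
            self.flush()
        self.pending.append((address, data))
        self.used_bytes += len(data)
        return "ack"

    def flush(self):
        # Destage buffered data to the persistent storage drives; a similar
        # flush would also run on loss of main power while the NVRAM's own
        # power source holds up.
        while self.pending:
            address, data = self.pending.popleft()
            self.backing_store.write(address, data)
        self.used_bytes = 0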
The control information may indicate, for example, that a particular memory block has failed and should no longer be written to, that a particular memory block contains boot code for a storage array controller110A-D, the number of program-erase (‘P/E’) cycles that have been performed on a particular memory block, the age of data stored in a particular memory block, the type of data that is stored in a particular memory block, and so forth. In some implementations, the control information may be stored with an associated memory block as metadata. In other implementations, the control information for the storage drives171A-F may be stored in one or more particular memory blocks of the storage drives171A-F that are selected by the storage array controller110A-D. The selected memory blocks may be tagged with an identifier indicating that the selected memory block contains control information. The identifier may be utilized by the storage array controllers110A-D in conjunction with storage drives171A-F to quickly identify the memory blocks that contain control information. For example, the storage controllers110A-D may issue a command to locate memory blocks that contain control information. It may be noted that control information may be so large that parts of the control information may be stored in multiple locations, that the control information may be stored in multiple locations for purposes of redundancy, for example, or that the control information may otherwise be distributed across multiple memory blocks in the storage drive171A-F. In implementations, storage array controllers110A-D may offload device management responsibilities from storage drives171A-F of storage array102A-B by retrieving, from the storage drives171A-F, control information describing the state of one or more memory blocks in the storage drives171A-F. Retrieving the control information from the storage drives171A-F may be carried out, for example, by the storage array controller110A-D querying the storage drives171A-F for the location of control information for a particular storage drive171A-F. The storage drives171A-F may be configured to execute instructions that enable the storage drive171A-F to identify the location of the control information. The instructions may be executed by a controller (not shown) associated with or otherwise located on the storage drive171A-F and may cause the storage drive171A-F to scan a portion of each memory block to identify the memory blocks that store control information for the storage drives171A-F. The storage drives171A-F may respond by sending a response message to the storage array controller110A-D that includes the location of control information for the storage drive171A-F. Responsive to receiving the response message, storage array controllers110A-D may issue a request to read data stored at the address associated with the location of control information for the storage drives171A-F. In other implementations, the storage array controllers110A-D may further offload device management responsibilities from storage drives171A-F by performing, in response to receiving the control information, a storage drive management operation. A storage drive management operation may include, for example, an operation that is typically performed by the storage drive171A-F (e.g., the controller (not shown) associated with a particular storage drive171A-F).
A storage drive management operation may include, for example, ensuring that data is not written to failed memory blocks within the storage drive171A-F, ensuring that data is written to memory blocks within the storage drive171A-F in such a way that adequate wear leveling is achieved, and so forth. In implementations, storage array102A-B may implement two or more storage array controllers110A-D. For example, storage array102A may include storage array controllers110A and storage array controllers110B. At a given instance, a single storage array controller110A-D (e.g., storage array controller110A) of a storage system100may be designated with primary status (also referred to as “primary controller” herein), and other storage array controllers110A-D (e.g., storage array controller110B) may be designated with secondary status (also referred to as “secondary controller” herein). The primary controller may have particular rights, such as permission to alter data in persistent storage resource170A-B (e.g., writing data to persistent storage resource170A-B). At least some of the rights of the primary controller may supersede the rights of the secondary controller. For instance, the secondary controller may not have permission to alter data in persistent storage resource170A-B when the primary controller has the right. The status of storage array controllers110A-D may change. For example, storage array controller110A may be designated with secondary status, and storage array controller110B may be designated with primary status. In some implementations, a primary controller, such as storage array controller110A, may serve as the primary controller for one or more storage arrays102A-B, and a second controller, such as storage array controller110B, may serve as the secondary controller for the one or more storage arrays102A-B. For example, storage array controller110A may be the primary controller for storage array102A and storage array102B, and storage array controller110B may be the secondary controller for storage array102A and102B. In some implementations, storage array controllers110C and110D (also referred to as “storage processing modules”) may neither have primary or secondary status. Storage array controllers110C and110D, implemented as storage processing modules, may act as a communication interface between the primary and secondary controllers (e.g., storage array controllers110A and110B, respectively) and storage array102B. For example, storage array controller110A of storage array102A may send a write request, via SAN158, to storage array102B. The write request may be received by both storage array controllers110C and110D of storage array102B. Storage array controllers110C and110D facilitate the communication, e.g., send the write request to the appropriate storage drive171A-F. It may be noted that in some implementations storage processing modules may be used to increase the number of storage drives controlled by the primary and secondary controllers. In implementations, storage array controllers110A-D are communicatively coupled, via a midplane (not shown), to one or more storage drives171A-F and to one or more NVRAM devices (not shown) that are included as part of a storage array102A-B. The storage array controllers110A-D may be coupled to the midplane via one or more data communication links and the midplane may be coupled to the storage drives171A-F and the NVRAM devices via one or more data communications links. 
The data communications links described herein are collectively illustrated by data communications links108A-D and may include a Peripheral Component Interconnect Express (‘PCIe’) bus, for example. FIG.1Billustrates an example system for data storage, in accordance with some implementations. Storage array controller101illustrated inFIG.1Bmay be similar to the storage array controllers110A-D described with respect toFIG.1A. In one example, storage array controller101may be similar to storage array controller110A or storage array controller110B. Storage array controller101includes numerous elements for purposes of illustration rather than limitation. It may be noted that storage array controller101may include the same, more, or fewer elements configured in the same or different manner in other implementations. It may be noted that elements ofFIG.1Amay be included below to help illustrate features of storage array controller101. Storage array controller101may include one or more processing devices104and random access memory (‘RAM’)111. Processing device104(or controller101) represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device104(or controller101) may be a complex instruction set computing (‘CISC’) microprocessor, reduced instruction set computing (‘RISC’) microprocessor, very long instruction word (‘VLIW’) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processing device104(or controller101) may also be one or more special-purpose processing devices such as an ASIC, an FPGA, a digital signal processor (‘DSP’), network processor, or the like. The processing device104may be connected to the RAM111via a data communications link106, which may be embodied as a high speed memory bus such as a Double-Data Rate4(‘DDR4’) bus. Stored in RAM111is an operating system112. In some implementations, instructions113are stored in RAM111. Instructions113may include computer program instructions for performing operations in a direct-mapped flash storage system. In one embodiment, a direct-mapped flash storage system is one that addresses data blocks within flash drives directly and without an address translation performed by the storage controllers of the flash drives. In implementations, storage array controller101includes one or more host bus adapters103A-C that are coupled to the processing device104via a data communications link105A-C. In implementations, host bus adapters103A-C may be computer hardware that connects a host system (e.g., the storage array controller) to other network and storage arrays. In some examples, host bus adapters103A-C may be a Fibre Channel adapter that enables the storage array controller101to connect to a SAN, an Ethernet adapter that enables the storage array controller101to connect to a LAN, or the like. Host bus adapters103A-C may be coupled to the processing device104via a data communications link105A-C such as, for example, a PCIe bus. In implementations, storage array controller101may include a host bus adapter114that is coupled to an expander115. The expander115may be used to attach a host system to a larger number of storage drives. The expander115may, for example, be a SAS expander utilized to enable the host bus adapter114to attach to storage drives in an implementation where the host bus adapter114is embodied as a SAS controller. 
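A rough illustration of the direct-mapped addressing mentioned above is the following purely arithmetic translation from a logical block address to a physical location, with no drive-internal translation table; the geometry parameters and the layout are assumptions chosen only for illustration, not the system's actual address scheme.

def direct_map(lba, pages_per_erase_block, erase_blocks_per_drive):
    # Translate a logical block address straight to (drive, erase block, page)
    # by arithmetic, standing in for addressing erase blocks directly rather
    # than going through a flash translation layer inside each drive.
    pages_per_drive = pages_per_erase_block * erase_blocks_per_drive
    drive = lba // pages_per_drive
    erase_block = (lba % pages_per_drive) // pages_per_erase_block
    page = lba % pages_per_erase_block
    return drive, erase_block, page

print(direct_map(100_000, pages_per_erase_block=256, erase_blocks_per_drive=1024))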
In implementations, storage array controller101may include a switch116coupled to the processing device104via a data communications link109. The switch116may be a computer hardware device that can create multiple endpoints out of a single endpoint, thereby enabling multiple devices to share a single endpoint. The switch116may, for example, be a PCIe switch that is coupled to a PCIe bus (e.g., data communications link109) and presents multiple PCIe connection points to the midplane. In implementations, storage array controller101includes a data communications link107for coupling the storage array controller101to other storage array controllers. In some examples, data communications link107may be a QuickPath Interconnect (QPI) interconnect. A traditional storage system that uses traditional flash drives may implement a process across the flash drives that are part of the traditional storage system. For example, a higher level process of the storage system may initiate and control a process across the flash drives. However, a flash drive of the traditional storage system may include its own storage controller that also performs the process. Thus, for the traditional storage system, a higher level process (e.g., initiated by the storage system) and a lower level process (e.g., initiated by a storage controller of the storage system) may both be performed. To resolve various deficiencies of a traditional storage system, operations may be performed by higher level processes and not by the lower level processes. For example, the flash storage system may include flash drives that do not include storage controllers that provide the process. Thus, the operating system of the flash storage system itself may initiate and control the process. This may be accomplished by a direct-mapped flash storage system that addresses data blocks within the flash drives directly and without an address translation performed by the storage controllers of the flash drives. In implementations, storage drive171A-F may be one or more zoned storage devices. In some implementations, the one or more zoned storage devices may be a shingled HDD. In implementations, the one or more storage devices may be a flash-based SSD. In a zoned storage device, a zoned namespace on the zoned storage device can be addressed by groups of blocks that are grouped and aligned by a natural size, forming a number of addressable zones. In implementations utilizing an SSD, the natural size may be based on the erase block size of the SSD. In some implementations, the zones of the zoned storage device may be defined during initialization of the zoned storage device. In implementations, the zones may be defined dynamically as data is written to the zoned storage device. In some implementations, zones may be heterogeneous, with some zones each being a page group and other zones being multiple page groups. In implementations, some zones may correspond to an erase block and other zones may correspond to multiple erase blocks. In an implementation, zones may be any combination of differing numbers of pages in page groups and/or erase blocks, for heterogeneous mixes of programming modes, manufacturers, product types and/or product generations of storage devices, as applied to heterogeneous assemblies, upgrades, distributed storages, etc. In some implementations, zones may be defined as having usage characteristics, such as a property of supporting data with particular kinds of longevity (very short lived or very long lived, for example). 
These properties could be used by a zoned storage device to determine how the zone will be managed over the zone's expected lifetime. It should be appreciated that a zone is a virtual construct. Any particular zone may not have a fixed location at a storage device. Until allocated, a zone may not have any location at a storage device. A zone may correspond to a number representing a chunk of virtually allocatable space that is the size of an erase block or other block size in various implementations. When the system allocates or opens a zone, zones get allocated to flash or other solid-state storage memory and, as the system writes to the zone, pages are written to that mapped flash or other solid-state storage memory of the zoned storage device. When the system closes the zone, the associated erase block(s) or other sized block(s) are completed. At some point in the future, the system may delete a zone which will free up the zone's allocated space. During its lifetime, a zone may be moved around to different locations of the zoned storage device, e.g., as the zoned storage device does internal maintenance. In implementations, the zones of the zoned storage device may be in different states. A zone may be in an empty state in which data has not been stored at the zone. An empty zone may be opened explicitly, or implicitly by writing data to the zone. This is the initial state for zones on a fresh zoned storage device, but may also be the result of a zone reset. In some implementations, an empty zone may have a designated location within the flash memory of the zoned storage device. In an implementation, the location of the empty zone may be chosen when the zone is first opened or first written to (or later if writes are buffered into memory). A zone may be in an open state either implicitly or explicitly, where a zone that is in an open state may be written to store data with write or append commands. In an implementation, a zone that is in an open state may also be written to using a copy command that copies data from a different zone. In some implementations, a zoned storage device may have a limit on the number of open zones at a particular time. A zone in a closed state is a zone that has been partially written to, but has entered a closed state after issuing an explicit close operation. A zone in a closed state may be left available for future writes, but may reduce some of the run-time overhead consumed by keeping the zone in an open state. In implementations, a zoned storage device may have a limit on the number of closed zones at a particular time. A zone in a full state is a zone that is storing data and can no longer be written to. A zone may be in a full state either after writes have written data to the entirety of the zone or as a result of a zone finish operation. Prior to a finish operation, a zone may or may not have been completely written. After a finish operation, however, the zone may not be opened and written to further without first performing a zone reset operation. The mapping from a zone to an erase block (or to a shingled track in an HDD) may be arbitrary, dynamic, and hidden from view. The process of opening a zone may be an operation that allows a new zone to be dynamically mapped to underlying storage of the zoned storage device, and then allows data to be written through appending writes into the zone until the zone reaches capacity. The zone can be finished at any point, after which further data may not be written into the zone.
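A minimal sketch of the zone lifecycle described above, together with the reset operation discussed next, might look like the following. The class and its simple counters are assumptions made for illustration; real zoned devices enforce additional limits, such as a maximum number of open zones, and map zones to erase blocks or shingled tracks internally.

from enum import Enum

class ZoneState(Enum):
    EMPTY = "empty"
    OPEN = "open"
    CLOSED = "closed"
    FULL = "full"

class Zone:
    # Hypothetical simplification of the empty/open/closed/full lifecycle.
    def __init__(self, capacity):
        self.state = ZoneState.EMPTY
        self.capacity = capacity
        self.written = 0

    def write(self, nbytes):
        if self.state == ZoneState.FULL:
            raise RuntimeError("zone must be reset before it can be written again")
        self.state = ZoneState.OPEN        # writing implicitly opens an empty or closed zone
        self.written += nbytes
        if self.written >= self.capacity:
            self.state = ZoneState.FULL    # written to the entirety of the zone

    def close(self):
        if self.state == ZoneState.OPEN:
            self.state = ZoneState.CLOSED  # releases open-zone overhead, still writable later

    def finish(self):
        self.state = ZoneState.FULL        # no further writes until a reset

    def reset(self):
        self.state = ZoneState.EMPTY       # deletes the zone's content, frees its space
        self.written = 0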
When the data stored at the zone is no longer needed, the zone can be reset which effectively deletes the zone's content from the zoned storage device, making the physical storage held by that zone available for the subsequent storage of data. Once a zone has been written and finished, the zoned storage device ensures that the data stored at the zone is not lost until the zone is reset. In the time between writing the data to the zone and the resetting of the zone, the zone may be moved around between shingle tracks or erase blocks as part of maintenance operations within the zoned storage device, such as by copying data to keep the data refreshed or to handle memory cell aging in an SSD. In implementations utilizing an HDD, the resetting of the zone may allow the shingle tracks to be allocated to a new, opened zone that may be opened at some point in the future. In implementations utilizing an SSD, the resetting of the zone may cause the associated physical erase block(s) of the zone to be erased and subsequently reused for the storage of data. In some implementations, the zoned storage device may have a limit on the number of open zones at a point in time to reduce the amount of overhead dedicated to keeping zones open. The operating system of the flash storage system may identify and maintain a list of allocation units across multiple flash drives of the flash storage system. The allocation units may be entire erase blocks or multiple erase blocks. The operating system may maintain a map or address range that directly maps addresses to erase blocks of the flash drives of the flash storage system. Direct mapping to the erase blocks of the flash drives may be used to rewrite data and erase data. For example, the operations may be performed on one or more allocation units that include a first data and a second data where the first data is to be retained and the second data is no longer being used by the flash storage system. The operating system may initiate the process to write the first data to new locations within other allocation units, erase the second data, and mark the allocation units as being available for use for subsequent data. Thus, the process may only be performed by the higher level operating system of the flash storage system without an additional lower level process being performed by controllers of the flash drives. Advantages of the process being performed only by the operating system of the flash storage system include increased reliability of the flash drives of the flash storage system as unnecessary or redundant write operations are not being performed during the process. One possible point of novelty here is the concept of initiating and controlling the process at the operating system of the flash storage system. In addition, the process can be controlled by the operating system across multiple flash drives. This is in contrast to the process being performed by a storage controller of a flash drive.
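The higher-level relocation just described can be sketched as follows. The data structures and the function are hypothetical simplifications of the operating-system-level process, not the storage system's actual interfaces.

from dataclasses import dataclass, field

@dataclass
class Block:
    data: bytes
    live: bool

@dataclass
class AllocationUnit:
    blocks: list = field(default_factory=list)

def reclaim(unit, free_pool):
    # Sketch of the operating-system-level relocation described above.
    new_unit = AllocationUnit()
    for block in unit.blocks:
        if block.live:                 # first data: still in use, rewrite to a new location
            new_unit.blocks.append(block)
        # blocks that are no longer used (second data) are simply not copied
    unit.blocks.clear()                # erase the old unit ...
    free_pool.append(unit)             # ... and mark it available for subsequent data
    return new_unit

Because the relocation runs once, at the operating-system level, no drive-internal controller repeats the copying, which is the reliability advantage noted above.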
A storage system can consist of two storage array controllers that share a set of drives for failover purposes, or it could consist of a single storage array controller that provides a storage service that utilizes multiple drives, or it could consist of a distributed network of storage array controllers each with some number of drives or some amount of Flash storage where the storage array controllers in the network collaborate to provide a complete storage service and collaborate on various aspects of a storage service including storage allocation and garbage collection. FIG.1Cillustrates a third example system117for data storage in accordance with some implementations. System117(also referred to as “storage system” herein) includes numerous elements for purposes of illustration rather than limitation. It may be noted that system117may include the same, more, or fewer elements configured in the same or different manner in other implementations. In one embodiment, system117includes a dual Peripheral Component Interconnect (‘PCI’) flash storage device118with separately addressable fast write storage. System117may include a storage device controller119. In one embodiment, storage device controller119A-D may be a CPU, ASIC, FPGA, or any other circuitry that may implement control structures necessary according to the present disclosure. In one embodiment, system117includes flash memory devices (e.g., including flash memory devices120a-n), operatively coupled to various channels of the storage device controller119. Flash memory devices120a-n, may be presented to the controller119A-D as an addressable collection of Flash pages, erase blocks, and/or control elements sufficient to allow the storage device controller119A-D to program and retrieve various aspects of the Flash. In one embodiment, storage device controller119A-D may perform operations on flash memory devices120a-nincluding storing and retrieving data content of pages, arranging and erasing any blocks, tracking statistics related to the use and reuse of Flash memory pages, erase blocks, and cells, tracking and predicting error codes and faults within the Flash memory, controlling voltage levels associated with programming and retrieving contents of Flash cells, etc. In one embodiment, system117may include RAM121to store separately addressable fast-write data. In one embodiment, RAM121may be one or more separate discrete devices. In another embodiment, RAM121may be integrated into storage device controller119A-D or multiple storage device controllers. The RAM121may be utilized for other purposes as well, such as temporary program memory for a processing device (e.g., a CPU) in the storage device controller119. In one embodiment, system117may include a stored energy device122, such as a rechargeable battery or a capacitor. Stored energy device122may store energy sufficient to power the storage device controller119, some amount of the RAM (e.g., RAM121), and some amount of Flash memory (e.g., Flash memory120a-120n) for sufficient time to write the contents of RAM to Flash memory. In one embodiment, storage device controller119A-D may write the contents of RAM to Flash Memory if the storage device controller detects loss of external power. In one embodiment, system117includes two data communications links123a,123b. In one embodiment, data communications links123a,123bmay be PCI interfaces. In another embodiment, data communications links123a,123bmay be based on other communications standards (e.g., HyperTransport, InfiniBand, etc.). 
Data communications links123a,123bmay be based on non-volatile memory express (‘NVMe’) or NVMe over fabrics (‘NVMf’) specifications that allow external connection to the storage device controller119A-D from other components in the storage system117. It should be noted that data communications links may be interchangeably referred to herein as PCI buses for convenience. System117may also include an external power source (not shown), which may be provided over one or both data communications links123a,123b, or which may be provided separately. An alternative embodiment includes a separate Flash memory (not shown) dedicated for use in storing the content of RAM121. The storage device controller119A-D may present a logical device over a PCI bus which may include an addressable fast-write logical device, or a distinct part of the logical address space of the storage device118, which may be presented as PCI memory or as persistent storage. In one embodiment, operations to store into the device are directed into the RAM121. On power failure, the storage device controller119A-D may write stored content associated with the addressable fast-write logical storage to Flash memory (e.g., Flash memory120a-n) for long-term persistent storage. In one embodiment, the logical device may include some presentation of some or all of the content of the Flash memory devices120a-n, where that presentation allows a storage system including a storage device118(e.g., storage system117) to directly address Flash memory pages and directly reprogram erase blocks from storage system components that are external to the storage device through the PCI bus. The presentation may also allow one or more of the external components to control and retrieve other aspects of the Flash memory including some or all of: tracking statistics related to use and reuse of Flash memory pages, erase blocks, and cells across all the Flash memory devices; tracking and predicting error codes and faults within and across the Flash memory devices; controlling voltage levels associated with programming and retrieving contents of Flash cells; etc. In one embodiment, the stored energy device122may be sufficient to ensure completion of in-progress operations to the Flash memory devices120a-120n. The stored energy device122may power storage device controller119A-D and associated Flash memory devices (e.g.,120a-n) for those operations, as well as for the storing of fast-write RAM to Flash memory. Stored energy device122may be used to store accumulated statistics and other parameters kept and tracked by the Flash memory devices120a-nand/or the storage device controller119. Separate capacitors or stored energy devices (such as smaller capacitors near or embedded within the Flash memory devices themselves) may be used for some or all of the operations described herein. Various schemes may be used to track and optimize the life span of the stored energy component, such as adjusting voltage levels over time, partially discharging the stored energy device122to measure corresponding discharge characteristics, etc. If the available energy decreases over time, the effective available capacity of the addressable fast-write storage may be decreased to ensure that it can be written safely based on the currently available stored energy. FIG.1Dillustrates a third example storage system124for data storage in accordance with some implementations. In one embodiment, storage system124includes storage controllers125a,125b. 
In one embodiment, storage controllers125a,125bare operatively coupled to Dual PCI storage devices. Storage controllers125a,125bmay be operatively coupled (e.g., via a storage network130) to some number of host computers127a-n. In one embodiment, two storage controllers (e.g.,125aand125b) provide storage services, such as a SCSI block storage array, a file server, an object server, a database or data analytics service, etc. The storage controllers125a,125bmay provide services through some number of network interfaces (e.g.,126a-d) to host computers127a-noutside of the storage system124. Storage controllers125a,125bmay provide integrated services or an application entirely within the storage system124, forming a converged storage and compute system. The storage controllers125a,125bmay utilize the fast write memory within or across storage devices119a-dto journal in progress operations to ensure the operations are not lost on a power failure, storage controller removal, storage controller or storage system shutdown, or some fault of one or more software or hardware components within the storage system124. In one embodiment, storage controllers125a,125boperate as PCI masters to one or the other PCI buses128a,128b. In another embodiment,128aand128bmay be based on other communications standards (e.g., HyperTransport, InfiniBand, etc.). Other storage system embodiments may operate storage controllers125a,125bas multi-masters for both PCI buses128a,128b. Alternately, a PCI/NVMe/NVMf switching infrastructure or fabric may connect multiple storage controllers. Some storage system embodiments may allow storage devices to communicate with each other directly rather than communicating only with storage controllers. In one embodiment, a storage device controller119amay be operable under direction from a storage controller125ato synthesize and transfer data to be stored into Flash memory devices from data that has been stored in RAM (e.g., RAM121ofFIG.1C). For example, a recalculated version of RAM content may be transferred after a storage controller has determined that an operation has fully committed across the storage system, or when fast-write memory on the device has reached a certain used capacity, or after a certain amount of time, to improve safety of the data or to release addressable fast-write capacity for reuse. This mechanism may be used, for example, to avoid a second transfer over a bus (e.g.,128a,128b) from the storage controllers125a,125b. In one embodiment, a recalculation may include compressing data, attaching indexing or other metadata, combining multiple data segments together, performing erasure code calculations, etc. In one embodiment, under direction from a storage controller125a,125b, a storage device controller119a,119bmay be operable to calculate and transfer data to other storage devices from data stored in RAM (e.g., RAM121ofFIG.1C) without involvement of the storage controllers125a,125b. This operation may be used to mirror data stored in one storage controller125ato another storage controller125b, or it could be used to offload compression, data aggregation, and/or erasure coding calculations and transfers to storage devices to reduce load on storage controllers or the storage controller interface129a,129bto the PCI bus128a,128b. A storage device controller119A-D may include mechanisms for implementing high availability primitives for use by other parts of a storage system external to the Dual PCI storage device118.
For example, reservation or exclusion primitives may be provided so that, in a storage system with two storage controllers providing a highly available storage service, one storage controller may prevent the other storage controller from accessing or continuing to access the storage device. This could be used, for example, in cases where one controller detects that the other controller is not functioning properly or where the interconnect between the two storage controllers may itself not be functioning properly. In one embodiment, a storage system for use with Dual PCI direct mapped storage devices with separately addressable fast write storage includes systems that manage erase blocks or groups of erase blocks as allocation units for storing data on behalf of the storage service, or for storing metadata (e.g., indexes, logs, etc.) associated with the storage service, or for proper management of the storage system itself. Flash pages, which may be a few kilobytes in size, may be written as data arrives or as the storage system is to persist data for long intervals of time (e.g., above a defined threshold of time). To commit data more quickly, or to reduce the number of writes to the Flash memory devices, the storage controllers may first write data into the separately addressable fast write storage on one or more storage devices. In one embodiment, the storage controllers125a,125bmay initiate the use of erase blocks within and across storage devices (e.g.,118) in accordance with an age and expected remaining lifespan of the storage devices, or based on other statistics. The storage controllers125a,125bmay initiate garbage collection and data migration between storage devices in accordance with pages that are no longer needed as well as to manage Flash page and erase block lifespans and to manage overall system performance. In one embodiment, the storage system124may utilize mirroring and/or erasure coding schemes as part of storing data into addressable fast write storage and/or as part of writing data into allocation units associated with erase blocks. Erasure codes may be used across storage devices, as well as within erase blocks or allocation units, or within and across Flash memory devices on a single storage device, to provide redundancy against single or multiple storage device failures or to protect against internal corruptions of Flash memory pages resulting from Flash memory operations or from degradation of Flash memory cells. Mirroring and erasure coding at various levels may be used to recover from multiple types of failures that occur separately or in combination. The embodiments depicted with reference toFIGS.2A-Gillustrate a storage cluster that stores user data, such as user data originating from one or more user or client systems or other sources external to the storage cluster. The storage cluster distributes user data across storage nodes housed within a chassis, or across multiple chassis, using erasure coding and redundant copies of metadata. Erasure coding refers to a method of data protection or reconstruction in which data is stored across a set of different locations, such as disks, storage nodes or geographic locations. Flash memory is one type of solid-state memory that may be integrated with the embodiments, although the embodiments may be extended to other types of solid-state memory or other storage medium, including non-solid state memory. 
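As a simplified illustration of the erasure coding referred to above, the following single-parity example reconstructs one missing data shard from the survivors. Production systems typically use stronger codes, such as Reed-Solomon codes, that tolerate multiple failures, so this is a sketch of the idea rather than the scheme used by the embodiments.

from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(shards):
    # Single-parity example: the parity shard is the XOR of all data shards.
    return reduce(xor_bytes, shards)

def reconstruct(surviving_shards, parity):
    # Any one missing data shard is the XOR of the parity with the survivors.
    return reduce(xor_bytes, surviving_shards, parity)

data = [b"\x01\x02", b"\x0f\x0f", b"\xaa\x55"]
parity = encode(data)
assert reconstruct([data[0], data[2]], parity) == data[1]

Spreading the data and parity shards across different storage nodes or geographic locations is what allows the stored data to survive the loss of any one of those locations.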
Control of storage locations and workloads are distributed across the storage locations in a clustered peer-to-peer system. Tasks such as mediating communications between the various storage nodes, detecting when a storage node has become unavailable, and balancing I/Os (inputs and outputs) across the various storage nodes, are all handled on a distributed basis. Data is laid out or distributed across multiple storage nodes in data fragments or stripes that support data recovery in some embodiments. Ownership of data can be reassigned within a cluster, independent of input and output patterns. This architecture described in more detail below allows a storage node in the cluster to fail, with the system remaining operational, since the data can be reconstructed from other storage nodes and thus remain available for input and output operations. In various embodiments, a storage node may be referred to as a cluster node, a blade, or a server. The storage cluster may be contained within a chassis, i.e., an enclosure housing one or more storage nodes. A mechanism to provide power to each storage node, such as a power distribution bus, and a communication mechanism, such as a communication bus that enables communication between the storage nodes are included within the chassis. The storage cluster can run as an independent system in one location according to some embodiments. In one embodiment, a chassis contains at least two instances of both the power distribution and the communication bus which may be enabled or disabled independently. The internal communication bus may be an Ethernet bus, however, other technologies such as PCIe, InfiniBand, and others, are equally suitable. The chassis provides a port for an external communication bus for enabling communication between multiple chassis, directly or through a switch, and with client systems. The external communication may use a technology such as Ethernet, InfiniBand, Fibre Channel, etc. In some embodiments, the external communication bus uses different communication bus technologies for inter-chassis and client communication. If a switch is deployed within or between chassis, the switch may act as a translation between multiple protocols or technologies. When multiple chassis are connected to define a storage cluster, the storage cluster may be accessed by a client using either proprietary interfaces or standard interfaces such as network file system (‘NFS’), common internet file system (‘CIFS’), small computer system interface (‘SCSI’) or hypertext transfer protocol (‘HTTP’). Translation from the client protocol may occur at the switch, chassis external communication bus or within each storage node. In some embodiments, multiple chassis may be coupled or connected to each other through an aggregator switch. A portion and/or all of the coupled or connected chassis may be designated as a storage cluster. As discussed above, each chassis can have multiple blades, each blade has a media access control (‘MAC’) address, but the storage cluster is presented to an external network as having a single cluster IP address and a single MAC address in some embodiments. Each storage node may be one or more storage servers and each storage server is connected to one or more non-volatile solid state memory units, which may be referred to as storage units or storage devices. 
One embodiment includes a single storage server in each storage node and between one to eight non-volatile solid state memory units, however this one example is not meant to be limiting. The storage server may include a processor, DRAM and interfaces for the internal communication bus and power distribution for each of the power buses. Inside the storage node, the interfaces and storage unit share a communication bus, e.g., PCI Express, in some embodiments. The non-volatile solid state memory units may directly access the internal communication bus interface through a storage node communication bus, or request the storage node to access the bus interface. The non-volatile solid state memory unit contains an embedded CPU, solid state storage controller, and a quantity of solid state mass storage, e.g., between 2-32 terabytes (‘TB’) in some embodiments. An embedded volatile storage medium, such as DRAM, and an energy reserve apparatus are included in the non-volatile solid state memory unit. In some embodiments, the energy reserve apparatus is a capacitor, super-capacitor, or battery that enables transferring a subset of DRAM contents to a stable storage medium in the case of power loss. In some embodiments, the non-volatile solid state memory unit is constructed with a storage class memory, such as phase change or magnetoresistive random access memory (‘MRAM’) that substitutes for DRAM and enables a reduced power hold-up apparatus. One of many features of the storage nodes and non-volatile solid state storage is the ability to proactively rebuild data in a storage cluster. The storage nodes and non-volatile solid state storage can determine when a storage node or non-volatile solid state storage in the storage cluster is unreachable, independent of whether there is an attempt to read data involving that storage node or non-volatile solid state storage. The storage nodes and non-volatile solid state storage then cooperate to recover and rebuild the data in at least partially new locations. This constitutes a proactive rebuild, in that the system rebuilds data without waiting until the data is needed for a read access initiated from a client system employing the storage cluster. These and further details of the storage memory and operation thereof are discussed below. FIG.2Ais a perspective view of a storage cluster161, with multiple storage nodes150and internal solid-state memory coupled to each storage node to provide network attached storage or storage area network, in accordance with some embodiments. A network attached storage, storage area network, or a storage cluster, or other storage memory, could include one or more storage clusters161, each having one or more storage nodes150, in a flexible and reconfigurable arrangement of both the physical components and the amount of storage memory provided thereby. The storage cluster161is designed to fit in a rack, and one or more racks can be set up and populated as desired for the storage memory. The storage cluster161has a chassis138having multiple slots142. It should be appreciated that chassis138may be referred to as a housing, enclosure, or rack unit. In one embodiment, the chassis138has fourteen slots142, although other numbers of slots are readily devised. For example, some embodiments have four slots, eight slots, sixteen slots, thirty-two slots, or other suitable number of slots. Each slot142can accommodate one storage node150in some embodiments. Chassis138includes flaps148that can be utilized to mount the chassis138on a rack. 
Fans144provide air circulation for cooling of the storage nodes150and components thereof, although other cooling components could be used, or an embodiment could be devised without cooling components. A switch fabric146couples storage nodes150within chassis138together and to a network for communication to the memory. In an embodiment depicted in herein, the slots142to the left of the switch fabric146and fans144are shown occupied by storage nodes150, while the slots142to the right of the switch fabric146and fans144are empty and available for insertion of storage node150for illustrative purposes. This configuration is one example, and one or more storage nodes150could occupy the slots142in various further arrangements. The storage node arrangements need not be sequential or adjacent in some embodiments. Storage nodes150are hot pluggable, meaning that a storage node150can be inserted into a slot142in the chassis138, or removed from a slot142, without stopping or powering down the system. Upon insertion or removal of storage node150from slot142, the system automatically reconfigures in order to recognize and adapt to the change. Reconfiguration, in some embodiments, includes restoring redundancy and/or rebalancing data or load. Each storage node150can have multiple components. In the embodiment shown here, the storage node150includes a printed circuit board159populated by a CPU156, i.e., processor, a memory154coupled to the CPU156, and a non-volatile solid state storage152coupled to the CPU156, although other mountings and/or components could be used in further embodiments. The memory154has instructions which are executed by the CPU156and/or data operated on by the CPU156. As further explained below, the non-volatile solid state storage152includes flash or, in further embodiments, other types of solid-state memory. Referring toFIG.2A, storage cluster161is scalable, meaning that storage capacity with non-uniform storage sizes is readily added, as described above. One or more storage nodes150can be plugged into or removed from each chassis and the storage cluster self-configures in some embodiments. Plug-in storage nodes150, whether installed in a chassis as delivered or later added, can have different sizes. For example, in one embodiment a storage node150can have any multiple of 4 TB, e.g., 8 TB, 12 TB, 16 TB, 32 TB, etc. In further embodiments, a storage node150could have any multiple of other storage amounts or capacities. Storage capacity of each storage node150is broadcast, and influences decisions of how to stripe the data. For maximum storage efficiency, an embodiment can self-configure as wide as possible in the stripe, subject to a predetermined requirement of continued operation with loss of up to one, or up to two, non-volatile solid state storage152units or storage nodes150within the chassis. FIG.2Bis a block diagram showing a communications interconnect173and power distribution bus172coupling multiple storage nodes150. Referring back toFIG.2A, the communications interconnect173can be included in or implemented with the switch fabric146in some embodiments. Where multiple storage clusters161occupy a rack, the communications interconnect173can be included in or implemented with a top of rack switch, in some embodiments. As illustrated inFIG.2B, storage cluster161is enclosed within a single chassis138. External port176is coupled to storage nodes150through communications interconnect173, while external port174is coupled directly to a storage node. 
External power port178is coupled to power distribution bus172. Storage nodes150may include varying amounts and differing capacities of non-volatile solid state storage152as described with reference toFIG.2A. In addition, one or more storage nodes150may be a compute only storage node as illustrated inFIG.2B. Authorities168are implemented on the non-volatile solid state storage152, for example as lists or other data structures stored in memory. In some embodiments the authorities are stored within the non-volatile solid state storage152and supported by software executing on a controller or other processor of the non-volatile solid state storage152. In a further embodiment, authorities168are implemented on the storage nodes150, for example as lists or other data structures stored in the memory154and supported by software executing on the CPU156of the storage node150. Authorities168control how and where data is stored in the non-volatile solid state storage152in some embodiments. This control assists in determining which type of erasure coding scheme is applied to the data, and which storage nodes150have which portions of the data. Each authority168may be assigned to a non-volatile solid state storage152. Each authority may control a range of inode numbers, segment numbers, or other data identifiers which are assigned to data by a file system, by the storage nodes150, or by the non-volatile solid state storage152, in various embodiments. Every piece of data, and every piece of metadata, has redundancy in the system in some embodiments. In addition, every piece of data and every piece of metadata has an owner, which may be referred to as an authority. If that authority is unreachable, for example through failure of a storage node, there is a plan of succession for how to find that data or that metadata. In various embodiments, there are redundant copies of authorities168. Authorities168have a relationship to storage nodes150and non-volatile solid state storage152in some embodiments. Each authority168, covering a range of data segment numbers or other identifiers of the data, may be assigned to a specific non-volatile solid state storage152. In some embodiments the authorities168for all of such ranges are distributed over the non-volatile solid state storage152of a storage cluster. Each storage node150has a network port that provides access to the non-volatile solid state storage(s)152of that storage node150. Data can be stored in a segment, which is associated with a segment number and that segment number is an indirection for a configuration of a RAID (redundant array of independent disks) stripe in some embodiments. The assignment and use of the authorities168thus establishes an indirection to data. Indirection may be referred to as the ability to reference data indirectly, in this case via an authority168, in accordance with some embodiments. A segment identifies a set of non-volatile solid state storage152and a local identifier into the set of non-volatile solid state storage152that may contain data. In some embodiments, the local identifier is an offset into the device and may be reused sequentially by multiple segments. In other embodiments the local identifier is unique for a specific segment and never reused. The offsets in the non-volatile solid state storage152are applied to locating data for writing to or reading from the non-volatile solid state storage152(in the form of a RAID stripe). 
Data is striped across multiple units of non-volatile solid state storage152, which may include or be different from the non-volatile solid state storage152having the authority168for a particular data segment. If there is a change in where a particular segment of data is located, e.g., during a data move or a data reconstruction, the authority168for that data segment should be consulted, at that non-volatile solid state storage152or storage node150having that authority168. In order to locate a particular piece of data, embodiments calculate a hash value for a data segment or apply an inode number or a data segment number. The output of this operation points to a non-volatile solid state storage152having the authority168for that particular piece of data. In some embodiments there are two stages to this operation. The first stage maps an entity identifier (ID), e.g., a segment number, inode number, or directory number to an authority identifier. This mapping may include a calculation such as a hash or a bit mask. The second stage is mapping the authority identifier to a particular non-volatile solid state storage152, which may be done through an explicit mapping. The operation is repeatable, so that when the calculation is performed, the result of the calculation repeatably and reliably points to a particular non-volatile solid state storage152having that authority168. The operation may include the set of reachable storage nodes as input. If the set of reachable non-volatile solid state storage units changes the optimal set changes. In some embodiments, the persisted value is the current assignment (which is always true) and the calculated value is the target assignment the cluster will attempt to reconfigure towards. This calculation may be used to determine the optimal non-volatile solid state storage152for an authority in the presence of a set of non-volatile solid state storage152that are reachable and constitute the same cluster. The calculation also determines an ordered set of peer non-volatile solid state storage152that will also record the authority to non-volatile solid state storage mapping so that the authority may be determined even if the assigned non-volatile solid state storage is unreachable. A duplicate or substitute authority168may be consulted if a specific authority168is unavailable in some embodiments. With reference toFIGS.2A and2B, two of the many tasks of the CPU156on a storage node150are to break up write data, and reassemble read data. When the system has determined that data is to be written, the authority168for that data is located as above. When the segment ID for data is already determined the request to write is forwarded to the non-volatile solid state storage152currently determined to be the host of the authority168determined from the segment. The host CPU156of the storage node150, on which the non-volatile solid state storage152and corresponding authority168reside, then breaks up or shards the data and transmits the data out to various non-volatile solid state storage152. The transmitted data is written as a data stripe in accordance with an erasure coding scheme. In some embodiments, data is requested to be pulled, and in other embodiments, data is pushed. In reverse, when data is read, the authority168for the segment ID containing the data is located as described above. 
The host CPU156of the storage node150on which the non-volatile solid state storage152and corresponding authority168reside requests the data from the non-volatile solid state storage and corresponding storage nodes pointed to by the authority. In some embodiments the data is read from flash storage as a data stripe. The host CPU156of storage node150then reassembles the read data, correcting any errors (if present) according to the appropriate erasure coding scheme, and forwards the reassembled data to the network. In further embodiments, some or all of these tasks can be handled in the non-volatile solid state storage152. In some embodiments, the segment host requests the data be sent to storage node150by requesting pages from storage and then sending the data to the storage node making the original request. In embodiments, authorities168operate to determine how operations will proceed against particular logical elements. Each of the logical elements may be operated on through a particular authority across a plurality of storage controllers of a storage system. The authorities168may communicate with the plurality of storage controllers so that the plurality of storage controllers collectively perform operations against those particular logical elements. In embodiments, logical elements could be, for example, files, directories, object buckets, individual objects, delineated parts of files or objects, other forms of key-value pair databases, or tables. In embodiments, performing an operation can involve, for example, ensuring consistency, structural integrity, and/or recoverability with other operations against the same logical element, reading metadata and data associated with that logical element, determining what data should be written durably into the storage system to persist any changes for the operation, or where metadata and data can be determined to be stored across modular storage devices attached to a plurality of the storage controllers in the storage system. In some embodiments the operations are token based transactions to efficiently communicate within a distributed system. Each transaction may be accompanied by or associated with a token, which gives permission to execute the transaction. The authorities168are able to maintain a pre-transaction state of the system until completion of the operation in some embodiments. The token based communication may be accomplished without a global lock across the system, and also enables restart of an operation in case of a disruption or other failure. In some systems, for example in UNIX-style file systems, data is handled with an index node or inode, which specifies a data structure that represents an object in a file system. The object could be a file or a directory, for example. Metadata may accompany the object, as attributes such as permission data and a creation timestamp, among other attributes. A segment number could be assigned to all or a portion of such an object in a file system. In other systems, data segments are handled with a segment number assigned elsewhere. For purposes of discussion, the unit of distribution is an entity, and an entity can be a file, a directory or a segment. That is, entities are units of data or metadata stored by a storage system. Entities are grouped into sets called authorities. Each authority has an authority owner, which is a storage node that has the exclusive right to update the entities in the authority. 
In other words, a storage node contains the authority, and the authority, in turn, contains entities. A segment is a logical container of data in accordance with some embodiments. A segment is an address space between medium address space and physical flash locations, i.e., data segment numbers are in this address space. Segments may also contain meta-data, which enable data redundancy to be restored (rewritten to different flash locations or devices) without the involvement of higher level software. In one embodiment, an internal format of a segment contains client data and medium mappings to determine the position of that data. Each data segment is protected, e.g., from memory and other failures, by breaking the segment into a number of data and parity shards, where applicable. The data and parity shards are distributed, i.e., striped, across non-volatile solid state storage152coupled to the host CPUs156(SeeFIGS.2E and2G) in accordance with an erasure coding scheme. Usage of the term segments refers to the container and its place in the address space of segments in some embodiments. Usage of the term stripe refers to the same set of shards as a segment and includes how the shards are distributed along with redundancy or parity information in accordance with some embodiments. A series of address-space transformations takes place across an entire storage system. At the top are the directory entries (file names) which link to an inode. Inodes point into medium address space, where data is logically stored. Medium addresses may be mapped through a series of indirect mediums to spread the load of large files, or implement data services like deduplication or snapshots. Segment addresses are then translated into physical flash locations. Physical flash locations have an address range bounded by the amount of flash in the system in accordance with some embodiments. Medium addresses and segment addresses are logical containers, and in some embodiments use a 128 bit or larger identifier so as to be practically infinite, with a likelihood of reuse calculated as longer than the expected life of the system. Addresses from logical containers are allocated in a hierarchical fashion in some embodiments. Initially, each non-volatile solid state storage152unit may be assigned a range of address space. Within this assigned range, the non-volatile solid state storage152is able to allocate addresses without synchronization with other non-volatile solid state storage152. Data and metadata is stored by a set of underlying storage layouts that are optimized for varying workload patterns and storage devices. These layouts incorporate multiple redundancy schemes, compression formats and index algorithms. Some of these layouts store information about authorities and authority masters, while others store file metadata and file data. The redundancy schemes include error correction codes that tolerate corrupted bits within a single storage device (such as a NAND flash chip), erasure codes that tolerate the failure of multiple storage nodes, and replication schemes that tolerate data center or regional failures. In some embodiments, low density parity check (‘LDPC’) code is used within a single storage unit. Reed-Solomon encoding is used within a storage cluster, and mirroring is used within a storage grid in some embodiments. Metadata may be stored using an ordered log structured index (such as a Log Structured Merge Tree), and large data may not be stored in a log structured layout.
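As a non-limiting illustration of the sharding described above, the following Python sketch breaks a segment into equal-sized data shards and a single XOR parity shard. A single parity shard is used only to keep the example short; the embodiments contemplate Reed-Solomon and other erasure coding schemes that tolerate the loss of more than one shard, and the function and variable names here are hypothetical.

    def shard_segment(segment: bytes, num_data_shards: int) -> list:
        # Split the segment into equal-sized data shards, padding the tail with zeros.
        shard_len = -(-len(segment) // num_data_shards)  # ceiling division
        padded = segment.ljust(shard_len * num_data_shards, b"\0")
        data_shards = [padded[i * shard_len:(i + 1) * shard_len] for i in range(num_data_shards)]
        # Compute one parity shard as the XOR of all data shards.
        parity = bytes(shard_len)
        for shard in data_shards:
            parity = bytes(a ^ b for a, b in zip(parity, shard))
        return data_shards + [parity]

    shards = shard_segment(b"client data belonging to one segment", num_data_shards=4)
    print([len(s) for s in shards])  # five equal-length shards: four data plus one parity

Each of the five shards would then be written to a different non-volatile solid state storage unit, so that any single missing shard can be reconstructed from the remaining four.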
In order to maintain consistency across multiple copies of an entity, the storage nodes agree implicitly on two things through calculations: (1) the authority that contains the entity, and (2) the storage node that contains the authority. The assignment of entities to authorities can be done by pseudo randomly assigning entities to authorities, by splitting entities into ranges based upon an externally produced key, or by placing a single entity into each authority. Examples of pseudorandom schemes are linear hashing and the Replication Under Scalable Hashing (‘RUSH’) family of hashes, including Controlled Replication Under Scalable Hashing (‘CRUSH’). In some embodiments, pseudo-random assignment is utilized only for assigning authorities to nodes because the set of nodes can change. The set of authorities cannot change so any subjective function may be applied in these embodiments. Some placement schemes automatically place authorities on storage nodes, while other placement schemes rely on an explicit mapping of authorities to storage nodes. In some embodiments, a pseudorandom scheme is utilized to map from each authority to a set of candidate authority owners. A pseudorandom data distribution function related to CRUSH may assign authorities to storage nodes and create a list of where the authorities are assigned. Each storage node has a copy of the pseudorandom data distribution function, and can arrive at the same calculation for distributing, and later finding or locating an authority. Each of the pseudorandom schemes requires the reachable set of storage nodes as input in some embodiments in order to conclude the same target nodes. Once an entity has been placed in an authority, the entity may be stored on physical devices so that no expected failure will lead to unexpected data loss. In some embodiments, rebalancing algorithms attempt to store the copies of all entities within an authority in the same layout and on the same set of machines. Examples of expected failures include device failures, stolen machines, datacenter fires, and regional disasters, such as nuclear or geological events. Different failures lead to different levels of acceptable data loss. In some embodiments, a stolen storage node impacts neither the security nor the reliability of the system, while depending on system configuration, a regional event could lead to no loss of data, a few seconds or minutes of lost updates, or even complete data loss. In the embodiments, the placement of data for storage redundancy is independent of the placement of authorities for data consistency. In some embodiments, storage nodes that contain authorities do not contain any persistent storage. Instead, the storage nodes are connected to non-volatile solid state storage units that do not contain authorities. The communications interconnect between storage nodes and non-volatile solid state storage units consists of multiple communication technologies and has non-uniform performance and fault tolerance characteristics. In some embodiments, as mentioned above, non-volatile solid state storage units are connected to storage nodes via PCI express, storage nodes are connected together within a single chassis using Ethernet backplane, and chassis are connected together to form a storage cluster. Storage clusters are connected to clients using Ethernet or fiber channel in some embodiments. 
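As a non-limiting illustration of the pseudorandom placement schemes described above, the following Python sketch uses highest-random-weight (rendezvous) hashing to map an authority to one of the reachable storage nodes. This is a stand-in for schemes such as RUSH or CRUSH rather than a description of any particular implementation, and the node names and authority identifier are hypothetical. The important property, reflected in the sketch, is that every node evaluating the function with the same reachable set arrives at the same owner.

    import hashlib

    def owner_for_authority(authority_id: int, reachable_nodes: list) -> str:
        # Score every reachable node against the authority and pick the highest score.
        def weight(node: str) -> int:
            digest = hashlib.sha256(f"{authority_id}:{node}".encode()).digest()
            return int.from_bytes(digest[:8], "big")
        return max(reachable_nodes, key=weight)

    # Hypothetical node names; any storage node evaluating this arrives at the same answer.
    nodes = ["node-a", "node-b", "node-c", "node-d"]
    print(owner_for_authority(17, nodes))       # deterministic owner for authority 17
    print(owner_for_authority(17, nodes[:3]))   # recomputed if the reachable set changes

If the chosen owner becomes unreachable, the same calculation applied to the reduced set yields the next candidate, which is analogous to the ordered set of peer storage units that also record the authority mapping.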
If multiple storage clusters are configured into a storage grid, the multiple storage clusters are connected using the Internet or other long-distance networking links, such as a “metro scale” link or private link that does not traverse the internet. Authority owners have the exclusive right to modify entities, to migrate entities from one non-volatile solid state storage unit to another non-volatile solid state storage unit, and to add and remove copies of entities. This allows for maintaining the redundancy of the underlying data. When an authority owner fails, is going to be decommissioned, or is overloaded, the authority is transferred to a new storage node. Transient failures make it non-trivial to ensure that all non-faulty machines agree upon the new authority location. The ambiguity that arises due to transient failures can be resolved automatically by a consensus protocol such as Paxos, hot-warm failover schemes, via manual intervention by a remote system administrator, or by a local hardware administrator (such as by physically removing the failed machine from the cluster, or pressing a button on the failed machine). In some embodiments, a consensus protocol is used, and failover is automatic. If too many failures or replication events occur in too short a time period, the system goes into a self-preservation mode and halts replication and data movement activities until an administrator intervenes in accordance with some embodiments. As authorities are transferred between storage nodes and authority owners update entities in their authorities, the system transfers messages between the storage nodes and non-volatile solid state storage units. With regard to persistent messages, messages that have different purposes are of different types. Depending on the type of the message, the system maintains different ordering and durability guarantees. As the persistent messages are being processed, the messages are temporarily stored in multiple durable and non-durable storage hardware technologies. In some embodiments, messages are stored in RAM, NVRAM and on NAND flash devices, and a variety of protocols are used in order to make efficient use of each storage medium. Latency-sensitive client requests may be persisted in replicated NVRAM, and then later NAND, while background rebalancing operations are persisted directly to NAND. Persistent messages are persistently stored prior to being transmitted. This allows the system to continue to serve client requests despite failures and component replacement. Although many hardware components contain unique identifiers that are visible to system administrators, manufacturer, hardware supply chain and ongoing monitoring quality control infrastructure, applications running on top of the infrastructure address virtualized addresses. These virtualized addresses do not change over the lifetime of the storage system, regardless of component failures and replacements. This allows each component of the storage system to be replaced over time without reconfiguration or disruptions of client request processing, i.e., the system supports non-disruptive upgrades. In some embodiments, the virtualized addresses are stored with sufficient redundancy. A continuous monitoring system correlates hardware and software status and the hardware identifiers. This allows detection and prediction of failures due to faulty components and manufacturing details.
The monitoring system also enables the proactive transfer of authorities and entities away from impacted devices before failure occurs by removing the component from the critical path in some embodiments. FIG.2Cis a multiple level block diagram, showing contents of a storage node150and contents of a non-volatile solid state storage152of the storage node150. Data is communicated to and from the storage node150by a network interface controller (‘NIC’)202in some embodiments. Each storage node150has a CPU156, and one or more non-volatile solid state storage152, as discussed above. Moving down one level inFIG.2C, each non-volatile solid state storage152has a relatively fast non-volatile solid state memory, such as nonvolatile random access memory (‘NVRAM’)204, and flash memory206. In some embodiments, NVRAM204may be a component that does not require program/erase cycles (DRAM, MRAM, PCM), and can be a memory that can support being written vastly more often than the memory is read from. Moving down another level inFIG.2C, the NVRAM204is implemented in one embodiment as high speed volatile memory, such as dynamic random access memory (DRAM)216, backed up by energy reserve218. Energy reserve218provides sufficient electrical power to keep the DRAM216powered long enough for contents to be transferred to the flash memory206in the event of power failure. In some embodiments, energy reserve218is a capacitor, super-capacitor, battery, or other device, that supplies a suitable supply of energy sufficient to enable the transfer of the contents of DRAM216to a stable storage medium in the case of power loss. The flash memory206is implemented as multiple flash dies222, which may be referred to as packages of flash dies222or an array of flash dies222. It should be appreciated that the flash dies222could be packaged in any number of ways, with a single die per package, multiple dies per package (i.e., multichip packages), in hybrid packages, as bare dies on a printed circuit board or other substrate, as encapsulated dies, etc. In the embodiment shown, the non-volatile solid state storage152has a controller212or other processor, and an input output (I/O) port210coupled to the controller212. I/O port210is coupled to the CPU156and/or the network interface controller202of the flash storage node150. Flash input output (I/O) port220is coupled to the flash dies222, and a direct memory access unit (DMA)214is coupled to the controller212, the DRAM216and the flash dies222. In the embodiment shown, the I/O port210, controller212, DMA unit214and flash I/O port220are implemented on a programmable logic device (‘PLD’)208, e.g., an FPGA. In this embodiment, each flash die222has pages, organized as sixteen kB (kilobyte) pages224, and a register226through which data can be written to or read from the flash die222. In further embodiments, other types of solid-state memory are used in place of, or in addition to flash memory illustrated within flash die222. Storage clusters161, in various embodiments as disclosed herein, can be contrasted with storage arrays in general. The storage nodes150are part of a collection that creates the storage cluster161. Each storage node150owns a slice of data and computing required to provide the data. Multiple storage nodes150cooperate to store and retrieve the data. Storage memory or storage devices, as used in storage arrays in general, are less involved with processing and manipulating the data. 
Storage memory or storage devices in a storage array receive commands to read, write, or erase data. The storage memory or storage devices in a storage array are not aware of a larger system in which they are embedded, or what the data means. Storage memory or storage devices in storage arrays can include various types of storage memory, such as RAM, solid state drives, hard disk drives, etc. The non-volatile solid state storage152units described herein have multiple interfaces active simultaneously and serving multiple purposes. In some embodiments, some of the functionality of a storage node150is shifted into a storage unit152, transforming the storage unit152into a combination of storage unit152and storage node150. Placing computing (relative to storage data) into the storage unit152places this computing closer to the data itself. The various system embodiments have a hierarchy of storage node layers with different capabilities. By contrast, in a storage array, a controller owns and knows everything about all of the data that the controller manages in a shelf or storage devices. In a storage cluster161, as described herein, multiple controllers in multiple non-volatile solid state storage152units and/or storage nodes150cooperate in various ways (e.g., for erasure coding, data sharding, metadata communication and redundancy, storage capacity expansion or contraction, data recovery, and so on). FIG.2Dshows a storage server environment, which uses embodiments of the storage nodes150and storage152units ofFIGS.2A-C. In this version, each non-volatile solid state storage152unit has a processor such as controller212(seeFIG.2C), an FPGA, flash memory206, and NVRAM204(which is super-capacitor backed DRAM216, seeFIGS.2B and2C) on a PCIe (peripheral component interconnect express) board in a chassis138(seeFIG.2A). The non-volatile solid state storage152unit may be implemented as a single board containing storage, and may be the largest tolerable failure domain inside the chassis. In some embodiments, up to two non-volatile solid state storage152units may fail and the device will continue with no data loss. The physical storage is divided into named regions based on application usage in some embodiments. The NVRAM204is a contiguous block of reserved memory in the non-volatile solid state storage152DRAM216, and is backed by NAND flash. NVRAM204is logically divided into multiple memory regions written for two as spool (e.g., spool_region). Space within the NVRAM204spools is managed by each authority168independently. Each device provides an amount of storage space to each authority168. That authority168further manages lifetimes and allocations within that space. Examples of a spool include distributed transactions or notions. When the primary power to a non-volatile solid state storage152unit fails, onboard super-capacitors provide a short duration of power hold up. During this holdup interval, the contents of the NVRAM204are flushed to flash memory206. On the next power-on, the contents of the NVRAM204are recovered from the flash memory206. As for the storage unit controller, the responsibility of the logical “controller” is distributed across each of the blades containing authorities168. This distribution of logical control is shown inFIG.2Das a host controller242, mid-tier controller244and storage unit controller(s)246. Management of the control plane and the storage plane are treated independently, although parts may be physically co-located on the same blade. 
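As a non-limiting illustration of the power-loss behavior described above, the following Python sketch flushes the contents of the NVRAM to a reserved region of flash during the super-capacitor hold-up interval and recovers those contents on the next power-on. Ordinary files stand in for the DRAM-resident NVRAM image and the flash region, and the file name and contents are hypothetical.

    import json
    import os

    FLASH_COPY = "nvram.flash"  # stand-in for the reserved flash region backing NVRAM

    def on_power_loss(nvram_contents: dict) -> None:
        # During the hold-up interval, persist the NVRAM contents durably to flash.
        with open(FLASH_COPY, "w") as f:
            json.dump(nvram_contents, f)
            f.flush()
            os.fsync(f.fileno())  # ensure the copy is durable before the energy reserve is exhausted

    def on_power_on() -> dict:
        # On the next boot, recover the NVRAM contents from flash, if a copy exists.
        if os.path.exists(FLASH_COPY):
            with open(FLASH_COPY) as f:
                return json.load(f)
        return {}

    on_power_loss({"spool_region": ["pending update #1", "pending update #2"]})
    print(on_power_on())

A real storage unit would perform the flush in controller logic against raw flash pages rather than through a file system, but the ordering of steps is the same: make the copy durable, then recover it before servicing new requests.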
Each authority168effectively serves as an independent controller. Each authority168provides its own data and metadata structures, its own background workers, and maintains its own lifecycle. FIG.2Eis a blade252hardware block diagram, showing a control plane254, compute and storage planes256,258, and authorities168interacting with underlying physical resources, using embodiments of the storage nodes150and storage units152ofFIGS.2A-Cin the storage server environment ofFIG.2D. The control plane254is partitioned into a number of authorities168which can use the compute resources in the compute plane256to run on any of the blades252. The storage plane258is partitioned into a set of devices, each of which provides access to flash206and NVRAM204resources. In one embodiment, the compute plane256may perform the operations of a storage array controller, as described herein, on one or more devices of the storage plane258(e.g., a storage array). In the compute and storage planes256,258ofFIG.2E, the authorities168interact with the underlying physical resources (i.e., devices). From the point of view of an authority168, its resources are striped over all of the physical devices. From the point of view of a device, it provides resources to all authorities168, irrespective of where the authorities happen to run. Each authority168has allocated or has been allocated one or more partitions260of storage memory in the storage units152, e.g., partitions260in flash memory206and NVRAM204. Each authority168uses those allocated partitions260that belong to it, for writing or reading user data. Authorities can be associated with differing amounts of physical storage of the system. For example, one authority168could have a larger number of partitions260or larger sized partitions260in one or more storage units152than one or more other authorities168. FIG.2Fdepicts elasticity software layers in blades252of a storage cluster, in accordance with some embodiments. In the elasticity structure, elasticity software is symmetric, i.e., each blade's compute module270runs the three identical layers of processes depicted inFIG.2F. Storage managers274execute read and write requests from other blades252for data and metadata stored in local storage unit152NVRAM204and flash206. Authorities168fulfill client requests by issuing the necessary reads and writes to the blades252on whose storage units152the corresponding data or metadata resides. Endpoints272parse client connection requests received from switch fabric146supervisory software, relay the client connection requests to the authorities168responsible for fulfillment, and relay the authorities'168responses to clients. The symmetric three-layer structure enables the storage system's high degree of concurrency. Elasticity scales out efficiently and reliably in these embodiments. In addition, elasticity implements a unique scale-out technique that balances work evenly across all resources regardless of client access pattern, and maximizes concurrency by eliminating much of the need for inter-blade coordination that typically occurs with conventional distributed locking. Still referring toFIG.2F, authorities168running in the compute modules270of a blade252perform the internal operations required to fulfill client requests. 
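As a non-limiting illustration of the three-layer structure described above, the following Python sketch models an endpoint relaying a client request to the responsible authority, which in turn issues reads and writes to a storage manager. The classes and the key-to-authority mapping are hypothetical simplifications for explanation only.

    class StorageManager:
        # Lowest layer: executes reads and writes against locally attached storage.
        def __init__(self):
            self.store = {}
        def write(self, key, value):
            self.store[key] = value
        def read(self, key):
            return self.store.get(key)

    class Authority:
        # Middle layer: owns a range of identifiers and issues the necessary reads/writes.
        def __init__(self, managers):
            self.managers = managers
        def handle(self, op, key, value=None):
            manager = self.managers[hash(key) % len(self.managers)]
            return manager.write(key, value) if op == "write" else manager.read(key)

    class Endpoint:
        # Top layer: parses client requests and relays them to the responsible authority.
        def __init__(self, authorities):
            self.authorities = authorities
        def request(self, op, key, value=None):
            authority = self.authorities[hash(key) % len(self.authorities)]
            return authority.handle(op, key, value)

    endpoint = Endpoint([Authority([StorageManager(), StorageManager()])])
    endpoint.request("write", "inode:7", b"file contents")
    print(endpoint.request("read", "inode:7"))

In the embodiments above the three layers run symmetrically on every blade and requests cross blades over the switch fabric; the sketch collapses all of that into a single process for brevity.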
One feature of elasticity is that authorities168are stateless, i.e., they cache active data and metadata in their own blades'252DRAMs for fast access, but the authorities store every update in their NVRAM204partitions on three separate blades252until the update has been written to flash206. All the storage system writes to NVRAM204are in triplicate to partitions on three separate blades252in some embodiments. With triple-mirrored NVRAM204and persistent storage protected by parity and Reed-Solomon RAID checksums, the storage system can survive concurrent failure of two blades252with no loss of data, metadata, or access to either. Because authorities168are stateless, they can migrate between blades252. Each authority168has a unique identifier. NVRAM204and flash206partitions are associated with authorities'168identifiers, not with the blades252on which they are running in some embodiments. Thus, when an authority168migrates, the authority168continues to manage the same storage partitions from its new location. When a new blade252is installed in an embodiment of the storage cluster, the system automatically rebalances load by: partitioning the new blade's252storage for use by the system's authorities168, migrating selected authorities168to the new blade252, starting endpoints272on the new blade252and including them in the switch fabric's146client connection distribution algorithm. From their new locations, migrated authorities168persist the contents of their NVRAM204partitions on flash206, process read and write requests from other authorities168, and fulfill the client requests that endpoints272direct to them. Similarly, if a blade252fails or is removed, the system redistributes its authorities168among the system's remaining blades252. The redistributed authorities168continue to perform their original functions from their new locations. FIG.2Gdepicts authorities168and storage resources in blades252of a storage cluster, in accordance with some embodiments. Each authority168is exclusively responsible for a partition of the flash206and NVRAM204on each blade252. The authority168manages the content and integrity of its partitions independently of other authorities168. Authorities168compress incoming data and preserve it temporarily in their NVRAM204partitions, and then consolidate, RAID-protect, and persist the data in segments of the storage in their flash206partitions. As the authorities168write data to flash206, storage managers274perform the necessary flash translation to optimize write performance and maximize media longevity. In the background, authorities168“garbage collect,” or reclaim space occupied by data that clients have made obsolete by overwriting the data. It should be appreciated that since authorities'168partitions are disjoint, there is no need for distributed locking to execute client reads and writes or to perform background functions. The embodiments described herein may utilize various software, communication and/or networking protocols. In addition, the configuration of the hardware and/or software may be adjusted to accommodate various protocols. For example, the embodiments may utilize Active Directory, which is a database based system that provides authentication, directory, policy, and other services in a WINDOWS™ environment. In these embodiments, LDAP (Lightweight Directory Access Protocol) is one example application protocol for querying and modifying items in directory service providers such as Active Directory.
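As a non-limiting illustration of the directory queries mentioned above, the following Python sketch performs an LDAP search against an Active Directory style server using the third-party ldap3 package. The server address, bind account, password, and search base are placeholders, and the example assumes a reachable directory server; it is not a description of the embodiments' own authentication logic.

    from ldap3 import ALL, Connection, Server

    # Placeholder server, service account, and base DN; substitute real values in practice.
    server = Server("ldaps://dc.example.com", get_info=ALL)
    conn = Connection(server, user="EXAMPLE\\svc-query", password="placeholder", auto_bind=True)

    # Look up the groups a user belongs to, as a policy or authentication check might do.
    conn.search(
        search_base="dc=example,dc=com",
        search_filter="(sAMAccountName=jdoe)",
        attributes=["memberOf"],
    )
    for entry in conn.entries:
        print(entry.memberOf)
    conn.unbind()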
In some embodiments, a network lock manager (‘NLM’) is utilized as a facility that works in cooperation with the Network File System (‘NFS’) to provide a System V style of advisory file and record locking over a network. The Server Message Block (‘SMB’) protocol, one version of which is also known as Common Internet File System (‘CIFS’), may be integrated with the storage systems discussed herein. SMB operates as an application-layer network protocol typically used for providing shared access to files, printers, and serial ports and miscellaneous communications between nodes on a network. SMB also provides an authenticated inter-process communication mechanism. AMAZON™ S3 (Simple Storage Service) is a web service offered by Amazon Web Services, and the systems described herein may interface with Amazon S3 through web services interfaces (REST (representational state transfer), SOAP (simple object access protocol), and BitTorrent). A RESTful API (application programming interface) breaks down a transaction to create a series of small modules. Each module addresses a particular underlying part of the transaction. The control or permissions provided with these embodiments, especially for object data, may include utilization of an access control list (‘ACL’). The ACL is a list of permissions attached to an object and the ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects. The systems may utilize Internet Protocol version 6 (‘IPv6’), as well as IPv4, for the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. The routing of packets between networked systems may include Equal-cost multi-path routing (‘ECMP’), which is a routing strategy where next-hop packet forwarding to a single destination can occur over multiple “best paths” which tie for top place in routing metric calculations. Multi-path routing can be used in conjunction with most routing protocols, because it is a per-hop decision limited to a single router. The software may support Multi-tenancy, which is an architecture in which a single instance of a software application serves multiple customers. Each customer may be referred to as a tenant. Tenants may be given the ability to customize some parts of the application, but may not customize the application's code, in some embodiments. The embodiments may maintain audit logs. An audit log is a document that records an event in a computing system. In addition to documenting what resources were accessed, audit log entries typically include destination and source addresses, a timestamp, and user login information for compliance with various regulations. The embodiments may support various key management policies, such as encryption key rotation. In addition, the system may support dynamic root passwords or some variation of dynamically changing passwords. FIG.3Asets forth a diagram of a storage system306that is coupled for data communications with a cloud services provider302in accordance with some embodiments of the present disclosure. Although depicted in less detail, the storage system306depicted inFIG.3Amay be similar to the storage systems described above with reference toFIGS.1A-1DandFIGS.2A-2G.
In some embodiments, the storage system306depicted inFIG.3Amay be embodied as a storage system that includes imbalanced active/active controllers, as a storage system that includes balanced active/active controllers, as a storage system that includes active/active controllers where less than all of each controller's resources are utilized such that each controller has reserve resources that may be used to support failover, as a storage system that includes fully active/active controllers, as a storage system that includes dataset-segregated controllers, as a storage system that includes dual-layer architectures with front-end controllers and back-end integrated storage controllers, as a storage system that includes scale-out clusters of dual-controller arrays, as well as combinations of such embodiments. In the example depicted inFIG.3A, the storage system306is coupled to the cloud services provider302via a data communications link304. The data communications link304may be embodied as a dedicated data communications link, as a data communications pathway that is provided through the use of one or more data communications networks such as a wide area network (‘WAN’) or LAN, or as some other mechanism capable of transporting digital information between the storage system306and the cloud services provider302. Such a data communications link304may be fully wired, fully wireless, or some aggregation of wired and wireless data communications pathways. In such an example, digital information may be exchanged between the storage system306and the cloud services provider302via the data communications link304using one or more data communications protocols. For example, digital information may be exchanged between the storage system306and the cloud services provider302via the data communications link304using the handheld device transfer protocol (‘HDTP’), hypertext transfer protocol (‘HTTP’), internet protocol (‘IP’), real-time transfer protocol (‘RTP’), transmission control protocol (‘TCP’), user datagram protocol (‘UDP’), wireless application protocol (‘WAP’), or other protocol. The cloud services provider302depicted inFIG.3Amay be embodied, for example, as a system and computing environment that provides a vast array of services to users of the cloud services provider302through the sharing of computing resources via the data communications link304. The cloud services provider302may provide on-demand access to a shared pool of configurable computing resources such as computer networks, servers, storage, applications and services, and so on. The shared pool of configurable resources may be rapidly provisioned and released to a user of the cloud services provider302with minimal management effort. Generally, the user of the cloud services provider302is unaware of the exact computing resources utilized by the cloud services provider302to provide the services. Although in many cases such a cloud services provider302may be accessible via the Internet, readers of skill in the art will recognize that any system that abstracts the use of shared resources to provide services to a user through any data communications link may be considered a cloud services provider302. In the example depicted inFIG.3A, the cloud services provider302may be configured to provide a variety of services to the storage system306and users of the storage system306through the implementation of various service models.
For example, the cloud services provider302may be configured to provide services through the implementation of an infrastructure as a service (‘IaaS’) service model, through the implementation of a platform as a service (‘PaaS’) service model, through the implementation of a software as a service (‘SaaS’) service model, through the implementation of an authentication as a service (‘AaaS’) service model, through the implementation of a storage as a service model where the cloud services provider302offers access to its storage infrastructure for use by the storage system306and users of the storage system306, and so on. Readers will appreciate that the cloud services provider302may be configured to provide additional services to the storage system306and users of the storage system306through the implementation of additional service models, as the service models described above are included only for explanatory purposes and in no way represent a limitation of the services that may be offered by the cloud services provider302or a limitation as to the service models that may be implemented by the cloud services provider302. In the example depicted inFIG.3A, the cloud services provider302may be embodied, for example, as a private cloud, as a public cloud, or as a combination of a private cloud and public cloud. In an embodiment in which the cloud services provider302is embodied as a private cloud, the cloud services provider302may be dedicated to providing services to a single organization rather than providing services to multiple organizations. In an embodiment where the cloud services provider302is embodied as a public cloud, the cloud services provider302may provide services to multiple organizations. In still alternative embodiments, the cloud services provider302may be embodied as a mix of a private and public cloud services with a hybrid cloud deployment. Although not explicitly depicted inFIG.3A, readers will appreciate that a vast amount of additional hardware components and additional software components may be necessary to facilitate the delivery of cloud services to the storage system306and users of the storage system306. For example, the storage system306may be coupled to (or even include) a cloud storage gateway. Such a cloud storage gateway may be embodied, for example, as hardware-based or software-based appliance that is located on premises with the storage system306. Such a cloud storage gateway may operate as a bridge between local applications that are executing on the storage system306and remote, cloud-based storage that is utilized by the storage system306. Through the use of a cloud storage gateway, organizations may move primary iSCSI or NAS to the cloud services provider302, thereby enabling the organization to save space on their on-premises storage systems. Such a cloud storage gateway may be configured to emulate a disk array, a block-based device, a file server, or other storage system that can translate the SCSI commands, file server commands, or other appropriate command into REST-space protocols that facilitate communications with the cloud services provider302. In order to enable the storage system306and users of the storage system306to make use of the services provided by the cloud services provider302, a cloud migration process may take place during which data, applications, or other elements from an organization's local systems (or even from another cloud environment) are moved to the cloud services provider302. 
In order to successfully migrate data, applications, or other elements to the cloud services provider's302environment, middleware such as a cloud migration tool may be utilized to bridge gaps between the cloud services provider's302environment and an organization's environment. Such cloud migration tools may also be configured to address potentially high network costs and long transfer times associated with migrating large volumes of data to the cloud services provider302, as well as addressing security concerns associated with transferring sensitive data to the cloud services provider302over data communications networks. In order to further enable the storage system306and users of the storage system306to make use of the services provided by the cloud services provider302, a cloud orchestrator may also be used to arrange and coordinate automated tasks in pursuit of creating a consolidated process or workflow. Such a cloud orchestrator may perform tasks such as configuring various components, whether those components are cloud components or on-premises components, as well as managing the interconnections between such components. The cloud orchestrator can simplify the inter-component communication and connections to ensure that links are correctly configured and maintained. In the example depicted inFIG.3A, and as described briefly above, the cloud services provider302may be configured to provide services to the storage system306and users of the storage system306through the usage of a SaaS service model, eliminating the need to install and run the application on local computers, which may simplify maintenance and support of the application. Such applications may take many forms in accordance with various embodiments of the present disclosure. For example, the cloud services provider302may be configured to provide access to data analytics applications to the storage system306and users of the storage system306. Such data analytics applications may be configured, for example, to receive vast amounts of telemetry data phoned home by the storage system306. Such telemetry data may describe various operating characteristics of the storage system306and may be analyzed for a vast array of purposes including, for example, to determine the health of the storage system306, to identify workloads that are executing on the storage system306, to predict when the storage system306will run out of various resources, to recommend configuration changes, hardware or software upgrades, workflow migrations, or other actions that may improve the operation of the storage system306. The cloud services provider302may also be configured to provide access to virtualized computing environments to the storage system306and users of the storage system306. Such virtualized computing environments may be embodied, for example, as a virtual machine or other virtualized computer hardware platforms, virtual storage devices, virtualized computer network resources, and so on. Examples of such virtualized environments can include virtual machines that are created to emulate an actual computer, virtualized desktop environments that separate a logical desktop from a physical machine, virtualized file systems that allow uniform access to different types of concrete file systems, and many others.
Although the example depicted inFIG.3Aillustrates the storage system306being coupled for data communications with the cloud services provider302, in other embodiments the storage system306may be part of a hybrid cloud deployment in which private cloud elements (e.g., private cloud services, on-premises infrastructure, and so on) and public cloud elements (e.g., public cloud services, infrastructure, and so on that may be provided by one or more cloud services providers) are combined to form a single solution, with orchestration among the various platforms. Such a hybrid cloud deployment may leverage hybrid cloud management software such as, for example, Azure™ Arc from Microsoft™, that centralizes the management of the hybrid cloud deployment to any infrastructure and enables the deployment of services anywhere. In such an example, the hybrid cloud management software may be configured to create, update, and delete resources (both physical and virtual) that form the hybrid cloud deployment, to allocate compute and storage to specific workloads, to monitor workloads and resources for performance, policy compliance, updates and patches, security status, or to perform a variety of other tasks. Readers will appreciate that by pairing the storage systems described herein with one or more cloud services providers, various offerings may be enabled. For example, disaster recovery as a service (‘DRaaS’) may be provided where cloud resources are utilized to protect applications and data from disruption caused by disaster, including in embodiments where the storage systems may serve as the primary data store. In such embodiments, a total system backup may be taken that allows for business continuity in the event of system failure. In such embodiments, cloud data backup techniques (by themselves or as part of a larger DRaaS solution) may also be integrated into an overall solution that includes the storage systems and cloud services providers described herein. The storage systems described herein, as well as the cloud services providers, may be utilized to provide a wide array of security features. For example, the storage systems may encrypt data at rest (and data may be sent to and from the storage systems encrypted) and may make use of Key Management-as-a-Service (‘KMaaS’) to manage encryption keys, keys for locking and unlocking storage devices, and so on. Likewise, cloud data security gateways or similar mechanisms may be utilized to ensure that data stored within the storage systems does not improperly end up being stored in the cloud as part of a cloud data backup operation. Furthermore, microsegmentation or identity-based-segmentation may be utilized in a data center that includes the storage systems or within the cloud services provider, to create secure zones in data centers and cloud deployments that enable the isolation of workloads from one another. For further explanation,FIG.3Bsets forth a diagram of a storage system306in accordance with some embodiments of the present disclosure. Although depicted in less detail, the storage system306depicted inFIG.3Bmay be similar to the storage systems described above with reference toFIGS.1A-1DandFIGS.2A-2Gas the storage system may include many of the components described above. The storage system306depicted inFIG.3Bmay include a vast amount of storage resources308, which may be embodied in many forms.
For example, the storage resources308can include nano-RAM or another form of nonvolatile random access memory that utilizes carbon nanotubes deposited on a substrate, 3D crosspoint non-volatile memory, flash memory including single-level cell (‘SLC’) NAND flash, multi-level cell (‘MLC’) NAND flash, triple-level cell (‘TLC’) NAND flash, quad-level cell (‘QLC’) NAND flash, or others. Likewise, the storage resources308may include non-volatile magnetoresistive random-access memory (‘MRAM’), including spin transfer torque (‘STT’) MRAM. The example storage resources308may alternatively include non-volatile phase-change memory (‘PCM’), quantum memory that allows for the storage and retrieval of photonic quantum information, resistive random-access memory (‘ReRAM’), storage class memory (‘SCM’), or other form of storage resources, including any combination of resources described herein. Readers will appreciate that other forms of computer memories and storage devices may be utilized by the storage systems described above, including DRAM, SRAM, EEPROM, universal memory, and many others. The storage resources308depicted inFIG.3Amay be embodied in a variety of form factors, including but not limited to, dual in-line memory modules (‘DIMMs’), non-volatile dual in-line memory modules (‘NVDIMMs’), M.2, U.2, and others. The storage resources308depicted inFIG.3Bmay include various forms of SCM. SCM may effectively treat fast, non-volatile memory (e.g., NAND flash) as an extension of DRAM such that an entire dataset may be treated as an in-memory dataset that resides entirely in DRAM. SCM may include non-volatile media such as, for example, NAND flash. Such NAND flash may be accessed utilizing NVMe that can use the PCIe bus as its transport, providing for relatively low access latencies compared to older protocols. In fact, the network protocols used for SSDs in all-flash arrays can include NVMe using Ethernet (ROCE, NVME TCP), Fibre Channel (NVMe FC), InfiniBand (iWARP), and others that make it possible to treat fast, non-volatile memory as an extension of DRAM. In view of the fact that DRAM is often byte-addressable and fast, non-volatile memory such as NAND flash is block-addressable, a controller software/hardware stack may be needed to convert the block data to the bytes that are stored in the media. Examples of media and software that may be used as SCM can include, for example, 3D XPoint, Intel Memory Drive Technology, Samsung's Z-SSD, and others. The storage resources308depicted inFIG.3Bmay also include racetrack memory (also referred to as domain-wall memory). Such racetrack memory may be embodied as a form of non-volatile, solid-state memory that relies on the intrinsic strength and orientation of the magnetic field created by an electron as it spins in addition to its electronic charge, in solid-state devices. Through the use of spin-coherent electric current to move magnetic domains along a nanoscopic permalloy wire, the domains may pass by magnetic read/write heads positioned near the wire as current is passed through the wire, which alter the domains to record patterns of bits. In order to create a racetrack memory device, many such wires and read/write elements may be packaged together. The example storage system306depicted inFIG.3Bmay implement a variety of storage architectures. For example, storage systems in accordance with some embodiments of the present disclosure may utilize block storage where data is stored in blocks, and each block essentially acts as an individual hard drive. 
Storage systems in accordance with some embodiments of the present disclosure may utilize object storage, where data is managed as objects. Each object may include the data itself, a variable amount of metadata, and a globally unique identifier, where object storage can be implemented at multiple levels (e.g., device level, system level, interface level). Storage systems in accordance with some embodiments of the present disclosure utilize file storage in which data is stored in a hierarchical structure. Such data may be saved in files and folders, and presented to both the system storing it and the system retrieving it in the same format. The example storage system306depicted inFIG.3Bmay be embodied as a storage system in which additional storage resources can be added through the use of a scale-up model, additional storage resources can be added through the use of a scale-out model, or through some combination thereof. In a scale-up model, additional storage may be added by adding additional storage devices. In a scale-out model, however, additional storage nodes may be added to a cluster of storage nodes, where such storage nodes can include additional processing resources, additional networking resources, and so on. The example storage system306depicted inFIG.3Bmay leverage the storage resources described above in a variety of different ways. For example, some portion of the storage resources may be utilized to serve as a write cache, storage resources within the storage system may be utilized as a read cache, or tiering may be achieved within the storage systems by placing data within the storage system in accordance with one or more tiering policies. The storage system306depicted inFIG.3Balso includes communications resources310that may be useful in facilitating data communications between components within the storage system306, as well as data communications between the storage system306and computing devices that are outside of the storage system306, including embodiments where those resources are separated by a relatively vast expanse. The communications resources310may be configured to utilize a variety of different protocols and data communication fabrics to facilitate data communications between components within the storage systems as well as computing devices that are outside of the storage system. For example, the communications resources310can include fibre channel (‘FC’) technologies such as FC fabrics and FC protocols that can transport SCSI commands over FC network, FC over ethernet (‘FCoE’) technologies through which FC frames are encapsulated and transmitted over Ethernet networks, InfiniBand (‘IB’) technologies in which a switched fabric topology is utilized to facilitate transmissions between channel adapters, NVM Express (‘NVMe’) technologies and NVMe over fabrics (‘NVMeoF’) technologies through which non-volatile storage media attached via a PCI express (‘PCIe’) bus may be accessed, and others. In fact, the storage systems described above may, directly or indirectly, make use of neutrino communication technologies and devices through which information (including binary information) is transmitted using a beam of neutrinos. 
The communications resources310can also include mechanisms for accessing storage resources308within the storage system306utilizing serial attached SCSI (‘SAS’), serial ATA (‘SATA’) bus interfaces for connecting storage resources308within the storage system306to host bus adapters within the storage system306, internet small computer systems interface (‘iSCSI’) technologies to provide block-level access to storage resources308within the storage system306, and other communications resources that may be useful in facilitating data communications between components within the storage system306, as well as data communications between the storage system306and computing devices that are outside of the storage system306. The storage system306depicted inFIG.3Balso includes processing resources312that may be useful in executing computer program instructions and performing other computational tasks within the storage system306. The processing resources312may include one or more ASICs that are customized for some particular purpose as well as one or more CPUs. The processing resources312may also include one or more DSPs, one or more FPGAs, one or more systems on a chip (‘SoCs’), or other form of processing resources312. The storage system306may utilize the processing resources312to perform a variety of tasks including, but not limited to, supporting the execution of software resources314that will be described in greater detail below. The storage system306depicted inFIG.3Balso includes software resources314that, when executed by processing resources312within the storage system306, may perform a vast array of tasks. The software resources314may include, for example, one or more modules of computer program instructions that when executed by processing resources312within the storage system306are useful in carrying out various data protection techniques. Such data protection techniques may be carried out, for example, by system software executing on computer hardware within the storage system, by a cloud services provider, or in other ways. Such data protection techniques can include data archiving, data backup, data replication, data snapshotting, data and database cloning, and other data protection techniques. The software resources314may also include software that is useful in implementing software-defined storage (‘SDS’). In such an example, the software resources314may include one or more modules of computer program instructions that, when executed, are useful in policy-based provisioning and management of data storage that is independent of the underlying hardware. Such software resources314may be useful in implementing storage virtualization to separate the storage hardware from the software that manages the storage hardware. The software resources314may also include software that is useful in facilitating and optimizing I/O operations that are directed to the storage system306. For example, the software resources314may include software modules that perform various data reduction techniques such as, for example, data compression, data deduplication, and others. The software resources314may include software modules that intelligently group together I/O operations to facilitate better usage of the underlying storage resource308, software modules that perform data migration operations to migrate data from within a storage system, as well as software modules that perform other functions. Such software resources314may be embodied as one or more software containers or in many other ways.
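As a non-limiting illustration of one of the data reduction techniques mentioned above, the following Python sketch deduplicates writes by content hash so that identical blocks are stored only once. The class and method names are hypothetical, and the sketch omits compression, reference-count garbage collection, and hash-collision handling.

    import hashlib

    class DedupStore:
        def __init__(self):
            self.blocks = {}    # content hash -> block data
            self.refcount = {}  # content hash -> number of logical references

        def write(self, block: bytes) -> str:
            key = hashlib.sha256(block).hexdigest()
            if key not in self.blocks:
                self.blocks[key] = block  # store only the first physical copy
            self.refcount[key] = self.refcount.get(key, 0) + 1
            return key  # the content hash serves as the logical address

        def read(self, key: str) -> bytes:
            return self.blocks[key]

    store = DedupStore()
    first = store.write(b"4 KiB of user data" * 8)
    second = store.write(b"4 KiB of user data" * 8)  # duplicate write
    print(first == second, len(store.blocks))        # True 1 -- one physical copy kept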
For further explanation,FIG.3Csets forth an example of a cloud-based storage system318in accordance with some embodiments of the present disclosure. In the example depicted inFIG.3C, the cloud-based storage system318is created entirely in a cloud computing environment316such as, for example, Amazon Web Services (‘AWS’)™, Microsoft Azure™, Google Cloud Platform™, IBM Cloud™, Oracle Cloud™, and others. The cloud-based storage system318may be used to provide services similar to the services that may be provided by the storage systems described above. The cloud-based storage system318depicted inFIG.3Cincludes two cloud computing instances320,322that each are used to support the execution of a storage controller application324,326. The cloud computing instances320,322may be embodied, for example, as instances of cloud computing resources (e.g., virtual machines) that may be provided by the cloud computing environment316to support the execution of software applications such as the storage controller application324,326. For example, each of the cloud computing instances320,322may execute on an Azure VM, where each Azure VM may include high speed temporary storage that may be leveraged as a cache (e.g., as a read cache). In one embodiment, the cloud computing instances320,322may be embodied as Amazon Elastic Compute Cloud (‘EC2’) instances. In such an example, an Amazon Machine Image (‘AMI’) that includes the storage controller application324,326may be booted to create and configure a virtual machine that may execute the storage controller application324,326. In the example method depicted inFIG.3C, the storage controller application324,326may be embodied as a module of computer program instructions that, when executed, carries out various storage tasks. For example, the storage controller application324,326may be embodied as a module of computer program instructions that, when executed, carries out the same tasks as the controllers110A,110B inFIG.1Adescribed above such as writing data to the cloud-based storage system318, erasing data from the cloud-based storage system318, retrieving data from the cloud-based storage system318, monitoring and reporting of disk utilization and performance, performing redundancy operations, such as RAID or RAID-like data redundancy operations, compressing data, encrypting data, deduplicating data, and so forth. Readers will appreciate that because there are two cloud computing instances320,322that each include the storage controller application324,326, in some embodiments one cloud computing instance320may operate as the primary controller as described above while the other cloud computing instance322may operate as the secondary controller as described above. Readers will appreciate that the storage controller application324,326depicted inFIG.3Cmay include identical source code that is executed within different cloud computing instances320,322such as distinct EC2 instances. Readers will appreciate that other embodiments that do not include a primary and secondary controller are within the scope of the present disclosure. For example, each cloud computing instance320,322may operate as a primary controller for some portion of the address space supported by the cloud-based storage system318, each cloud computing instance320,322may operate as a primary controller where the servicing of I/O operations directed to the cloud-based storage system318are divided in some other way, and so on. 
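A minimal sketch of the last variant described above, in which each cloud computing instance operates as the primary controller for some portion of the address space, might route I/O as follows. The two-way split and the controller names are hypothetical and chosen only to illustrate the idea.

```python
def primary_controller_for(logical_address: int, address_space_size: int) -> str:
    """Illustrative split: the first half of the address space is owned by one
    controller instance and the second half by the other."""
    midpoint = address_space_size // 2
    return "controller-a" if logical_address < midpoint else "controller-b"

# Example: route two I/O operations within a 1 TiB address space.
print(primary_controller_for(10 * 2**30, 2**40))   # controller-a
print(primary_controller_for(900 * 2**30, 2**40))  # controller-b
```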
In fact, in other embodiments where cost savings may be prioritized over performance demands, only a single cloud computing instance may exist that contains the storage controller application. The cloud-based storage system318depicted inFIG.3Cincludes cloud computing instances340a,340b,340nwith local storage330,334,338. The cloud computing instances340a,340b,340nmay be embodied, for example, as instances of cloud computing resources that may be provided by the cloud computing environment316to support the execution of software applications. The cloud computing instances340a,340b,340nofFIG.3Cmay differ from the cloud computing instances320,322described above as the cloud computing instances340a,340b,340nofFIG.3Chave local storage330,334,338resources whereas the cloud computing instances320,322that support the execution of the storage controller application324,326need not have local storage resources. The cloud computing instances340a,340b,340nwith local storage330,334,338may be embodied, for example, as EC2 M5 instances that include one or more SSDs, as EC2 R5 instances that include one or more SSDs, as EC2 I3 instances that include one or more SSDs, and so on. In some embodiments, the local storage330,334,338must be embodied as solid-state storage (e.g., SSDs) rather than storage that makes use of hard disk drives. In the example depicted inFIG.3C, each of the cloud computing instances340a,340b,340nwith local storage330,334,338can include a software daemon328,332,336that, when executed by a cloud computing instance340a,340b,340n, can present itself to the storage controller applications324,326as if the cloud computing instance340a,340b,340nwere a physical storage device (e.g., one or more SSDs). In such an example, the software daemon328,332,336may include computer program instructions similar to those that would normally be contained on a storage device such that the storage controller applications324,326can send and receive the same commands that a storage controller would send to storage devices. In such a way, the storage controller applications324,326may include code that is identical to (or substantially identical to) the code that would be executed by the controllers in the storage systems described above. In these and similar embodiments, communications between the storage controller applications324,326and the cloud computing instances340a,340b,340nwith local storage330,334,338may utilize iSCSI, NVMe over TCP, messaging, a custom protocol, or some other mechanism. In the example depicted inFIG.3C, each of the cloud computing instances340a,340b,340nwith local storage330,334,338may also be coupled to block storage342,344,346that is offered by the cloud computing environment316such as, for example, Amazon Elastic Block Store (‘EBS’) volumes. In such an example, the block storage342,344,346that is offered by the cloud computing environment316may be utilized in a manner that is similar to how the NVRAM devices described above are utilized, as the software daemon328,332,336(or some other module) that is executing within a particular cloud computing instance340a,340b,340nmay, upon receiving a request to write data, initiate a write of the data to its attached EBS volume as well as a write of the data to its local storage330,334,338resources. In some alternative embodiments, data may only be written to the local storage330,334,338resources within a particular cloud computing instance340a,340b,340n.
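The write path described above, in which the software daemon stages each write to its attached block storage (playing the NVRAM role) as well as to its local storage, could be sketched roughly as follows. The class names and in-memory devices are stand-ins invented for illustration, not the actual daemon code.

```python
class InMemoryDevice:
    """Stand-in for an attached volume or a local SSD (illustrative only)."""
    def __init__(self):
        self.blocks = {}

    def write(self, offset, data):
        self.blocks[offset] = data


class VirtualDriveDaemon:
    """Hypothetical sketch of the software daemon: stage each write to the
    block volume (NVRAM role) and to local storage before acknowledging."""
    def __init__(self, local_storage, block_volume):
        self.local_storage = local_storage
        self.block_volume = block_volume

    def write(self, offset, data):
        self.block_volume.write(offset, data)   # durable staging first
        self.local_storage.write(offset, data)  # then the local copy
        return len(data)


daemon = VirtualDriveDaemon(InMemoryDevice(), InMemoryDevice())
daemon.write(0, b"example block")
```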
In an alternative embodiment, rather than using the block storage342,344,346that is offered by the cloud computing environment316as NVRAM, actual RAM on each of the cloud computing instances340a,340b,340nwith local storage330,334,338may be used as NVRAM, thereby decreasing network utilization costs that would be associated with using an EBS volume as the NVRAM. In yet another embodiment, high performance block storage resources such as one or more Azure Ultra Disks may be utilized as the NVRAM. The storage controller applications324,326may be used to perform various tasks such as deduplicating the data contained in the request, compressing the data contained in the request, determining where to write the data contained in the request, and so on, before ultimately sending a request to write a deduplicated, encrypted, or otherwise possibly updated version of the data to one or more of the cloud computing instances340a,340b,340nwith local storage330,334,338. Either cloud computing instance320,322, in some embodiments, may receive a request to read data from the cloud-based storage system318and may ultimately send a request to read data to one or more of the cloud computing instances340a,340b,340nwith local storage330,334,338. When a request to write data is received by a particular cloud computing instance340a,340b,340nwith local storage330,334,338, the software daemon328,332,336may be configured to not only write the data to its own local storage330,334,338resources and any appropriate block storage342,344,346resources, but the software daemon328,332,336may also be configured to write the data to cloud-based object storage348that is attached to the particular cloud computing instance340a,340b,340n. The cloud-based object storage348that is attached to the particular cloud computing instance340a,340b,340nmay be embodied, for example, as Amazon Simple Storage Service (‘S3’). In other embodiments, the cloud computing instances320,322that each include the storage controller application324,326may initiate the storage of the data in the local storage330,334,338of the cloud computing instances340a,340b,340nand the cloud-based object storage348. In other embodiments, rather than using both the cloud computing instances340a,340b,340nwith local storage330,334,338(also referred to herein as ‘virtual drives’) and the cloud-based object storage348to store data, a persistent storage layer may be implemented in other ways. For example, one or more Azure Ultra disks may be used to persistently store data (e.g., after the data has been written to the NVRAM layer). While the local storage330,334,338resources and the block storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340nmay support block-level access, the cloud-based object storage348that is attached to the particular cloud computing instance340a,340b,340nsupports only object-based access. The software daemon328,332,336may therefore be configured to take blocks of data, package those blocks into objects, and write the objects to the cloud-based object storage348that is attached to the particular cloud computing instance340a,340b,340n. Consider an example in which data is written to the local storage330,334,338resources and the block storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340nin 1 MB blocks.
In such an example, assume that a user of the cloud-based storage system318issues a request to write data that, after being compressed and deduplicated by the storage controller application324,326results in the need to write 5 MB of data. In such an example, writing the data to the local storage330,334,338resources and the block storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340nis relatively straightforward as 5 blocks that are 1 MB in size are written to the local storage330,334,338resources and the block storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n. In such an example, the software daemon328,332,336may also be configured to create five objects containing distinct 1 MB chunks of the data. As such, in some embodiments, each object that is written to the cloud-based object storage348may be identical (or nearly identical) in size. Readers will appreciate that in such an example, metadata that is associated with the data itself may be included in each object (e.g., the first 1 MB of the object is data and the remaining portion is metadata associated with the data). Readers will appreciate that the cloud-based object storage348may be incorporated into the cloud-based storage system318to increase the durability of the cloud-based storage system318. In some embodiments, all data that is stored by the cloud-based storage system318may be stored in both: 1) the cloud-based object storage348, and 2) at least one of the local storage330,334,338resources or block storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n. In such embodiments, the local storage330,334,338resources and block storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340nmay effectively operate as cache that generally includes all data that is also stored in S3, such that all reads of data may be serviced by the cloud computing instances340a,340b,340nwithout requiring the cloud computing instances340a,340b,340nto access the cloud-based object storage348. Readers will appreciate that in other embodiments, however, all data that is stored by the cloud-based storage system318may be stored in the cloud-based object storage348, but less than all data that is stored by the cloud-based storage system318may be stored in at least one of the local storage330,334,338resources or block storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n. In such an example, various policies may be utilized to determine which subset of the data that is stored by the cloud-based storage system318should reside in both: 1) the cloud-based object storage348, and 2) at least one of the local storage330,334,338resources or block storage342,344,346resources that are utilized by the cloud computing instances340a,340b,340n. One or more modules of computer program instructions that are executing within the cloud-based storage system318(e.g., a monitoring module that is executing on its own EC2 instance) may be designed to handle the failure of one or more of the cloud computing instances340a,340b,340nwith local storage330,334,338. 
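The packaging of equally sized blocks into objects described in the example above might be sketched as follows. The 1 MB chunk size follows the example, while the metadata layout (data first, metadata appended) is an assumption made purely for illustration.

```python
import json

CHUNK_SIZE = 1024 * 1024  # 1 MB, matching the example above

def package_blocks_into_objects(payload: bytes, volume_id: str):
    """Split a compressed and deduplicated payload into 1 MB chunks and wrap
    each chunk as an object whose body is the data followed by its metadata."""
    objects = []
    for offset in range(0, len(payload), CHUNK_SIZE):
        chunk = payload[offset:offset + CHUNK_SIZE]
        metadata = json.dumps(
            {"volume": volume_id, "offset": offset, "length": len(chunk)}
        ).encode()
        objects.append(chunk + metadata)  # first 1 MB is data, remainder is metadata
    return objects

objs = package_blocks_into_objects(b"\x00" * (5 * CHUNK_SIZE), volume_id="vol-1")
print(len(objs))  # 5 objects, one per 1 MB block
```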
In such an example, the monitoring module may handle the failure of one or more of the cloud computing instances340a,340b,340nwith local storage330,334,338by creating one or more new cloud computing instances with local storage, retrieving data that was stored on the failed cloud computing instances340a,340b,340nfrom the cloud-based object storage348, and storing the data retrieved from the cloud-based object storage348in local storage on the newly created cloud computing instances. Readers will appreciate that many variants of this process may be implemented. Readers will appreciate that various performance aspects of the cloud-based storage system318may be monitored (e.g., by a monitoring module that is executing in an EC2 instance) such that the cloud-based storage system318can be scaled-up or scaled-out as needed. For example, if the cloud computing instances320,322that are used to support the execution of a storage controller application324,326are undersized and not sufficiently servicing the I/O requests that are issued by users of the cloud-based storage system318, a monitoring module may create a new, more powerful cloud computing instance (e.g., a cloud computing instance of a type that includes more processing power, more memory, etc. . . . ) that includes the storage controller application such that the new, more powerful cloud computing instance can begin operating as the primary controller. Likewise, if the monitoring module determines that the cloud computing instances320,322that are used to support the execution of a storage controller application324,326are oversized and that cost savings could be gained by switching to a smaller, less powerful cloud computing instance, the monitoring module may create a new, less powerful (and less expensive) cloud computing instance that includes the storage controller application such that the new, less powerful cloud computing instance can begin operating as the primary controller. The storage systems described above may carry out intelligent data backup techniques through which data stored in the storage system may be copied and stored in a distinct location to avoid data loss in the event of equipment failure or some other form of catastrophe. For example, the storage systems described above may be configured to examine each backup to avoid restoring the storage system to an undesirable state. Consider an example in which malware infects the storage system. In such an example, the storage system may include software resources314that can scan each backup to identify backups that were captured before the malware infected the storage system and those backups that were captured after the malware infected the storage system. In such an example, the storage system may restore itself from a backup that does not include the malware—or at least not restore the portions of a backup that contained the malware. 
In such an example, the storage system may include software resources314that can scan each backup to identify the presence of malware (or a virus, or some other undesirable element), for example, by identifying write operations that were serviced by the storage system and originated from a network subnet that is suspected to have delivered the malware, by identifying write operations that were serviced by the storage system and originated from a user that is suspected to have delivered the malware, by identifying write operations that were serviced by the storage system and examining the content of the write operation against fingerprints of the malware, and in many other ways. Readers will further appreciate that the backups (often in the form of one or more snapshots) may also be utilized to perform rapid recovery of the storage system. Consider an example in which the storage system is infected with ransomware that locks users out of the storage system. In such an example, software resources314within the storage system may be configured to detect the presence of ransomware and may be further configured to restore the storage system to a point-in-time, using the retained backups, prior to the point-in-time at which the ransomware infected the storage system. In such an example, the presence of ransomware may be explicitly detected through the use of software tools utilized by the system, through the use of a key (e.g., a USB drive) that is inserted into the storage system, or in a similar way. Likewise, the presence of ransomware may be inferred in response to system activity meeting a predetermined fingerprint such as, for example, no reads or writes coming into the system for a predetermined period of time. Readers will appreciate that the various components described above may be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways. Readers will appreciate that the storage systems described in this disclosure may be useful for supporting various types of software applications. In fact, the storage systems may be ‘application aware’ in the sense that the storage systems may obtain, maintain, or otherwise have access to information describing connected applications (e.g., applications that utilize the storage systems) to optimize the operation of the storage system based on intelligence about the applications and their utilization patterns. For example, the storage system may optimize data layouts, optimize caching behaviors, optimize ‘QoS’ levels, or perform some other optimization that is designed to improve the storage performance that is experienced by the application.
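One of the inference heuristics mentioned above, treating an absence of reads and writes for a predetermined period as a possible sign of ransomware, might be sketched as follows. The five-minute quiet period is an arbitrary example value, not a recommended setting, and the alerting behavior is left out for brevity.

```python
import time

class ActivityWatchdog:
    """Illustrative heuristic: flag a possible ransomware event if the system
    sees no I/O for a predetermined quiet period."""
    def __init__(self, quiet_period_seconds: float = 300.0):
        self.quiet_period = quiet_period_seconds
        self.last_io = time.monotonic()

    def record_io(self):
        self.last_io = time.monotonic()

    def ransomware_suspected(self) -> bool:
        return (time.monotonic() - self.last_io) > self.quiet_period

watchdog = ActivityWatchdog(quiet_period_seconds=300.0)
watchdog.record_io()
print(watchdog.ransomware_suspected())  # False immediately after I/O is recorded
```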
As an example of one type of application that may be supported by the storage systems described herein, the storage system306may be useful in supporting artificial intelligence (‘AI’) applications, database applications, XOps projects (e.g., DevOps projects, DataOps projects, MLOps projects, ModelOps projects, PlatformOps projects), electronic design automation tools, event-driven software applications, high performance computing applications, simulation applications, high-speed data capture and analysis applications, machine learning applications, media production applications, media serving applications, picture archiving and communication systems (‘PACS’) applications, software development applications, virtual reality applications, augmented reality applications, and many other types of applications by providing storage resources to such applications. In view of the fact that the storage systems include compute resources, storage resources, and a wide variety of other resources, the storage systems may be well suited to support applications that are resource intensive such as, for example, AI applications. AI applications may be deployed in a variety of fields, including: predictive maintenance in manufacturing and related fields, healthcare applications such as patient data & risk analytics, retail and marketing deployments (e.g., search advertising, social media advertising), supply chain solutions, fintech solutions such as business analytics & reporting tools, operational deployments such as real-time analytics tools, application performance management tools, IT infrastructure management tools, and many others. Such AI applications may enable devices to perceive their environment and take actions that maximize their chance of success at some goal. Examples of such AI applications can include IBM Watson™, Microsoft Oxford™, Google DeepMind™, Baidu Minwa™, and others. The storage systems described above may also be well suited to support other types of applications that are resource intensive such as, for example, machine learning applications. Machine learning applications may perform various types of data analysis to automate analytical model building. Using algorithms that iteratively learn from data, machine learning applications can enable computers to learn without being explicitly programmed. One particular area of machine learning is referred to as reinforcement learning, which involves taking suitable actions to maximize reward in a particular situation. In addition to the resources already described, the storage systems described above may also include graphics processing units (‘GPUs’), occasionally referred to as visual processing units (‘VPUs’). Such GPUs may be embodied as specialized electronic circuits that rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. Such GPUs may be included within any of the computing devices that are part of the storage systems described above, including as one of many individually scalable components of a storage system, where other examples of individually scalable components of such a storage system can include storage components, memory components, compute components (e.g., CPUs, FPGAs, ASICs), networking components, software components, and others. In addition to GPUs, the storage systems described above may also include neural network processors (‘NNPs’) for use in various aspects of neural network processing.
Such NNPs may be used in place of (or in addition to) GPUs and may also be independently scalable. As described above, the storage systems described herein may be configured to support artificial intelligence applications, machine learning applications, big data analytics applications, and many other types of applications. The rapid growth in these sorts of applications is being driven by three technologies: deep learning (DL), GPU processors, and Big Data. Deep learning is a computing model that makes use of massively parallel neural networks inspired by the human brain. Instead of experts handcrafting software, a deep learning model writes its own software by learning from lots of examples. Such GPUs may include thousands of cores that are well-suited to run algorithms that loosely represent the parallel nature of the human brain. Advances in deep neural networks, including the development of multi-layer neural networks, have ignited a new wave of algorithms and tools for data scientists to tap into their data with artificial intelligence (AI). With improved algorithms, larger data sets, and various frameworks (including open-source software libraries for machine learning across a range of tasks), data scientists are tackling new use cases like autonomous driving vehicles, natural language processing and understanding, computer vision, machine reasoning, strong AI, and many others. Applications of such techniques may include: machine and vehicular object detection, identification and avoidance; visual recognition, classification and tagging; algorithmic financial trading strategy performance management; simultaneous localization and mapping; predictive maintenance of high-value machinery; prevention against cyber security threats; expertise automation; image recognition and classification; question answering; robotics; text analytics (extraction, classification) and text generation and translation; and many others. Applications of AI techniques have materialized in a wide array of products including, for example, Amazon Echo's speech recognition technology that allows users to talk to their machines, Google Translate™ which allows for machine-based language translation, Spotify's Discover Weekly that provides recommendations on new songs and artists that a user may like based on the user's usage and traffic analysis, Quill's text generation offering that takes structured data and turns it into narrative stories, Chatbots that provide real-time, contextually specific answers to questions in a dialog format, and many others. Data is the heart of modern AI and deep learning algorithms. Before training can begin, one problem that must be addressed revolves around collecting the labeled data that is crucial for training an accurate AI model. A full scale AI deployment may be required to continuously collect, clean, transform, label, and store large amounts of data. Adding additional high quality data points directly translates to more accurate models and better insights.
Data samples may undergo a series of processing steps including, but not limited to: 1) ingesting the data from an external source into the training system and storing the data in raw form, 2) cleaning and transforming the data in a format convenient for training, including linking data samples to the appropriate label, 3) exploring parameters and models, quickly testing with a smaller dataset, and iterating to converge on the most promising models to push into the production cluster, 4) executing training phases to select random batches of input data, including both new and older samples, and feeding those into production GPU servers for computation to update model parameters, and 5) evaluating including using a holdback portion of the data not used in training in order to evaluate model accuracy on the holdout data. This lifecycle may apply for any type of parallelized machine learning, not just neural networks or deep learning. For example, standard machine learning frameworks may rely on CPUs instead of GPUs but the data ingest and training workflows may be the same. Readers will appreciate that a single shared storage data hub creates a coordination point throughout the lifecycle without the need for extra data copies among the ingest, preprocessing, and training stages. Rarely is the ingested data used for only one purpose, and shared storage gives the flexibility to train multiple different models or apply traditional analytics to the data. Readers will appreciate that each stage in the AI data pipeline may have varying requirements from the data hub (e.g., the storage system or collection of storage systems). Scale-out storage systems must deliver uncompromising performance for all manner of access types and patterns—from small, metadata-heavy to large files, from random to sequential access patterns, and from low to high concurrency. The storage systems described above may serve as an ideal AI data hub as the systems may service unstructured workloads. In the first stage, data is ideally ingested and stored on to the same data hub that following stages will use, in order to avoid excess data copying. The next two steps can be done on a standard compute server that optionally includes a GPU, and then in the fourth and last stage, full training production jobs are run on powerful GPU-accelerated servers. Often, there is a production pipeline alongside an experimental pipeline operating on the same dataset. Further, the GPU-accelerated servers can be used independently for different models or joined together to train on one larger model, even spanning multiple systems for distributed training. If the shared storage tier is slow, then data must be copied to local storage for each phase, resulting in wasted time staging data onto different servers. The ideal data hub for the AI training pipeline delivers performance similar to data stored locally on the server node while also having the simplicity and performance to enable all pipeline stages to operate concurrently. In order for the storage systems described above to serve as a data hub or as part of an AI deployment, in some embodiments the storage systems may be configured to provide DMA between storage devices that are included in the storage systems and one or more GPUs that are used in an AI or big data analytics pipeline. 
The one or more GPUs may be coupled to the storage system, for example, via NVMe-over-Fabrics (‘NVMe-oF’) such that bottlenecks such as the host CPU can be bypassed and the storage system (or one of the components contained therein) can directly access GPU memory. In such an example, the storage systems may leverage API hooks to the GPUs to transfer data directly to the GPUs. For example, the GPUs may be embodied as Nvidia GPUs and the storage systems may support GPUDirect Storage (‘GDS’) software, or have similar proprietary software, that enables the storage system to transfer data to the GPUs via RDMA or similar mechanism. Although the preceding paragraphs discuss deep learning applications, readers will appreciate that the storage systems described herein may also be part of a distributed deep learning (‘DDL’) platform to support the execution of DDL algorithms. The storage systems described above may also be paired with other technologies such as TensorFlow, an open-source software library for dataflow programming across a range of tasks that may be used for machine learning applications such as neural networks, to facilitate the development of such machine learning models, applications, and so on. The storage systems described above may also be used in a neuromorphic computing environment. Neuromorphic computing is a form of computing that mimics brain cells. To support neuromorphic computing, an architecture of interconnected “neurons” replaces traditional computing models with low-powered signals that go directly between neurons for more efficient computation. Neuromorphic computing may make use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system, as well as analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems for perception, motor control, or multisensory integration. Readers will appreciate that the storage systems described above may be configured to support the storage or use of (among other types of data) blockchains and derivative items such as, for example, open source blockchains and related tools that are part of the IBM™ Hyperledger project, permissioned blockchains in which a certain number of trusted parties are allowed to access the block chain, blockchain products that enable developers to build their own distributed ledger projects, and others. Blockchains and the storage systems described herein may be leveraged to support on-chain storage of data as well as off-chain storage of data. Off-chain storage of data can be implemented in a variety of ways and can occur when the data itself is not stored within the blockchain. For example, in one embodiment, a hash function may be utilized and the data itself may be fed into the hash function to generate a hash value. In such an example, the hashes of large pieces of data may be embedded within transactions, instead of the data itself. Readers will appreciate that, in other embodiments, alternatives to blockchains may be used to facilitate the decentralized storage of information. For example, one alternative to a blockchain that may be used is a blockweave. While conventional blockchains store every transaction to achieve validation, a blockweave permits secure decentralization without the usage of the entire chain, thereby enabling low cost on-chain storage of data.
Such blockweaves may utilize a consensus mechanism that is based on proof of access (PoA) and proof of work (PoW). The storage systems described above may, either alone or in combination with other computing devices, be used to support in-memory computing applications. In-memory computing involves the storage of information in RAM that is distributed across a cluster of computers. Readers will appreciate that the storage systems described above, especially those that are configurable with customizable amounts of processing resources, storage resources, and memory resources (e.g., those systems in which blades contain configurable amounts of each type of resource), may be configured in a way so as to provide an infrastructure that can support in-memory computing. Likewise, the storage systems described above may include component parts (e.g., NVDIMMs, 3D crosspoint storage that provide fast random access memory that is persistent) that can actually provide for an improved in-memory computing environment as compared to in-memory computing environments that rely on RAM distributed across dedicated servers. In some embodiments, the storage systems described above may be configured to operate as a hybrid in-memory computing environment that includes a universal interface to all storage media (e.g., RAM, flash storage, 3D crosspoint storage). In such embodiments, users may have no knowledge regarding the details of where their data is stored but they can still use the same full, unified API to address data. In such embodiments, the storage system may (in the background) move data to the fastest layer available—including intelligently placing the data in dependence upon various characteristics of the data or in dependence upon some other heuristic. In such an example, the storage systems may even make use of existing products such as Apache Ignite and GridGain to move data between the various storage layers, or the storage systems may make use of custom software to move data between the various storage layers. The storage systems described herein may implement various optimizations to improve the performance of in-memory computing such as, for example, having computations occur as close to the data as possible. Readers will further appreciate that in some embodiments, the storage systems described above may be paired with other resources to support the applications described above. For example, one infrastructure could include primary compute in the form of servers and workstations which specialize in using General-purpose computing on graphics processing units (‘GPGPU’) to accelerate deep learning applications that are interconnected into a computation engine to train parameters for deep neural networks. Each system may have Ethernet external connectivity, InfiniBand external connectivity, some other form of external connectivity, or some combination thereof. In such an example, the GPUs can be grouped for a single large training or used independently to train multiple models. The infrastructure could also include a storage system such as those described above to provide, for example, a scale-out all-flash file or object store through which data can be accessed via high-performance protocols such as NFS, S3, and so on. The infrastructure can also include, for example, redundant top-of-rack Ethernet switches connected to storage and compute via ports in MLAG port channels for redundancy.
The infrastructure could also include additional compute in the form of whitebox servers, optionally with GPUs, for data ingestion, pre-processing, and model debugging. Readers will appreciate that additional infrastructures are also possible. Readers will appreciate that the storage systems described above, either alone or in coordination with other computing machinery, may be configured to support other AI related tools. For example, the storage systems may make use of tools like ONNX or other open neural network exchange formats that make it easier to transfer models written in different AI frameworks. Likewise, the storage systems may be configured to support tools like Amazon's Gluon that allow developers to prototype, build, and train deep learning models. In fact, the storage systems described above may be part of a larger platform, such as IBM™ Cloud Private for Data, that includes integrated data science, data engineering and application building services. Readers will further appreciate that the storage systems described above may also be deployed as an edge solution. Such an edge solution may be in place to optimize cloud computing systems by performing data processing at the edge of the network, near the source of the data. Edge computing can push applications, data and computing power (i.e., services) away from centralized points to the logical extremes of a network. Through the use of edge solutions such as the storage systems described above, computational tasks may be performed using the compute resources provided by such storage systems, data may be stored using the storage resources of the storage system, and cloud-based services may be accessed through the use of various resources of the storage system (including networking resources). By performing computational tasks on the edge solution, storing data on the edge solution, and generally making use of the edge solution, the consumption of expensive cloud-based resources may be avoided and, in fact, performance improvements may be experienced relative to a heavier reliance on cloud-based resources. While many tasks may benefit from the utilization of an edge solution, some particular uses may be especially suited for deployment in such an environment. For example, devices like drones, autonomous cars, robots, and others may require extremely rapid processing—so fast, in fact, that sending data up to a cloud environment and back to receive data processing support may simply be too slow. As an additional example, some IoT devices such as connected video cameras may not be well-suited for the utilization of cloud-based resources as it may be impractical (whether from a privacy perspective, a security perspective, or a financial perspective) to send the data to the cloud simply because of the sheer volume of data that is involved. As such, many tasks that rely on data processing, storage, or communications may be better suited by platforms that include edge solutions such as the storage systems described above. The storage systems described above may alone, or in combination with other computing resources, serve as a network edge platform that combines compute resources, storage resources, networking resources, cloud technologies and network virtualization technologies, and so on. As part of the network, the edge may take on characteristics similar to other network facilities, from the customer premise and backhaul aggregation facilities to Points of Presence (PoPs) and regional data centers.
Readers will appreciate that network workloads, such as Virtual Network Functions (VNFs) and others, will reside on the network edge platform. Enabled by a combination of containers and virtual machines, the network edge platform may rely on controllers and schedulers that are no longer geographically co-located with the data processing resources. The functions, as microservices, may split into control planes, user and data planes, or even state machines, allowing for independent optimization and scaling techniques to be applied. Such user and data planes may be enabled through increased accelerators, both those residing in server platforms, such as FPGAs and Smart NICs, and through SDN-enabled merchant silicon and programmable ASICs. The storage systems described above may also be optimized for use in big data analytics, including being leveraged as part of a composable data analytics pipeline where containerized analytics architectures, for example, make analytics capabilities more composable. Big data analytics may be generally described as the process of examining large and varied data sets to uncover hidden patterns, unknown correlations, market trends, customer preferences and other useful information that can help organizations make more-informed business decisions. As part of that process, semi-structured and unstructured data such as, for example, internet clickstream data, web server logs, social media content, text from customer emails and survey responses, mobile-phone call-detail records, IoT sensor data, and other data may be converted to a structured form. The storage systems described above may also support (including implementing as a system interface) applications that perform tasks in response to human speech. For example, the storage systems may support the execution of intelligent personal assistant applications such as, for example, Amazon's Alexa™, Apple Siri™, Google Voice™, Samsung Bixby™, Microsoft Cortana™, and others. While the examples described in the previous sentence make use of voice as input, the storage systems described above may also support chatbots, talkbots, chatterbots, or artificial conversational entities or other applications that are configured to conduct a conversation via auditory or textual methods. Likewise, the storage system may actually execute such an application to enable a user such as a system administrator to interact with the storage system via speech. Such applications are generally capable of voice interaction, music playback, making to-do lists, setting alarms, streaming podcasts, playing audiobooks, and providing weather, traffic, and other real time information, such as news, although in embodiments in accordance with the present disclosure, such applications may be utilized as interfaces to various system management operations. The storage systems described above may also implement AI platforms for delivering on the vision of self-driving storage. Such AI platforms may be configured to deliver global predictive intelligence by collecting and analyzing large amounts of storage system telemetry data points to enable effortless management, analytics and support. In fact, such storage systems may be capable of predicting both capacity and performance, as well as generating intelligent advice on workload deployment, interaction and optimization.
Such AI platforms may be configured to scan all incoming storage system telemetry data against a library of issue fingerprints to predict and resolve incidents in real-time, before they impact customer environments, and to capture hundreds of variables related to performance that are used to forecast performance load. The storage systems described above may support the serialized or simultaneous execution of artificial intelligence applications, machine learning applications, data analytics applications, data transformations, and other tasks that collectively may form an AI ladder. Such an AI ladder may effectively be formed by combining such elements to form a complete data science pipeline, where dependencies exist between elements of the AI ladder. For example, AI may require that some form of machine learning has taken place, machine learning may require that some form of analytics has taken place, analytics may require that some form of data and information architecting has taken place, and so on. As such, each element may be viewed as a rung in an AI ladder that collectively can form a complete and sophisticated AI solution. The storage systems described above may also, either alone or in combination with other computing environments, be used to deliver an AI everywhere experience where AI permeates wide and expansive aspects of business and life. For example, AI may play an important role in the delivery of deep learning solutions, deep reinforcement learning solutions, artificial general intelligence solutions, autonomous vehicles, cognitive computing solutions, commercial UAVs or drones, conversational user interfaces, enterprise taxonomies, ontology management solutions, machine learning solutions, smart dust, smart robots, smart workplaces, and many others. The storage systems described above may also, either alone or in combination with other computing environments, be used to deliver a wide range of transparently immersive experiences (including those that use digital twins of various “things” such as people, places, processes, systems, and so on) where technology can introduce transparency between people, businesses, and things. Such transparently immersive experiences may be delivered as augmented reality technologies, connected homes, virtual reality technologies, brain-computer interfaces, human augmentation technologies, nanotube electronics, volumetric displays, 4D printing technologies, or others. The storage systems described above may also, either alone or in combination with other computing environments, be used to support a wide variety of digital platforms. Such digital platforms can include, for example, 5G wireless systems and platforms, digital twin platforms, edge computing platforms, IoT platforms, quantum computing platforms, serverless PaaS, software-defined security, neuromorphic computing platforms, and so on. The storage systems described above may also be part of a multi-cloud environment in which multiple cloud computing and storage services are deployed in a single heterogeneous architecture. In order to facilitate the operation of such a multi-cloud environment, DevOps tools may be deployed to enable orchestration across clouds. Likewise, continuous development and continuous integration tools may be deployed to standardize processes around continuous integration and delivery, new feature rollout and provisioning cloud workloads.
By standardizing these processes, a multi-cloud strategy may be implemented that enables the utilization of the best provider for each workload. The storage systems described above may be used as a part of a platform to enable the use of crypto-anchors that may be used to authenticate a product's origins and contents to ensure that it matches a blockchain record associated with the product. Similarly, as part of a suite of tools to secure data stored on the storage system, the storage systems described above may implement various encryption technologies and schemes, including lattice cryptography. Lattice cryptography can involve constructions of cryptographic primitives that involve lattices, either in the construction itself or in the security proof. Unlike public-key schemes such as the RSA, Diffie-Hellman or Elliptic-Curve cryptosystems, which are easily attacked by a quantum computer, some lattice-based constructions appear to be resistant to attack by both classical and quantum computers. A quantum computer is a device that performs quantum computing. Quantum computing is computing using quantum-mechanical phenomena, such as superposition and entanglement. Quantum computers differ from traditional computers that are based on transistors, as such traditional computers require that data be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1). In contrast to traditional computers, quantum computers use quantum bits, which can be in superpositions of states. A quantum computer maintains a sequence of qubits, where a single qubit can represent a one, a zero, or any quantum superposition of those two qubit states. A pair of qubits can be in any quantum superposition of 4 states, and three qubits in any superposition of 8 states. A quantum computer with n qubits can generally be in an arbitrary superposition of up to 2^n different states simultaneously, whereas a traditional computer can only be in one of these states at any one time. A quantum Turing machine is a theoretical model of such a computer. The storage systems described above may also be paired with FPGA-accelerated servers as part of a larger AI or ML infrastructure. Such FPGA-accelerated servers may reside near (e.g., in the same data center) the storage systems described above or even be incorporated into an appliance that includes one or more storage systems, one or more FPGA-accelerated servers, networking infrastructure that supports communications between the one or more storage systems and the one or more FPGA-accelerated servers, as well as other hardware and software components. Alternatively, FPGA-accelerated servers may reside within a cloud computing environment that may be used to perform compute-related tasks for AI and ML jobs. Any of the embodiments described above may be used to collectively serve as a FPGA-based AI or ML platform. Readers will appreciate that, in some embodiments of the FPGA-based AI or ML platform, the FPGAs that are contained within the FPGA-accelerated servers may be reconfigured for different types of ML models (e.g., LSTMs, CNNs, GRUs). The ability to reconfigure the FPGAs that are contained within the FPGA-accelerated servers may enable the acceleration of a ML or AI application based on the most optimal numerical precision and memory model being used.
Readers will appreciate that by treating the collection of FPGA-accelerated servers as a pool of FPGAs, any CPU in the data center may utilize the pool of FPGAs as a shared hardware microservice, rather than limiting a server to dedicated accelerators plugged into it. The FPGA-accelerated servers and the GPU-accelerated servers described above may implement a model of computing where, rather than keeping a small amount of data in a CPU and running a long stream of instructions over it as occurred in more traditional computing models, the machine learning model and parameters are pinned into the high-bandwidth on-chip memory with lots of data streaming through the high-bandwidth on-chip memory. FPGAs may even be more efficient than GPUs for this computing model, as the FPGAs can be programmed with only the instructions needed to run this kind of computing model. The storage systems described above may be configured to provide parallel storage, for example, through the use of a parallel file system such as BeeGFS. Such parallel file systems may include a distributed metadata architecture. For example, the parallel file system may include a plurality of metadata servers across which metadata is distributed, as well as components that include services for clients and storage servers. The systems described above can support the execution of a wide array of software applications. Such software applications can be deployed in a variety of ways, including container-based deployment models. Containerized applications may be managed using a variety of tools. For example, containerized applications may be managed using Docker Swarm, Kubernetes, and others. Containerized applications may be used to facilitate a serverless, cloud native computing deployment and management model for software applications. In support of a serverless, cloud native computing deployment and management model for software applications, containers may be used as part of an event handling mechanism (e.g., AWS Lambdas) such that various events cause a containerized application to be spun up to operate as an event handler. The systems described above may be deployed in a variety of ways, including being deployed in ways that support fifth generation (‘5G’) networks. 5G networks may support substantially faster data communications than previous generations of mobile communications networks and, as a consequence, may lead to the disaggregation of data and computing resources as modern massive data centers may become less prominent and may be replaced, for example, by more-local, micro data centers that are close to the mobile-network towers. The systems described above may be included in such local, micro data centers and may be part of or paired to multi-access edge computing (‘MEC’) systems. Such MEC systems may enable cloud computing capabilities and an IT service environment at the edge of the cellular network. By running applications and performing related processing tasks closer to the cellular customer, network congestion may be reduced and applications may perform better. The storage systems described above may also be configured to implement NVMe Zoned Namespaces. Through the use of NVMe Zoned Namespaces, the logical address space of a namespace is divided into zones.
Each zone provides a logical block address range that must be written sequentially and explicitly reset before rewriting, thereby enabling the creation of namespaces that expose the natural boundaries of the device and offload management of internal mapping tables to the host. In order to implement NVMe Zoned Name Spaces (‘ZNS’), ZNS SSDs or some other form of zoned block devices may be utilized that expose a namespace logical address space using zones. With the zones aligned to the internal physical properties of the device, several inefficiencies in the placement of data can be eliminated. In such embodiments, each zone may be mapped, for example, to a separate application such that functions like wear levelling and garbage collection could be performed on a per-zone or per-application basis rather than across the entire device. In order to support ZNS, the storage controllers described herein may be configured to interact with zoned block devices through the usage of, for example, the Linux™ kernel zoned block device interface or other tools. The storage systems described above may also be configured to implement zoned storage in other ways such as, for example, through the usage of shingled magnetic recording (SMR) storage devices. In examples where zoned storage is used, device-managed embodiments may be deployed where the storage devices hide this complexity by managing it in the firmware, presenting an interface like any other storage device. Alternatively, zoned storage may be implemented via a host-managed embodiment that depends on the operating system to know how to handle the drive, and only write sequentially to certain regions of the drive. Zoned storage may similarly be implemented using a host-aware embodiment in which a combination of a drive managed and host managed implementation is deployed. The storage systems described herein may be used to form a data lake. A data lake may operate as the first place that an organization's data flows to, where such data may be in a raw format. Metadata tagging may be implemented to facilitate searches of data elements in the data lake, especially in embodiments where the data lake contains multiple stores of data, in formats not easily accessible or readable (e.g., unstructured data, semi-structured data, structured data). From the data lake, data may go downstream to a data warehouse where data may be stored in a more processed, packaged, and consumable format. The storage systems described above may also be used to implement such a data warehouse. In addition, a data mart or data hub may allow for data that is even more easily consumed, where the storage systems described above may also be used to provide the underlying storage resources necessary for a data mart or data hub. In embodiments, queries against the data lake may require a schema-on-read approach, where data is applied to a plan or schema as it is pulled out of a stored location, rather than as it goes into the stored location. The storage systems described herein may also be configured to implement a recovery point objective (‘RPO’), which may be established by a user, established by an administrator, established as a system default, established as part of a storage class or service that the storage system is participating in the delivery of, or in some other way.
A “recovery point objective” is a goal for the maximum time difference between the last update to a source dataset and the last recoverable replicated dataset update that would be correctly recoverable, given a reason to do so, from a continuously or frequently updated copy of the source dataset. An update is correctly recoverable if it properly takes into account all updates that were processed on the source dataset prior to the last recoverable replicated dataset update. In synchronous replication, the RPO would be zero, meaning that under normal operation, all completed updates on the source dataset should be present and correctly recoverable on the copy dataset. In best effort nearly synchronous replication, the RPO can be as low as a few seconds. In snapshot-based replication, the RPO can be roughly calculated as the interval between snapshots plus the time to transfer the modifications between a previous already transferred snapshot and the most recent to-be-replicated snapshot. If updates accumulate faster than they are replicated, then an RPO can be missed. If more data to be replicated accumulates between two snapshots, for snapshot-based replication, than can be replicated between taking the snapshot and replicating that snapshot's cumulative updates to the copy, then the RPO can be missed. If, again in snapshot-based replication, data to be replicated accumulates at a faster rate than could be transferred in the time between subsequent snapshots, then replication can start to fall further behind which can extend the miss between the expected recovery point objective and the actual recovery point that is represented by the last correctly replicated update. The storage systems described above may also be part of a shared nothing storage cluster. In a shared nothing storage cluster, each node of the cluster has local storage and communicates with other nodes in the cluster through networks, where the storage used by the cluster is (in general) provided only by the storage connected to each individual node. A collection of nodes that are synchronously replicating a dataset may be one example of a shared nothing storage cluster, as each storage system has local storage and communicates to other storage systems through a network, where those storage systems do not (in general) use storage from somewhere else that they share access to through some kind of interconnect. In contrast, some of the storage systems described above are themselves built as a shared-storage cluster, since there are drive shelves that are shared by the paired controllers. Other storage systems described above, however, are built as a shared nothing storage cluster, as all storage is local to a particular node (e.g., a blade) and all communication is through networks that link the compute nodes together. In other embodiments, other forms of a shared nothing storage cluster can include embodiments where any node in the cluster has a local copy of all storage they need, and where data is mirrored through a synchronous style of replication to other nodes in the cluster either to ensure that the data isn't lost or because other nodes are also using that storage. In such an embodiment, if a new cluster node needs some data, that data can be copied to the new node from other nodes that have copies of the data. 
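The rough calculation described above for snapshot-based replication can be written out explicitly. The figures in the example below are arbitrary and serve only to show how the estimate behaves when updates accumulate faster than they can be transferred between snapshots.

```python
def snapshot_rpo_estimate(snapshot_interval_s: float, transfer_time_s: float) -> float:
    """Roughly, the achievable RPO for snapshot-based replication is the interval
    between snapshots plus the time to transfer the modifications accumulated
    since the previous already-transferred snapshot."""
    return snapshot_interval_s + transfer_time_s

def rpo_missed(accumulated_bytes: float, replication_bandwidth_bytes_per_s: float,
               snapshot_interval_s: float, target_rpo_s: float) -> bool:
    """If updates accumulate faster than they can be replicated, the actual
    recovery point falls behind the objective."""
    transfer_time_s = accumulated_bytes / replication_bandwidth_bytes_per_s
    return snapshot_rpo_estimate(snapshot_interval_s, transfer_time_s) > target_rpo_s

# Example: 5-minute snapshots and a 10 MB/s replication link against a 10-minute objective.
print(rpo_missed(2e9, 10e6, 300.0, 600.0))  # False: 300 s + 200 s = 500 s meets the objective
print(rpo_missed(5e9, 10e6, 300.0, 600.0))  # True: 300 s + 500 s = 800 s misses the objective
```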
In some embodiments, mirror-copy-based shared storage clusters may store multiple copies of all the cluster's stored data, with each subset of data replicated to a particular set of nodes, and different subsets of data replicated to different sets of nodes. In some variations, embodiments may store all of the cluster's stored data in all nodes, whereas in other variations nodes may be divided up such that a first set of nodes will all store the same set of data and a second, different set of nodes will all store a different set of data. Readers will appreciate that RAFT-based databases (e.g., etcd) may operate like shared-nothing storage clusters where all RAFT nodes store all data. The amount of data stored in a RAFT cluster, however, may be limited so that extra copies don't consume too much storage. A container server cluster might also be able to replicate all data to all cluster nodes, presuming the containers don't tend to be too large and their bulk data (the data manipulated by the applications that run in the containers) is stored elsewhere such as in an S3 cluster or an external file server. In such an example, the container storage may be provided by the cluster directly through its shared-nothing storage model, with those containers providing the images that form the execution environment for parts of an application or service. For further explanation,FIG.3Dillustrates an exemplary computing device350that may be specifically configured to perform one or more of the processes described herein. As shown inFIG.3D, computing device350may include a communication interface352, a processor354, a storage device356, and an input/output (“I/O”) module358communicatively connected one to another via a communication infrastructure360. While an exemplary computing device350is shown inFIG.3D, the components illustrated inFIG.3Dare not intended to be limiting. Additional or alternative components may be used in other embodiments. Components of computing device350shown inFIG.3Dwill now be described in additional detail. Communication interface352may be configured to communicate with one or more computing devices. Examples of communication interface352include, without limitation, a wired network interface (such as a network interface card), a wireless network interface (such as a wireless network interface card), a modem, an audio/video connection, and any other suitable interface. Processor354generally represents any type or form of processing unit capable of processing data and/or interpreting, executing, and/or directing execution of one or more of the instructions, processes, and/or operations described herein. Processor354may perform operations by executing computer-executable instructions362(e.g., an application, software, code, and/or other executable data instance) stored in storage device356. Storage device356may include one or more data storage media, devices, or configurations and may employ any type, form, and combination of data storage media and/or device. For example, storage device356may include, but is not limited to, any combination of the non-volatile media and/or volatile media described herein. Electronic data, including data described herein, may be temporarily and/or permanently stored in storage device356. For example, data representative of computer-executable instructions362configured to direct processor354to perform any of the operations described herein may be stored within storage device356. 
In some examples, data may be arranged in one or more databases residing within storage device356. I/O module358may include one or more I/O modules configured to receive user input and provide user output. I/O module358may include any hardware, firmware, software, or combination thereof supportive of input and output capabilities. For example, I/O module358may include hardware and/or software for capturing user input, including, but not limited to, a keyboard or keypad, a touchscreen component (e.g., touchscreen display), a receiver (e.g., an RF or infrared receiver), motion sensors, and/or one or more input buttons. I/O module358may include one or more devices for presenting output to a user, including, but not limited to, a graphics engine, a display (e.g., a display screen), one or more output drivers (e.g., display drivers), one or more audio speakers, and one or more audio drivers. In certain embodiments, I/O module358is configured to provide graphical data to a display for presentation to a user. The graphical data may be representative of one or more graphical user interfaces and/or any other graphical content as may serve a particular implementation. In some examples, any of the systems, computing devices, and/or other components described herein may be implemented by computing device350. For further explanation,FIG.3Eillustrates an example of a fleet of storage systems376for providing storage services (also referred to herein as ‘data services’). The fleet of storage systems376depicted inFIG.3includes a plurality of storage systems374a,374b,374c,374d,374nthat may each be similar to the storage systems described herein. The storage systems374a,374b,374c,374d,374nin the fleet of storage systems376may be embodied as identical storage systems or as different types of storage systems. For example, two of the storage systems374a,374ndepicted inFIG.3Eare depicted as being cloud-based storage systems, as the resources that collectively form each of the storage systems374a,374nare provided by distinct cloud services providers370,372. For example, the first cloud services provider370may be Amazon AWS™ whereas the second cloud services provider372is Microsoft Azure™, although in other embodiments one or more public clouds, private clouds, or combinations thereof may be used to provide the underlying resources that are used to form a particular storage system in the fleet of storage systems376. The example depicted inFIG.3Eincludes an edge management service382for delivering storage services in accordance with some embodiments of the present disclosure. The storage services (also referred to herein as ‘data services’) that are delivered may include, for example, services to provide a certain amount of storage to a consumer, services to provide storage to a consumer in accordance with a predetermined service level agreement, services to provide storage to a consumer in accordance with predetermined regulatory requirements, and many others. The edge management service382depicted inFIG.3Emay be embodied, for example, as one or more modules of computer program instructions executing on computer hardware such as one or more computer processors. Alternatively, the edge management service382may be embodied as one or more modules of computer program instructions executing on a virtualized execution environment such as one or more virtual machines, in one or more containers, or in some other way. 
In other embodiments, the edge management service382may be embodied as a combination of the embodiments described above, including embodiments where the one or more modules of computer program instructions that are included in the edge management service382are distributed across multiple physical or virtual execution environments. The edge management service382may operate as a gateway for providing storage services to storage consumers, where the storage services leverage storage offered by one or more storage systems374a,374b,374c,374d,374n. For example, the edge management service382may be configured to provide storage services to host devices378a,378b,378c,378d,378nthat are executing one or more applications that consume the storage services. In such an example, the edge management service382may operate as a gateway between the host devices378a,378b,378c,378d,378nand the storage systems374a,374b,374c,374d,374n, rather than requiring that the host devices378a,378b,378c,378d,378ndirectly access the storage systems374a,374b,374c,374d,374n. The edge management service382ofFIG.3Eexposes a storage services module380to the host devices378a,378b,378c,378d,378nofFIG.3E, although in other embodiments the edge management service382may expose the storage services module380to other consumers of the various storage services. The various storage services may be presented to consumers via one or more user interfaces, via one or more APIs, or through some other mechanism provided by the storage services module380. As such, the storage services module380depicted inFIG.3Emay be embodied as one or more modules of computer program instructions executing on physical hardware, on a virtualized execution environment, or combinations thereof, where executing such modules enables a consumer of storage services to be offered, to select, and to access the various storage services. The edge management service382ofFIG.3Ealso includes a system management services module384. The system management services module384ofFIG.3Eincludes one or more modules of computer program instructions that, when executed, perform various operations in coordination with the storage systems374a,374b,374c,374d,374nto provide storage services to the host devices378a,378b,378c,378d,378n. The system management services module384may be configured, for example, to perform tasks such as provisioning storage resources from the storage systems374a,374b,374c,374d,374nvia one or more APIs exposed by the storage systems374a,374b,374c,374d,374n, migrating datasets or workloads amongst the storage systems374a,374b,374c,374d,374nvia one or more APIs exposed by the storage systems374a,374b,374c,374d,374n, setting one or more tunable parameters (i.e., one or more configurable settings) on the storage systems374a,374b,374c,374d,374nvia one or more APIs exposed by the storage systems374a,374b,374c,374d,374n, and so on. For example, many of the services described below relate to embodiments where the storage systems374a,374b,374c,374d,374nare configured to operate in some way. In such examples, the system management services module384may be responsible for using APIs (or some other mechanism) provided by the storage systems374a,374b,374c,374d,374nto configure the storage systems374a,374b,374c,374d,374nto operate in the ways described below. In addition to configuring the storage systems374a,374b,374c,374d,374n, the edge management service382itself may be configured to perform various tasks required to provide the various storage services. 
Consider an example in which the storage service includes a service that, when selected and applied, causes personally identifiable information (PII) contained in a dataset to be obfuscated when the dataset is accessed. In such an example, the storage systems374a,374b,374c,374d,374nmay be configured to obfuscate PII when servicing read requests directed to the dataset. Alternatively, the storage systems374a,374b,374c,374d,374nmay service reads by returning data that includes the PII, but the edge management service382itself may obfuscate the PII as the data is passed through the edge management service382on its way from the storage systems374a,374b,374c,374d,374nto the host devices378a,378b,378c,378d,378n. The storage systems374a,374b,374c,374d,374ndepicted inFIG.3Emay be embodied as one or more of the storage systems described above with reference toFIGS.1A-3D, including variations thereof. In fact, the storage systems374a,374b,374c,374d,374nmay serve as a pool of storage resources where the individual components in that pool have different performance characteristics, different storage characteristics, and so on. For example, one of the storage systems374amay be a cloud-based storage system, another storage system374bmay be a storage system that provides block storage, another storage system374cmay be a storage system that provides file storage, another storage system374dmay be a relatively high-performance storage system while another storage system374nmay be a relatively low-performance storage system, and so on. In alternative embodiments, only a single storage system may be present. The storage systems374a,374b,374c,374d,374ndepicted inFIG.3Emay also be organized into different failure domains so that the failure of one storage system374ashould be totally unrelated to the failure of another storage system374b. For example, each of the storage systems may receive power from independent power systems, each of the storage systems may be coupled for data communications over independent data communications networks, and so on. Furthermore, the storage systems in a first failure domain may be accessed via a first gateway whereas storage systems in a second failure domain may be accessed via a second gateway. For example, the first gateway may be a first instance of the edge management service382and the second gateway may be a second instance of the edge management service382, including embodiments where each instance is distinct, or each instance is part of a distributed edge management service382. As an illustrative example of available storage services, storage services may be presented to a user that are associated with different levels of data protection. For example, storage services may be presented to the user that, when selected and enforced, guarantee the user that data associated with that user will be protected such that various recovery point objectives (‘RPO’) can be guaranteed. A first available storage service may ensure, for example, that some dataset associated with the user will be protected such that any data that is more than 5 seconds old can be recovered in the event of a failure of the primary data store whereas a second available storage service may ensure that the dataset that is associated with the user will be protected such that any data that is more than 5 minutes old can be recovered in the event of a failure of the primary data store. 
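To make the relationship between a replication schedule and an advertised recovery point objective concrete, the following minimal Python sketch estimates the RPO achievable with snapshot-based replication (roughly, the snapshot interval plus the time to transfer the changes, as discussed earlier) and checks it against a service tier. The function names and numbers are illustrative assumptions only.

# Illustrative sketch only: estimating whether a snapshot-based replication schedule
# can honor an advertised recovery point objective. Function and variable names are
# assumptions made for this example, not part of any product API.

def estimated_rpo_seconds(snapshot_interval_s: float, transfer_time_s: float) -> float:
    """Roughly, the RPO for snapshot-based replication is the snapshot interval
    plus the time needed to transfer the changes captured by a snapshot."""
    return snapshot_interval_s + transfer_time_s


def meets_service_tier(snapshot_interval_s: float, transfer_time_s: float,
                       tier_rpo_s: float) -> bool:
    return estimated_rpo_seconds(snapshot_interval_s, transfer_time_s) <= tier_rpo_s


# A 5 minute tier (300 s) can tolerate 4 minute snapshots with 30 s transfers,
# while a 5 second tier clearly cannot be met by snapshot-based replication alone.
print(meets_service_tier(240, 30, 300))  # True
print(meets_service_tier(240, 30, 5))    # False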
An additional example of storage services that may be presented to a user, selected by a user, and ultimately applied to a dataset associated with the user can include one or more data compliance services. Such data compliance services may be embodied, for example, as services that may be provided to consumers (i.e., a user) of the data compliance services to ensure that the user's datasets are managed in a way that adheres to various regulatory requirements. For example, one or more data compliance services may be offered to a user to ensure that the user's datasets are managed in a way so as to adhere to the General Data Protection Regulation (‘GDPR’), one or more data compliance services may be offered to a user to ensure that the user's datasets are managed in a way so as to adhere to the Sarbanes-Oxley Act of 2002 (‘SOX’), or one or more data compliance services may be offered to a user to ensure that the user's datasets are managed in a way so as to adhere to some other regulatory act. In addition, the one or more data compliance services may be offered to a user to ensure that the user's datasets are managed in a way so as to adhere to some non-governmental guidance (e.g., to adhere to best practices for auditing purposes), the one or more data compliance services may be offered to a user to ensure that the user's datasets are managed in a way so as to adhere to a particular client's or organization's requirements, and so on. Consider an example in which a particular data compliance service is designed to ensure that a user's datasets are managed in a way so as to adhere to the requirements set forth in the GDPR. While a listing of all requirements of the GDPR can be found in the regulation itself, for the purposes of illustration, an example requirement set forth in the GDPR requires that pseudonymization processes must be applied to stored data in order to transform personal data in such a way that the resulting data cannot be attributed to a specific data subject without the use of additional information. For example, data encryption techniques can be applied to render the original data unintelligible, and such data encryption techniques cannot be reversed without access to the correct decryption key. As such, the GDPR may require that the decryption key be kept separately from the pseudonymised data. One particular data compliance service may be offered to ensure adherence to the requirements set forth in this paragraph. In order to provide this particular data compliance service, the data compliance service may be presented to a user (e.g., via a GUI) and selected by the user. In response to receiving the selection of the particular data compliance service, one or more storage services policies may be applied to a dataset associated with the user to carry out the particular data compliance service. For example, a storage services policy may be applied requiring that the dataset be encrypted prior to being stored in a storage system, prior to being stored in a cloud environment, or prior to being stored elsewhere. In order to enforce this policy, not only may a requirement be enforced that the dataset be encrypted when stored, but a requirement may also be put in place requiring that the dataset be encrypted prior to transmitting the dataset (e.g., sending the dataset to another party). In such an example, a storage services policy may also be put in place requiring that any encryption keys used to encrypt the dataset are not stored on the same system that stores the dataset itself. 
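The policy just described, encrypting a dataset before it is stored and keeping the key on a separate system, can be sketched as follows. This example assumes the third-party Python cryptography package and uses plain dictionaries as stand-ins for the storage system and a separate key management system; it illustrates the policy rather than implementing any particular compliance service.

# Illustrative sketch only: enforce that a dataset is encrypted before it is stored
# and that the key is held on a separate system. Assumes the third-party
# "cryptography" package; data_store and key_store are illustrative stand-ins.
from cryptography.fernet import Fernet


def store_dataset_encrypted(dataset: bytes, dataset_id: str,
                            data_store: dict, key_store: dict) -> None:
    key = Fernet.generate_key()
    token = Fernet(key).encrypt(dataset)
    data_store[dataset_id] = token   # ciphertext only on the storage system
    key_store[dataset_id] = key      # key held by a separate key management system


def read_dataset(dataset_id: str, data_store: dict, key_store: dict) -> bytes:
    return Fernet(key_store[dataset_id]).decrypt(data_store[dataset_id])


data_store, key_store = {}, {}
store_dataset_encrypted(b"name=Jane Doe", "ds-1", data_store, key_store)
assert read_dataset("ds-1", data_store, key_store) == b"name=Jane Doe"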
Readers will appreciate that many other forms of data compliance services may be offered and implemented in accordance with embodiments of the present disclosure. The storage systems374a,374b,374c,374d,374nin the fleet of storage systems376may be managed collectively, for example, by one or more fleet management modules. The fleet management modules may be part of or separate from the system management services module384depicted inFIG.3E. The fleet management modules may perform tasks such as monitoring the health of each storage system in the fleet, initiating updates or upgrades on one or more storage systems in the fleet, migrating workloads for load balancing or other performance purposes, and many other tasks. As such, and for many other reasons, the storage systems374a,374b,374c,374d,374nmay be coupled to each other via one or more data communications links in order to exchange data between the storage systems374a,374b,374c,374d,374n. The storage systems described herein may support various forms of data replication. For example, two or more of the storage systems may synchronously replicate a dataset between each other. In synchronous replication, distinct copies of a particular dataset may be maintained by multiple storage systems, but all accesses (e.g., a read) of the dataset should yield consistent results regardless of which storage system the access was directed to. For example, a read directed to any of the storage systems that are synchronously replicating the dataset should return identical results. As such, while updates to the version of the dataset need not occur at exactly the same time, precautions must be taken to ensure consistent accesses to the dataset. For example, if an update (e.g., a write) that is directed to the dataset is received by a first storage system, the update may only be acknowledged as being completed if all storage systems that are synchronously replicating the dataset have applied the update to their copies of the dataset. In such an example, synchronous replication may be carried out through the use of I/O forwarding (e.g., a write received at a first storage system is forwarded to a second storage system), communications between the storage systems (e.g., each storage system indicating that it has completed the update), or in other ways. In other embodiments, a dataset may be replicated through the use of checkpoints. In checkpoint-based replication (also referred to as ‘nearly synchronous replication’), a set of updates to a dataset (e.g., one or more write operations directed to the dataset) may occur between different checkpoints, such that a dataset has been updated to a specific checkpoint only if all updates to the dataset prior to the specific checkpoint have been completed. Consider an example in which a first storage system stores a live copy of a dataset that is being accessed by users of the dataset. In this example, assume that the dataset is being replicated from the first storage system to a second storage system using checkpoint-based replication. For example, the first storage system may send a first checkpoint (at time t=0) to the second storage system, followed by a first set of updates to the dataset, followed by a second checkpoint (at time t=1), followed by a second set of updates to the dataset, followed by a third checkpoint (at time t=2). 
In such an example, if the second storage system has performed all updates in the first set of updates but has not yet performed all updates in the second set of updates, the copy of the dataset that is stored on the second storage system may be up-to-date until the second checkpoint. Alternatively, if the second storage system has performed all updates in both the first set of updates and the second set of updates, the copy of the dataset that is stored on the second storage system may be up-to-date until the third checkpoint. Readers will appreciate that various types of checkpoints may be used (e.g., metadata only checkpoints), checkpoints may be spread out based on a variety of factors (e.g., time, number of operations, an RPO setting), and so on. In other embodiments, a dataset may be replicated through snapshot-based replication (also referred to as ‘asynchronous replication’). In snapshot-based replication, snapshots of a dataset may be sent from a replication source such as a first storage system to a replication target such as a second storage system. In such an embodiment, each snapshot may include the entire dataset or a subset of the dataset such as, for example, only the portions of the dataset that have changed since the last snapshot was sent from the replication source to the replication target. Readers will appreciate that snapshots may be sent on-demand, based on a policy that takes a variety of factors into consideration (e.g., time, number of operations, an RPO setting), or in some other way. The storage systems described above may, either alone or in combination, be configured to serve as a continuous data protection store. A continuous data protection store is a feature of a storage system that records updates to a dataset in such a way that consistent images of prior contents of the dataset can be accessed with a low time granularity (often on the order of seconds, or even less), and stretching back for a reasonable period of time (often hours or days). These allow access to very recent consistent points in time for the dataset, and also allow access to points in time for a dataset that might have just preceded some event that, for example, caused parts of the dataset to be corrupted or otherwise lost, while retaining close to the maximum number of updates that preceded that event. Conceptually, they are like a sequence of snapshots of a dataset taken very frequently and kept for a long period of time, though continuous data protection stores are often implemented quite differently from snapshots. A storage system implementing a continuous data protection store may further provide a means of accessing these points in time, accessing one or more of these points in time as snapshots or as cloned copies, or reverting the dataset back to one of those recorded points in time. Over time, to reduce overhead, some points in time held in a continuous data protection store can be merged with other nearby points in time, essentially deleting some of these points in time from the store. This can reduce the capacity needed to store updates. It may also be possible to convert a limited number of these points in time into longer duration snapshots. For example, such a store might keep a low granularity sequence of points in time stretching back a few hours from the present, with some points in time merged or deleted to reduce overhead for up to an additional day. 
Stretching back in the past further than that, some of these points in time could be converted to snapshots representing consistent point-in-time images from only every few hours. Although some embodiments are described largely in the context of a storage system, readers of skill in the art will recognize that embodiments of the present disclosure may also take the form of a computer program product disposed upon computer readable storage media for use with any suitable processing system. Such computer readable storage media may be any storage medium for machine-readable information, including magnetic media, optical media, solid-state media, or other suitable media. Examples of such media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art. Persons skilled in the art will immediately recognize that any computer system having suitable programming means will be capable of executing the steps described herein as embodied in a computer program product. Persons skilled in the art will recognize also that, although some of the embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware or as hardware are well within the scope of the present disclosure. In some examples, a non-transitory computer-readable medium storing computer-readable instructions may be provided in accordance with the principles described herein. The instructions, when executed by a processor of a computing device, may direct the processor and/or computing device to perform one or more operations, including one or more of the operations described herein. Such instructions may be stored and/or transmitted using any of a variety of known computer-readable media. A non-transitory computer-readable medium as referred to herein may include any non-transitory storage medium that participates in providing data (e.g., instructions) that may be read and/or executed by a computing device (e.g., by a processor of a computing device). For example, a non-transitory computer-readable medium may include, but is not limited to, any combination of non-volatile storage media and/or volatile storage media. Exemplary non-volatile storage media include, but are not limited to, read-only memory, flash memory, a solid-state drive, a magnetic storage device (e.g., a hard disk, a floppy disk, magnetic tape, etc.), ferroelectric random-access memory (“RAM”), and an optical disc (e.g., a compact disc, a digital video disc, a Blu-ray disc, etc.). Exemplary volatile storage media include, but are not limited to, RAM (e.g., dynamic RAM). One or more embodiments may be described herein with the aid of method steps illustrating the performance of specified functions and relationships thereof. The boundaries and sequence of these functional building blocks and method steps have been arbitrarily defined herein for convenience of description. Alternate boundaries and sequences can be defined so long as the specified functions and relationships are appropriately performed. Any such alternate boundaries or sequences are thus within the scope and spirit of the claims. Further, the boundaries of these functional building blocks have been arbitrarily defined for convenience of description. Alternate boundaries could be defined as long as the certain significant functions are appropriately performed. 
Similarly, flow diagram blocks may also have been arbitrarily defined herein to illustrate certain significant functionality. To the extent used, the flow diagram block boundaries and sequence could have been defined otherwise and still perform the certain significant functionality. Such alternate definitions of both functional building blocks and flow diagram blocks and sequences are thus within the scope and spirit of the claims. One of average skill in the art will also recognize that the functional building blocks, and other illustrative blocks, modules and components herein, can be implemented as illustrated or by discrete components, application specific integrated circuits, processors executing appropriate software and the like or any combination thereof. While particular combinations of various functions and features of the one or more embodiments are expressly described herein, other combinations of these features and functions are likewise possible. The present disclosure is not limited by the particular examples disclosed herein and expressly incorporates these other combinations. For further explanation,FIG.4sets forth a flow chart illustrating an example method of storage cache management according to some embodiments of the present disclosure. The example method depicted inFIG.4may be implemented in one or more storage systems such as the storage systems described above with references toFIGS.1A-3E. In particular, the example method depicted inFIG.4(as well as example methods described in other figures) may be implemented in a storage system that includes, among other memory and storage resources, Intel Optane memory or other byte addressable and/or high-performance non-volatile memory technologies. For example, the methods may be implemented in a storage system that includes Optane-based SSDs. In such embodiments, while reading data from such Optane-based SSDs may be relatively fast, reading data from such Optane-based SSDs may require a large amount of processor or I/O memory bus resources relative to reading data from DRAM. As such, it may be desirable to use DRAM as a cache where reading data from the DRAM cache is preferable to reading data from Optane-based SSDs in order to conserve processing or other resources. Although not explicitly depicted inFIG.4, the example method depicted inFIG.4may be carried out by one or more of the storage system controllers in the storage system. The method ofFIG.4includes identifying402, among a plurality of storage items that are stored in the storage system, storage items having an access count404above a first threshold406to generate a set of storage items408. Identifying402storage items having an access count404above a first threshold406may be carried out, for example, by comparing the access count404for each storage item stored in the storage system to the first threshold406. In order to facilitate such a comparison, each time a storage system400accesses (e.g., reads) a storage item such as a data block, the storage system400may increment a counter for the storage item. The counter may be stored as metadata that is associated with the storage item, stored in a separate data structure, or retained in some other way. Storage items having an access count404exceeding the first threshold406may then be flagged or otherwise identified as having an access count404above a first threshold406. In some examples, the access count does not need to be exact. 
For example, if a storage system fails to increment the counter in response to accessing a storage item, the storage system may still compare the access count with the first threshold. In some examples, a separate data structure may be created for the set of storage items408, where the data structure includes the access count404associated with each storage item included in the set of storage items408. The first threshold406may be a set value, or in other examples, the first threshold406may be selected by an administrator or other user of the storage system400, or may be calculated by the storage system400. The first threshold406may be related to the size of a cache for the storage system400or other feature of the storage system400. Additionally, the first threshold406may be related to the access count404of each item currently in the cache. For example, the first threshold406may be set to be equal to the lowest access count404of the storage items in the cache. Thus, only storage items having an access count404greater than that of the storage items in the cache would be included in the set of storage items408. The method ofFIG.4also includes identifying410, among the set of storage items408(i.e., the set of storage items408having an access count404above a first threshold406), storage items having an updated access count412above a second threshold414to generate a subset of storage items416. For each storage item in the set of storage items408, the updated access count412is dependent upon a number of accesses subsequent to generating the set of storage items408. Stated differently, the updated access count412can represent the number of times that a storage item was accessed (e.g., read) after the point-in-time where the storage item was included in the set of storage items408whose initial access count exceeded a first threshold406. In the example method depicted inFIG.4, identifying410storage items having an updated access count412above the second threshold414may be carried out by analyzing the updated access count412for each storage item included in the set of storage items408to determine which storage items have an updated access count412that exceeds the second threshold414. Given that the updated access count412can be dependent on the number of data accesses to a storage item in the set of storage items408after the set of storage items408is generated, the updated access count412may be tracked by a counter for each storage item that is incremented each time that the particular storage item is accessed after the set of storage items was generated. In such an example, the updated access count412may therefore be a number that is different from the initial access count404. In fact, while an initial burst of accesses may cause a particular storage item to have a high initial access count404, a storage item that has an initial access count404that is above a first threshold406as well as an updated access count412above the second threshold414indicates more sustained frequent access to the storage item, especially relative to a storage item that only has an initial access count404that is above a first threshold406but does not have an updated access count412above the second threshold414. In the example method depicted inFIG.4, the second threshold414may be a set value, or in other examples, the second threshold414may be selected by an administrator or other user of the storage system400, or even calculated by the storage system400itself. 
Additionally, in some examples the second threshold414may be related to the access count404of at least one storage item currently in the cache. For example, the second threshold414may be set to be equal to the lowest access count404of the storage items in the cache. Thus, only storage items having an access count404greater than that of the storage items in the cache would be included in the subset of storage items416. In some examples, a separate data structure may be created for the subset of storage items416to include the updated access count412for each storage item and an identity of each storage item included in the subset of storage items416. Readers will appreciate that in some embodiments the first threshold406and the second threshold414may have values that are dependent on one another whereas in other embodiments the first threshold406and the second threshold414may be independent of one another. For example, the first threshold406and the second threshold414may have the same value. Or in some examples the second threshold414may be a fixed ratio of the first threshold406. In still other examples, the second threshold414may be a value unrelated to the first threshold406. The example method depicted inFIG.4also includes adding418at least one storage item of the subset of storage items416to a cache. Adding418at least one storage item of the subset of storage items416may be carried out, for example, by copying a storage item from persistent storage to cache memory. For example, if the storage item is stored on an SSD in the storage system and the cache is embodied as random access memory, the storage item may be copied from the SSD to the random access memory. In another example, the storage item may be stored at a remote location and copying the storage item from persistent storage to cache memory may comprise receiving the storage item from remote storage and copying the storage item to random access memory. In some examples, at least one storage item in the subset of storage items416may be copied as part of a larger storage operation. For example, the storage system may copy storage items near the storage item in the subset of storage items416. For example, the storage system may copy a larger unit of storage that includes the storage item to the cache rather than just the storage item. For further explanation, the method ofFIG.5sets forth a flow chart illustrating another example method of storage cache management according to embodiments of the present disclosure. The method ofFIG.5is similar to the method ofFIG.4in that the method ofFIG.5also includes identifying402storage items having an access count404above a first threshold406to generate a set of storage items408, identifying410storage items having an updated access count412above a second threshold414to generate a subset of storage items416, and adding418the storage items of the subset of storage items416to a cache. The method ofFIG.5differs from the method ofFIG.4in that the method ofFIG.5also includes, subsequent to generating the set of storage items408and prior to identifying410the subset of storage items416, decaying502the access count404for at least one storage item of the plurality of storage items to generate a decayed access count504for at least one storage item of the plurality of storage items. 
Decaying502the access count404for at least one storage item of the plurality of storage items may be carried out by modifying the access count404for each storage item to reduce the access count404associated with a respective storage item. Different methods for decaying502the access count404for at least one storage item of the plurality of storage items may be implemented. In one example, the access count404for a storage item may be decayed by setting the access count404to zero. In another example, each access count404may decay by subtracting a set amount from a respective access count404. In yet another example, each access count404may be decayed proportionally. For example, each access count404may be halved. In some examples, the access count404for each of the plurality of storage items is decayed, while in other examples, the access count404may be decayed only for those storage items included in the set of storage items408. In some examples, the parameters to the decay function may be constants or values selected by a user. In other examples, the amount of decay may be dependent upon an operating condition of the storage system400. For example, an amount to subtract for each access count404may be dependent upon an amount of time since the cache was last updated or some other value. In such examples, as the time between cache updates increases, the amount of decay will increase as well. The method ofFIG.5also differs from the method ofFIG.4in that the method ofFIG.5also includes incrementing506the decayed access count504for at least one storage item of the plurality of storage items to generate the updated access count412. Incrementing506the decayed access count504may be carried out by tracking access to each of the storage items and incrementing a counter each time a storage item is accessed. For example, each time a storage item is read, either from local or remote storage, the access counter may be updated. In some examples, if a storage item is read from a cache, the corresponding storage item in storage may have an access count404associated with the storage item updated to reflect that the storage item is being read, even though the storage item is being read from the cache instead of storage. By decaying502the access count404and then incrementing506the decayed access count504for subsequent accesses, the method biases the cache toward recent storage accesses. If a storage item was accessed frequently in the past, but not accessed since the counter was decayed, the access counter for that storage item will be relatively low compared to storage items that were accessed recently. The amount of decay may determine the bias for recent accesses. For consideration of only recent accesses, the counters may be zeroed so that no prior accesses are considered. Or in other examples, the counters may be multiplied by a set value. Such a method may take into account past accesses weighted according to the set value. For further explanation, the method ofFIG.6sets forth a flow chart illustrating another example method of storage cache management according to embodiments of the present disclosure. 
The method ofFIG.6is similar to the methods ofFIGS.4-5in that the method ofFIG.6also includes identifying402storage items having an access count404above a first threshold406to generate a set of storage items408, identifying410storage items having an updated access count412above a second threshold414to generate a subset of storage items416, and adding418the storage items of the subset of storage items416to a cache. The method depicted inFIG.6is also similar to the method ofFIG.5in that it also includes, subsequent to generating the set of storage items408and prior to identifying the subset of storage items416, decaying502the access count404for at least one of the plurality of storage items to generate a decayed access count504for each of the plurality of storage items and incrementing506the decayed access count504for each of the plurality of storage items to generate the updated access count412. The method ofFIG.6differs from the method ofFIG.5in that, in the method ofFIG.6, decaying502the access count404for at least one storage item of the plurality of storage items to generate a decayed access count504for at least one storage item of the plurality of storage items includes zeroing602the access count404for at least one storage item of the plurality of storage items. Zeroing602the access count404may be carried out by setting a value for each counter to zero. Setting the access count404to zero allows the storage system400to consider only the access count404that accrues after generating the set of storage items408. Thus, the historical access of the storage items would not be considered when generating the subset of storage items416. For further explanation, the method ofFIG.7sets forth a flow chart illustrating another example method of storage cache management according to embodiments of the present disclosure. The method ofFIG.7is similar to the methods described herein in that the method ofFIG.7also includes identifying402storage items having an access count404above a first threshold406, identifying410storage items having an updated access count412above a second threshold414, and adding418the storage items of the subset of storage items416to a cache. The method ofFIG.7differs from other methods described in this disclosure in that, in the method ofFIG.7, identifying402storage items to generate a set of storage items is performed at an interval independent of access to the storage items702by a storage system400. An interval independent of access to the storage items702by the storage system400may be defined as an interval that is unrelated to a particular access operation (e.g., a read) of the storage items, but is instead part of a regular storage operation by the storage system400regardless of the particular access operations that are serviced by the storage system. In some examples the interval independent of access to the storage items702may be constant, or in other examples, the interval independent of access to the storage items702may vary based on a characteristic of the storage system400. For example, the interval independent of access to the storage items702may vary dynamically based on a current load of the storage system400. For further explanation, the method ofFIG.8sets forth a flow chart illustrating another example method of storage cache management according to embodiments of the present disclosure. 
The method ofFIG.8is similar to other methods described in the present disclosure in that the method ofFIG.8also includes identifying402storage items having an access count404above a first threshold406, identifying410storage items having an updated access count412above a second threshold414, and adding418the storage items of the subset of storage items416to a cache, where identifying402storage items to generate a set of storage items408is performed at an interval702independent of access to the storage items by a storage system. The method ofFIG.8differs from the methods described above in that, in the method ofFIG.8, the interval is a set time interval802. A set time interval802is a period of time between a first generation of a set of storage items408and a second generation of a set of storage items408. For example, a user or administrator may specify a set time interval802between generation of a set of storage items408. A higher set time interval802will result in fewer updates to the cache but will reduce processor usage while a lower interval may result in more frequent updates to the cache and increased processor usage. In one example, the set time interval802may be within the range of ten seconds to thirty seconds. In some examples, the set time interval802may be set dynamically. For example, during periods of high processor usage the set time interval802may increase while during periods of low processor usage the set time interval802may decrease. Or, in another example, the set time interval802may vary dynamically based on the number of storage items added to the cache previously. If a large number of storage items were previously added to the cache, the set time interval802may be decreased while if a low number of storage items were previously added the set time interval802may be increased. A time at which the subset of storage items416is generated may be spaced apart from a time that the set of storage items408is generated by an interval less than the set time interval802. For example, if the set time interval802is thirty seconds, then the generation of the subset of storage items416may be performed less than thirty seconds after the generation of the set of storage items408. In some examples, the interval between generating the set of storage items408and the generation of the subset of storage items416may be proportional to the set time interval802. For example, if the set time interval802is 30 seconds and the proportion is one third, then the subset of storage items416may be generated ten seconds after the set of storage items408is generated. For further explanation, the method ofFIG.9sets forth a flow chart illustrating another example method of storage cache management according to embodiments of the present disclosure. 
The method ofFIG.9is similar to the method ofFIG.7in that the method ofFIG.9also includes identifying402, among a plurality of storage items, storage items having an access count404above a first threshold406to generate a set of storage items408; identifying410, among the set of storage items408, storage items having an updated access count412above a second threshold414to generate a subset of storage items416, wherein, for each storage item, the updated access count412is dependent upon a number of accesses subsequent to generating the set of storage items408; and adding418the storage items of the subset of storage items416to a cache, wherein identifying402storage items to generate a set of storage items408is performed at an interval702independent of access to the storage items by a storage system. The method ofFIG.9differs from the method ofFIG.7in that, in the method ofFIG.9, the interval702is an interval dependent on cache pressure902. An interval dependent on cache pressure902sets an interval based on how effective the cache is. If the cache is effective such that a relatively high number of input/output operations are directed to the cache, then the interval dependent on cache pressure902may be longer than when the cache is less effective. In some examples, the interval dependent on cache pressure902may be determined based on the number of input/output operations directed to the cache falling below a threshold. For example, as long as the number of input/output operations directed to the cache exceeds the threshold, there is no need to look for additional storage items to load to the cache. However, once the number of input/output operations falls below the threshold, the cache may benefit from being updated using the described method. Traditional cache systems may be designed to reduce input/output latency when requesting data from a lower tier of storage. This reduction in latency may come at the price of CPU load as the CPU attempts to predict what data will be accessed soon. In the embodiments of the present disclosure, the management of the storage cache is performed to reduce CPU load rather than to minimize I/O latency. In place of focusing on what will be accessed soon, embodiments of the present disclosure focus on storage items that will be accessed repeatedly. The embodiments do so in a manner that is efficient for the CPU so that the CPU usage can be reduced. One or more embodiments may be described herein with the aid of method steps illustrating the performance of specified functions and relationships thereof. The boundaries and sequence of these functional building blocks and method steps have been arbitrarily defined herein for convenience of description. Alternate boundaries and sequences can be defined so long as the specified functions and relationships are appropriately performed. Any such alternate boundaries or sequences are thus within the scope and spirit of the claims. Further, the boundaries of these functional building blocks have been arbitrarily defined for convenience of description. Alternate boundaries could be defined as long as the certain significant functions are appropriately performed. Similarly, flow diagram blocks may also have been arbitrarily defined herein to illustrate certain significant functionality. To the extent used, the flow diagram block boundaries and sequence could have been defined otherwise and still perform the certain significant functionality. 
Such alternate definitions of both functional building blocks and flow diagram blocks and sequences are thus within the scope and spirit of the claims. One of average skill in the art will also recognize that the functional building blocks, and other illustrative blocks, modules and components herein, can be implemented as illustrated or by discrete components, application specific integrated circuits, processors executing appropriate software and the like or any combination thereof. While particular combinations of various functions and features of the one or more embodiments are expressly described herein, other combinations of these features and functions are likewise possible. The present disclosure is not limited by the particular examples disclosed herein and expressly incorporates these other combinations.
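As a consolidated illustration of the storage cache management methods described above with reference toFIGS.4-9, the following minimal Python sketch implements two-threshold identification, an optional decay of access counts, and a refresh intended to be invoked at an interval rather than on every individual access. The class and method names, and the particular decay choice, are assumptions made for this example rather than a definitive implementation of the claimed methods.

# Illustrative sketch only: two-threshold cache admission with optional count decay.
# All names (CacheManager, generate_set, admit_subset) are assumptions for this example.
from collections import defaultdict
from typing import Dict, Set


class CacheManager:
    def __init__(self, first_threshold: float, second_threshold: float,
                 decay_factor: float = 0.0) -> None:
        self.first_threshold = first_threshold
        self.second_threshold = second_threshold
        self.decay_factor = decay_factor          # 0.0 zeroes counts; 0.5 halves them
        self.access_count: Dict[str, float] = defaultdict(float)
        self.candidate_set: Set[str] = set()
        self.cache: Set[str] = set()

    def record_access(self, item_id: str) -> None:
        # Incremented on every access; reads served from the cache can also be counted.
        self.access_count[item_id] += 1

    def generate_set(self) -> None:
        # First pass: items whose access count exceeds the first threshold form the set.
        self.candidate_set = {item for item, count in self.access_count.items()
                              if count > self.first_threshold}
        # Decay the counts so that accesses after this point dominate the second pass.
        for item in self.access_count:
            self.access_count[item] *= self.decay_factor

    def admit_subset(self) -> Set[str]:
        # Second pass, run after further accesses have accumulated: items from the set
        # whose updated count exceeds the second threshold are added to the cache.
        subset = {item for item in self.candidate_set
                  if self.access_count[item] > self.second_threshold}
        self.cache |= subset
        return subset


# generate_set() would typically be triggered at an interval that is independent of any
# individual access (for example, every ten to thirty seconds, or when cache hits fall
# below a threshold), with admit_subset() following a short time later.
manager = CacheManager(first_threshold=3, second_threshold=1)
for _ in range(5):
    manager.record_access("block-A")      # frequently accessed item
manager.record_access("block-B")          # accessed only once
manager.generate_set()                    # block-A enters the candidate set; counts zeroed
manager.record_access("block-A")          # sustained access after the set was generated
manager.record_access("block-A")
print(manager.admit_subset())             # {'block-A'}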
DETAILED DESCRIPTION
A computing system may include processor cores, memory devices, input/output (I/O) devices, accelerators, graphics processing units (GPUs), digital signal processors (DSPs), crypto engines, direct memory access (DMA) engines, and other components that may communicate with one another via one or more interconnect fabrics. The computing system may include a system memory which may be implemented using memory modules comprising DRAMs, SDRAMs, DDRs, SSDs, or other suitable memory devices. The computing system may also include multiple levels of cache memories. Generally, each processor core may include one or more integrated lower-level caches (e.g., Level 1 and Level 2), and a system level cache (e.g., SLC) may be shared by different agents. The agents can be the processor cores, DMA controllers, GPUs, DSPs, or other devices that have access to the system memory. In some implementations, the SLC can be part of an interconnect fabric that is communicatively coupled to all the agents, and may also be called a fabric cache. The SLC can be a higher-level cache, e.g., a Level 3 (L3) cache, or a Level 4 (L4) cache based on the implementation. The SLC can be a direct mapped cache, a set-associative cache, or a fully associative cache. When a memory transaction (e.g., a write transaction or a read transaction) is issued by a requesting agent to access a memory location, the SLC can be checked to determine whether a corresponding entry exists in the SLC (e.g., after a cache-miss on the lower-level caches). If the corresponding entry does not exist in the SLC, a cache line can be allocated to load the contents of the memory location from the system memory to the SLC. When another requesting agent issues a memory write transaction to access the same memory location at a later point in time, the memory write transaction can be performed on the cache line at the corresponding memory location in the SLC instead of accessing the system memory. However, the memory write transaction can cause incoherency between the contents of the SLC and the system memory for that memory location. In some implementations, a write back (WB) or a write through (WT) policy can be used to ensure that the contents of a memory location in the system memory and the SLC are coherent. When the SLC is a WT cache, the data for a memory write transaction is written to the SLC as well as to the system memory when the write transaction is issued. When the SLC is a WB cache, the data for the memory write transaction is written in the cache line corresponding to the memory location, and a dirty bit of the cache line can be set to indicate that the contents of the SLC and the system memory are not coherent for that memory location. At a later point in time, when the cache line with the dirty bit is evicted to make room for another cache line, the contents of the dirty cache line are copied to the system memory. In some implementations, the system memory (e.g., DRAM) may utilize a page buffer to execute the read or write memory transactions. The page buffer may also be called a row buffer, or a working buffer, and is used to temporarily store a memory page being accessed from the system memory. For example, upon receiving a transaction, the system memory may read a memory page corresponding to a transaction address into the page buffer, perform a read or write of the memory location corresponding to the transaction address, and write the memory page back into the system memory. 
Each memory page may include multiple memory locations based on the implementation of the system memory. In some implementations, the system memory may include multiple DRAM arrays. Each DRAM array may include a page buffer, which can hold a page or a row of the DRAM array at a time. Each memory access may include opening or downloading a page to perform the write or read operation, and writing the page back to the DRAM array. Writing the page to the DRAM array may include a pre-charging operation which may include closing the page in the page buffer, uploading the page back to the DRAM array, and preparing the DRAM array for fetching the next page. Opening and closing a memory page can incur a much higher latency than accessing the data from the page buffer. Thus, when multiple transactions are performed, a higher memory bandwidth can be achieved by accessing locations within a memory page that is opened as compared to accessing different memory pages. In some implementations, the DRAM arrays can be part of a DRAM bank, and multiple such DRAM banks can correspond to a rank. In some systems, the system memory may include dual in-line memory modules (DIMMs) comprising multiple such ranks. Some computing systems may use an SLC to cache data frequently accessed by the components of the computing system. The SLC can be provided by a third-party vendor as an intellectual property (IP) block, and thus it may not be possible to modify certain features of the SLC such as the cache update policy. For example, if the SLC is designed as a WB cache, the SLC may not have native support to perform write-through operations. Thus, when a memory write transaction is performed by the SLC, the status of a cache line corresponding to the transaction address can be updated from clean to dirty. When this dirty cache line is evicted, the contents of the cache line can be written back to the DRAM. When memory write transactions with sequential addresses are issued, the SLC may be written serially, and the status of each cache line corresponding to the sequential addresses can be updated to dirty. However, the cache lines with the dirty status may be evicted in a random order, which may cause corresponding memory locations in the DRAM to be updated in the random order. Updating the memory locations in the DRAM in the random order may cause different pages corresponding to the different memory locations to be opened and closed. Thus, updating the memory locations in the DRAM in a random order may trigger multiple pre-charging events which can be very costly in terms of power consumption and latency, and can adversely affect the DRAM performance. The technique described herein can be used to improve the utilization and performance of the system memory using a clean engine when the SLC is a WB cache. The clean engine may include a write cleaner circuit coupled to the interconnect fabric in between a requesting agent and the SLC. The write cleaner circuit can be used to cause a system memory to be updated around the same time as the SLC is written for certain memory write transactions. Thus, some embodiments can allow implementing the write-through functionality with an SLC that is a WB cache. 
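Since the cost of the random-order evictions described above is driven by repeated page opens and pre-charges, the following minimal Python sketch uses a toy cost model to contrast writing back dirty lines in random page order with servicing a run of writes that all fall within a single open page. The cost constants and names are assumptions chosen purely for illustration and are not device specifications.

# Illustrative sketch only: toy cost model for page-buffer behavior. Addresses are
# expressed as cache line indices; lines_per_page lines share one DRAM page (row).
PAGE_OPEN_COST = 50      # assumed cost to activate (open) a row into the page buffer
PRECHARGE_COST = 50      # assumed cost to close the row and precharge the array
BUFFER_ACCESS_COST = 5   # assumed cost per access while the row is open


def writeback_cost(addresses, lines_per_page):
    """Total cost when each address is written back in the given order, opening and
    closing whichever page it falls in as needed."""
    cost, open_page = 0, None
    for addr in addresses:
        page = addr // lines_per_page
        if page != open_page:
            if open_page is not None:
                cost += PRECHARGE_COST
            cost += PAGE_OPEN_COST
            open_page = page
        cost += BUFFER_ACCESS_COST
    return cost + (PRECHARGE_COST if open_page is not None else 0)


sequential = list(range(8))                # eight lines in the same page: one open, one precharge
interleaved = [0, 9, 1, 10, 2, 11, 3, 12]  # alternating between two pages: repeated opens/precharges
print(writeback_cost(sequential, lines_per_page=8))   # 140
print(writeback_cost(interleaved, lines_per_page=8))  # 840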
For example, instead of waiting for the SLC to evict a cache line and update the corresponding system memory location when the cache is full, the write cleaner can cause an explicit clean of the cache line corresponding to the memory location which updates the status of the cache line from dirty to clean, and updates the system memory location around the same time of the write transaction. The cache line can be evicted silently at a later point in time, without needing to update the corresponding system memory location at the time of the eviction. In some embodiments, the write cleaner circuit can be used to efficiently perform updates to the DRAM for memory write transactions with sequential addresses, which can improve the utilization and performance of the DRAM. For example, the sequential addresses may be mapped to the same memory page (or row) in the DRAM, which can allow a single update of the DRAM from the page buffer for those memory write transactions. In some embodiments, the write cleaner circuit may identify memory write transactions having sequential addresses that have been issued to access memory locations corresponding to a memory page in the DRAM, and generate respective clean requests with the same address and cache attributes as the corresponding memory write transaction. The SLC may receive the respective clean requests and update the status of each cache line corresponding to the respective memory location from dirty to clean. The SLC may send update requests to the DRAM based on the clean requests. The DRAM may download the contents of the memory page corresponding to the sequential addresses to the page buffer based on the update requests and update the contents of the memory page based on the corresponding memory locations in the SLC. The DRAM can perform a single upload of the updated memory page from the page buffer to the DRAM for the memory write transactions with the sequential addresses instead of performing multiple DRAM updates at random times, which can drastically improve the DRAM utilization and performance by minimizing the pre-charging of the corresponding DRAM cells. In some embodiments, the requesting agents can mark each memory write transaction having a sequential address within a time period so that the write cleaner circuit can identify the transactions having sequential addresses and generate corresponding clean requests. In other embodiments, the write cleaner circuit can compare the address of each memory write transaction with a sequential address range to identify the memory write transactions that have the sequential addresses. In some implementations, the WB SLC can be a distributed cache comprising a set of SLC blocks. In various embodiments, different functionalities of the clean engine can be distributed along the data path between the requesting agents and the distributed WB SLC. For example, the decision logic for determining whether a clean request is to be generated for a memory write transaction can be separate from the clean request generation circuitry of the write cleaner. In such implementations, the component making the decision can provide an indication in the write request to instruct the write cleaner to generate a clean request. In some embodiments, each requesting agent can be associated with its own respective decision logic for determining whether a clean request is to be generated for a memory write transaction. 
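The contrast between waiting for an eviction and forcing an explicit clean can also be sketched in a few lines of Python. This is a conceptual model only; the names (clean, evict, dirty) are chosen for readability and are not the signal or transaction names of any particular protocol.

```python
# Conceptual model: an explicit clean writes a dirty line to memory immediately,
# so a later eviction of that line can be "silent" (no memory update needed).

class Memory:
    def __init__(self):
        self.data = {}

class WriteBackCache:
    def __init__(self, memory):
        self.memory = memory
        self.lines = {}          # addr -> [value, dirty]

    def write(self, addr, value):
        self.lines[addr] = [value, True]          # the line becomes dirty

    def clean(self, addr):
        value, dirty = self.lines[addr]
        if dirty:
            self.memory.data[addr] = value        # update memory now...
            self.lines[addr][1] = False           # ...and clear the dirty bit

    def evict(self, addr):
        value, dirty = self.lines.pop(addr)
        if dirty:
            self.memory.data[addr] = value        # write-back needed at eviction
            return "write-back eviction"
        return "silent eviction"                  # memory is already coherent

mem = Memory()
slc = WriteBackCache(mem)
slc.write(0x2000, 7)
slc.clean(0x2000)                 # explicit clean, as a write cleaner would trigger
print(slc.evict(0x2000))          # -> "silent eviction"
```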
The decision logic can be integrated as part of the requesting agent, or be coupled to the requesting agent along the datapath to the WB SLC. In some embodiments, each block of the distributed WB SLC can be coupled to a respective clean request generation circuitry of the write cleaner for generating a respective clean request. Thus, some embodiments can be used to optimize the performance of the system memory in a computer system having a WB SLC by implementing write-through functionality for multiple write transactions to memory locations in a memory page. The techniques described herein can be used with bus protocols such as Advanced Microcontroller Bus Architecture (AMBA) Advanced eXtensible Interface (AXI) Coherency Extensions (ACE), Coherent Hub Interface (CHI), or Compute Express Link (CXL), among others. In the following description, various embodiments will be described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the embodiments. However, it will also be apparent to one skilled in the art that the embodiments may be practiced without the specific details. Furthermore, well-known features may be omitted or simplified in order not to obscure the embodiments being described. FIG.1illustrates a computing system100comprising a requesting agent102, a write-back (WB) cache memory104, and a system memory106. Note that the WB cache memory104may also be coupled to additional requesting agents which are not shown inFIG.1for ease of discussion. In some examples, the computing system100may be configured to support applications such as artificial intelligence (AI), cloud computing, web hosting, gaming, networking, or high-performance computing (HPC), among others. The requesting agent102may include a processor, a DMA controller, a DSP, a GPU, a co-processor, or other integrated circuits which may be configured to issue memory transactions. In some implementations, the processor may include one or more lower levels of integrated caches (e.g., L1 and L2), and the WB cache memory104can be a higher-level cache (e.g., L3 cache). The requesting agent102, the WB cache memory104, and the system memory106may communicate with one another via an interconnect fabric based on any suitable protocol such as ACE, CHI, or CXL. In some implementations, the WB cache memory104can be a system level cache (SLC), which may be used to maintain coherency among multiple processors coupled to a coherent interconnect fabric, and may also be called a fabric cache. The WB cache memory104can be a direct mapped cache, a set-associative cache, or a fully associative cache. FIG.2illustrates a communication diagram200to show the interactions among different components of a computing system to perform memory write transactions having sequential addresses. The computing system may include a requesting agent202, an SLC204, and a DRAM206. For example, the requesting agent202can be an example of the requesting agent102, the SLC204can be an example of the WB cache memory104, and the DRAM206can be an example of the system memory106of the computing system100inFIG.1. In step208, the requesting agent202may issue a first memory write transaction to write data into a first memory location corresponding to a transaction address. The first memory write transaction may be received by the SLC204. 
In step210, the SLC204may update a cache line1 corresponding to the first memory location by writing the data for the first memory write transaction and changing the status of the cache line1 to dirty. In some cases, the cache line1 may have been allocated for the transaction address prior to the first memory write transaction. In step212, the SLC204may send a response to the requesting agent202indicating completion of the first memory write transaction. In step214, at a later point in time the cache line1 may be evicted when a new cache line is allocated in the SLC204for another transaction. Since the status of the cache line1 indicates dirty, the SLC204may send a first update request to the DRAM206in step216to copy the contents of the cache line1 to the first memory location in the DRAM206. In step218, the DRAM206may receive the first update request to update the first memory location. In step220, the DRAM206may access a page buffer to perform the first memory write operation. For example, the DRAM206may download a memory page from the memory array corresponding to the first memory location into the page buffer, update the first memory location based on the contents of the cache line1, and upload the memory page back to the memory array in the DRAM206. In some instances, memory write transactions with sequential addresses may be issued within a certain time period by one or more requesting agents. For example, in step222, the requesting agent202may issue a second memory write transaction having an address that is sequential to the first memory write transaction. In step224, the SLC204may update a cache line2 corresponding to the second memory location by writing the data for the second memory write transaction and changing the status of the cache line2 to dirty. In some cases, the cache line2 may have been allocated prior to the second memory write transaction. In step226, the SLC204may send a response to the requesting agent202indicating completion of the second memory write transaction. In step228, at a later point in time the cache line2 may be evicted when a new cache line is allocated in the SLC204for another transaction. Since the status of the cache line2 indicates dirty, the SLC204may send a second update request to the DRAM206in step230to copy the contents of the cache line2 to the second memory location in the DRAM206. In step232, the DRAM206may receive the second update request to update the second memory location. In step234, the DRAM206may access the page buffer again to perform the second memory write operation. For example, the DRAM206may download the memory page from the memory array again corresponding to the second memory location into the page buffer, update the second memory location based on the contents of the cache line2, and upload the memory page back to the memory array in the DRAM206. In some examples, the cache line1 and the cache line2 may be updated in a sequential order for the memory write transactions having sequential addresses; however, the cache line1 and the cache line2 may be evicted at random times. For example, other cache lines may be evicted in between the eviction of the cache line1 and the cache line2 as a result of other memory transactions that access the SLC204. In such cases, the memory locations in the DRAM206may be updated at random times based on the timing of the corresponding cache line evictions. 
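The decoupling between the order in which cache lines are written and the order in which dirty lines are evicted can be seen with a tiny fully associative LRU model. The sketch below is a simplification under assumed parameters (a four-line cache and an invented access pattern); it only shows that lines written with sequential addresses can reach memory interleaved with evictions of unrelated lines.

```python
# Tiny fully-associative LRU write-back cache used only to show that dirty lines
# written with sequential addresses may reach DRAM interleaved with unrelated pages.
from collections import OrderedDict

CAPACITY = 4
cache = OrderedDict()        # addr -> dirty flag, ordered by recency of use
dram_update_order = []

def access(addr, is_write):
    if addr in cache:
        cache.move_to_end(addr)
        if is_write:
            cache[addr] = True
        return
    if len(cache) >= CAPACITY:
        victim, dirty = cache.popitem(last=False)   # evict the least recently used line
        if dirty:
            dram_update_order.append(victim)        # DRAM is updated at eviction time
    cache[addr] = is_write

sequential_writes = [0x1000, 0x1040, 0x1080, 0x10C0]   # same DRAM page
unrelated_writes = [0x8000, 0x9000, 0xA000, 0xB000]    # other pages

for w, u in zip(sequential_writes, unrelated_writes):
    access(w, is_write=True)      # sequential writes become dirty in order
    access(u, is_write=True)      # unrelated traffic disturbs the eviction order

for u in unrelated_writes:        # more traffic forces the remaining dirty lines out
    access(u, is_write=True)

print([hex(a) for a in dram_update_order])
# sequential lines reach DRAM interleaved with unrelated pages, so the same
# page is opened and closed repeatedly
```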
Due to the eviction of other unrelated cache lines, the corresponding memory locations may need to be updated in the DRAM206, and the DRAM206may have to download and upload other memory pages in the page buffer that do not correspond to the memory locations for the sequential addresses. For example, the DRAM206may download and upload a first memory page corresponding to the first memory location for the first memory write transaction for step218. Next, the DRAM206may download and upload a second memory page corresponding to a third memory location for a third memory write transaction without the sequential address. Next, the DRAM206may download and upload the first memory page again for the second memory write transaction for step232. As discussed previously, uploading of each memory page can be costly and adversely impact the performance of the DRAM206. FIG.3illustrates a computing system300comprising a write cleaner circuit that can be used to provide write-through functionality for updating a system memory, according to some embodiments. The computing system300may include a requesting agent302, a WB cache memory304, a system memory306, and a write cleaner308. Note that the computing system300may include other components which are not shown here for ease of discussion. The requesting agent302can be similar to the requesting agent102, the WB cache memory304can be similar to the WB cache memory104, and the system memory306can be similar to the system memory106discussed with reference toFIG.1. In various implementations, the WB cache memory304can be a direct mapped cache, an N-way (e.g., N can be 2, 4, or higher) set-associative cache, or a fully associative cache. In some embodiments, the write cleaner308can be an integrated circuit that is a distinct component or part of another component in the computing system300. In some embodiments, different functionalities of the write cleaner308can be split up as distinct components or integrated with other components of the computing system300. In some embodiments, the write cleaner308can be coupled to the WB cache memory304and the requesting agent302via an interconnect fabric. The write cleaner308may also be coupled to other requesting agents (not shown) via the interconnect fabric. The write cleaner308may be configured to determine that a memory write transaction from the requesting agent302to access a memory location in the system memory306triggers a clean request. For example, the requesting agent302may mark a memory write transaction to indicate that a clean request is to be generated if the memory write transaction is part of a set of memory write transactions having sequential addresses. The write cleaner308may determine that the clean request is to be generated based on the indication in the memory write transaction. In some implementations, the write cleaner308may determine that the clean request is to be generated based on a comparison of the transaction address with an address range, e.g., to determine whether the transaction address lies within the address range for sequential addresses. The write cleaner308may also be configured to generate a clean request to the WB cache memory304based on the memory write transaction. The clean request may include the address and cache attributes of the memory write transaction issued by the requesting agent302to have the WB cache memory304perform an explicit clean of a cache line corresponding to the memory location. 
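A minimal sketch of the two triggering mechanisms described above (a marker set by the requesting agent, or a comparison against a configured address range) is shown below. The field names, the address window, and the helper function are assumptions made for the example and are not part of the described circuitry.

```python
from dataclasses import dataclass

# Illustrative trigger logic for the write cleaner: a clean request is generated
# if the write is explicitly marked by the requesting agent, or if its address
# falls inside a configured "sequential" address window.  Values are invented.

SEQUENTIAL_WINDOW = range(0x4000_0000, 0x4001_0000)   # programmed via configuration

@dataclass
class WriteTxn:
    address: int
    marked_for_clean: bool = False    # e.g., a sideband bit set by the requester

def triggers_clean(txn: WriteTxn) -> bool:
    return txn.marked_for_clean or txn.address in SEQUENTIAL_WINDOW

print(triggers_clean(WriteTxn(0x4000_0040)))                          # True: in window
print(triggers_clean(WriteTxn(0x9000_0000, marked_for_clean=True)))   # True: marked
print(triggers_clean(WriteTxn(0x9000_0000)))                          # False
```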
The WB cache memory304may update the status of the cache line from dirty to clean based on the clean request, and update the memory location in the system memory306. In some embodiments, functionality of the write cleaner308may be distributed along the data path between the requesting agents and the WB cache memory304. For example, the decision logic that determines whether a clean request is to be generated can be separate from the clean request generation circuitry of the write cleaner308. In some embodiments, the decision logic to determine whether a clean request is to be generated for a memory write transaction issued by a requesting agent can be part of that requesting agent. Furthermore, the clean request generation circuitry that is configured to generate a clean request to the WB cache memory304based on the memory write transaction can be coupled to the interconnect fabric along the datapath before the WB cache memory304. In some embodiments, the WB cache memory304can be a distributed cache comprising multiple SLC blocks. Each SLC block may be coupled to a respective clean request generation circuitry, which is coupled to the interconnect fabric. In some examples, a stream of memory write transactions corresponding to a long memory write transaction (e.g., a transaction with a large data transfer size) may be mapped to multiple cache lines in the SLC. For example, each cache line may be part of a respective SLC block which can be accessed using the same transaction identifier associated with the long memory write transaction. A long memory write transaction can be identified based on a comparison of the transaction size with a threshold. The requesting agent issuing a long memory write transaction may be aware that this memory write transaction will be performed on multiple cache lines in a set corresponding to multiple memory locations in the same memory page, and may only mark the last write transaction in that set to be cleaned (e.g., clear the corresponding dirty bit). The generation circuitry of the write cleaner308coupled to the SLC block with the last cache line may generate the clean request, which may cause a simultaneous update of the memory locations of the memory page in the system memory306based on updated data of the stream of memory write transactions for the long memory write, even though only one clean request was generated. In some embodiments, each requesting agent may include or be coupled to a respective decision circuitry for indicating whether a clean request is to be generated for a corresponding memory write transaction. The decision circuitry may be configured to determine whether a clean request is to be generated for a memory write transaction based on a comparison of the transaction address with a set of sequential addresses, or a comparison of the transaction size with the threshold for a long transaction. The memory write transaction may be received by the interconnect fabric via a port coupled to another fabric that is configured to arbitrate among a set of requesting agents for transmitting the memory write transactions. In some implementations, the computing system300may include multiple such ports coupled to the interconnect fabric, and each port may be fed by a corresponding smaller fabric configured to arbitrate among a corresponding set of requesting agents. In some embodiments, each of these ports can be coupled to the interconnect fabric via a respective decision circuitry implemented as a distinct component. 
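For a long write that is known to span several cache lines of the same memory page, the text above notes that only the last write in the set needs to be marked for cleaning. A hedged illustration of that marking policy is sketched below; the burst splitting, the line size, and the attribute names are assumptions chosen for the example.

```python
# Split a long write into cache-line-sized beats and mark only the last beat,
# so that a single clean request covers the whole page's worth of updates.
# LINE_SIZE and the dictionary keys are illustrative values, not protocol fields.

LINE_SIZE = 64

def split_long_write(base_address, size):
    beats = []
    for offset in range(0, size, LINE_SIZE):
        beats.append({
            "address": base_address + offset,
            "bytes": min(LINE_SIZE, size - offset),
            "mark_for_clean": False,
        })
    if beats:
        beats[-1]["mark_for_clean"] = True   # only the final beat triggers a clean
    return beats

beats = split_long_write(0x8000, 256)
print([(hex(b["address"]), b["mark_for_clean"]) for b in beats])
# [('0x8000', False), ('0x8040', False), ('0x8080', False), ('0x80c0', True)]
```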
For example, in such cases, the decision circuitry may be configured to determine whether a clean request is to be generated for a memory write transaction based on a comparison of the transaction address with a set of sequential addresses, or a comparison of the transaction size with the threshold, and provide an indication in the memory write transaction based on this determination. In some implementations, AXI USER bit(s) can be used to provide the indication corresponding to the memory write transaction. In some embodiments, the decision circuitry can be configured to determine whether a clean request is to be generated only for specific ports by comparing the transaction address with an address range associated with each of those ports. For example, certain ports of the interconnect fabric may be coupled to other integrated circuit devices (e.g., a host processor or a data buffer), which may rely on explicit cache clean operations for specific transactions. Thus, instead of waiting for the eviction of the cache line by the WB cache memory304to update the memory location in the system memory306, the write cleaner308can force the update of the memory location in the system memory306around the same time as the update of the cache line in the WB cache memory304is performed. Forcing the system memory update using the write cleaner308can provide more benefits for memory transactions having sequential addresses, as discussed with reference toFIG.4. FIG.4illustrates a communication diagram400to show the interactions among different components of a computing system to perform memory write transactions having sequential addresses using a write cleaner circuit, according to some embodiments. As discussed with reference toFIG.3, a requesting agent450can be an example of the requesting agent302, a write cleaner452can be an example of the write cleaner308, an SLC454can be an example of the WB cache memory304, and a DRAM456can be an example of the system memory306of the computing system300inFIG.3. In step402, the requesting agent450may issue a first memory write transaction to write data into a first memory location corresponding to a transaction address. The first memory write transaction may include an address, a source identifier, a target identifier, a transaction identifier, read/write indication, and other suitable attributes. In some embodiments, the requesting agent450may mark the first memory write transaction to indicate that the first memory write transaction is part of a set of memory write transactions having sequential addresses. In some implementations, AXI USER bits can be used to mark a memory write transaction from the set of memory write transactions having sequential addresses. In step404, the write cleaner452may detect the first memory write transaction and determine that the first memory write transaction includes a sequential address that corresponds to a memory page in the DRAM456. The write cleaner452may make a record of the first memory write transaction by saving the information (e.g., certain attributes) associated with the first memory write transaction. For example, the write cleaner452may save the transaction address, read/write indication, source identifier, target identifier, transaction identifier, and/or other suitable attributes of the first memory write transaction that can be used to generate a clean request associated with the first memory write transaction. 
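The record-keeping of step 404 can be pictured as a small table keyed by transaction identifier, capturing just enough of the write's attributes to build a matching clean request later. The sketch below is illustrative; the stored fields mirror the attributes listed above, but the data structure itself and the helper names are assumptions.

```python
# Illustrative transaction-record store for the write cleaner.  When a marked
# (or in-range) write is observed, its attributes are saved; the record is
# looked up again when the cache's write response is seen on the fabric.

transaction_records = {}

def record_write(txn_id, address, source_id, target_id, cache_attrs):
    transaction_records[txn_id] = {
        "address": address,
        "source_id": source_id,
        "target_id": target_id,
        "cache_attrs": cache_attrs,
        "is_write": True,
    }

def lookup_on_response(txn_id):
    # Called when the response for this write is observed.
    return transaction_records.pop(txn_id, None)

record_write(txn_id=0x11, address=0x4000_0040, source_id=2, target_id=7,
             cache_attrs=0b0111)
rec = lookup_on_response(0x11)
print(rec["address"] == 0x4000_0040)   # True: enough state to build a clean request
```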
The write cleaner452may identify the sequential address based on the indication in the first memory write transaction. In some embodiments, instead of identifying the sequential address based on the indication in the transaction, the write cleaner circuit452may compare the transaction address with an address range corresponding to the memory page, and may determine that the first memory write transaction includes a sequential address that corresponds to the same memory page if the transaction address lies within the address range. In step406, the first memory write transaction may be received by the SLC454. The SLC454may update a cache line1 corresponding to the first memory location by writing the data for the first memory write transaction and changing the status of the cache line1 to dirty. For example, the cache line1 may have been allocated before the first memory write transaction (e.g., allocated in the cache from a previous transaction). Thus, performing the first memory write transaction by the SLC454may make the contents of the memory location in the SLC454and the DRAM456incoherent. In step408, the SLC454may send a first response to the requesting agent450indicating completion of the first memory write transaction or another type of acknowledgment in response to performing the first memory write transaction. In step410, the write cleaner452may detect the first response from the SLC454, which may indicate that the SLC454has performed the first memory write transaction. In step412, the write cleaner452may generate a first clean request to the SLC454based on the first memory write transaction. The first clean request may include an indication to clear the dirty bit and some of the same attributes as the first memory write transaction (e.g., transaction address, read/write indication, target identifier). In some examples, the first clean request may include a new source identifier, and a new transaction identifier indicating a cache clean transaction. As an example, if the interconnect fabric is based on the CHI or ACE protocol, the CleanShared transaction can be used for the first clean request to indicate a cache clean operation. In step414, the SLC454may receive the first clean request from the write cleaner452. In response to the first clean request, the SLC454may request the DRAM456to update the first memory location by generating a first update request to the DRAM456in step416, and update the status of the cache line1 from dirty to clean to clear the dirty bit. Thus, generating the first clean request using the write cleaner452can enable the SLC454, which is a WB cache, to behave like a WT cache and force an update of the DRAM456without waiting for the cache line1 to evict at a later point in time. In step418, the DRAM456may receive the first update request to update the first memory location. In step420, the DRAM456may access the page buffer to perform the first memory write operation. As discussed with reference toFIG.2, the DRAM456may download the memory page corresponding to the first memory location into the page buffer and update the contents of the memory page with updated data from the SLC454for the first memory write transaction. The memory page may be held in the page buffer until another page needs to be downloaded into the page buffer for another memory transaction. 
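The clean request of step 412 reuses the recorded write attributes but carries its own source and transaction identifiers, together with a cache-maintenance opcode (a CleanShared-style operation on ACE or CHI fabrics). The following fragment sketches that construction; every field name and the opcode string are placeholders chosen for readability, not a normative encoding of either protocol.

```python
from dataclasses import dataclass
from itertools import count

# Illustrative construction of a clean request from a recorded write transaction.
# Field names and the opcode string are placeholders, not real bus signal names.

_clean_txn_ids = count(0x100)      # new transaction IDs for cache-maintenance ops
WRITE_CLEANER_SOURCE_ID = 0x3F     # assumed source ID owned by the write cleaner

@dataclass
class CleanRequest:
    opcode: str
    address: int
    cache_attrs: int
    target_id: int
    source_id: int
    txn_id: int

def build_clean_request(write_record):
    return CleanRequest(
        opcode="CleanShared-like",               # clear the dirty bit, keep the line
        address=write_record["address"],         # same address as the original write
        cache_attrs=write_record["cache_attrs"], # same cacheability attributes
        target_id=write_record["target_id"],     # same target (the SLC / memory space)
        source_id=WRITE_CLEANER_SOURCE_ID,       # new source: the write cleaner itself
        txn_id=next(_clean_txn_ids),             # new transaction identifier
    )

req = build_clean_request({"address": 0x4000_0040, "cache_attrs": 0b0111,
                           "target_id": 7})
print(req)
```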
In step422, the requesting agent450(or another requesting agent) may issue a second memory write transaction to write data into a second memory location corresponding to a transaction address that is sequential to the transaction address for the first memory write transaction. In some embodiments, the requesting agent450may mark the second memory write transaction to indicate that the second memory write transaction is part of a set of memory write transactions having sequential addresses. The second memory write transaction may also include an address, a source identifier, a target identifier, read/write indication, a transaction identifier, and other suitable attributes. In step424, the write cleaner452may detect the second memory write transaction and determine that the second memory write transaction includes a sequential address that corresponds to the same memory page in the DRAM456as the first memory write transaction. The write cleaner452may make a record of the second memory write transaction by saving the information associated with the second memory write transaction. For example, the write cleaner452may save the transaction address, read/write indication, target identifier, source identifier, transaction identifier, and/or other suitable attributes of the second memory write transaction that can be used to generate a clean request associated with the second memory write transaction. The write cleaner452may identify the sequential address based on the indication in the second memory write transaction or by comparing the transaction address with the address range corresponding to the memory page. In step426, the second memory write transaction may be received by the SLC454. The SLC454may update the cache line2 corresponding to the second memory location by writing the data for the second memory write transaction and changing the status of the cache line2 to dirty. For example, the cache line2 may have been allocated before the second memory write transaction. Thus, performing the second memory write transaction by the SLC454may make the contents of the memory location in the SLC454and the DRAM456incoherent. In step428, the SLC454may send a second response to the requesting agent450indicating completion of the second memory write transaction or another type of acknowledgment in response to performing the second memory write transaction. In step430, the write cleaner452may detect the second response from the SLC454, which may indicate that the SLC454has performed the second memory write transaction. In step432, the write cleaner452may generate a second clean request to the SLC454based on the second memory write transaction. The second clean request may include an indication to clear the dirty bit and some of the same attributes as the second memory write transaction (e.g., transaction address, read/write indication, target identifier). In some examples, the second clean request may include a new source identifier, and a new transaction identifier indicating a cache clean transaction. In step434, in response to the second clean request, the SLC454may request the DRAM456to update the second memory location by generating a second update request to the DRAM456in step436, and update the status of the cache line2 from dirty to clean to clear the dirty bit. In step438, the DRAM456may receive the second update request to update the second memory location. In step440, the DRAM456may access the page buffer to perform the second memory write operation. 
Since the second memory write transaction includes the sequential address, the page buffer may already contain the contents of the memory page for the second memory write transaction and any other subsequent memory transactions that correspond to the same memory page that the DRAM456was operating on for the first memory write transaction. Thus, the memory page may not need to be uploaded to the DRAM456for each memory write transaction from the set of memory write transactions having sequential addresses. Hence, the number of uploads to the DRAM456can be drastically reduced for the memory transactions having sequential addresses, which can improve the performance of the DRAM456. In step442, the SLC454may perform a silent eviction of the cache line1 to make room for another cache line. The silent eviction may not require an update request to the DRAM456since the dirty bit of the cache line1 has already been cleared and the contents at the memory location are coherent. In step444, the SLC454may perform a silent eviction of the cache line2 to make room for another cache line. Similarly, the silent eviction may not require an update request to the DRAM456since the dirty bit of the cache line2 has already been cleared. FIG.5illustrates an example block diagram for a write cleaner500, according to some embodiments. The write cleaner500can be an example of the write cleaner308inFIG.3. The write cleaner500can be an integrated circuit implemented using registers, comparators, ports, buses, state machines, or other suitable circuits. The write cleaner500may include a requesting agent interface502, a memory504, a transaction analyzer506, a clean request generator508, and a cache interface510. In various embodiments, one or more of the components of the write cleaner500can be separated as distinct components, or be integrated with a requesting agent. The requesting agent interface502may be used to communicate with one or more requesting agents via an interconnect fabric. For example, the requesting agent interface502may be used to communicate with the requesting agent302. In some embodiments, the requesting agent interface502may be used to intercept the memory transactions issued by the one or more requesting agents to the WB cache memory304. The memory504may be used to store an address range504a, transaction records504b, and any other useful information. The address range504acan be used by the write cleaner500to identify whether a memory transaction is part of a set of memory transactions that have sequential addresses or belong to a port. The address range504acan be programmed by the system software using a configuration interface. In some examples, the address range504amay correspond to an address window associated with the system memory306. In some embodiments, the memory504may also store a threshold which can be used to identify a long transaction. The transaction records504bmay store information associated with certain memory transactions issued by the requesting agents to the WB cache memory304, which can be used to generate the clean requests. The transaction analyzer506may be configured to analyze a transaction intercepted by the requesting agent interface502and determine whether a memory write transaction from the requesting agent302to access a memory location triggers a clean request. 
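The block diagram of FIG.5can be summarized as a composition of five cooperating parts. The Python skeleton below only mirrors that structure for orientation; the method names and the in-memory representations are assumptions, and no attempt is made to model fabric signaling.

```python
# Structural sketch of the write cleaner of FIG. 5: an agent-side interface,
# a small memory (address range, threshold, transaction records), a transaction
# analyzer, a clean request generator, and a cache-side interface.

class WriteCleaner:
    def __init__(self, sequential_window, long_txn_threshold):
        # memory 504: configuration and transaction records
        self.sequential_window = sequential_window     # address range 504a
        self.long_txn_threshold = long_txn_threshold   # threshold for long writes
        self.records = {}                              # transaction records 504b

    # requesting agent interface 502: observe writes heading to the SLC
    def observe_write(self, txn):
        if self.analyze(txn):
            self.records[txn["txn_id"]] = txn

    # transaction analyzer 506: decide whether this write should be cleaned
    def analyze(self, txn):
        return (txn.get("marked", False)
                or txn["address"] in self.sequential_window
                or txn.get("size", 0) >= self.long_txn_threshold)

    # cache interface 510: react to the SLC's write response
    def observe_response(self, txn_id, send_to_cache):
        txn = self.records.pop(txn_id, None)
        if txn is not None:
            send_to_cache(self.generate_clean(txn))

    # clean request generator 508
    def generate_clean(self, txn):
        return {"op": "clean", "address": txn["address"]}

wc = WriteCleaner(range(0x4000_0000, 0x4001_0000), long_txn_threshold=256)
wc.observe_write({"txn_id": 1, "address": 0x4000_0080})
wc.observe_response(1, send_to_cache=print)   # emits a clean request for the recorded address
```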
In some examples, the transaction analyzer506may include the decision circuitry configured to determine whether a memory write transaction from the requesting agent302to access a memory location triggers a clean request based on an indication in the memory write transaction. For example, the requesting agent302may mark a memory write transaction as a candidate for generating the clean request if the memory write transaction is part of a set of memory write transactions having sequential addresses. The memory write transaction may have been issued to access a memory location corresponding to a memory page in the system memory306. In some embodiments, the decision circuitry may be configured to determine whether a long memory write transaction from the requesting agent302to access a set of memory locations triggers a clean request based on a comparison of the transaction size with a threshold. The long memory write transaction may have been issued to access the set of memory locations corresponding to the same memory page in the system memory306. In some embodiments, the transaction analyzer506may determine that a memory write transaction from the requesting agent302to access a memory location triggers a clean request based on a comparison of the transaction address with the address range504a. The address range504acan be associated with a set of sequential addresses, or one or more ports. If the transaction analyzer506determines that the memory write transaction is a candidate for generating the clean request, the information associated with the memory write transaction can be stored in the transaction records504b. The transaction analyzer506may also be configured to detect a response from the WB cache memory304, similar to the response described with reference to the steps408and428inFIG.4. The clean request generator508may include the generation circuitry configured to generate a clean request based on the transaction information stored in the transaction records504b. The clean request may include an indication to clear the dirty bit and some of the transaction attributes such as a transaction address, read/write indication, and a target identifier based on the transaction records504b. As an example, for the CHI or ACE protocol, the CleanShared transaction can be used for the clean request to indicate a cache clean operation for the WB cache memory304. The cache interface510may be used to communicate with the WB cache memory304via the interconnect fabric. For example, the write cleaner500may send the clean request to the WB cache memory304via the cache interface510. In different embodiments, one or more components of the write cleaner500, e.g., the transaction analyzer506, the clean request generator508, the requesting agent interface502, and the memory504, can be distinct components or be integrated with other components in the computing system with appropriate interfaces to communicate with the corresponding requesting agent, port, interconnect fabric, and/or the WB cache memory304. FIG.6illustrates a flow chart600for a method performed by an IC device to generate clean requests using a WB cache memory, according to some embodiments. The IC device can be part of the computing system300described with reference toFIG.3. In step602, the method includes issuing, by a requesting agent, a first memory write transaction to access a first memory location. The requesting agent can be the requesting agent302, which issues a first memory write transaction to access a first memory location. 
As an example, the requesting agent can be a processor. In step604, the method includes determining, by a write cleaner circuit, that the first memory write transaction triggers a first clean request. The write cleaner circuit can be the write cleaner308or the write cleaner500, which can determine that the first memory write transaction triggers a first clean request. The first memory transaction can be received by the write cleaner308using the requesting agent interface502. The transaction analyzer506may analyze the first memory write transaction and determine that the first memory write transaction to access the first memory location triggers a first clean request based on an indication in the first memory write transaction. For example, the first memory write transaction may be part of a set of memory write transactions having sequential addresses, and may be marked by the requesting agent302for generating the first clean request. The transaction analyzer506may store the information associated with the first memory write transaction in the transaction records504b. In step606, the method includes generating, by the write cleaner circuit, the first clean request to a write-back (WB) cache memory based on the first memory write transaction. The clean request generator508may generate the first clean request based on the transaction information stored in the transaction records504b. The first clean request may include an indication to clear the dirty bit and some of the transaction attributes such as a transaction address, read/write indication, and a target identifier based on the transaction records504b. In step608, the method includes in response to the first clean request, by the WB cache memory, requesting a system memory to update the first memory location, and updating a status of a first cache line corresponding to the first memory location from dirty to clean. The WB cache memory304, in response to the first clean request, may request the system memory306to update the first memory location, and update the status of the cache line1 corresponding to the first memory location from dirty to clean. Updating of the first memory location in the system memory306may include downloading contents of a memory page corresponding to the first memory location in a page buffer associated with the system memory306, and updating the contents of the memory page with updated data from the WB cache memory304for the first memory write transaction. The method may be performed for each memory write transaction having a sequential address issued by the one or more requesting agents to access memory locations corresponding to the same memory page in the system memory306. The system memory306may download the contents of the memory page to the page buffer once, and update the contents of the memory page for multiple memory locations in the WB cache memory304corresponding to the write transactions having the sequential addresses. When no additional memory write transactions are issued within a time period to access a memory location corresponding to the same memory page, the updated contents of the memory page can be written back to the system memory306. Thus, for memory transactions having sequential addresses issued within a time period, a single upload of the contents of page buffer to the system memory306may suffice. 
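The end result described above (one page download, several location updates, and a single page upload) can be checked with a short model. The sketch below assumes an invented page size and a dictionary-based DRAM; it is a counting exercise under those assumptions, not an implementation of the described method.

```python
# Model of a DRAM page buffer that coalesces several updates to the same page:
# the page is opened once, updated for each clean-triggered write, and written
# back once.  PAGE_SIZE and the update batch are illustrative.

PAGE_SIZE = 4096

class PagedDram:
    def __init__(self):
        self.pages = {}          # page number -> {offset: value}
        self.active_page = None
        self.buffer = None
        self.uploads = 0

    def _activate(self, page):
        if self.active_page != page:
            self.flush()
            self.active_page = page
            self.buffer = dict(self.pages.get(page, {}))   # "download" the page

    def flush(self):
        if self.active_page is not None:
            self.pages[self.active_page] = self.buffer      # "upload" the page
            self.uploads += 1
            self.active_page, self.buffer = None, None

    def update(self, addr, value):
        self._activate(addr // PAGE_SIZE)
        self.buffer[addr % PAGE_SIZE] = value

dram = PagedDram()
for i in range(8):                  # eight sequential clean-triggered updates
    dram.update(0x6000 + 64 * i, i)
dram.flush()                        # no further traffic to this page
print(dram.uploads)                 # 1: a single upload covered all eight updates
```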
At a later point in time, each cache line corresponding to those memory write transactions can be evicted silently without causing an additional update of the respective memory location in the system memory306. Thus, generating clean requests using the write cleaner308can allow controlling the updates of the memory locations in the system memory in a predictable order so that uploading of the updated contents of the page buffer to the system memory306can be minimized and memory bandwidth can be improved. FIG.7illustrates an example of a computing device700. Functionality and/or several components of the computing device700may be used without limitation with other embodiments disclosed elsewhere in this disclosure. As an example, the computing device700can be part of the computing system300. A computing device700may facilitate processing of packets and/or forwarding of packets from the computing device700to another device. As referred to herein, a “packet” or “network packet” may refer to a variable or fixed unit of data. In some instances, a packet may include a packet header and a packet payload. The packet header may include information associated with the packet, such as the source, destination, quality of service parameters, length, protocol, routing labels, error correction information, etc. In certain implementations, one packet header may indicate information associated with a series of packets, such as a burst transaction. In some implementations, the computing device700may be the recipient and/or generator of packets. In some implementations, the computing device700may modify the contents of the packet before forwarding the packet to another device. The computing device700may be a peripheral device coupled to another computer device, a switch, a router, or any other suitable device enabled for receiving and forwarding packets. In one example, the computing device700may include processing logic702, a configuration module704, a management module706, a bus interface module708, memory710, and a network interface module712. These modules may be hardware modules, software modules, or a combination of hardware and software. In certain instances, modules may be interchangeably used with components or engines, without deviating from the scope of the disclosure. The computing device700may include additional modules, which are not illustrated here. In some implementations, the computing device700may include fewer modules. In some implementations, one or more of the modules may be combined into one module. One or more of the modules may be in communication with each other over a communication channel714. The communication channel714may include one or more busses, meshes, matrices, fabrics, a combination of these communication channels, or some other suitable communication channel. The processing logic702may include application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), systems-on-chip (SoCs), network processing units (NPUs), processors configured to execute instructions, or any other circuitry configured to perform logical arithmetic and floating point operations. Examples of processors that may be included in the processing logic702may include processors developed by ARM®, MIPS®, AMD®, Qualcomm®, and the like. In certain implementations, processors may include multiple processing cores, wherein each processing core may be configured to execute instructions independently of the other processing cores. 
Furthermore, in certain implementations, each processor or processing core may implement multiple processing threads executing instructions on the same processor or processing core, while maintaining logical separation between the multiple processing threads. Such processing threads executing on the processor or processing core may be exposed to software as separate logical processors or processing cores. In some implementations, multiple processors, processing cores or processing threads executing on the same core may share certain resources, such as for example busses, level 1 (L1) caches, and/or level 2 (L2) caches. The instructions executed by the processing logic702may be stored on a computer-readable storage medium, for example, in the form of a computer program. The computer-readable storage medium may be non-transitory. In some cases, the computer-readable medium may be part of the memory710. The memory710may include either volatile or non-volatile, or both volatile and non-volatile types of memory. The memory710may, for example, include random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory, and/or some other suitable storage media. In some cases, some or all of the memory710may be internal to the computing device700, while in other cases some or all of the memory may be external to the computing device700. The memory710may store an operating system comprising executable instructions that, when executed by the processing logic702, provides the execution environment for executing instructions providing networking functionality for the computing device700. The memory may also store and maintain several data structures and routing tables for facilitating the functionality of the computing device700. In some implementations, the configuration module704may include one or more configuration registers. Configuration registers may control the operations of the computing device700. In some implementations, one or more bits in the configuration register can represent certain capabilities of the computing device700. Configuration registers may be programmed by instructions executing in the processing logic702, and/or by an external entity, such as a host device, an operating system executing on a host device, and/or a remote device. The configuration module704may further include hardware and/or software that control the operations of the computing device700. In some implementations, the management module706may be configured to manage different components of the computing device700. In some cases, the management module706may configure one or more bits in one or more configuration registers at power up, to enable or disable certain capabilities of the computing device700. In certain implementations, the management module706may use processing resources from the processing logic702. In other implementations, the management module706may have processing logic similar to the processing logic702, but segmented away or implemented on a different power plane than the processing logic702. The bus interface module708may enable communication with external entities, such as a host device and/or other components in a computing system, over an external communication medium. The bus interface module708may include a physical interface for connecting to a cable, socket, port, or other connection to the external communication medium. 
The bus interface module708may further include hardware and/or software to manage incoming and outgoing transactions. The bus interface module708may implement a local bus protocol, such as Peripheral Component Interconnect (PCI) based protocols, Non-Volatile Memory Express (NVMe), Advanced Host Controller Interface (AHCI), Small Computer System Interface (SCSI), Serial Attached SCSI (SAS), Serial AT Attachment (SATA), Parallel ATA (PATA), some other standard bus protocol, or a proprietary bus protocol. The bus interface module708may include the physical layer for any of these bus protocols, including a connector, power management, and error handling, among other things. In some implementations, the computing device700may include multiple bus interface modules for communicating with multiple external entities. These multiple bus interface modules may implement the same local bus protocol, different local bus protocols, or a combination of the same and different bus protocols. The network interface module712may include hardware and/or software for communicating with a network. This network interface module712may, for example, include physical connectors or physical ports for wired connection to a network, and/or antennas for wireless communication to a network. The network interface module712may further include hardware and/or software configured to implement a network protocol stack. The network interface module712may communicate with the network using a network protocol, such as for example TCP/IP, Infiniband, RoCE, Institute of Electrical and Electronics Engineers (IEEE) 802.11 wireless protocols, User Datagram Protocol (UDP), Asynchronous Transfer Mode (ATM), token ring, frame relay, High Level Data Link Control (HDLC), Fiber Distributed Data Interface (FDDI), and/or Point-to-Point Protocol (PPP), among others. In some implementations, the computing device700may include multiple network interface modules, each configured to communicate with a different network. For example, in these implementations, the computing device700may include a network interface module for communicating with a wired Ethernet network, a wireless 802.11 network, a cellular network, an Infiniband network, etc. The various components and modules of the computing device700, described above, may be implemented as discrete components, as a System on a Chip (SoC), as an ASIC, as an NPU, as an FPGA, or any combination thereof. In some embodiments, the SoC or other component may be communicatively coupled to another computing system to provide various services such as traffic monitoring, traffic shaping, computing, etc. In some embodiments of the technology, the SoC or other component may include multiple subsystems. The modules described herein may be software modules, hardware modules or a suitable combination thereof. If the modules are software modules, the modules can be embodied on a non-transitory computer readable medium and processed by a processor in any of the computer systems described herein. It should be noted that the described processes and architectures can be performed either in real-time or in an asynchronous mode prior to any user interaction. The modules may be configured in the manner suggested inFIG.7, and/or functions described herein can be provided by one or more modules that exist as separate modules and/or module functions described herein can be spread over multiple modules. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. 
It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the disclosure as set forth in the claims. Other variations are within the spirit of the present disclosure. Thus, while the disclosed techniques are susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the disclosure to the specific form or forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the disclosure, as defined in the appended claims. The use of the terms “a” and “an” and “the” and similar referents in the context of describing the disclosed embodiments (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. The term “connected” is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein are merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate embodiments of the disclosure and does not pose a limitation on the scope of the disclosure unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the disclosure. Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is intended to be understood within the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present. Various embodiments of this disclosure are described herein, including the best mode known to the inventors for carrying out the disclosure. Variations of those embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. The inventors expect skilled artisans to employ such variations as appropriate and the inventors intend for the disclosure to be practiced otherwise than as specifically described herein. Accordingly, this disclosure includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. 
Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the disclosure unless otherwise indicated herein or otherwise clearly contradicted by context.
DETAILED DESCRIPTION The following detailed description refers to the accompanying drawings. Wherever convenient, the same reference numbers are used in the drawings and the following description to refer to the same or similar parts. While several illustrative embodiments are described herein, modifications, adaptations, and other implementations are possible. For example, substitutions, additions, or modifications may be made to the components illustrated in the drawings, and the illustrative methods described herein may be modified by substituting, reordering, removing, or adding steps to the disclosed methods. Accordingly, the following detailed description is not limited to the disclosed embodiments and examples. Instead, the proper scope is defined by the appended claims. Processor Architecture As used throughout this disclosure, the term “hardware chip” refers to a semiconductor wafer (such as silicon or the like) on which one or more circuit elements (such as transistors, capacitors, resistors, and/or the like) are formed. The circuit elements may form processing elements or memory elements. A “processing element” refers to one or more circuit elements that, together, perform at least one logic function (such as an arithmetic function, a logic gate, other Boolean operations, or the like). A processing element may be a general-purpose processing element (such as a configurable plurality of transistors) or a special-purpose processing element (such as a particular logic gate or a plurality of circuit elements designed to perform a particular logic function). A “memory element” refers to one or more circuit elements that can be used to store data. A “memory element” may also be referred to as a “memory cell.” A memory element may be dynamic (such that electrical refreshes are required to maintain the stored data), static (such that data persists without refreshing for as long as power is supplied), or non-volatile (such that data persists even after power is removed). Processing elements may be joined to form processor subunits. A “processor subunit” may thus comprise a smallest grouping of processing elements that may execute at least one task or instruction (e.g., of a processor instruction set). For example, a subunit may comprise one or more general-purpose processing elements configured to execute instructions together, one or more general-purpose processing elements paired with one or more special-purpose processing elements configured to execute instructions in a complementary fashion, or the like. The processor subunits may be arranged on a substrate (e.g., a wafer) in an array. Although the “array” may comprise a rectangular shape, any arrangement of the subunits in the array may be formed on the substrate. Memory elements may be joined to form memory banks. For example, a memory bank may comprise one or more lines of memory elements linked along at least one wire (or other conductive connection). Furthermore, the memory elements may be linked along at least one additional wire in another direction. For example, the memory elements may be arranged along wordlines and bitlines, as explained below. Although the memory bank may comprise lines, any arrangement of the elements in the bank may be used to form the bank on the substrate. Moreover, one or more banks may be electrically joined to at least one memory controller to form a memory array. Although the memory array may comprise a rectangular arrangement of the banks, any arrangement of the banks in the array may be formed on the substrate. 
As further used throughout this disclosure, a “bus” refers to any communicative connection between elements of a substrate. For example, a wire or a line (forming an electrical connection), an optical fiber (forming an optical connection), or any other connection conducting communications between components may be referred to as a “bus.” Conventional processors pair general-purpose logic circuits with shared memories. The shared memories may store both instruction sets for execution by the logic circuits as well as data used for and resulting from execution of the instruction sets. As described below, some conventional processors use a caching system to reduce delays in performing pulls from the shared memory; however, conventional caching systems remain shared. Conventional processors include central processing units (CPUs), graphics processing units (GPUs), various application-specific integrated circuits (ASICs), or the like.FIG.1shows an example of a CPU, andFIG.2shows an example of a GPU. As shown inFIG.1, a CPU100may comprise a processing unit110that includes one or more processor subunits, such as processor subunit120aand processor subunit120b. Although not depicted inFIG.1, each processor subunit may comprise a plurality of processing elements. Moreover, the processing unit110may include one or more levels of on-chip cache. Such cache elements are generally formed on the same semiconductor die as processing unit110rather than being connected to processor subunits120aand120bvia one or more buses formed in the substrate containing processor subunits120aand120band the cache elements. An arrangement directly on the same die, rather than being connected via buses, is common for both first-level (L1) and second-level (L2) caches in conventional processors. Alternatively, in older processors, L2 caches were shared amongst processor subunits using back-side buses between the subunits and the L2 caches. Back-side buses are generally larger than front-side buses, described below. Accordingly, because cache is to be shared with all processor subunits on the die, cache130may be formed on the same die as processor subunits120aand120bor communicatively coupled to processor subunits120aand120bvia one or more back-side buses. In both embodiments without buses (e.g., cache is formed directly on-die) as well as embodiments using back-side buses, the caches are shared between processor subunits of the CPU. Moreover, processing unit110communicates with shared memory140aand memory140b. For example, memories140aand140bmay represent memory banks of shared dynamic random access memory (DRAM). Although depicted with two banks, most conventional memory chips include between eight and sixteen memory banks. Accordingly, processor subunits120aand120bmay use shared memories140aand140bto store data that is then operated upon by processor subunits120aand120b. This arrangement, however, results in the buses between memories140aand140band processing unit110acting as a bottleneck when the clock speeds of processing unit110exceed data transfer speeds of the buses. This is generally true for conventional processors, resulting in lower effective processing speeds than the stated processing speeds based on clock rate and number of transistors. As shown inFIG.2, similar deficiencies persist in GPUs. A GPU200may comprise a processing unit210that includes one or more processor subunits (e.g., subunits220a,220b,220c,220d,220e,220f,220g,220h,220i,220j,220k,220l,220m,220n,220o, and220p). 
Moreover, the processing unit210may include one or more levels of on-chip cache and/or register files. Such cache elements are generally formed on the same semiconductor die as processing unit210. Indeed, in the example ofFIG.2, cache210is formed on the same die as processing unit210and shared amongst all of the processor subunits, while caches230a,230b,230c, and230dare formed on a subset of the processor subunits, respectively, and dedicated thereto. Moreover, processing unit210communicates with shared memories250a,250b,250c, and250d. For example, memories250a,250b,250c, and250dmay represent memory banks of shared DRAM. Accordingly, the processor subunits of processing unit210may use shared memories250a,250b,250c, and250dto store data that is then operated upon by the processor subunits. This arrangement, however, results in the buses between memories250a,250b,250c, and250dand processing unit210acting as a bottleneck, similar to the bottleneck described above for CPUs. Overview of Disclosed Hardware Chips FIG.3Ais a diagrammatic representation of an embodiment depicting an exemplary hardware chip300. Hardware chip300may comprise a distributed processor designed to mitigate the bottlenecks described above for CPUs, GPUs, and other conventional processors. A distributed processor may include a plurality of processor subunits distributed spatially on a single substrate. Moreover, as explained above, in distributed processors of the present disclosure, corresponding memory banks are also spatially distributed on the substrate. In some embodiments, a distributed processor may be associated with a set of instructions, and each one of the processor subunits of the distributed processor may be responsible for performing one or more tasks included in the set of instructions. As depicted inFIG.3A, hardware chip300may comprise a plurality of processor subunits, e.g., logic and control subunits320a,320b,320c,320d,320e,320f,320g, and320h. As further depicted inFIG.3A, each processor subunit may have a dedicated memory instance. For example, logic and control subunit320ais operably connected to dedicated memory instance330a, logic and control subunit320bis operably connected to dedicated memory instance330b, logic and control subunit320cis operably connected to dedicated memory instance330c, logic and control subunit320dis operably connected to dedicated memory instance330d, logic and control subunit320eis operably connected to dedicated memory instance330e, logic and control subunit320fis operably connected to dedicated memory instance330f, logic and control subunit320gis operably connected to dedicated memory instance330g, and logic and control subunit320his operably connected to dedicated memory instance330h. AlthoughFIG.3Adepicts each memory instance as a single memory bank, hardware chip300may include two or more memory banks as a dedicated memory instance for a processor subunit on hardware chip300. Furthermore, althoughFIG.3Adepicts each processor subunit as comprising both a logic component and a control for the dedicated memory bank(s), hardware chip300may use controls for the memory banks that are separate, at least in part, from the logic components. Moreover, as depicted inFIG.3A, two or more processor subunits and their corresponding memory banks may be grouped, e.g., into processing groups310a,310b,310c, and310d. A “processing group” may represent a spatial distinction on a substrate on which hardware chip300is formed.
Accordingly, a processing group may include further controls for the memory banks in the group, e.g., controls340a,340b,340c, and340d. Additionally or alternatively, a “processing group” may represent a logical grouping for the purposes of compiling code for execution on hardware chip300. Accordingly, a compiler for hardware chip300(further described below) may divide an overall set of instructions between the processing groups on hardware chip300. Furthermore, host350may provide instructions, data, and other input to hardware chip300and read output from the same. Accordingly, a set of instructions may be executed entirely on a single die, e.g., the die hosting hardware chip300. Indeed, the only communications off-die may include the loading of instructions to hardware chip300, any input sent to hardware chip300, and any output read from hardware chip300. Accordingly, all calculations and memory operations may be performed on-die (on hardware chip300) because the processor subunits of hardware chip300communicate with dedicated memory banks of hardware chip300. FIG.3Bis a diagrammatic representation of an embodiment depicting another exemplary hardware chip300′. Although depicted as an alternative to hardware chip300, the architecture depicted inFIG.3Bmay be combined, at least in part, with the architecture depicted inFIG.3A. As depicted inFIG.3B, hardware chip300′ may comprise a plurality of processor subunits, e.g., processor subunits350a,350b,350c, and350d. As further depicted inFIG.3B, each processor subunit may have a plurality of dedicated memory instances. For example, processor subunit350ais operably connected to dedicated memory instances330aand330b, processor subunit350bis operably connected to dedicated memory instances330cand330d, processor subunit350cis operably connected to dedicated memory instances330eand330f, and processor subunit350dis operably connected to dedicated memory instances330gand330h. Moreover, as depicted inFIG.3B, the processor subunits and their corresponding memory banks may be grouped, e.g., into processing groups310a,310b,310c, and310d. As explained above, a “processing group” may represent a spatial distinction on a substrate on which hardware chip300′ is formed and/or a logical grouping for the purposes of compiling code for execution on hardware chip300′. As further depicted inFIG.3B, the processor subunits may communicate with each other via buses. For example, as shown inFIG.3B, processor subunit350amay communicate with processor subunit350bvia bus360a, with processor subunit350cvia bus360c, and with processor subunit350dvia bus360f. Similarly, processor subunit350bmay communicate with processor subunit350avia bus360a(as described above), with processor subunit350cvia bus360e, and with processor subunit350dvia bus360d. In addition, processor subunit350cmay communicate with processor subunit350avia bus360c(as described above), with processor subunit350bvia bus360e(as described above), and with processor subunit350dvia bus360b. Accordingly, processor subunit350dmay communicate with processor subunit350avia bus360f(as described above), with processor subunit350bvia bus360d(as described above), and with processor subunit350cvia bus360b(as described above). One of ordinary skill will understand that fewer buses than depicted inFIG.3Bmay be used. For example, bus360emay be eliminated such that communications between processor subunit350band350cpass through processor subunit350aand/or350d.
Similarly, bus360fmay be eliminated such that communications between processor subunit350aand processor subunit350dpass through processor subunit350bor350c. Moreover, one of ordinary skill will understand that architectures other than those depicted inFIGS.3A and3Bmay be used. For example, an array of processing groups, each with a single processor subunit and memory instance, may be arranged on a substrate. Processor subunits may additionally or alternatively form part of controllers for corresponding dedicated memory banks, part of controllers for memory mats of corresponding dedicated memory, or the like. In view of the architecture described above, hardware chips300and300′ may provide significant increases in efficiency for memory-intensive tasks as compared with traditional architectures. For example, database operations and artificial intelligence algorithms (such as neural networks) are examples of memory-intensive tasks for which traditional architectures are less efficient than hardware chips300and300′. Accordingly, hardware chips300and300′ may be referred to as database accelerator processors and/or artificial intelligence accelerator processors. Configuring the Disclosed Hardware Chips The hardware chip architecture described above may be configured for execution of code. For example, each processor subunit may individually execute code (defining a set of instructions) apart from other processor subunits in the hardware chip. Accordingly, rather than relying on an operating system to manage multithreading or using multitasking (which is concurrency rather than parallelism), hardware chips of the present disclosure may allow for processor subunits to operate fully in parallel. In addition to the fully parallel implementation described above, at least some of the instructions assigned to each processor subunit may be overlapping. For example, a plurality of processor subunits on a distributed processor may execute overlapping instructions as, for example, an implementation of an operating system or other management software, while executing non-overlapping instructions in order to perform parallel tasks within the context of the operating system or other management software. FIG.4depicts an exemplary process400for executing a generic command with processing group410. For example, processing group410may comprise a portion of a hardware chip of the present disclosure, e.g., hardware chip300, hardware chip300′, or the like. As depicted inFIG.4, a command may be sent to processor subunit430, which is paired with dedicated memory instance420. An external host (e.g., host350) may send the command to processing group410for execution. Alternatively, host350may have sent an instruction set including the command for storage in memory instance420such that processor subunit430may retrieve the command from memory instance420and execute the retrieved command. Accordingly, the command may be executed by processing element440, which is a generic processing element configurable to execute the received command. Moreover, processing group410may include a control460for memory instance420. As depicted inFIG.4, control460may perform any reads and/or writes to memory instance420required by processing element440when executing the received command. After execution of the command, processing group410may output the result of the command, e.g., to the external host or to a different processing group on the same hardware chip.
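The generic command flow of process400may be easier to follow with a brief sketch. The following Python fragment is offered for illustration only; it is not the disclosed hardware, and all names are hypothetical. It models a host issuing a command to a processor subunit whose control performs the reads and writes to the dedicated memory instance before the result is output.

    # Illustrative sketch (hypothetical names) of the generic flow of process 400:
    # a command reaches a processor subunit, a control performs reads/writes to the
    # dedicated memory instance, and the generic processing element returns a result.
    class MemoryInstance:
        def __init__(self, size: int):
            self.data = [0] * size

    class MemoryControl:
        """Performs reads and writes to the dedicated memory instance."""
        def __init__(self, memory: MemoryInstance):
            self.memory = memory
        def read(self, address: int) -> int:
            return self.memory.data[address]
        def write(self, address: int, value: int) -> None:
            self.memory.data[address] = value

    class ProcessorSubunit:
        """Generic processing element paired with its dedicated memory."""
        def __init__(self, control: MemoryControl):
            self.control = control
        def execute(self, command: dict) -> int:
            # A trivial "add" command: read two operands, add them, store the result.
            if command["op"] == "add":
                a = self.control.read(command["src0"])
                b = self.control.read(command["src1"])
                result = a + b
                self.control.write(command["dst"], result)
                return result
            raise ValueError("unsupported command")

    # Example: the external host writes operands, then issues a command.
    memory = MemoryInstance(16)
    control = MemoryControl(memory)
    subunit = ProcessorSubunit(control)
    control.write(0, 3)
    control.write(1, 4)
    print(subunit.execute({"op": "add", "src0": 0, "src1": 1, "dst": 2}))  # 7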
In some embodiments, as depicted inFIG.4, processor subunit430may further include an address generator450. An “address generator” may comprise a plurality of processing elements that are configured to determine addresses in one or more memory banks for performing reads and writes and may also perform operations on the data located at the determined addresses (e.g., addition, subtraction, multiplication, or the like). For example, address generator450may determine addresses for any reads or writes to memory. In one example, address generator450may increase efficiency by overwriting a read value with a new value determined based on the command when the read value is no longer needed. Additionally or alternatively, address generator450may select available addresses for storage of results from execution of the command. This may allow for scheduling of result read-off for a later clock cycle, when it is more convenient for the external host. In another example, address generator450may determine addresses to read from and write to during a multi-cycle calculation, such as a vector or matrix multiply-accumulate calculation. Accordingly, address generator450may maintain or calculate memory addresses for reading data and writing intermediate results of the multi-cycle calculation such that processor subunit430may continue processing without having to store these memory addresses. FIG.5depicts an exemplary process500for executing a specialized command with processing group510. For example, processing group510may comprise a portion of a hardware chip of the present disclosure, e.g., hardware chip300, hardware chip300′, or the like. As depicted inFIG.5, a specialized command (e.g., a multiply-accumulate command) may be sent to processing element530, which is paired with dedicated memory instance520. An external host (e.g., host350) may send the command to processing element530for execution. Accordingly, the command may be executed at a given signal from the host by processing element530, a specialized processing element configurable to execute particular commands (including the received command). Alternatively, processing element530may retrieve the command from memory instance520for execution. Thus, in the example ofFIG.5, processing element530is a multiply-accumulate (MAC) circuit configured to execute MAC commands received from the external host or retrieved from memory instance520. After execution of the command, processing group510may output the result of the command, e.g., to the external host or to a different processing group on the same hardware chip. Although depicted with a single command and a single result, a plurality of commands may be received or retrieved and executed, and a plurality of results may be combined on processing group510before output. Although depicted as a MAC circuit inFIG.5, additional or alternative specialized circuits may be included in processing group510. For example, a MAX-read command (which returns the max value of a vector), a MAX0-read command (a common function, also termed a rectifier, which returns the entire vector but also performs MAX with 0), or the like may be implemented. Although depicted separately, the generalized processing group410ofFIG.4and the specialized processing group510ofFIG.5may be combined. For example, a generic processor subunit may be coupled to one or more specialized processor subunits to form a processor subunit.
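The combination just described, in which a generic element handles whatever a specialized element cannot, can be sketched briefly. The Python fragment below is illustrative only and uses hypothetical names; it simply dispatches multiply-accumulate (MAC) commands to a specialized element and all other commands to a generic element.

    # Illustrative sketch (hypothetical names): a processor subunit combining a
    # generic processing element with a specialized multiply-accumulate (MAC) element.
    class MacAccelerator:
        """Specialized element: multiply-accumulate over two vectors."""
        def run(self, a, b):
            acc = 0
            for x, y in zip(a, b):
                acc += x * y
            return acc

    class GenericElement:
        """Generic element: handles everything the accelerator cannot."""
        def run(self, op, a, b=None):
            if op == "add":
                return [x + y for x, y in zip(a, b)]
            if op == "max":
                return max(a)
            raise ValueError("unsupported op")

    class CombinedSubunit:
        def __init__(self):
            self.mac = MacAccelerator()
            self.generic = GenericElement()
        def execute(self, op, a, b=None):
            if op == "mac":
                return self.mac.run(a, b)      # specialized path
            return self.generic.run(op, a, b)  # generic fallback

    subunit = CombinedSubunit()
    print(subunit.execute("mac", [1, 2, 3], [4, 5, 6]))  # 32
    print(subunit.execute("add", [1, 2, 3], [4, 5, 6]))  # [5, 7, 9]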
Accordingly, the generic processor subunit may be used for all instructions not executable by the one or more specialized processor subunits. One of ordinary skill will understand that neural network implementation and other memory-intensive tasks may be handled with specialized logic circuits. For example, database queries, packet inspection, string comparison, and other functions may increase in efficiency if executed by the hardware chips described herein. A Memory-Based Architecture for Distributed Processing On hardware chips consistent with the present disclosure, dedicated buses may transfer data between processor subunits on the chip and/or between the processor subunits and their corresponding dedicated memory banks. The use of dedicated buses may reduce arbitration costs because competing requests are either not possible or easily avoided using software rather than hardware. FIG.6schematically depicts a diagrammatic representation of a processing group600. Processing group600may be for use in a hardware chip, e.g., hardware chip300, hardware chip300′, or the like. Processor subunit610may be connected via buses630to memory620. Memory620may comprise a Randomly Accessible Memory (RAM) element that stores data and code for execution by processor subunit610. In some embodiments, memory620may be an N-Way memory (wherein N is a number equal to or larger than 1 that indicates the number of segments in an interleaved memory620). Because processor subunit610is coupled to memory620dedicated to processor subunit610via bus630, N may be kept relatively small without compromising the execution performance. This represents an improvement over conventional multiway register files or caches where a lower N generally results in lower execution performance, and a higher N generally results in large area and power loss. The size of memory620, the number of ways, and the width of bus630may be adjusted to meet the requirements of tasks and application implementations of a system using processing group600according to, for instance, the size of data involved in the task or tasks. Memory element620may comprise one or more types of memory known in the art, e.g., volatile memory (such as RAM, DRAM, SRAM, phase-change RAM (PRAM), magnetoresistive RAM (MRAM), resistive RAM (ReRAM), or the like) or non-volatile memory (such as flash or ROM). According to some embodiments, a portion of memory element620may comprise a first memory type, while another portion may comprise another memory type. For instance, the code region of a memory element620may comprise a ROM element, while a data region of the memory element620may comprise a DRAM element. Another example for such partitioning is storing the weights of a neural network in flash while storing the data for calculation in DRAM. Processor subunit610comprises a processing element640that may comprise a processor. The processor may be pipelined or non-pipelined, may be a customized Reduced Instruction Set Computing (RISC) element or follow another processing scheme, and may be implemented on any commercial Integrated Circuit (IC) known in the art (such as ARM, ARC, RISC-V, etc.), as appreciated by one of ordinary skill. Processing element640may comprise a controller that, in some embodiments, includes an Arithmetic Logic Unit (ALU) or other controller. According to some embodiments, processing element640, which executes received or stored code, may comprise a generic processing element and, therefore, be flexible and capable of performing a wide variety of processing operations.
Non-dedicated circuitry typically consumes more power than specific-operation-dedicated circuitry when comparing the power consumed during performance of a specific operation. Therefore, when performing specific complex arithmetic calculations, processing element640may consume more power and perform less efficiently than dedicated hardware. Therefore, according to some embodiments, a controller of processing element640may be designed to perform specific operations (e.g., addition or “move” operations). In one example, the specific operations may be performed by one or more accelerators650. Each accelerator may be dedicated and programmed to perform a specific calculation (such as multiplication, floating point vector operations, or the like). By using accelerator(s), the average power consumed per calculation per processor subunit may be lowered, and the calculation throughput thereby increases. Accelerator(s)650may be chosen according to an application that the system is designed to implement (e.g., execution of neural networks, execution of database queries, or the like). Accelerator(s)650may be configured by processing element640and may operate in tandem therewith for lowering power consumption and accelerating calculations and computations. The accelerator may additionally or alternatively be used to transfer data between memory and MUXs/DEMUXs/input/output ports (e.g., MUX660and DEMUX670) of processing group600, such as a smart DMA (direct memory access) peripheral. Accelerator(s)650may be configured to perform a variety of functions. For instance, one accelerator may be configured to perform 16-bit floating point calculations or 8-bit integer calculations, which are often used in neural networks. Another example of an accelerator function is a 32-bit floating point calculation, which is often used during a training stage of a neural network. Yet another example of an accelerator function is query processing, such as that used in databases. In some embodiments, accelerator(s)650may comprise specialized processing elements to perform these functions and/or may be configured according to configuration data, stored on the memory element620, such that it may be modified. Accelerator(s)650may additionally or alternatively implement a configurable scripted list of memory movements to time movements of data to/from memory620or to/from other accelerators and/or inputs/outputs. Accordingly, as explained further below, all the data movement inside the hardware chip using processing group600may use software synchronization rather than hardware synchronization. For example, an accelerator in one processing group (e.g., group600) may transfer data from its input to its accelerator every tenth cycle and then output data at the next cycle, thereby letting the information flow from the memory of one processing group to another. As further depicted inFIG.6, in some embodiments, processing group600may further comprise at least one input multiplexer (MUX)660connected to its input port and at least one output DEMUX670connected to its output port. These MUXs/DEMUXs may be controlled by control signals (not shown) from processing element640and/or from one of accelerator(s)650, determined according to a current instruction being carried out by processing element640and/or the operation executed by an accelerator of accelerator(s)650.
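The scripted, cycle-timed data movements mentioned above may be illustrated with a short sketch. The following Python fragment is offered for illustration only and uses hypothetical names; it runs a configurable list of memory movements in which each load from the input port and each store to the output port is tied to a predetermined cycle, so that timing is set by software rather than by hardware arbitration.

    # Illustrative sketch (hypothetical names): a configurable scripted list of
    # memory movements, each tied to a cycle number, modeling software-timed DMA.
    def run_movement_script(script, cycles, memory, input_port, output_port):
        """script: list of (cycle, action, address) tuples.
        action is 'load' (input port -> memory) or 'store' (memory -> output port)."""
        schedule = {(cycle, action): address for cycle, action, address in script}
        for cycle in range(cycles):
            if (cycle, "load") in schedule:
                memory[schedule[(cycle, "load")]] = input_port.pop(0)
            if (cycle, "store") in schedule:
                output_port.append(memory[schedule[(cycle, "store")]])

    # Example: load new data every tenth cycle and output it on the next cycle.
    memory = [0] * 8
    input_port = [11, 22, 33]
    output_port = []
    script = [(0, "load", 0), (1, "store", 0),
              (10, "load", 1), (11, "store", 1),
              (20, "load", 2), (21, "store", 2)]
    run_movement_script(script, cycles=30, memory=memory,
                        input_port=input_port, output_port=output_port)
    print(output_port)  # [11, 22, 33]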
In some scenarios, processing group600may be required (according to a predefined instruction from its code memory) to transfer data from its input port to its output port. Accordingly, one or more of the input MUXs (e.g., MUX660) may be directly connected via one or more buses to an output DEMUX (e.g., DEMUX670), in addition to each of the DEMUXs/MUXs being connected to processing element640and accelerator(s)650. The processing group600ofFIG.6may be arrayed to form a distributed processor, for example, as depicted inFIG.7A. The processing groups may be disposed on substrate710to form an array. In some embodiments, substrate710may comprise a semiconductor substrate, such as silicon. Additionally or alternatively, substrate710may comprise a circuit board, such as a flexible circuit board. As depicted inFIG.7A, substrate710may include, disposed thereon, a plurality of processing groups, such as processing group600. Accordingly, substrate710includes a memory array that includes a plurality of banks, such as banks720a,720b,720c,720d,720e,720f,720g, and720h. Furthermore, substrate710includes a processing array that may include a plurality of processor subunits, such as subunits730a,730b,730c,730d,730e,730f,730g, and730h. Furthermore, as explained above, each processing group may include a processor subunit and one or more corresponding memory banks dedicated to the processor subunit. Accordingly, as depicted inFIG.7A, each subunit is associated with a corresponding, dedicated memory bank, e.g.; Processor subunit730ais associated with memory bank720a, processor subunit730bis associated with memory bank720b, processor subunit730cis associated with memory bank720c, processor subunit730dis associated with memory bank720d, processor subunit730eis associated with memory bank720e, processor subunit730fis associated with memory bank720f, processor subunit730gis associated with memory bank720g, processor subunit730his associated with memory bank720h. To allow each processor subunit to communicate with its corresponding, dedicated memory bank(s), substrate710may include a first plurality of buses connecting one of the processor subunits to its corresponding, dedicated memory bank(s). Accordingly, bus740aconnects processor subunit730ato memory bank720a, bus740bconnects processor subunit730bto memory bank720b, bus740cconnects processor subunit730cto memory bank720c, bus740dconnects processor subunit730dto memory bank720d, bus740econnects processor subunit730eto memory bank720e, bus740fconnects processor subunit730fto memory bank720f, bus740gconnects processor subunit730gto memory bank720g, and bus740hconnects processor subunit730hto memory bank720h. Moreover, to allow each processor subunit to communicate with other processor subunits, substrate710may include a second plurality of buses connecting one of the processor subunits to another of the processor subunits. 
In the example ofFIG.7A, bus750aconnects processor subunit730ato processor subunit730e, bus750bconnects processor subunit730ato processor subunit730b, bus750cconnects processor subunit730bto processor subunit730f, bus750dconnects processor subunit730bto processor subunit730c, bus750econnects processor subunit730cto processor subunit730g, bus750fconnects processor subunit730cto processor subunit730d, bus750gconnects processor subunit730dto processor subunit730h, bus750hconnects processor subunit730hto processor subunit730g, bus750iconnects processor subunit730gto processor subunit730f, and bus750jconnects processor subunit730fto processor subunit730e. Accordingly, in the example arrangement shown inFIG.7A, the plurality of logic processor subunits is arranged in at least one row and at least one column. The second plurality of buses connect each processor subunit to at least one adjacent processor subunit in the same row and to at least one adjacent processor subunit in the same column.FIG.7Amay be referred to as a “partial tile connection.” The arrangement shown inFIG.7Amay be modified to form a “full tile connection.” A full tile connection includes additional buses connecting diagonal processor subunits. For example, the second plurality of buses may include additional buses between processor subunit730aand processor subunit730f, between processor subunit730band processor subunit730e, between processor subunit730band processor subunit730g, between processor subunit730cand processor subunit730f, between processor subunit730cand processor subunit730h, and between processor subunit730dand processor subunit730g. A full tile connection may be used for convolution calculations, in which data and results stored in a nearby processor subunit are used. For example, during convolutional image processing, each processor subunit may receive a tile of the image (such as a pixel or a group of pixels). In order to calculate the convolution results, each processor subunit may acquire data from all eight adjacent processor subunits, each of which has received a corresponding tile. In a partial tile connection, the data from the diagonal adjacents may be passed through other adjacent processor subunits connected to the processor subunit. Accordingly, the distributed processor on a chip may be an artificial intelligence accelerator processor. In a specific example of a convolutional calculation, an N×M image may be divided across a plurality of processor subunits. Each processor subunit may perform a convolution with an A×B filter on its corresponding tile. To perform the filtering on one or more pixels on a boundary between tiles, each processor subunit may require data from neighboring processor subunits having tiles including pixels on the same boundary. Accordingly, the code generated for each processor subunit configures the subunit to calculate the convolutions and pull from one of the second plurality of buses whenever data is needed from an adjacent subunit. Corresponding commands to output data to the second plurality of buses are provided to the subunits to ensure proper timing of needed data transfers. The partial tile connection ofFIG.7Amay be modified to be an N-partial tile connection. In this modification, the second plurality of buses may further connect each processor subunit to processor subunits within a threshold distance of the processor subunit (e.g., within n processor subunits) in the four directions along which the buses ofFIG.7Arun (i.e., up, down, left, and right).
A similar modification may be made to the full-tile connection (to result in an N-full tile connection) such that the second plurality of buses further connects each processor subunit to processor subunits within a threshold distance of the processor subunit (e.g., within n processor subunits) in the four directions along which the buses ofFIG.7Arun in addition to the two diagonal directions. Other arrangements are possible. For example, in the arrangement shown inFIG.7B, bus750aconnects processor subunit730ato processor subunit730d, bus750bconnects processor subunit730ato processor subunit730b, bus750cconnects processor subunit730bto processor subunit730c, and bus750dconnects processor subunit730cto processor subunit730d. Accordingly, in the example arrangement shown inFIG.7B, the plurality of processor subunits is arranged in a star pattern. The second plurality of buses connect each processor subunit to at least one adjacent processor subunit within the star pattern. Further arrangements (not shown) are possible. For example, a neighbor connection arrangement may be used such that the plurality of processor subunits is arranged in one or more lines (e.g., similar to that depicted inFIG.7A). In a neighbor connection arrangement, the second plurality of buses connect each processor subunit to a processor subunit to the left in the same line, to a processor subunit to the right in the same line, to the processor subunits both to the left and to the right in the same line, etc. In another example, an N-linear connection arrangement may be used. In an N-linear connection arrangement, the second plurality of buses connect each processor subunit to processor subunits within a threshold distance of the processor subunit (e.g., within n processor subunits). The N-linear connection arrangement may be used with the line array (described above), the rectangular array (depicted inFIG.7A), the elliptical array (depicted inFIG.7B), or any other geometrical array. In yet another example, an N-log connection arrangement may be used. In an N-log connection arrangement, the second plurality of buses connect each processor subunit to processor subunits within a threshold power of two distance of the processor subunit (e.g., within 2^n processor subunits). The N-log connection arrangement may be used with the line array (described above), the rectangular array (depicted inFIG.7A), the elliptical array (depicted inFIG.7B), or any other geometrical array. Any of the connection schemes described above may be combined for use in the same hardware chip. For example, a full tile connection may be used in one region while a partial tile connection is used in another region. In another example, an N-linear connection arrangement may be used in one region while an N-full tile connection is used in another region. Alternatively or in addition to dedicated buses between processor subunits of the memory chip, one or more shared buses may be used to interconnect all (or a subset of) the processor subunits of a distributed processor. Collisions on the shared buses may still be avoided by timing data transfers on the shared buses using code executed by the processor subunits, as explained further below. Additionally or alternatively to shared buses, configurable buses may be used to dynamically connect processor subunits to form groups of processor subunits connected to separate buses.
For example, the configurable buses may include transistors or other mechanisms that may be controlled by a processor subunit to direct data transfers to a selected processor subunit. In bothFIGS.7A and7B, the plurality of processor subunits of the processing array is spatially distributed among the plurality of discrete memory banks of the memory array. In other alternative embodiments (not shown), the plurality of processor subunits may be clustered in one or more regions of the substrate, and the plurality of memory banks may be clustered in one or more other regions of the substrate. In some embodiments, a combination of spatial distribution and clustering may be used (not shown). For example, one region of the substrate may include a cluster of processor subunits, another region of the substrate may include a cluster of memory banks, and yet another region of the substrate may include processing arrays distributed amongst memory banks. One of ordinary skill will recognize that arraying processing groups600on a substrate is not an exclusive embodiment. For example, each processor subunit may be associated with at least two dedicated memory banks. Accordingly, processing groups310a,310b,310c, and310dofFIG.3Bmay be used in lieu of or in combination with processing group600to form the processing array and the memory array. Other processing groups including, for example, three, four, or more dedicated memory banks (not shown) may be used. Each of the plurality of processor subunits may be configured to execute software code associated with a particular application independently, relative to other processor subunits included in the plurality of processor subunits. For example, as explained below, a plurality of sub-series of instructions may be grouped as machine code and provided to each processor subunit for execution. In some embodiments, each dedicated memory bank comprises at least one dynamic random access memory (DRAM). Alternatively, the memory banks may comprise a mix of memory types, such as static random access memory (SRAM), DRAM, Flash, or the like. In conventional processors, data sharing between processor subunits is usually performed with shared memory. Shared memory typically requires a large portion of chip area and/or a bus that is managed by additional hardware (such as arbiters). The bus results in bottlenecks, as described above. In addition, the shared memory, which may be external to the chip, typically includes cache coherency mechanisms and more complex caches (e.g., L1 cache, L2 cache, and shared DRAM) in order to provide accurate and up-to-date data to the processor subunits. As explained further below, the dedicated buses depicted inFIGS.7A and7Ballow for hardware chips that are free of hardware management (such as arbiters). Moreover, the use of dedicated memories as depicted inFIGS.7A and7Ballows for the elimination of complex caching layers and coherency mechanisms. Instead, in order to allow each processor subunit to access data calculated by other processor subunits and/or stored in memory banks dedicated to the other processor subunits, buses are provided whose timing is controlled dynamically using code individually executed by each processor subunit. This allows for elimination of most, if not all, bus management hardware as conventionally used. Moreover, complex caching mechanisms are replaced with direct transfers over these buses, resulting in lower latency times during memory reads and writes.
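The software-timed, arbiter-free buses described above can be modeled with a brief sketch. The following Python fragment is illustrative only and uses hypothetical names; two subunits share a bare-wire bus, and collisions are avoided solely because the code generated for each subunit drives the bus only on the cycles assigned to it.

    # Illustrative sketch (hypothetical names): a shared bus with no arbiter, where
    # collision avoidance comes entirely from the per-subunit transmission schedule.
    class Bus:
        """A bare wire: no arbiter, no FIFO; software timing prevents collisions."""
        def __init__(self):
            self.value = None
            self.drives = 0
        def new_cycle(self):
            self.value = None
            self.drives = 0
        def drive(self, value):
            self.drives += 1
            if self.drives > 1:
                raise RuntimeError("collision: bus driven twice in one cycle")
            self.value = value

    class SubunitProgram:
        """Compiled sub-series for one subunit: fixed cycles on which it drives the bus."""
        def __init__(self, subunit_id, send_cycles, payloads):
            self.subunit_id = subunit_id
            self.schedule = dict(zip(send_cycles, payloads))
            self.received = []
        def drive_phase(self, cycle, bus):
            if cycle in self.schedule:
                bus.drive((self.subunit_id, self.schedule[cycle]))
        def sample_phase(self, cycle, bus):
            if cycle not in self.schedule and bus.value is not None:
                self.received.append(bus.value)

    bus = Bus()
    a = SubunitProgram("A", [0, 2], ["a0", "a1"])
    b = SubunitProgram("B", [1, 3], ["b0", "b1"])
    for cycle in range(4):
        bus.new_cycle()
        for subunit in (a, b):
            subunit.drive_phase(cycle, bus)
        for subunit in (a, b):
            subunit.sample_phase(cycle, bus)
    print(a.received)  # [('B', 'b0'), ('B', 'b1')]
    print(b.received)  # [('A', 'a0'), ('A', 'a1')]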
Memory-Based Processing Arrays As depicted inFIGS.7A and7B, a memory chip of the present disclosure may operate independently. Alternatively, memory chips of the present disclosure may be operably connected with one or more additional integrated circuits, such as a memory device (e.g., one or more DRAM banks), a system-on-a-chip, a field-programmable gate array (FPGA), or other processing and/or memory chip. In such embodiments, tasks in a series of instructions executed by the architecture may be divided (e.g., by a compiler, as described below) between processor subunits of the memory chip and any processor subunits of the additional integrated circuit(s). For example, the other integrated circuits may comprise a host (e.g., host350ofFIG.3A) that inputs instructions and/or data to the memory chip and receives output therefrom. In order to interconnect memory chips of the present disclosure with one or more additional integrated circuits, the memory chip may include a memory interface, such as a memory interface complying with a Joint Electron Device Engineering Council (JEDEC) standard or any of its variants. The one or more additional integrated circuits may then connect to the memory interface. Accordingly, if the one or more additional integrated circuits are connected to a plurality of memory chips of the present disclosure, data may be shared between the memory chips through the one or more additional integrated circuits. Additionally or alternatively, the one or more additional integrated circuits may include buses to connect to buses on the memory chips of the present disclosure such that the one or more additional integrated circuits may execute code in tandem with the memory chips of the present disclosure. In such embodiments, the one or more additional integrated circuits further assist with distributed processing even though they may be on different substrates than the memory chips of the present disclosure. Furthermore, memory chips of the present disclosure may be arrayed in order to form an array of distributed processors. For example, one or more buses may connect a memory chip770ato an additional memory chip770b, as depicted inFIG.7C. In the example ofFIG.7C, memory chip770aincludes processor subunits with one or more corresponding memory banks dedicated to each processor subunit, e.g.: Processor subunit730ais associated with memory bank720a, processor subunit730bis associated with memory bank720b, processor subunit730eis associated with memory bank720c, and processor subunit730fis associated with memory bank720d. Buses connect each processor subunit to its corresponding memory bank. Accordingly, bus740aconnects processor subunit730ato memory bank720a, bus740bconnects processor subunit730bto memory bank720b, bus740cconnects processor subunit730eto memory bank720c, and bus740dconnects processor subunit730fto memory bank720d. Moreover, bus750aconnects processor subunit730ato processor subunit730e, bus750bconnects processor subunit730ato processor subunit730b, bus750cconnects processor subunit730bto processor subunit730f, and bus750dconnects processor subunit730eto processor subunit730f. Other arrangements of memory chip770amay be used, for example, as described above.
Similarly, memory chip770bincludes processor subunits with one or more corresponding memory banks dedicated to each processor subunit, e.g.: Processor subunit730cis associated with memory bank720e, processor subunit730dis associated with memory bank720f, processor subunit730gis associated with memory bank720g, and processor subunit730his associated with memory bank720h. Buses connect each processor subunit to its corresponding memory bank. Accordingly, bus740econnects processor subunit730cto memory bank720e, bus740fconnects processor subunit730dto memory bank720f, bus740gconnects processor subunit730gto memory bank720g, and bus740hconnects processor subunit730hto memory bank720h. Moreover, bus750gconnects processor subunit730cto processor subunit730g, bus750hconnects processor subunit730dto processor subunit730h, bus750iconnects processor subunit730cto processor subunit730d, and bus750jconnects processor subunit730gto processor subunit730h. Other arrangements of memory chip770bmay be used, for example, as described above. The processor subunits of memory chip770aand770bmay be connected using one or more buses. Accordingly, in the example ofFIG.7C, bus750emay connect processor subunit730bof memory chip770aand processor subunit730cof memory chip770b, and bus750fmay connect processor subunit730fof memory chip770aand processor subunit730cof memory chip770b. For example, bus750emay serve as an input bus to memory chip770b(and thus an output bus for memory chip770a) while bus750fmay serve as an input bus to memory chip770a(and thus an output bus for memory chip770b) or vice versa. Alternatively, buses750eand750fmay both serve as two-way buses between memory chips770aand770b. Buses750eand750fmay include direct wires or may be interleaved on a high-speed connection in order to reduce the pins used for the inter-chip interface between memory chip770aand integrated circuit770b. Moreover, any of the connection arrangements described above used in the memory chip itself may be used to connect the memory chip to one or more additional integrated circuits. For example, memory chip770aand770bmay be connected using a full-tile or partial-tile connection rather than only two buses as shown inFIG.7C. Accordingly, although depicted using buses750eand750f, architecture760may include fewer buses or additional buses. For example, a single bus between processor subunits730band730cor between processor subunits730fand730cmay be used. Alternatively, additional buses, e.g., between processor subunits730band730d, between processor subunits730fand730d, or the like, may be used. Furthermore, although depicted as using a single memory chip and an additional integrated circuit, a plurality of memory chips may be connected using buses as explained above. For example, as depicted in the example ofFIG.7C, memory chips770a,770b,770c, and770dare connected in an array. Each memory chip includes processor subunits and dedicated memory banks similar to the memory chips described above. Accordingly, a description of these components is not repeated here. In the example ofFIG.7C, memory chips770a,770b,770c, and770dare connected in a loop. Accordingly, bus750aconnects memory chips770aand770d, bus750cconnects memory chips770aand770b, bus750econnects memory chips770band770c, and bus750gconnects memory chips770cand770d.
Although memory chips770a,770b,770c, and770dmay be connected with full-tile connections, partial-tile connections, or other connection arrangements, the example ofFIG.7Callows for fewer pin connections between memory chips770a,770b,770c, and770d. Relatively Large Memories Embodiments of the present disclosure may use dedicated memories of relatively large size as compared with shared memories of conventional processors. The use of dedicated memories rather than shared memories allows for gains in efficiency to continue without tapering off with memory increases. This allows for memory-intensive tasks such as neural network processing and database queries to be performed more efficiently than in conventional processors, where the efficiency gains of increasing shared memory taper off due to the von Neumann bottleneck. For example, in distributed processors of the present disclosure, a memory array disposed on the substrate of the distributed processor may include a plurality of discrete memory banks. Each of the discrete memory banks may have a capacity greater than one megabyte. A processing array, including a plurality of processor subunits, may also be disposed on the substrate. As explained above, each one of the processor subunits may be associated with a corresponding, dedicated one of the plurality of discrete memory banks. In some embodiments, the plurality of processor subunits may be spatially distributed among the plurality of discrete memory banks within the memory array. By using dedicated memories of at least one megabyte, rather than shared caches of a few megabytes for a large CPU or GPU, the distributed processors of the present disclosure gain efficiencies that are not possible in conventional systems due to the von Neumann bottleneck in CPUs and GPUs. Different memories may be used as the dedicated memories. For example, each dedicated memory bank may comprise at least one DRAM bank. Alternatively, each dedicated memory bank may comprise at least one static random access memory bank. In other embodiments, different types of memories may be combined on a single hardware chip. As explained above, each dedicated memory may be at least one megabyte. Accordingly, each dedicated memory bank may be the same size, or at least two of the plurality of memory banks may have different sizes. Moreover, as described above, the distributed processor may include a first plurality of buses, each connecting one of the plurality of processor subunits to a corresponding, dedicated memory bank and a second plurality of buses, each connecting one of the plurality of processor subunits to another one of the plurality of processor subunits. Synchronization Using Software As explained above, hardware chips of the present disclosure may manage data transfers using software rather than hardware. In particular, because the timings of transfers on the buses, reads and writes to the memories, and calculations of the processor subunits are set by the sub-series of instructions executed by the processor subunits, hardware chips of the present disclosure may execute code to prevent collisions on the buses. Accordingly, hardware chips of the present disclosure may avoid hardware mechanisms conventionally used to manage data transfers (such as network controllers within a chip, packet parsers and packet transferors between processor subunits, bus arbitrators, a plurality of buses to avoid arbitration, or the like).
If hardware chips of the present disclosure transferred data conventionally, connecting N processor subunits with buses would require bus arbitration or wide MUXs controlled by an arbiter. Instead, as described above, embodiments of the present disclosure may use a bus that is only a wire, an optical cable, or the like between processor subunits, where the processor subunits individually execute code to avoid collisions on the buses. Accordingly, embodiments of the present disclosure may save space on the substrate as well as reduce materials costs and efficiency losses (e.g., due to power and time consumption by arbitration). The efficiency and space gains are even greater when compared to other architectures using first-in-first-out (FIFO) controllers and/or mailboxes. Furthermore, as explained above, each processor subunit may include one or more accelerators in addition to one or more processing elements. In some embodiments, the accelerator(s) may read and write from the buses rather than the processing element(s). In such embodiments, additional efficiency may be obtained by allowing the accelerator(s) to transmit data during the same cycle in which the processing element(s) perform one or more calculations. Such embodiments, however, require additional materials for the accelerator(s). For example, additional transistors may be required for fabrication of the accelerator(s). The code also may account for the internal behavior, including timing and latencies, of the processor subunits (e.g., including the processing elements and/or accelerators forming part of the processor subunit). For example, a compiler (as described below) may perform pre-processing that accounts for the timing and latencies when generating the sub-series of instructions that control the data transfers. In one example, a plurality of processor subunits may be assigned a task of calculating a neural network layer containing a plurality of neurons fully-connected to a previous layer of a larger plurality of neurons. Assuming data of the previous layer is evenly spread between the plurality of processor subunits, one way to perform the calculation may be to configure each processor subunit to transmit the data of the previous layer to the main bus in turn, after which each processor subunit multiplies this data by the weight of the corresponding neuron that the subunit implements. Because each processor subunit calculates more than one neuron, each processor subunit will transmit the data of the previous layer a number of times equal to the number of neurons. Thus, the code of each processor subunit is not the same as the code for other processor subunits because the subunits will transmit at different times. In some embodiments, a distributed processor may comprise a substrate (e.g., a semiconductor substrate, such as silicon and/or a circuit board, such as a flexible circuit board) with a memory array disposed on the substrate, the memory array including a plurality of discrete memory banks, and a processing array disposed on the substrate, the processing array including a plurality of processor subunits, as depicted, e.g., inFIGS.7A and7B. As explained above, each one of the processor subunits may be associated with a corresponding, dedicated one of the plurality of discrete memory banks.
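The fully-connected layer example above, in which each subunit broadcasts its slice of the previous layer in turn and every subunit accumulates results for the neurons it implements, may be sketched as follows. The Python fragment is illustrative only and uses hypothetical names; it is a functional model, not the code a compiler of the present disclosure would emit.

    # Illustrative sketch (hypothetical names): each subunit transmits its slice of
    # the previous layer in turn; all subunits accumulate for their own neurons.
    def fully_connected_distributed(activations_per_subunit, weights_per_subunit):
        """activations_per_subunit[i]: slice of the previous layer held by subunit i.
        weights_per_subunit[i][n]: weight row for neuron n implemented by subunit i,
        with one weight per element of the full previous layer."""
        num_subunits = len(activations_per_subunit)
        slice_len = len(activations_per_subunit[0])
        outputs = [[0.0] * len(rows) for rows in weights_per_subunit]
        # Each subunit takes a turn transmitting its slice to the bus; the compiled
        # code of every subunit determines on which turn it transmits.
        for sender in range(num_subunits):
            broadcast = activations_per_subunit[sender]  # data placed on the bus
            offset = sender * slice_len                  # position in the full layer
            for receiver in range(num_subunits):
                for n, row in enumerate(weights_per_subunit[receiver]):
                    for k, x in enumerate(broadcast):
                        outputs[receiver][n] += row[offset + k] * x
        return outputs

    # Example: a previous layer of 4 values split across 2 subunits, each computing 1 neuron.
    acts = [[1.0, 2.0], [3.0, 4.0]]
    weights = [[[0.1, 0.2, 0.3, 0.4]], [[1.0, 1.0, 1.0, 1.0]]]
    print(fully_connected_distributed(acts, weights))  # [[3.0], [10.0]]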
Moreover, as depicted, e.g., inFIGS.7A and7B, the distributed processor may further comprise a plurality of buses, each one of the plurality of buses connecting one of the plurality of processor subunits to at least another one of the plurality of processor subunits. As explained above, the plurality of buses may be controlled in software. Accordingly, the plurality of buses may be free of timing hardware logic components such that data transfers between processor subunits and across corresponding ones of the plurality of buses are uncontrolled by timing hardware logic components. In one example, the plurality of buses may be free of bus arbiters such that data transfers between processor subunits and across corresponding ones of the plurality of buses are uncontrolled by bus arbiters. In some embodiments, as depicted, e.g., inFIGS.7A and7B, the distributed processor may further comprise a second plurality of buses connecting one of the plurality of processor subunits to a corresponding, dedicated memory bank. Similar to the plurality of buses described above, the second plurality of buses may be free of timing hardware logic components such that data transfers between processor subunits and corresponding, dedicated memory banks are uncontrolled by timing hardware logic components. In one example, the second plurality of buses may be free of bus arbiters such that data transfers between processor subunits and corresponding, dedicated memory banks are uncontrolled by bus arbiters. As used herein, the phrase “free of” does not necessarily imply the absolute absence of components, such as timing hardware logic components (e.g., bus arbiters, arbitration trees, FIFO controllers, mailboxes, or the like). Such components may still be included in a hardware chip described as “free of” those components. Instead, the phrase “free of” refers to the function of the hardware chip; that is, a hardware chip “free of” timing hardware logic components controls the timing of its data transfers without use of the timing hardware logic components, if any, included therein. For example, a hardware chip is still “free of” timing hardware logic components if it executes code including sub-series of instructions that control data transfers between processor subunits of the hardware chip, even if the hardware chip includes timing hardware logic components as a secondary precaution to protect against collisions due to errors in the executed code. As explained above, the plurality of buses may comprise at least one of wires or optical fibers between corresponding ones of the plurality of processor subunits. Accordingly, in one example, a distributed processor free of timing hardware logic components may include only wires or optical fibers without bus arbiters, arbitration trees, FIFO controllers, mailboxes, or the like. In some embodiments, the plurality of processor subunits is configured to transfer data across at least one of the plurality of buses in accordance with code executed by the plurality of processor subunits. Accordingly, as explained below, a compiler may organize sub-series of instructions, each sub-series comprising code executed by a single processor subunit. The sub-series of instructions may instruct the processor subunit when to transfer data onto one of the buses and when to retrieve data from the buses. When the sub-series are executed in tandem across the distributed processor, the timing of transfers between the processor subunits may be governed by the instructions to transfer and retrieve included in the sub-series.
Thus, the code dictates timing of data transfers across at least one of the plurality of buses. The compiler may generate code to be executed by a single processor subunit. Additionally, the compiler may generate code to be executed by groups of processor subunits. In some cases, the compiler may treat all the processor subunits together as if they were one super-processor (e.g., a distributed processor), and the compiler may generate code for execution by that defined super-processor/distributed processor. As explained above and depicted inFIGS.7A and7B, the plurality of processor subunits may be spatially distributed among the plurality of discrete memory banks within the memory array. Alternatively, the plurality of processor subunits may be clustered in one or more regions of the substrate, and the plurality of memory banks may be clustered in one or more other regions of the substrate. In some embodiments, a combination of spatial distribution and clustering may be used, as explained above. In some embodiments, a distributed processor may comprise a substrate (e.g., a semiconductor substrate, including silicon and/or a circuit board, such as a flexible circuit board) with a memory array disposed on the substrate, the memory array including a plurality of discrete memory banks. A processing array may also be disposed on the substrate, the processing array including a plurality of processor subunits, as depicted, e.g., inFIGS.7A and7B. As explained above, each one of the processor subunits may be associated with a corresponding, dedicated one of the plurality of discrete memory banks. Moreover, as depicted, e.g., inFIGS.7A and7B, the distributed processor may further comprise a plurality of buses, each one of the plurality of buses connecting one of the plurality of processor subunits to a corresponding, dedicated one of the plurality of discrete memory banks. As explained above, the plurality of buses may be controlled in software. Accordingly, the plurality of buses may be free of timing hardware logic components such that data transfers between a processor subunit and a corresponding, dedicated one of the plurality of discrete memory banks and across a corresponding one of the plurality of buses are not controlled by timing hardware logic components. In one example, the plurality of buses may be free of bus arbiters such that data transfers between processor subunits and across corresponding ones of the plurality of buses are uncontrolled by bus arbiters. In some embodiments, as depicted, e.g., inFIGS.7A and7B, the distributed processor may further comprise a second plurality of buses connecting one of the plurality of processor subunits to at least another one of the plurality of processor subunits. Similar to the plurality of buses described above, the second plurality of buses may be free of timing hardware logic components such that data transfers between processor subunits and corresponding, dedicated memory banks are uncontrolled by timing hardware logic components. In one example, the second plurality of buses may be free of bus arbiters such that data transfers between processor subunits and corresponding, dedicated memory banks are uncontrolled by bus arbiters. In some embodiments, the distributed processor may use a combination of software timing with hardware timing components. 
For example, a distributed processor may comprise a substrate (e.g., a semiconductor substrate, including silicon and/or a circuit board, such as a flexible circuit board) with a memory array disposed on the substrate, the memory array including a plurality of discrete memory banks. A processing array may also be disposed on the substrate, the processing array including a plurality of processor subunits, as depicted, e.g., inFIGS.7A and7B. As explained above, each one of the processor subunits may be associated with a corresponding, dedicated one of the plurality of discrete memory banks. Moreover, as depicted, e.g., inFIGS.7A and7B, the distributed processor may further comprise a plurality of buses, each one of the plurality of buses connecting one of the plurality of processor subunits to at least another one of the plurality of processor subunits. Moreover, as explained above, the plurality of processor subunits may be configured to execute software that controls timing of data transfers across the plurality of buses to avoid colliding data transfers on at least one of the plurality of buses. In such an example, the software may control the timing of the data transfers, but the transfers themselves may be controlled, at least in part, by one or more hardware components. In such embodiments, the distributed processor may further comprise a second plurality of buses connecting one of the plurality of processor subunits to a corresponding, dedicated memory bank. Similar to the plurality of buses described above, the plurality of processor subunits may be configured to execute software that controls timing of data transfers across the second plurality of buses to avoid colliding data transfers on at least one of the second plurality of buses. In such an example, as explained above, the software may control the timing of the data transfers, but the transfers themselves may be controlled, at least in part, by one or more hardware components. Division of Code As explained above, hardware chips of the present disclosure may execute code in parallel across processor subunits included on a substrate forming the hardware chip. Additionally, hardware chips of the present disclosure may perform multitasking. For example, hardware chips of the present disclosure may perform area multitasking, in which one group of processor subunits of the hardware chip executes one task (e.g., audio processing) while another group of processor subunits of the hardware chip executes another task (e.g., image processing). In another example, hardware chips of the present disclosure may perform timing multitasking, in which one or more processor subunits of the hardware chip execute one task during a first period of time and another task during a second period of time. A combination of area and timing multitasking may also be used such that one task may be assigned to a first group of processor subunits during a first period of time while another task may be assigned to a second group of processor subunits during the first period of time, after which a third task may be assigned to processor subunits included in the first group and the second group during a second period of time. In order to organize machine code for execution on memory chips of the present disclosure, machine code may be divided between processor subunits of the memory chip. For example, a processor on a memory chip may comprise a substrate and a plurality of processor subunits disposed on the substrate.
The memory chip may further comprise a corresponding plurality of memory banks disposed on the substrate, each one of the plurality of processor subunits being connected to at least one dedicated memory bank not shared by any other processor subunit of the plurality of processor subunits. Each processor subunit on the memory chip may be configured to execute a series of instructions independent from other processor subunits. Each series of instructions may be executed by configuring one or more general processing elements of the processor subunit in accordance with code defining the series of instructions and/or by activating one or more special processing elements (e.g., one or more accelerators) of the processor subunit in accordance with a sequence provided in the code defining the series of instructions. Accordingly, each series of instructions may define a series of tasks to be performed by a single processor subunit. A single task may comprise an instruction within an instruction set defined by the architecture of one or more processing elements in the processor subunit. For example, the processor subunit may include particular registers, and a single task may push data onto a register, pull data from a register, perform an arithmetic function on data within a register, perform a logic operation on data within a register, or the like. Moreover, the processor subunit may be configured for any number of operands, such as a 0-operand processor subunit (also called a “stack machine”), a 1-operand processor subunit (also called an accumulator machine), a 2-operand processor subunit (such as a RISC), a 3-operand processor subunit (such as a complex instruction set computer (CISC)), or the like. In another example, the processor subunit may include one or more accelerators, and a single task may activate an accelerator to perform a specific function, such as a MAC function, a MAX function, a MAX-0 function, or the like. The series of instructions may further include tasks for reading and writing from the dedicated memory banks of the memory chip. For example, a task may include writing a piece of data to a memory bank dedicated to the processor subunit executing the task, reading a piece of data from a memory bank dedicated to the processor subunit executing the task, or the like. In some embodiments, the reading and writing may be performed by the processor subunit in tandem with a controller of the memory bank. For example, the processor subunit may execute a read or write task by sending a control signal to the controller to perform the read or write. In some embodiments, the control signal may include a particular address to use for reads and writes. Alternatively, the processor subunit may defer to the memory controller to select an available address for the reads and writes. Additionally or alternatively, the reading and writing may be performed by one or more accelerators in tandem with a controller of the memory bank. For example, the accelerators may generate the control signals for the memory controller, similar to how the processor subunit generates control signals, as described above. In any of the embodiments described above, an address generator may also be used to direct the reads and writes to specific addresses of a memory bank. For example, the address generator may comprise a processing element configured to generate memory addresses for reads and writes.
The address generator may be configured to generate addresses in order to increase efficiency, e.g., by writing results of a later calculation to the same address as the results of a former calculation that are no longer needed. Accordingly, the address generator may generate the control signals for the memory controller, either in response to a command from the processor subunit (e.g., from a processing element included therein or from one or more accelerator(s) therein) or in tandem with the processor subunit. Additionally or alternatively, the address generator may generate the addresses based on some configuration or registers, for example, generating a nested loop structure to iterate over certain addresses in the memory in a certain pattern. In some embodiments, each series of instructions may comprise a set of machine code defining a corresponding series of tasks. Accordingly, the series of tasks described above may be encapsulated within machine code comprising the series of instructions. In some embodiments, as explained below with respect toFIG.8, the series of tasks may be defined by a compiler configured to distribute a higher-level series of tasks amongst the plurality of logic circuits as a plurality of series of tasks. For example, the compiler may generate the plurality of series of tasks based on the higher-level series of tasks such that the processor subunits, executing each corresponding series of tasks in tandem, perform the same function as outlined by the higher-level series of tasks. As explained further below, the higher-level series of tasks may comprise a set of instructions in a human-readable programming language. Correspondingly, the series of tasks for each processor subunit may comprise lower-level series of tasks, each of which comprises a set of instructions in a machine code. As explained above with respect toFIGS.7A and7B, the memory chip may further comprise a plurality of buses, each bus connecting one of the plurality of processor subunits to at least one other of the plurality of processor subunits. Moreover, as explained above, data transfers on the plurality of buses may be controlled using software. Accordingly, data transfers across at least one of the plurality of buses may be predefined by the series of instructions included in a processor subunit connected to the at least one of the plurality of buses. Therefore, one of the tasks included in the series of instructions may include outputting data to one of the buses or pulling data from one of the buses. Such tasks may be executed by a processing element of the processor subunit or by one or more accelerators included in the processor subunit. In the latter embodiment, the processor subunit may perform a calculation or send a control signal to a corresponding memory bank in the same cycle during which accelerator(s) pull data from or place data on one of the buses. In one example, the series of instructions included in the processor subunit connected to the at least one of the plurality of buses may include a sending task that comprises a command for the processor subunit connected to the at least one of the plurality of buses to write data to the at least one of the plurality of buses.
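As a purely illustrative sketch of the address generator behavior described above, the following Python generator walks a configured nested-loop address pattern; the parameter names, base address, and strides are assumptions chosen for the example, not taken from the disclosure.

```python
# Sketch of an address generator that iterates a nested-loop pattern
# (base + outer*outer_stride + inner*inner_stride). Names are illustrative.

def address_generator(base, outer_count, outer_stride, inner_count, inner_stride):
    """Yield memory addresses in a configured nested-loop order."""
    for outer in range(outer_count):
        for inner in range(inner_count):
            yield base + outer * outer_stride + inner * inner_stride

# Example: a second pass can reuse the same addresses once the results of
# the first pass are no longer needed, as described above.
first_pass = list(address_generator(base=0x100, outer_count=2, outer_stride=16,
                                    inner_count=4, inner_stride=4))
second_pass = list(address_generator(base=0x100, outer_count=2, outer_stride=16,
                                     inner_count=4, inner_stride=4))
print(first_pass)
print(first_pass == second_pass)  # later results written over the same addresses
```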
Additionally or alternatively, the series of instructions included in the processor subunit connected to the at least one of the plurality of buses may include a receiving task that comprises a command for the processor subunit connected to the at least one of the plurality of buses to read data from the at least one of the plurality of buses. Additionally or alternatively to distribution of code amongst processor subunits, data may be divided between memory banks of the memory chip. For example, as explained above, a distributed processor on a memory chip may comprise a plurality of processor subunits disposed on the memory chip and a plurality of memory banks disposed on the memory chip. Each one of the plurality of memory banks may be configured to store data independent from data stored in other ones of the plurality of memory banks, and each one of the plurality of processor subunits may be connected to at least one dedicated memory bank from among the plurality of memory banks. For example, each processor subunit may have access to one or more memory controllers of one or more corresponding memory banks dedicated to the processor subunit, and no other processor subunit may have access to these corresponding one or more memory controllers. Accordingly, the data stored in each memory bank may be unique to the dedicated processor subunit. Moreover, the data stored in each memory bank may be independent of the memory stored in other memory banks because no memory controllers may be shared between memory banks. In some embodiments, as described below with respect toFIG.8, the data stored in each of the plurality of memory banks may be defined by a compiler configured to distribute data amongst the plurality of memory banks. Moreover, the compiler may be configured to distribute data defined in a higher-level series of tasks amongst the plurality of memory banks using a plurality of lower-level tasks distributed amongst corresponding processor subunits. As explained further below, the higher-level series of tasks may comprise a set of instructions in a human-readable programming language. Correspondingly, the series of tasks for each processor subunit may comprise lower-level series of tasks, each of which comprises a set of instructions in a machine code. As explained above with respect toFIGS.7A and7B, the memory chip may further comprise a plurality of buses, each bus connecting one of the plurality of processor subunits to one or more corresponding, dedicated memory banks from among the plurality of memory banks. Moreover, as explained above, data transfers on the plurality of buses may be controlled using software. Accordingly, data transfers across a particular one of the plurality of buses may be controlled by a corresponding processor subunit connected to the particular one of the plurality of buses. Therefore, one of the tasks included in the series of instructions may include outputting data to one of the buses or pulling data from one of the buses. As explained above, such tasks may be executed by (i) a processing element of the processor subunit or (ii) one or more accelerators included in the processor subunit. In the latter embodiment, the processor subunit may perform a calculation or use buses connecting the processor subunit to other processor subunits in the same cycle during which accelerator(s) pull data from or place data on one of the buses connected to the one or more corresponding, dedicated memory banks. 
Therefore, in one example, the series of instructions included in the processor subunit connected to the at least one of the plurality of buses may include a sending task. The sending task may comprise a command for the processor subunit connected to the at least one of the plurality of buses to write data to the at least one of the plurality of buses for storage in the one or more corresponding, dedicated memory banks. Additionally or alternatively, the series of instructions included in the processor subunit connected to the at least one of the plurality of buses may include a receiving task. The receiving task may comprise a command for the processor subunit connected to the at least one of the plurality of buses to read data from the at least one of the plurality of buses for storage in the one or more corresponding, dedicated memory banks. Accordingly, the sending and receiving tasks in such embodiments may comprise control signals that are sent, along the at least one of the plurality of buses, to one or more memory controllers of the one or more corresponding, dedicated memory banks. Moreover, the sending and receiving tasks may be executed by one portion of the processing subunit (e.g., by one or more accelerators thereof) concurrently with a calculation or other task executed by another portion of the processing subunit (e.g., by one or more different accelerators thereof). An example of such a concurrent execution may include a MAC-relay command, in which receiving, multiplying, and sending are executed in tandem. In addition to distributing data amongst the memory banks, particular portions of data may be duplicated across different memory banks. For example, as explained above, a distributed processor on a memory chip may comprise a plurality of processor subunits disposed on the memory chip and a plurality of memory banks disposed on the memory chip. Each one of the plurality of processor subunits may be connected to at least one dedicated memory bank from among the plurality of memory banks, and each memory bank of the plurality of memory banks may be configured to store data independent from data stored in other ones of the plurality of memory banks. Moreover, at least some of the data stored in one particular memory bank from among the plurality of memory banks may comprise a duplicate of data stored in at least another one of the plurality of memory banks. For example, a number, string, or other type of data used in the series of instructions may be stored in a plurality of memory banks dedicated to different processor subunits rather than being transferred from one memory bank to other processor subunits in the memory chip. In one example, parallel string matching may use data duplication described above. For example, a plurality of strings may be compared to the same string. A conventional processor would compare each string in the plurality to the same string in sequence. On a hardware chip of the present disclosure, the same string may be duplicated across the memory banks such that the processor subunits may compare a separate string in the plurality to the duplicated string in parallel. In some embodiments, as described below with respect toFIG.8, the at least some data duplicated across the one particular memory bank from among the plurality of memory banks and the at least another one of the plurality of memory banks is defined by a compiler configured to duplicate data across memory banks. 
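The parallel string-matching example above may be sketched, for illustration only, with the following Python model; the per-subunit "dedicated banks" are represented as plain dictionaries, and the thread pool merely stands in for processor subunits operating in parallel.

```python
# Sketch of the parallel string-matching example: the query string is
# duplicated into each subunit's dedicated memory bank so comparisons can
# proceed without inter-subunit transfers. Purely illustrative model.

from concurrent.futures import ThreadPoolExecutor

candidates = ["apple", "apply", "maple", "apple"]
query = "apple"

# "Duplicate" the query into a per-subunit dedicated bank (modeled as a dict).
dedicated_banks = [{"query": query, "candidate": c} for c in candidates]

def subunit_compare(bank):
    # Each subunit reads only its own dedicated bank.
    return bank["candidate"] == bank["query"]

with ThreadPoolExecutor(max_workers=len(dedicated_banks)) as pool:
    results = list(pool.map(subunit_compare, dedicated_banks))

print(results)  # [True, False, False, True]
```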
Moreover, the compiler may be configured to duplicate the at least some data using a plurality of lower-level tasks distributed amongst corresponding processor subunits. Duplication of data may be useful for certain tasks that re-use the same portions of data across different calculations. By duplicating these portions of data, the different calculations may be distributed amongst processor subunits of the memory chip for parallel execution while each processor subunit may store the portions of data in, and access the stored portions from, a dedicated memory bank (rather than pushing and pulling the portions of data across buses connecting the processor subunits). In one example, the at least some data duplicated across the one particular memory bank from among the plurality of memory banks and the at least another one of the plurality of memory banks may comprise weights of a neural network. In this example, each node in the neural network may be defined by at least one processor subunit from among the plurality of processor subunits. For example, each node may comprise machine code executed by the at least one processor subunit defining the node. In this example, duplication of the weights may allow each processor subunit to execute machine code to effect, at least in part, a corresponding node while only accessing one or more dedicated memory banks (rather than performing data transfers with other processor subunits). Because the timing of reads and writes to the dedicated memory bank(s) is independent of other processor subunits while the timing of data transfers between processor subunits requires timing synchronization (e.g., using software, as explained above), duplication of memory to avoid data transfers between processor subunits may produce further efficiencies in overall execution. As explained above with respect toFIGS.7A and7B, the memory chip may further comprise a plurality of buses, each bus connecting one of the plurality of processor subunits to one or more corresponding, dedicated memory banks from among the plurality of memory banks. Moreover, as explained above, data transfers on the plurality of buses may be controlled using software. Accordingly, data transfers across a particular one of the plurality of buses may be controlled by a corresponding processor subunit connected to the particular one of the plurality of buses. Therefore, one of the tasks included in the series of instructions may include outputting data to one of the buses or pulling data from one of the buses. As explained above, such tasks may be executed by (i) a processing element of the processor subunit or (ii) one or more accelerators included in the processor subunit. As further explained above, such tasks may include a sending task and/or a receiving task that comprise control signals that are sent, along the at least one of the plurality of buses, to one or more memory controllers of the one or more corresponding, dedicated memory banks. FIG.8depicts a flowchart of a method800for compiling a series of instructions for execution on an exemplary memory chip of the present disclosure, e.g., as depicted inFIGS.7A and7B. Method800may be implemented by any conventional processor, whether generic or special-purpose. Method800may be executed as a portion of a computer program forming a compiler.
As used herein, a “compiler” refers to any computer program that converts a higher-level language (e.g., a procedural language, such as C, FORTRAN, BASIC, or the like; an object-oriented language, such as Java, C++, Pascal, Python, or the like; etc.) to a lower-level language (e.g., assembly code, object code, machine code, or the like). The compiler may allow a human to program a series of instructions in a human-readable language, which is then converted to a machine-executable language. At step810, the processor may assign tasks associated with the series of instructions to different ones of the processor subunits. For example, the series of instructions may be divided into subgroups, the subgroups to be executed in parallel across the processor subunits. In one example, a neural network may be divided into its nodes, and one or more nodes may be assigned to separate processor subunits. In this example, each subgroup may comprise a plurality of nodes connected across different layers. Thus, a processor subunit may implement a node from a first layer of the neural network, a node from a second layer connected to the node from the first layer implemented by the same processor subunit, and the like. By assigning nodes based on their connections, data transfers between the processor subunits may be lessened, which may result in greater efficiency, as explained above. As explained above and depicted inFIGS.7A and7B, the processor subunits may be spatially distributed among the plurality of memory banks disposed on the memory chip. Accordingly, the assignment of tasks may be, at least in part, a spatial division as well as a logical division. At step820, the processor may generate tasks to transfer data between pairs of the processor subunits of the memory chip, each pair of processor subunits being connected by a bus. For example, as explained above, the data transfers may be controlled using software. Accordingly, processor subunits may be configured to push and pull data on buses at synchronized times. The generated tasks may thus include tasks for performing this synchronized pushing and pulling of data. As explained above, step820may include pre-processing to account for the internal behavior, including timing and latencies, of the processor subunits. For example, the processor may use known times and latencies of the processor subunits (e.g., the time to push data to a bus, the time to pull data from a bus, the latency between a calculation and a push or pull, or the like) to ensure that the generated tasks synchronize. Therefore, the data transfers comprising at least one push by one or more processor subunits and at least one pull by one or more processor subunits may occur simultaneously rather than incurring a delay due to timing differences between the processor subunits, latencies of the processor subunits, or the like. At step830, the processor may group the assigned and generated tasks into the plurality of groups of sub-series instructions. For example, the sub-series instructions may each comprise a series of tasks for execution by a single processor subunit. Therefore, each of the plurality of groups of sub-series instructions may correspond to a different one of the plurality of processor sub-units. Accordingly, steps810,820, and830may result in dividing the series of instructions into a plurality of groups of sub-series instructions. As explained above, step820may ensure that any data transfers between the different groups are synchronized.
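For illustration only, steps810,820, and830may be pictured with the following toy Python scheduler. The round-robin assignment policy, the task names, and the data structures are assumptions chosen for brevity; an actual compiler could instead use spatial proximity, known latencies, and optimization algorithms as described herein.

```python
# Toy sketch of steps 810-830: assign tasks to processor subunits, generate
# paired send/receive tasks for data that crosses subunits, and group the
# result into per-subunit sub-series. Policies and names are illustrative.

def divide_instructions(tasks, edges, num_subunits):
    # Step 810: assign each task to a subunit (round-robin for illustration;
    # a real compiler could use spatial proximity and connectivity instead).
    assignment = {task: i % num_subunits for i, task in enumerate(tasks)}

    sub_series = {i: [] for i in range(num_subunits)}
    for task in tasks:
        sub_series[assignment[task]].append(("exec", task))

    # Step 820: for every producer->consumer edge that crosses subunits,
    # emit a synchronized send on one side and a receive on the other.
    for producer, consumer in edges:
        src, dst = assignment[producer], assignment[consumer]
        if src != dst:
            sub_series[src].append(("send", producer, dst))
            sub_series[dst].append(("recv", producer, src))

    # Step 830: the grouped per-subunit lists are the sub-series instructions.
    return sub_series

tasks = ["node_a", "node_b", "node_c", "node_d"]
edges = [("node_a", "node_d"), ("node_b", "node_c")]
for subunit, series in divide_instructions(tasks, edges, num_subunits=2).items():
    print(subunit, series)
```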
At step840, the processor may generate machine code corresponding to each of the plurality of groups of sub-series instructions. For example, the higher-level code representing sub-series instructions may be converted to lower-level code, such as machine code, executable by corresponding processor subunits. At step850, the processor may assign the generated machine code corresponding to each of the plurality of groups of sub-series instructions to a corresponding one of the plurality of processor subunits in accordance with the division. For example, the processor may label each sub-series of instructions with an identifier of the corresponding processor subunit. Thus, when the sub-series instructions are uploaded to a memory chip for execution (e.g., by host350ofFIG.3A), each sub-series may configure the correct processor subunit. In some embodiments, assigning tasks associated with the series of instructions to the different ones of the processor subunits may depend, at least in part, on a spatial proximity between two or more of the processor subunits on the memory chip. For example, as explained above, efficiency may be increased by lessening the number of data transfers between processor subunits. Accordingly, the processor may minimize data transfers that move data across more than two of the processor subunits. Therefore, the processor may use a known layout of the memory chip in combination with one or more optimization algorithms (such as a greedy algorithm) in order to assign sub-series to processor subunits in a way that maximizes (at least locally) adjacent transfers and minimizes (at least locally) transfers to non-neighboring processor subunits. Method800may include further optimizations for the memory chips of the present disclosure. For example, the processor may group data associated with the series of instructions based on the division and assign the data to the memory banks in accordance with the grouping. Accordingly, the memory banks may hold data used for the sub-series instructions assigned to each processor subunit to which each memory bank is dedicated. In some embodiments, grouping the data may include determining at least a portion of the data to duplicate in two or more of the memory banks. For example, as explained above, some data may be used across more than one sub-series of instructions. Such data may be duplicated across the memory banks dedicated to the plurality of processor subunits to which the different sub-series instructions are assigned. This optimization may further reduce data transfers across processor subunits. The output of method800may be input to a memory chip of the present disclosure for execution. For example, a memory chip may comprise a plurality of processor subunits and a corresponding plurality of memory banks, each processor subunit being connected to at least one memory bank dedicated to the processor subunit, and the processor subunits of the memory chip may be configured to execute the machine code generated by method800. As explained above with respect toFIG.3A, host350may input the machine code generated by method800to the processor subunits for execution.

Sub-Banks and Sub-Controllers

In conventional memory banks, controllers are provided at the bank level. Each bank includes a plurality of mats, which are typically arranged in a rectangular manner but may be arranged in any geometrical shape.
Each mat includes a plurality of memory cells, which are also typically arranged in a rectangular manner but may be arranged in any geometrical shape. Each cell may store a single bit of data (e.g., depending on whether the cell is retained at a high voltage or a low voltage). An example of this conventional architecture is depicted inFIGS.9and10. As shown inFIG.9, at the bank level, a plurality of mats (e.g., mats930-1,930-2,940-1, and940-2) may form bank900. In a conventional rectangular organization, bank900may be controlled across global wordlines (e.g., wordline950) and global bitlines (e.g., bitline960). Accordingly, row decoder910may select the correct wordline based on an incoming control signal (e.g., a request for a read from an address, a request for a write to an address, or the like) and global sense amplifier920(and/or a global column decoder, not shown inFIG.9) may select the correct bitline based on the control signal. Amplifier920may also amplify any voltage levels from a selected bank during a read operation. Although depicted as using a row decoder for initial selecting and performing amplification along columns, a bank may additionally or alternatively use a column decoder for initial selecting and perform amplification along rows. FIG.10depicts an example of a mat1000. For example, mat1000may form a portion of a memory bank, such as bank900ofFIG.9. As depicted inFIG.10, a plurality of cells (e.g., cells1030-1,1030-2, and1030-3) may form mat1000. Each cell may comprise a capacitor, a transistor, or other circuitry that stores at least one bit of data. For example, a cell may comprise a capacitor that is charged to represent a ‘1’ and discharged to represent a ‘0’ or may comprise a flip-flop having a first state representing a ‘1’ and a second state representing a ‘0.’ A conventional mat may comprise, for example, 512 bits by 512 bits. In embodiments where mat1000forms a portion of MRAM, ReRAM, or the like, a cell may comprise a transistor, resistor, capacitor or other mechanism for isolating an ion or portion of a material that stores at least one bit of data. For example, a cell may comprise an electrolyte ion, a portion of chalcogenide glass, or the like, having a first state representing a ‘1’ and a second state representing a ‘0.’ As further depicted inFIG.10, in a conventional rectangular organization, mat1000may be controlled across local wordlines (e.g., wordline1040) and local bitlines (e.g., bitline1050). Accordingly, wordline drivers (e.g., wordline driver1020-1,1020-2, . . . ,1020-x) may control the selected wordline to perform a read, write, or refresh based on a control signal from a controller associated with the memory bank of which mat1000forms a part (e.g., a request for a read from an address, a request for a write to an address, a refresh signal). Moreover, local sense amplifiers (e.g., local amplifiers1010-1,1010-2, . . . ,1010-x) and/or local column decoders (not shown inFIG.10) may control the selected bitline to perform a read, write, or refresh. The local sense amplifiers may also amplify any voltage levels from a selected cell during a read operation. Although depicted as using a wordline driver for initial selecting and performing amplification along columns, a mat may instead use a bitline driver for initial selecting and perform amplification along rows. As explained above, a large number of mats are duplicated to form a memory bank. Memory banks may be grouped to form a memory chip.
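The conventional bank hierarchy described above may be illustrated, under assumed sizes, with a short Python address-decoding sketch; the 512-by-512 mat size matches the example above, while the number of mats per global wordline and the flat addressing scheme are arbitrary assumptions made only for illustration.

```python
# Sketch of conventional bank-level decoding: a flat address is split into
# a global wordline (row) and global bitline (column), which in turn select
# a mat and a cell inside it. Sizes and layout are illustrative only.

MAT_ROWS, MAT_COLS = 512, 512        # a conventional mat of 512 x 512 bits
MATS_PER_ROW = 4                     # mats tiled along a global wordline (assumed)

def decode(address):
    row = address // (MAT_COLS * MATS_PER_ROW)   # global wordline index
    col = address % (MAT_COLS * MATS_PER_ROW)    # global bitline index
    mat = (row // MAT_ROWS) * MATS_PER_ROW + col // MAT_COLS
    local_row, local_col = row % MAT_ROWS, col % MAT_COLS
    return mat, local_row, local_col

print(decode(0))     # (0, 0, 0): mat 0, local row 0, local column 0
print(decode(600))   # (1, 0, 88): the bitline falls in mat 1
```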
For example, a memory chip may comprise eight to thirty-two memory banks. Accordingly, pairing processor subunits with memory banks on a conventional memory chip may result in only eight to thirty-two processor subunits. Accordingly, embodiments of the present disclosure may include memory chips with additional sub-bank hierarchy. These memory chips of the present disclosure may then include processor subunits with memory sub-banks used as the dedicated memory banks paired with the processor subunits, allowing for a larger number of sub-processors, which may then achieve higher parallelism and performance for in-memory computing. In some embodiments of the present disclosure, the global row decoder and global sense amplifier of bank900may be replaced with sub-bank controllers. Accordingly, rather than sending control signals to a global row decoder and a global sense amplifier of the memory bank, a controller of the memory bank may direct the control signal to the appropriate sub-bank controller. The direction may be controlled dynamically or may be hard-wired (e.g., via one or more logic gates). In some embodiments, fuses may be used to indicate to the controller of each sub-bank or mat whether to block or pass the control signal to the appropriate sub-bank or mat. In such embodiments, faulty sub-banks may thus be deactivated using the fuses. In one example of such embodiments, a memory chip may include a plurality of memory banks, each memory bank having a bank controller and a plurality of memory sub-banks, each memory sub-bank having a sub-bank row decoder and a sub-bank column decoder for allowing reads and writes to locations on the memory sub-bank. Each sub-bank may comprise a plurality of memory mats, each memory mat having a plurality of memory cells, and may internally have local row decoders, column decoders, and/or local sense amplifiers. The sub-bank row decoders and the sub-bank column decoders may process read and write requests from the bank controller or from a sub-bank processor subunit used for in-memory computations on the sub-bank memory, as described below. Additionally, each memory sub-bank may further have a controller configured to determine whether to process read requests and write requests from the bank controller and/or to forward them to the next level (e.g., of row and column decoders on a mat) or to block the requests, e.g., to allow an internal processing element or processor subunit to access the memory. In some embodiments, the bank controller may be synchronized to a system clock. However, the sub-bank controllers may not be synchronized to the system clock. As explained above, the use of sub-banks may allow for the inclusion of a larger number of processor subunits in the memory chip than if processor subunits were paired with memory banks of conventional chips. Accordingly, each sub-bank may further have a processor subunit using the sub-bank as a dedicated memory. As explained above, the processor subunit may comprise a RISC, a CISC, or other general-purpose processing subunit and/or may comprise one or more accelerators. Additionally, the processor subunit may include an address generator, as explained above. In any of the embodiments described above, each processor subunit may be configured to access a sub-bank dedicated to the processor subunit using the row decoder and the column decoder of the sub-bank without using the bank controller.
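For illustration only, the sub-bank controller behavior described above (processing a request, forwarding it toward the sub-bank decoders, or blocking it while the local processor subunit owns the memory) may be sketched in Python as follows; the address range, the state register, and the field names are hypothetical.

```python
# Sketch of a sub-bank controller deciding whether to pass an external
# request down to its row/column decoders or block it because the local
# processor subunit currently owns the sub-bank. Illustrative model only.

class SubBankController:
    def __init__(self, first_address, last_address):
        self.first = first_address
        self.last = last_address
        self.busy_register = False   # state register: local subunit is using the memory

    def handle(self, request):
        address = request["address"]
        if not (self.first <= address <= self.last):
            return "not-for-this-sub-bank"
        if self.busy_register:
            return "error: sub-bank in use by local processor subunit"
        # Forward to the sub-bank row/column decoders (modeled as a string).
        return f"{request['op']} at local address {address - self.first}"

controller = SubBankController(first_address=0x0000, last_address=0x0FFF)
print(controller.handle({"op": "read", "address": 0x0010}))
controller.busy_register = True
print(controller.handle({"op": "write", "address": 0x0010}))
```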
The processor sub-unit associated with the sub-bank may also handle the memory mats (including the decoder and memory redundancy mechanisms, described below) and/or determine whether a read or write request from an upper level (e.g., the bank level or the memory level) is forwarded and handled accordingly. In some embodiments, the sub-bank controller may further include a register that stores a state of the sub-bank. Accordingly, the sub-bank controller may return an error if the sub-bank controller receives a control signal from the memory controller while the register indicates that the sub-bank is in use. In embodiments where each sub-bank further includes a processor subunit, the register may indicate an error if the processor subunit in the sub-bank is accessing the memory in conflict with an external request from the memory controller. FIG.11shows an example of another embodiment of a memory bank using sub-bank controllers. In the example ofFIG.11, bank1100has a row decoder1110, a column decoder1120, and a plurality of memory sub-banks (e.g., sub-banks1170a,1170b, and1170c) with sub-bank controllers (e.g., controllers1130a,1130b, and1130c). The sub-bank controllers may include address resolvers (e.g., resolvers1140a,1140b, and1140c), which may determine whether to pass a request to one or more sub-banks controlled by the sub-bank controller. The sub-bank controllers may further include one or more logic circuits (e.g., logic1150a,1150b, and1150c). For example, a logic circuit comprising one or more processing elements may allow for one or more operations, such as refreshing of cells in the sub-bank, clearing of cells in the sub-bank, or the like, to be performed without processing requests externally from bank1100. Alternatively, the logic circuit may comprise a processor subunit, as explained above, such that the processor sub-unit has any sub-banks controlled by the sub-bank controller as corresponding, dedicated memory. In the example ofFIG.11, logic1150amay have sub-bank1170aas a corresponding, dedicated memory, logic1150bmay have sub-bank1170bas a corresponding, dedicated memory, and logic1150cmay have sub-bank1170cas a corresponding, dedicated memory. In any of the embodiments described above, the logic circuits may have buses to the sub-banks, e.g., buses1131a,1131b, or1131c. As further depicted inFIG.11, the sub-bank controllers may each include a plurality of decoders, such as a sub-bank row decoder and a sub-bank column decoder for allowing reads and writes, either by a processing element or processor subunit or by a higher-level memory controller issuing commands, to locations on the memory sub-bank(s). For example, sub-bank controller1130aincludes decoders1160a,1160b, and1160c, sub-bank controller1130bincludes decoders1160d,1160e, and1160f, and sub-bank controller1130cincludes decoders1160g,1160h, and1160i. The sub-bank controllers may, based on a request from bank row decoder1110, select a wordline using the decoders included in the sub-bank controllers. The described system may allow a processing element or processor subunit of the sub-bank to access the memory without interrupting other banks and even other sub-banks, thereby allowing each sub-bank processor subunit to perform memory computations in parallel with the other sub-bank processor subunits. Furthermore, each sub-bank may comprise a plurality of memory mats, each memory mat having a plurality of memory cells. For example, sub-bank1170aincludes mats1190a-1,1190a-2, . . .
,1190a-x; sub-bank1170bincludes mats1190b-1,1190b-2, . . . ,1190b-x; and sub-bank1170cincludes mats1190c-1,1190c-2, . . . ,1190c-x. As further depicted inFIG.11, each sub-bank may include at least one decoder. For example, sub-bank1170aincludes decoder1180a, sub-bank1170bincludes decoder1180b, and sub-bank1170cincludes decoder1180c. Accordingly, bank column decoder1120may select a global bitline (e.g., bitline1121aor1121b) based on external requests while the sub-bank selected by bank row decoder1110may use its column decoder to select a local bitline (e.g., bitline1181aor1181b) based on local requests from the logic circuit to which the sub-bank is dedicated. Accordingly, each processor subunit may be configured to access a sub-bank dedicated to the processor subunit using the row decoder and the column decoder of the sub-bank without using the bank row decoder and the bank column decoder. Thus, each processor subunit may access a corresponding sub-bank without interrupting other sub-banks. Moreover, sub-bank decoders may reflect accessed data to the bank decoders when the request to the sub-bank is external to the processor subunit. Alternatively, in embodiments where each sub-bank has only one row of memory mats, the local bitlines may be the bitlines of the mat rather than bitlines of the sub-bank. A combination of embodiments using sub-bank row decoders and sub-bank column decoders with the embodiment depicted inFIG.11may be used. For example, the bank row decoder may be eliminated but the bank column decoder retained and local bitlines used. FIG.12shows an example of an embodiment of a memory sub-bank1200having a plurality of mats. For example, sub-bank1200may represent a portion of bank1100ofFIG.11or may represent an alternative implementation of a memory bank. In the example ofFIG.12, sub-bank1200includes a plurality of mats (e.g., mats1240aand1240b). Moreover, each mat may include a plurality of cells. For example, mat1240aincludes cells1260a-1,1260a-2, . . . ,1260a-x, and mat1240bincludes cells1260b-1,1260b-2, . . . ,1260b-x. Each mat may be assigned a range of addresses that will be assigned to the memory cells of the mat. These addresses may be configured at production such that mats may be shuffled around and such that faulty mats may be deactivated and left unused (e.g., using one or more fuses, as explained further below). Sub-bank1200receives read and write requests from memory controller1210. Although not depicted inFIG.12, requests from memory controller1210may be filtered through a controller of sub-bank1200and directed to an appropriate mat of sub-bank1200for address resolution. Alternatively, at least a portion (e.g., higher bits) of an address of a request from memory controller1210may be transmitted to all mats of sub-bank1200(e.g., mats1240aand1240b) such that each mat may process the full address and the request associated with the address only if the mat's assigned address range includes the address specified in the command. Similar to the sub-bank direction described above, the mat determination may be dynamically controlled or may be hardwired. In some embodiments, fuses may be used to determine the address range for each mat, also allowing for disabling of faulty mats by assigning an illegal address range. Mats may additionally or alternatively be disabled by other common methods or connection of fuses.
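The mat-level address resolution described above may be pictured, for illustration only, with the following Python sketch in which each mat compares a broadcast address against its fused address range and a faulty mat is disabled by fusing an illegal (empty) range; the ranges and mat names shown are arbitrary assumptions.

```python
# Sketch of mat-level address resolution: the higher bits of a request are
# broadcast to all mats, and each mat's fused comparator decides whether
# the request falls inside its assigned range. A faulty mat is disabled by
# fusing an illegal (empty) range. Ranges and names are illustrative.

class Mat:
    def __init__(self, name, fused_range):
        self.name = name
        self.fused_range = fused_range   # (first, last); (1, 0) is illegal/disabled

    def matches(self, address):
        first, last = self.fused_range
        return first <= address <= last

mats = [
    Mat("mat0", (0, 255)),
    Mat("mat1", (1, 0)),       # faulty mat: illegal range, never matches
    Mat("mat2", (256, 511)),
]

def resolve(address):
    hits = [m.name for m in mats if m.matches(address)]
    return hits[0] if hits else "no mat (address unmapped)"

print(resolve(10))    # mat0
print(resolve(300))   # mat2
```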
In any of the embodiments described above, each mat of the sub-bank may include a row decoder (e.g., row decoder1230aor1230b) for selection of a wordline in the mat. In some embodiments, each mat may further include fuses and comparators (e.g.,1220aand1220b). As described above, the comparators may allow each mat to determine whether to process an incoming request, and the fuses may allow each mat to deactivate if faulty. Alternatively, row decoders for the bank and/or sub-bank may be used rather than a row decoder in each mat. Furthermore, in any of the embodiments described above, a column decoder included in the appropriate mat (e.g., column decoder1250aor1250b) may select a local bitline (e.g., bitline1251or1253). The local bitline may be connected to a global bitline of the memory bank. In embodiments where the sub-bank has local bitlines of its own, the local bitline of the cell may be further connected to the local bitline of the sub-bank. Accordingly, data in the selected cell may be read through the column decoder (and/or sense amplifier) of the cell, then through the column decoder (and/or sense amplifier) of the sub-bank (in embodiments including a sub-bank column decoder and/or sense amplifier), and then through the column decoder (and/or sense amplifier) of the bank. Sub-bank1200may be duplicated and arrayed to form a memory bank (or a memory sub-bank). For example, a memory chip of the present disclosure may comprise a plurality of memory banks, each memory bank having a plurality of memory sub-banks, and each memory sub-bank having a sub-bank controller for processing reads and writes to locations on the memory sub-bank. Furthermore, each memory sub-bank may comprise a plurality of memory mats, each memory mat having a plurality of memory cells and having a mat row decoder and a mat column decoder (e.g., as depicted inFIG.12). The mat row decoders and the mat column decoders may process read and write requests from the sub-bank controller. For example, the mat decoders may receive all requests and determine (e.g., using a comparator) whether to process the request based on a known address range of each mat, or the mat decoders may only receive requests within the known address range based on selection of a mat by the sub-bank (or bank) controller.

Controller Data Transfers

Any of the memory chips of the present disclosure may also share data using memory controllers (or sub-bank controllers or mat controllers) in addition to sharing data using processing subunits. For example, a memory chip of the present disclosure may comprise a plurality of memory banks (e.g., an SRAM bank, a DRAM bank, or the like), each memory bank having a bank controller, a row decoder, and a column decoder for allowing reads and writes to locations on the memory bank, as well as a plurality of buses connecting each controller of the plurality of bank controllers to at least one other controller of the plurality of bank controllers. The plurality of buses may be similar to the buses connecting the processing subunits, as described above, but connecting the bank controllers directly rather than through the processing subunits. Furthermore, although described as connecting the bank controllers, buses may additionally or alternatively connect sub-bank controllers and/or mat controllers. In some embodiments, the plurality of buses may be accessed without interruption of data transfers on main buses of the memory banks connected to one or more processor subunits.
Accordingly, a memory bank (or sub-bank) may transmit data to or from a corresponding processor subunit in the same clock cycle as transmitting data to or from a different memory bank (or sub-bank). In embodiments where each controller is connected to a plurality of other controllers, the controllers may be configurable for selection of one of the other controllers for sending or receiving of data. In some embodiments, each controller may be connected to at least one neighboring controller (e.g., pairs of spatially adjacent controllers may be connected to one another).

Redundant Logic in Memory Circuits

The disclosure is generally directed to a memory chip with primary logic portions for on-chip data processing. The memory chip may include redundant logic portions, which may replace defective primary logic portions to increase the fabrication yield of the chip. Thus, the chip may include on-chip components that allow a configuration of logic blocks in the memory chip based on individual testing of the logic portions. This feature of the chip may increase yields because a memory chip with larger areas dedicated to logic portions is more susceptible to fabrication failures. For example, DRAM memory chips with large redundant logic portions may be susceptible to fabrication issues that reduce yield. However, implementing redundant logic portions may result in increased yield and reliability because it allows a manufacturer or user of DRAM memory chips to turn on or off full logic portions while maintaining the capability for high parallelism. It should be noted that here and throughout the disclosure, examples of certain memory types (such as DRAM) may be identified in order to facilitate the explanation of disclosed embodiments. It is to be understood, however, that in such instances the identified memory types are not intended to be limiting. Rather, memory types such as DRAM, Flash, SRAM, ReRAM, PRAM, MRAM, ROM, or any other memory may be used together with the disclosed embodiments even if fewer examples are specifically identified in a certain section of the disclosure. FIG.13is a block diagram of an exemplary memory chip1300, consistent with disclosed embodiments. Memory chip1300may be implemented as a DRAM memory chip. Memory chip1300may also be implemented as any type of volatile or non-volatile memory, such as Flash, SRAM, ReRAM, PRAM, and/or MRAM, etc. Memory chip1300may include a substrate1301in which an address manager1302, a memory array1304including a plurality of memory banks1304(a,a) to1304(z,z), a memory logic1306, a business logic1308, and a redundant business logic1310are disposed. Memory logic1306and business logic1308may constitute primary logic blocks, while redundant business logic1310may constitute redundant blocks. In addition, memory chip1300may include configuration switches, which may include deactivation switches1312and activation switches1314. Deactivation switches1312and activation switches1314may also be disposed in the substrate1301. In this Application, memory logic1306, business logic1308, and redundant business logic1310may also be collectively referred to as the “logic blocks.” Address manager1302may include row and column decoders or other types of memory auxiliaries. Alternatively, or additionally, address manager1302may include a microcontroller or processing unit. In some embodiments, as shown inFIG.13, memory chip1300may include a single memory array1304that may arrange the plurality of memory blocks in a two-dimensional array on substrate1301.
In other embodiments, however, memory chip1300may include multiple memory arrays1304and each of the memory arrays1304may arrange memory blocks in different configurations. For example, memory blocks in at least one of the memory arrays (also known as memory banks) may be arranged in a radial distribution to facilitate routing from address manager1302or memory logic1306to the memory blocks. Business logic1308may be used to perform the in-memory computation of an application that is not related to the logic used to manage the memory itself. For example, business logic1308may implement functions related to AI, such as floating-point, integer, or MAC operations used as activation functions. In addition, business logic1308may implement database-related functions such as min, max, sort, and count, among others. Memory logic1306may perform tasks related to memory management, including (but not limited to) read, write, and refresh operations. Therefore, business logic may be added at one or more of the bank level, the mat level, or a group-of-mats level. Business logic1308may have one or more address outputs and one or more data inputs/outputs. For instance, business logic1308can issue addresses over row and column lines to address manager1302. In certain embodiments, however, the logic blocks may be additionally or alternatively addressed via data inputs/outputs. Redundant business logic1310may be a replica of business logic1308. In addition, redundant business logic1310may be connected to deactivation switches1312and/or activation switches1314, which may include a small fuse or anti-fuse used for disabling one of the instances (e.g., an instance which is connected by default) and enabling one of the other logic blocks (e.g., an instance which is disconnected by default). In some embodiments, as further described in connection withFIG.15, the redundancy of blocks may be local within a logic block, such as business logic1308. In some embodiments, the logic blocks in memory chip1300may be connected to subsets of memory array1304with dedicated buses. For example, a set of memory logic1306, business logic1308, and redundant business logic1310may be connected to the first row of memory blocks in memory array1304(i.e., memory blocks1304(a,a) to1304(a,z)). The dedicated buses may allow associated logic blocks to quickly access data from the memory blocks without the requirement of opening communication lines through, for example, address manager1302. Each of the plurality of primary logic blocks may be connected to at least one of the plurality of memory banks1304. Also, redundant blocks, such as redundant business block1310, may be connected to at least one of the memory instances1304(a,a)-(z,z). Redundant blocks may replicate at least one of the plurality of primary logic blocks, such as memory logic1306or business logic1308. Deactivation switches1312may be connected to at least one of the plurality of primary logic blocks and activation switches1314may be connected to at least one of the plurality of redundant blocks. In these embodiments, upon detection of a fault associated with one of the plurality of primary logic blocks (memory logic1306and/or business logic1308), deactivation switches1312may be configured to disable the one of the plurality of primary logic blocks. Simultaneously, activation switches1314may be configured to enable one of the plurality of redundant blocks, such as redundant logic block1310, that replicates the one of the plurality of primary logic blocks.
In addition, activation switches1314and deactivation switches1312, which may collectively be referred to as “configuration switches,” may include an external input to configure the status of the switch. For instance, activation switches1314may be configured so an activation signal in the external input causes a closed switch condition, while deactivation switches1312may be configured so a deactivation signal in the external input causes an open switch condition. In some embodiments, all configuration switches in memory chip1300may be deactivated by default and become activated or enabled after a test indicates an associated logic block is functional and a signal is applied in the external input. Alternatively, in some cases, all configuration switches in memory chip1300may be enabled by default and may be deactivated or disabled after a test indicates an associated logic block is not functional and a deactivation signal is applied in the external input. Regardless of whether a configuration switch is initially enabled or disabled, upon detection of a fault associated with an associated logic block, the configuration switch may disable the associated logic block. In cases where the configuration switch is initially enabled, the state of the configuration switch may be changed to disabled in order to disable the associated logic block. In cases where the configuration switch is initially disabled, the state of the configuration switch may be left in its disabled state in order to disable the associated logic block. For example, the result of an operability test may indicate that a certain logic block is nonoperational or that it fails to operate within certain specifications. In such cases, the logic block may be disabled by not enabling its corresponding configuration switch. In some embodiments, configuration switches may be connected to two or more logic blocks and may be configured to choose between different logic blocks. For example, a configuration switch may be connected to both business logic1308and redundant logic block1310. The configuration switch may enable redundant logic block1310while disabling business logic1308. Alternatively, or additionally, at least one of the plurality of primary logic blocks (memory logic1306and/or business logic1308) may be connected to a subset of the plurality of memory banks or memory instances1304with a first dedicated connection. Then, at least one of the plurality of redundant blocks (such as redundant business logic1310), which replicates the at least one of the plurality of primary logic blocks, may be connected to the subset of the same plurality of memory banks or instances1304with a second dedicated connection. Moreover, memory logic1306may have different functions and capabilities than business logic1308. For example, while memory logic1306may be designed to enable read and write operations in the memory bank1304, business logic1308may be designed to perform in-memory computations. Therefore, if the business logic1308includes a first business logic block, and the business logic1308includes a second business logic block (like redundant business logic1310), it is possible to disconnect defective business logic1308and reconnect redundant business logic1310without missing any capability. In some embodiments, configuration switches (including deactivation switches1312and activation switches1314) may be implemented with a fuse, an anti-fuse, or a programmable device (including a one-time programmable device), or other form of non-volatile memory.
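For illustration only, the configuration-switch behavior described above may be sketched in Python as follows; the test outcome and block names are hypothetical placeholders, and the boolean "closed" flag merely stands in for a fuse, anti-fuse, or programmable device.

```python
# Sketch of the activation/deactivation switch behavior: after testing, a
# faulty primary block is left disconnected and its redundant replica is
# enabled. Block names and the test result are hypothetical placeholders.

class ConfigurationSwitch:
    def __init__(self, closed=False):
        self.closed = closed          # stands in for a fuse/anti-fuse state

class LogicBlock:
    def __init__(self, name, functional):
        self.name = name
        self.functional = functional  # result of an operability test
        self.switch = ConfigurationSwitch()

def configure(primary, redundant):
    if primary.functional:
        primary.switch.closed = True         # keep the primary block connected
    else:
        primary.switch.closed = False        # deactivation switch stays open
        redundant.switch.closed = True       # activation switch enables the replica
    return [b.name for b in (primary, redundant) if b.switch.closed]

business = LogicBlock("business_logic_1308", functional=False)
redundant = LogicBlock("redundant_business_logic_1310", functional=True)
print(configure(business, redundant))   # ['redundant_business_logic_1310']
```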
FIG.14is a block diagram of an exemplary redundant logic block set1400, consistent with disclosed embodiments. In some embodiments, redundant logic block set1400may be disposed in substrate1301. Redundant logic block set1400may include at least one of business logic1308and redundant business logic1310, connected to switches1312and1314, respectively. In addition, business logic1308and redundant business logic1310may be connected to an address bus1402and a data bus1404. In some embodiments, as shown inFIG.14, the switches1312and1314may connect logic blocks to a clock node. In this way, the configuration switches may engage or disengage the logic blocks from the clock signal, effectively activating or deactivating the logic blocks. In other embodiments, however, switches1312and1314may connect logic blocks to other nodes for activation or deactivation. For instance, configuration switches may connect logic blocks to a voltage supply node (e.g., VCC) or to the ground node (e.g., GND) or clock signal. In this way, the logic blocks may be enabled or disabled by the configuration switches because the switches would create an open circuit or cut off the logic block power supply. In some embodiments, as shown inFIG.14, address bus1402and data bus1404may be on opposite sides of the logic blocks, which are connected in parallel to each one of the buses. In this way, routing of the different on-chip components may be facilitated by the logic block set1400. In some embodiments, each one of the plurality of deactivation switches1312may couple at least one of the plurality of primary logic blocks with a clock node, and each one of the plurality of activation switches1314may couple at least one of the plurality of redundant blocks with the clock node, allowing the clock to be connected or disconnected as a simple activation/deactivation mechanism. Redundant business logic1310of redundant logic block set1400allows the designer to choose, based on area and routing, the blocks that are worth duplicating. For example, a chip designer may select larger blocks for duplication because larger blocks may be more error prone. Thus, a chip designer may decide to duplicate large logic blocks. On the other hand, a designer may prefer to duplicate smaller logic blocks because they are easily duplicated without a significant loss of space. Moreover, using the configuration inFIG.14, a designer may easily choose to duplicate logic blocks depending on the statistics of errors per area. FIG.15is a block diagram for an exemplary logic block1500, consistent with disclosed embodiments. The logic block may be business logic1308and/or redundant business logic1310. In other embodiments, however, the exemplary logic block may describe memory logic1306or another component of memory chip1300. Logic block1500presents yet another embodiment where the logic redundancy is used within a small processor pipeline. The logic block1500may include a register1508, a fetch circuit1504, decoder1506, and a write-back circuit1518. In addition, logic block1500may include a computation unit1510and a duplicated computation unit1512. However, in other embodiments, logic block1500may include other units that do not comprise a controller pipeline but include sporadic processing elements that comprise required business logic. Computation unit1510and duplicated computation unit1512may include a digital circuit capable of performing digital calculations.
For example, computation unit1510and duplicated computation unit1512may include an arithmetic logic unit (ALU) to perform arithmetic and bitwise operations on binary numbers. Alternatively, computation unit1510and duplicated computation unit1512may include a floating-point unit (FPU), which operates on floating point numbers. In addition, in some embodiments computation unit1510and duplicated computation unit1512may implement database-related functions such as min, max, count, and compare operations, among others. In some embodiments, as shown inFIG.15, computation unit1510and duplicated computation unit1512may be connected to switching circuits1514and1516. When activated, the switching circuits may enable or disable the computing units. In logic block1500, the duplicated computation unit1512may replicate the computation unit1510. Moreover, in some embodiments, register1508, fetch circuit1504, decoder1506, and write-back circuit1518(collectively referred to as the local logic units) may be smaller in size than the computation unit1510. Because larger elements are more prone to issues during fabrication, a designer may decide to replicate larger units (such as computation unit1510) instead of smaller units (such as the local logic units). Depending on historic yields and error rates, however, a designer may elect to duplicate local logic units in addition to, or instead of, large units (or the entire block). For example, computation unit1510may be larger, and thus more error prone, than register1508, fetch circuit1504, decoder1506, and write-back circuit1518. A designer may choose to duplicate computation unit1510instead of the other elements in logic block1500or the whole block. Logic block1500may include a plurality of local configuration switches, each one of the plurality of local configuration switches being connected to at least one of computation unit1510or duplicated computation unit1512. Local configuration switches may be configured to disable computation unit1510and enable duplicated computation unit1512when a fault is detected in the computation unit1510. FIG.16shows block diagrams of exemplary logic blocks connected with a bus, consistent with disclosed embodiments. In some embodiments, logic blocks1602(which may represent memory logic1306, business logic1308, or redundant business logic1310) may be independent of each other, may be connected via a bus, and may be activated externally by addressing them specifically. For example, memory chip1300may include many logic blocks, each logic block having an ID number. In other embodiments, however, logic blocks1602may represent larger units comprised of one or more of memory logic1306, business logic1308, or redundant business logic1310. In some embodiments, each one of logic blocks1602may be redundant with the other logic blocks1602. This complete redundancy, in which all blocks may operate as primary or redundant blocks, may improve fabrication yields because a designer may disconnect faulty units while maintaining functionality of the overall chip. For example, a designer may have the ability to disable logic areas that are prone to errors but maintain similar computation capabilities because all duplicate blocks may be connected to the same address and data buses. For example, the initial number of logic blocks1602may be greater than a target capability. Then, disabling some logic blocks1602would not affect the target capability.
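The over-provisioning idea in the preceding sentences may be illustrated with a short Python sketch; the block count, the pass/fail pattern, and the target capability are made-up numbers used only to show that disabling faulty blocks need not affect the target capability.

```python
# Toy model of complete redundancy: more logic blocks are instantiated than
# the target capability requires, so faulty blocks can simply be disabled.
# Block count, target, and pass/fail pattern are illustrative only.

def configure_chip(block_test_results, target_capability):
    enabled = [name for name, ok in block_test_results.items() if ok]
    if len(enabled) < target_capability:
        return None, "chip does not meet target capability"
    # Keep only as many functional blocks as the product definition needs.
    return enabled[:target_capability], "ok"

results = {"block0": True, "block1": False, "block2": True, "block3": True}
print(configure_chip(results, target_capability=2))
# (['block0', 'block2'], 'ok') -- the faulty block is disabled, capability met
```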
A bus connected to the logic blocks may include address bus 1614, command lines 1616, and data lines 1618. As shown in FIG. 16, each one of the logic blocks may be connected independently to each line in the bus. In certain embodiments, however, logic blocks 1602 may be connected in a hierarchical structure to facilitate routing. For instance, each line in the bus may be connected to a multiplexer that routes the line to different logic blocks 1602.

In some embodiments, to allow external access without knowing the internal chip structure, which may change due to enabled and disabled units, each one of the logic blocks may include fused IDs such as fused identification 1604. Fused identification 1604 may include an array of switches (like fuses) that determine an ID and may be connected to a managing circuit. For example, fused identification 1604 may be connected to address manager 1302. Alternatively, fused identification 1604 may be connected to higher memory address units. In these embodiments, fused identification 1604 may be configurable for a specific address. For example, fused identification 1604 may include a programmable, non-volatile device that determines a final ID based on instructions received from a managing circuit.

A distributed processor on a memory chip may be designed with the configuration depicted in FIG. 16. A testing procedure executed as a BIST at chip wakeup or at factory testing may assign running ID numbers to blocks in the plurality of primary logic blocks (memory logic 1306 and business logic 1308) that pass a testing protocol. A testing procedure may also assign illegal ID numbers to blocks in the plurality of primary logic blocks that do not pass the testing protocol. The test procedure may also assign running ID numbers to blocks in the plurality of redundant blocks (redundant business logic 1310) that pass the testing protocol. Because redundant blocks replace failing primary logic blocks, the number of blocks in the plurality of redundant blocks assigned running ID numbers may be equal to, or greater than, the number of blocks in the plurality of primary logic blocks assigned illegal ID numbers, thereby disabling the failing blocks. In addition, each one of the plurality of primary logic blocks and each one of the plurality of redundant blocks may include at least one fused identification 1604. Also, as shown in FIG. 16, the bus connecting logic blocks 1602 may include a command line, a data line, and an address line.

In other embodiments, however, all logic blocks 1602 that are connected to the bus will start disabled and with no ID number. Tested one by one, each good logic block will get a running ID number, and those logic blocks that are not working will remain with an illegal ID, which would disable these blocks. In this manner, redundant logic blocks may improve the fabrication yields by replacing blocks that are known to be defective during the testing process.

Address bus 1614 may couple a managing circuit to each one of the plurality of memory banks, each one of the plurality of primary logic blocks, and each one of the plurality of redundant blocks. These connections allow the managing circuit to, upon detection of a fault associated with one of the plurality of primary logic blocks (such as business logic 1308), assign an invalid address to that one of the plurality of primary logic blocks and assign a valid address to one of the plurality of redundant blocks. For example, as shown in FIG. 16A, illegal IDs are configured for all logic blocks 1602(a)-(c) (e.g., address 0xFFF).
After testing, logic blocks 1602(a) and 1602(c) are verified to be functional while logic block 1602(b) is not functional. In FIG. 16A, unshaded logic blocks may represent logic blocks that passed the functionality test successfully, while shaded logic blocks may represent logic blocks that failed the test for functionality. Then, the test procedure changes the illegal IDs to legal IDs for logic blocks that are functional while leaving the illegal IDs for logic blocks that are not functional. As an example, in FIG. 16A, the address for logic blocks 1602(a) and 1602(c) is changed from 0xFFF to 0x00 and 0x002, respectively. In contrast, the address for logic block 1602(b) remains the illegal address 0xFFF. In some embodiments, the ID is changed by programming a corresponding fused identification 1604.

Different results from the testing of logic blocks 1602 may result in a different configuration. For example, as shown in FIG. 16B, address manager 1302 may initially assign illegal IDs to all logic blocks 1602 (i.e., 0xFFF). The testing results, however, may indicate that both logic blocks 1602(a) and 1602(b) are functional. In these cases, testing of logic block 1602(c) may not be necessary because memory chip 1300 may require only two logic blocks. Therefore, to minimize testing resources, logic blocks may be tested only according to the minimum number of functional logic blocks needed by the product definition of memory chip 1300, leaving other logic blocks untested. FIG. 16B also shows unshaded logic blocks, which represent tested logic blocks that passed the test for functionality, and shaded logic blocks, which represent untested logic blocks. In these embodiments, a production tester (external or internal, automatic or manual) or a controller executing a BIST at startup may change illegal IDs to running IDs for tested logic blocks that are functional while leaving the illegal IDs for untested logic blocks. As an example, in FIG. 16B, the address for logic blocks 1602(a) and 1602(b) is changed from 0xFFF to 0x00 and 0x002, respectively. In contrast, the address for untested logic block 1602(c) remains with the illegal address 0xFFF.

FIG. 17 is a block diagram for exemplary units 1702 and 1712 connected in series, consistent with disclosed embodiments. FIG. 17 may represent an entire system or chip. Alternatively, FIG. 17 may represent a block in a chip containing other functional blocks. Units 1702 and 1712 may represent complete units that include a plurality of logic blocks such as memory logic 1306 and/or business logic 1308. In these embodiments, units 1702 and 1712 may also include elements required to perform operations, such as address manager 1302. In other embodiments, however, units 1702 and 1712 may represent logic units such as business logic 1308 or redundant business logic 1310.

FIG. 17 presents embodiments in which units 1702 and 1712 may need to communicate between themselves. In such cases, units 1702 and 1712 may be connected in series. However, a non-working unit may break the continuity between the logic blocks. Therefore, the connection between units may include a bypass option for when a unit needs to be disabled due to a defect. The bypass option can also be a part of the bypassed unit itself. In FIG. 17, units may be connected in series (e.g., units 1702(a)-(c)), and a failing unit (e.g., 1702(b)) may be bypassed when it is defective. The units may further be connected in parallel with switching circuits. For example, in some embodiments units 1702 and 1712 may be connected with switching circuits 1722 and 1728, as depicted in FIG. 17.
In the example depicted in FIG. 17, unit 1702(b) is defective. For example, unit 1702(b) does not pass a test for a circuit functionality. Therefore, unit 1702(b) may be disabled using, for example, activation switches 1314 (not shown in FIG. 17), and/or switching circuit 1722(b) may be activated to bypass unit 1702(b) and sustain the connectivity between logic blocks. Accordingly, when a plurality of primary units are connected in series, each one of the plurality of units may be connected in parallel with a parallel switch. Upon detection of a fault associated with one of the plurality of units, the parallel switch connected to that unit may be activated to connect two of the plurality of units.

In other embodiments, as shown in FIG. 17, switching circuits 1728 may include one or more sampling points that would cause a delay of one or more cycles, maintaining synchronization between different lines of units. When a unit is disabled, shorting the connection between adjacent logic blocks may generate synchronization errors with other calculations. For example, if a task requires data from both A and B lines, and each of A and B is carried by an independent series of units, disabling a unit would cause a desynchronization between the lines that would require further data management. To prevent desynchronizations, sample circuits 1730 may simulate the delay caused by the disabled unit 1712(b). Nonetheless, in some embodiments, the parallel switch may include an anti-fuse instead of a sampling circuit 1730.

FIG. 18 is a block diagram of exemplary units connected in a two-dimensional array, consistent with disclosed embodiments. FIG. 18 may represent an entire system or chip. Alternatively, FIG. 18 may represent a block in a chip containing other functional blocks. Units 1806 may represent autonomous units that include a plurality of logic blocks such as memory logic 1306 and/or business logic 1308. However, in other embodiments units 1806 may represent logic units such as business logic 1308. Where convenient, discussion of FIG. 18 may refer to elements identified in FIG. 13 (e.g., memory chip 1300) and discussed above.

As shown in FIG. 18, units may be arranged in a two-dimensional array in which units 1806 (which may include or represent one or more of memory logic 1306, business logic 1308, or redundant business logic 1310) are interconnected via switching boxes 1808 and connection boxes 1810. In addition, in order to control the configuration of the two-dimensional array, the two-dimensional array may include I/O blocks 1804 in the periphery of the two-dimensional array. Connection boxes 1810 may be programmable and reconfigurable devices that may respond to signals inputted from the I/O blocks 1804. For example, connection boxes may include a plurality of input pins from units 1806 and may also be connected to switching boxes 1808. Alternatively, connection boxes 1810 may include a group of switches connecting pins of programmable logic cells with routing tracks, while switching boxes 1808 may include a group of switches connecting different tracks.

In certain embodiments, connection boxes 1810 and switching boxes 1808 may be implemented with configuration switches such as switches 1312 and 1314. In such embodiments, connection boxes 1810 and switching boxes 1808 may be configured by a production tester or a BIST executed at chip startup. In some embodiments, connection boxes 1810 and switching boxes 1808 may be configured after units 1806 are tested for a circuit functionality.
In such embodiments, I/O blocks 1804 may be used to send testing signals to units 1806. Depending on the test results, I/O blocks 1804 may send programming signals that configure connection boxes 1810 and switching boxes 1808 in a manner that disables the units 1806 that fail the testing protocol and enables the units 1806 that pass the testing protocol. In such embodiments, the plurality of primary logic blocks and the plurality of redundant blocks may be disposed on the substrate in a two-dimensional grid. Therefore, each one of the plurality of primary units 1806 and each one of the plurality of redundant blocks, such as redundant business logic 1310, may be interconnected with switching boxes 1808, and an input block may be disposed in the periphery of each line and each column of the two-dimensional grid.

FIG. 19 is a block diagram for exemplary units in a complex connection, consistent with disclosed embodiments. FIG. 19 may represent an entire system. Alternatively, FIG. 19 may represent a block in a chip containing other functional blocks. The complex connection of FIG. 19 includes units 1902(a)-(f) and configuration switches 1904(a)-(h). Units 1902 may represent autonomous units that include a plurality of logic blocks such as memory logic 1306 and/or business logic 1308. However, in other embodiments units 1902 may represent logic units such as memory logic 1306, business logic 1308, or redundant business logic 1310. Configuration switches 1904 may include any of deactivation switches 1312 and activation switches 1314.

As shown in FIG. 19, the complex connection may include units 1902 in two planes. For example, the complex connection may include two independent substrates separated in the z-axis. Alternatively, or additionally, units 1902 may be arranged in two surfaces of a substrate. For example, with the objective of reducing the area of memory chip 1300, substrate 1301 may be arranged in two overlapping surfaces and connected with configuration switches 1904 arranged in three dimensions. Configuration switches may include deactivation switches 1312 and/or activation switches 1314.

A first plane of the substrate may include "main" units 1902. These blocks may be enabled by default. In such embodiments, a second plane may include "redundant" units 1902. These units may be disabled by default. In some embodiments, configuration switches 1904 may include anti-fuses. Thus, after testing of units 1902, the blocks may be connected in a tile of functional units by switching certain anti-fuses to "always-on" and disabling selected units 1902, even if they are in a different plane.

In the example presented in FIG. 19, one of the 'main' units (unit 1902(e)) is not working. FIG. 19 may represent nonfunctional or untested blocks as shaded blocks while tested or functional blocks may be unshaded. Therefore, configuration switches 1904 are configured so one of the logic blocks in a different plane (e.g., unit 1902(f)) becomes active. In this way, even though one of the main logic blocks was defective, the memory chip still works by replacing it with a spare logic unit. FIG. 19 additionally shows that one of the units 1902 (i.e., 1902(c)) in the second plane is not tested or enabled because the main logic blocks are functional. For example, in FIG. 19, both main units 1902(a) and 1902(d) passed a test for functionality. Thus, unit 1902(c) was not tested or enabled. Therefore, FIG. 19 shows the ability to specifically select the logic blocks that become active depending on testing results.
In some embodiments, as shown in FIG. 19, not all units 1902 in a first plane may have a corresponding spare or redundant block. However, in other embodiments, all units may be redundant with each other for complete redundancy, where all units may operate as either primary or redundant. In addition, while some implementations may follow the star network topology depicted in FIG. 19, other implementations may use parallel connections, serial connections, and/or couple the different elements with configuration switches in parallel or in series.

FIG. 20 is an exemplary flowchart illustrating a redundant block enabling process 2000, consistent with disclosed embodiments. The enabling process 2000 may be implemented for memory chip 1300 and especially for DRAM memory chips. In some embodiments, process 2000 may include steps of testing each one of a plurality of logic blocks on the substrate of the memory chip for at least one circuit functionality, identifying faulty logic blocks in the plurality of primary logic blocks based on the testing results, testing at least one redundant or additional logic block on the substrate of the memory chip for the at least one circuit functionality, disabling the at least one faulty logic block by applying an external signal to a deactivation switch, and enabling the at least one redundant block by applying the external signal to an activation switch, the activation switch being connected with the at least one redundant block and being disposed on the substrate of the memory chip. The description of FIG. 20 below further elaborates on each step of process 2000.

Process 2000 may include testing a plurality of logic blocks (step 2002), such as business logic 1308, and a plurality of redundant blocks (e.g., redundant business logic 1310). The testing may be performed before packaging using, for example, probing stations for on-wafer testing. The testing, however, may also be performed after packaging. The testing in step 2002 may include applying a finite sequence of testing signals to every logic block in memory chip 1300 or a subset of logic blocks in memory chip 1300. The testing signals may include requesting a computation that is expected to yield a 0 or a 1. In other embodiments, the testing signal may request reading a specific address in a memory bank or writing in a specific memory bank.

Testing techniques may be implemented to test the response of the logic blocks under iterative processes in step 2002. For example, the test may involve testing logic blocks by transmitting instructions to write data in a memory bank and then verifying the integrity of the written data. In some embodiments, the testing may include repeating the algorithm with the data inverted.

In alternative embodiments, the testing of step 2002 may include running a model of the logic blocks to generate a target memory image based on a set of testing instructions. Then, the same sequence of instructions may be executed by the logic blocks in the memory chip, and the results may be recorded. The residual memory image of the simulation may also be compared to the image taken from the test, and any mismatch may be flagged as a failure. Alternatively, in step 2002, testing may include shadow modeling, where a diagnostic is generated but the results are not necessarily predicted. Instead, the test using shadow modeling may be run in parallel on both the memory chip and a simulation. For example, when the logic blocks in the memory chip complete an instruction or task, the simulation may be signaled to execute the same instruction.
Once the logic blocks in the memory chip finalize the instructions, the two models' architectural states may be compared. If there is a mismatch, then a failure is flagged. In some embodiments, all logic blocks (including, e.g., each one of memory logic 1306, business logic 1308, or redundant business logic 1310) may be tested in step 2002. In other embodiments, however, only subsets of the logic blocks may be tested in different testing rounds. For example, in a first round of testing, only memory logic 1306 and associated blocks may be tested. In a second round, only business logic 1308 and associated blocks may be tested. In a third round, depending on the results of the first two rounds, logic blocks associated with redundant business logic 1310 may be tested.

Process 2000 may continue to step 2004. In step 2004, faulty logic blocks may be identified, and faulty redundant blocks may also be identified. For example, logic blocks that do not pass the testing of step 2002 may be identified as faulty blocks in step 2004. In other embodiments, however, only certain faulty logic blocks may be initially identified. For example, in some embodiments, only logic blocks associated with business logic 1308 may be identified, and faulty redundant blocks are only identified if they are required for substituting a faulty logic block. In addition, identifying faulty blocks may include writing the identification information of the identified faulty blocks to a memory bank or a nonvolatile memory.

In step 2006, faulty logic blocks may be disabled. For example, using a configuration circuit, the faulty logic blocks may be disabled by disconnecting them from clock, ground, and/or power nodes. Alternatively, faulty logic blocks may be disabled by configuring connection boxes in an arrangement that avoids the logic blocks. Yet, in other embodiments, faulty logic blocks may be disabled by receiving an illegal address from address manager 1302.

In step 2008, redundant blocks that duplicate the faulty logic blocks may be identified. To support the same capabilities of the memory chip even though some logic blocks have failed, in step 2008, redundant blocks that are available and can duplicate faulty logic blocks may be identified. For example, if a logic block that performs multiplications of vectors is determined to be faulty, in step 2008, address manager 1302 or an on-chip controller may identify an available redundant logic block that also performs multiplication of vectors.

In step 2010, the redundant blocks identified in step 2008 may be enabled. In contrast to the disable operation of step 2006, in step 2010, the identified redundant blocks may be enabled by connecting them to clock, ground, and/or power nodes. Alternatively, identified redundant blocks may be enabled by configuring connection boxes in an arrangement that connects the identified redundant blocks. Yet, in other embodiments, identified redundant blocks may be enabled by receiving a running address at the test procedure execution time.

FIG. 21 is an exemplary flow chart illustrating an address assignment process 2100, consistent with disclosed embodiments. The address assignment process 2100 may be implemented for memory chip 1300 and especially for DRAM memory chips. As described in relation to FIG. 16, in some embodiments, logic blocks in memory chip 1300 may be connected to a data bus and have an address identification. Process 2100 describes an address assignment method that disables faulty logic blocks and enables logic blocks that pass a test.
The steps described in process 2100 will be described as being performed by a production tester or a BIST executed at chip startup; however, other components of memory chip 1300 and/or external devices may also perform one or more steps of process 2100.

In step 2102, the tester may disable all logic and redundant blocks by assigning an illegal identification to each logic block at a chip level. In step 2104, the tester may execute a testing protocol of a logic block. For example, the tester may run testing methods described in step 2002 for one or more of the logic blocks in memory chip 1300. In step 2106, depending on the results of the test in step 2104, the tester may determine whether the logic block is defective. If the logic block is not defective (step 2106: no), address manager 1302 may assign a running ID to the tested logic block in step 2108. If the logic block is defective (step 2106: yes), address manager 1302 may leave the illegal ID for the defective logic block in step 2110.

In step 2112, address manager 1302 may select a redundant logic block that replicates the defective logic block. In some embodiments, the redundant logic block that replicates the defective logic block may have the same components and connections as the defective logic block. In other embodiments, however, the redundant logic block may have different components and/or connections from the defective logic block but be able to perform an equivalent operation. For example, if the defective logic block is designed to perform multiplication of vectors, the selected redundant logic block would also be capable of performing multiplication of vectors, even if it does not have the same architecture as the defective unit.

In step 2114, address manager 1302 may test the redundant block. For instance, the tester may apply the testing techniques applied in step 2104 to the identified redundant block. In step 2116, based on the results of testing in step 2114, the tester may determine whether the redundant block is defective. In step 2118, if the redundant block is not defective (step 2116: no), the tester may assign a running ID to the identified redundant block. In some embodiments, process 2100 may return to step 2104 after step 2118, creating an iteration loop to test all logic blocks in the memory chip.

If the tester determines the redundant block is defective (step 2116: yes), in step 2120, the tester may determine if additional redundant blocks are available. For example, the tester may query a memory bank with information regarding available redundant logic blocks. If redundant logic blocks are available (step 2120: yes), the tester may return to step 2112 and identify a new redundant logic block replicating the defective logic block. If redundant logic blocks are not available (step 2120: no), in step 2122, the tester may generate an error signal. The error signal may include information of the defective logic block and the defective redundant block.

Coupled Memory Banks

The presently disclosed embodiments also include a distributed high-performance processor. The processor may include a memory controller that interfaces memory banks and processing units. The processor may be configurable to expedite delivery of data to the processing units for calculations. For example, if a processing unit requires two data instances to perform a task, the memory controller may be configured so communication lines independently provide access to the information from the two data instances.
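As a rough illustration of the coupled memory bank concept introduced above, the following sketch models a memory controller that serves a processing unit needing two data instances by reading each instance over an independent line from a different bank. The class and method names (MemoryBank, MemoryController, fetch_pair) and the four-words-per-line assumption are purely illustrative and not part of the disclosed design.

```python
# Illustrative sketch only: two data instances are served over independent
# communication lines, one per memory bank, in the same access.

class MemoryBank:
    def __init__(self, name, words):
        self.name = name
        self.words = words          # address -> value
        self.open_line = None       # currently opened line (row)

    def read(self, address):
        self.open_line = address // 4   # assume 4 words per line
        return self.words[address]

class MemoryController:
    def __init__(self, banks):
        self.banks = banks

    def fetch_pair(self, req_a, req_b):
        """Serve two data instances over independent lines, one per bank."""
        (bank_a, addr_a), (bank_b, addr_b) = req_a, req_b
        return self.banks[bank_a].read(addr_a), self.banks[bank_b].read(addr_b)

if __name__ == "__main__":
    controller = MemoryController({
        "bank0": MemoryBank("bank0", {i: 10 + i for i in range(16)}),
        "bank1": MemoryBank("bank1", {i: 20 + i for i in range(16)}),
    })
    # The two operands arrive over separate lines in the same access.
    print(controller.fetch_pair(("bank0", 3), ("bank1", 7)))
```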
The disclosed memory architecture seeks to minimize hardware requirements that are associated with complex cache memory and complex register file schemes. Normally, processor chips include cache hierarchies that allow cores to work directly with registers. However, the cache operations require significant die area and consume additional power. The disclosed memory architecture avoids the use of a cache hierarchy by adding logic components in the memory.

The disclosed architecture also enables strategic (or even optimized) placement of data in memory banks. Even if the memory banks have a single port and high latency, the disclosed memory architecture may enable high performance and avoid memory accessing bottlenecks by strategically positioning data in different blocks of memory banks. With the goal of providing a continuous stream of data to the processing units, a compilation optimization step may determine how data should be stored in memory banks for specific or generic tasks. Then, the memory controller, which interfaces processing units and memory banks, may be configured to grant access to specific processing units when they require data to perform operations.

The configuration of the memory chip may be performed by a processing unit (e.g., a configuration manager) or an external interface. The configuration may also be written by a compiler or other SW tool. In addition, the configuration of the memory controller may be based on the available ports in the memory banks and the organization of data in the memory banks. Accordingly, the disclosed architecture may provide processing units with a constant flow of data or simultaneous information from different memory blocks. In this way, computation tasks within the memory may be quickly processed by avoiding latency bottlenecks or cache memory requirements.

Moreover, data stored in the memory chip may be arranged based on compilation optimization steps. The compilation may allow for building of processing routines in which the processor efficiently assigns tasks to processing units without memory latency associated delays. The compilation may be performed by a compiler and transmitted to a host connected to an external interface in the substrate. Normally, high latency for certain access patterns and/or low numbers of ports would result in data bottlenecks for processing units requiring the data. The disclosed compilation, however, may position data in memory banks in a way that enables processing units to continuously receive data even with disadvantageous memory types.

Furthermore, in some embodiments, a configuration manager may signal required processing units based on computations that are required by a task. Different processing units or logic blocks in the chip may have specialized hardware or architectures for different tasks. Therefore, depending on the task that will be performed, a processing unit, or a group of processing units, may be selected to perform the task. The memory controller on the substrate may be configurable to route data, or grant access, according to the selection of processing subunits to improve data transfer rates. For example, based on the compilation optimization and the memory architecture, processing units may be granted access to memory banks when they are required to perform a task. Moreover, the chip architecture may include on-chip components that facilitate transfer of data by reducing the time required to access data in the memory banks.
Therefore, the present disclosure describes chip architecture(s), along with a compilation optimization step, for a high-performance processor capable of performing specific or generic tasks using simple memory instances. The memory instances may have high latency in random access and/or a low number of ports, such as those used in a DRAM device or other memory-oriented technologies, but the disclosed architecture may overcome these shortcomings by enabling a continuous (or nearly continuous) flow of data from memory banks to processing units.

In this application, simultaneous communication may refer to communication within a clock cycle. Alternatively, simultaneous communication may refer to sending information within a predetermined amount of time. For example, simultaneous communication may refer to communication within a few nanoseconds.

FIG. 22 provides block diagrams for exemplary processing devices, consistent with disclosed embodiments. FIG. 22A shows a first embodiment of a processing device 2200 in which a memory controller 2210 connects a first memory block 2202 and a second memory block 2204 using multiplexers. Memory controller 2210 may also connect at least a configuration manager 2212, a logic block 2214, and multiple accelerators 2216(a)-(n). FIG. 22B shows a second embodiment of processing device 2200 in which memory controller 2210 connects memory blocks 2202 and 2204 using a bus that connects memory controller 2210 with at least a configuration manager 2212, a logic block 2214, and multiple accelerators 2216(a)-(n). In addition, host 2230 may be external and connected to processing device 2200 through, for example, an external interface.

Memory blocks 2202 and 2204 may include DRAM mats or groups of mats, DRAM banks, MRAM/PRAM/ReRAM/SRAM units, Flash mats, or other memory technologies. Memory blocks 2202 and 2204 may alternatively include non-volatile memories, a flash memory device, a Resistive Random Access Memory (ReRAM) device, or a Magnetoresistive Random Access Memory (MRAM) device.

Memory blocks 2202 and 2204 may additionally include a plurality of memory cells arranged in rows and columns between a plurality of word lines (not shown) and a plurality of bit lines (not shown). The gates of each row of memory cells may be connected to a respective one of the plurality of word lines. Each column of memory cells may be connected to a respective one of the plurality of bit lines.

In other embodiments, a memory area (including memory blocks 2202 and 2204) is built from simple memory instances. In this application, the term "memory instance" may be used interchangeably with the term "memory block." The memory instances (or blocks) may have poor characteristics. For example, the memories may be single-port memories and may have high random-access latency. Alternatively, or additionally, the memories may be inaccessible during column and line changes and face data access problems related to, for example, capacity charging and/or circuitry setups. Nonetheless, the architecture presented in FIG. 22 still facilitates parallel processing in the memory device by allowing dedicated connections between memory instances and processing units and arranging the data in a certain manner that takes the characteristics of the blocks into account.

In some device architectures, memory instances may include several ports, facilitating the parallel operations. Nonetheless, in such embodiments, the chip may still achieve an improved performance when data is compiled and organized based on the chip architecture.
For example, a compiler may improve the efficiency of access in the memory area by providing instructions and organizing data placement so that the data can be readily accessed even using one-port memories.

Furthermore, memory blocks 2202 and 2204 may be multiple types of memory in a single chip. For example, memory blocks 2202 and 2204 may be eFlash and eDRAM. Also, memory blocks may include DRAM with instances of ROM.

Memory controller 2210 may include a logic circuit to handle the memory access and return the results to the rest of the modules. For example, memory controller 2210 may include an address manager and selection devices, such as multiplexers, to route data between the memory blocks and processing units or grant access to the memory blocks. Alternatively, memory controller 2210 may include double data rate (DDR) memory controllers used to drive DDR SDRAM, where data is transferred on both rising and falling edges of the system's memory clock.

In addition, memory controller 2210 may constitute a dual channel memory controller. The incorporation of dual channel memory may facilitate control of parallel access lines by memory controller 2210. The parallel access lines may be configured to have identical lengths to facilitate synchronization of data when multiple lines are used in conjunction. Alternatively, or additionally, the parallel access lines may allow access of multiple memory ports of the memory banks.

In some embodiments, processing device 2200 may include one or more muxes that may be connected to processing units. The processing units may include configuration manager 2212, logic block 2214, and accelerators 2216, which may be connected directly to the mux. Also, memory controller 2210 may include at least one data input from a plurality of memory banks or blocks 2202 and 2204 and at least one data output connected to each one of the plurality of processing units. With this configuration, memory controller 2210 may simultaneously receive data from memory banks or memory blocks 2202 and 2204 via the two data inputs, and simultaneously transmit the received data to at least one selected processing unit via the two data outputs. In some embodiments, however, the at least one data input and at least one data output may be implemented in a single port allowing only read or write operations. In such embodiments, the single port may be implemented as a data bus including data, address, and command lines.

Memory controller 2210 may be connected to each one of the plurality of memory blocks 2202 and 2204, and may also connect to processing units via, for example, a selection switch. Also, processing units on the substrate, including configuration manager 2212, logic block 2214, and accelerators 2216, may be independently connected to memory controller 2210. In some embodiments, configuration manager 2212 may receive an indication of a task to be performed and, in response, configure memory controller 2210, accelerators 2216, and/or logic block 2214 according to a configuration stored in memory or supplied externally. Alternatively, memory controller 2210 may be configured by an external interface. The task may require at least one computation that may be used to select at least one selected processing unit from the plurality of processing units. Alternatively, or additionally, the selection may be based at least in part upon a capability of the selected processing unit for performing the at least one computation.
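A brief, non-authoritative sketch of selecting a processing unit based on its capability for the required computation, as described above; the capability table and function name are assumptions for illustration only, not the disclosed configuration format.

```python
# Illustrative sketch only: choose an accelerator whose capabilities cover
# the computation required by the task.

ACCELERATOR_CAPABILITIES = {
    "accelerator_0": {"mac", "vector_multiply"},
    "accelerator_1": {"dma_copy"},
    "accelerator_2": {"string_compare", "min_max"},
}

def select_accelerator(required_computation):
    """Return the first accelerator able to perform the required computation."""
    for name, capabilities in ACCELERATOR_CAPABILITIES.items():
        if required_computation in capabilities:
            return name
    raise LookupError(f"no accelerator supports {required_computation!r}")

if __name__ == "__main__":
    print(select_accelerator("vector_multiply"))   # -> accelerator_0
```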
In response, memory controller 2210 may grant access to the memory banks, or route data between the at least one selected processing unit and at least two memory banks, using dedicated buses and/or in a pipelined memory access.

In some embodiments, first memory block 2202 of the at least two memory blocks may be arranged on a first side of the plurality of processing units, and second memory bank 2204 of the at least two memory banks may be arranged on a second side of the plurality of processing units opposite to the first side. Further, a processing unit selected to perform the task, for instance accelerator 2216(n), may be configured to access second memory bank 2204 during a clock cycle in which a communication line is opened to the first memory bank or first memory block 2202. Alternatively, the selected processing unit may be configured to transfer data to second memory block 2204 during a clock cycle in which a communication line is opened to first memory block 2202.

In some embodiments, memory controller 2210 may be implemented as an independent element, as shown in FIG. 22. In other embodiments, however, memory controller 2210 may be embedded in the memory area or may be disposed along accelerators 2216(a)-(n).

A processing area in processing device 2200 may include configuration manager 2212, logic block 2214, and accelerators 2216(a)-(n). Accelerators 2216 may include multiple processing circuits with pre-defined functions and may be defined by a specific application. For example, an accelerator may be a vector multiply accumulate (MAC) unit or a Direct Memory Access (DMA) unit handling memory moving between modules. Accelerators 2216 may also be able to calculate their own addresses and request the data from memory controller 2210 or write data to it. For example, configuration manager 2212 may signal at least one of accelerators 2216 that it can access the memory bank. Then accelerators 2216 may configure memory controller 2210 to route data or grant access to themselves. In addition, accelerators 2216 may include at least one arithmetic logic unit, at least one vector handling logic unit, at least one string compare logic unit, at least one register, and at least one direct memory access.

Configuration manager 2212 may include digital processing circuits to configure accelerators 2216 and instruct execution of tasks. For example, configuration manager 2212 may be connected to memory controller 2210 and each one of the plurality of accelerators 2216. Configuration manager 2212 may have its own dedicated memory to hold the configurations of accelerators 2216. Configuration manager 2212 may use the memory banks to fetch commands and configurations via memory controller 2210. Alternatively, configuration manager 2212 may be programmed through an external interface. In certain embodiments, configuration manager 2212 may be implemented with an on-chip reduced instruction set computer (RISC) or an on-chip complex CPU with its own cache hierarchy. In some embodiments, configuration manager 2212 may also be omitted and the accelerators can be configured through an external interface.

Processing device 2200 may also include an external interface (not shown). The external interface allows access to the memory from an upper level, such as a memory bank controller which receives commands from external host 2230 or an on-chip main processor, or access to the memory from external host 2230 or an on-chip main processor.
The external interface may allow programming of the configuration manager 2212 and the accelerators 2216 by writing configurations or code to the memory via memory controller 2210 to be used later by configuration manager 2212 or the units 2214 and 2216 themselves. The external interface, however, may also directly program processing units without being routed through memory controller 2210. In case configuration manager 2212 is a microcontroller, configuration manager 2212 may allow loading of code from a main memory to the controller local memory via the external interface. Memory controller 2210 may be configured to interrupt the task in response to receiving a request from the external interface.

The external interface may include multiple connectors associated with logic circuits that provide a glue-less interface to a variety of elements on the processing device. The external interface may include: data I/O inputs for data reads and outputs for data writes; external address outputs; external CE0 chip select pins; active-low chip selectors; byte enable pins; a pin for wait states on the memory cycle; a write enable pin; an output enable-active pin; and a read-write enable pin. Therefore, the external interface has the required inputs and outputs to control processes and obtain information from the processing device. For example, the external interface may conform to JEDEC DDR standards. Alternatively, or additionally, the external interface may conform to other standards such as SPI/OSPI or UART.

In some embodiments, the external interface may be disposed on the chip substrate and may be connected to external host 2230. The external host may gain access to memory blocks 2202 and 2204, memory controller 2210, and processing units via the external interface. Alternatively, or additionally, external host 2230 may read and write to the memory or may signal configuration manager 2212, through read and write commands, to perform operations such as starting a process and/or stopping a process. In addition, external host 2230 may configure the accelerators 2216 directly. In some embodiments, external host 2230 may be able to perform read/write operations directly on memory blocks 2202 and 2204.

In some embodiments, configuration manager 2212 and accelerators 2216 may be configured to connect the device area with the memory area using direct buses depending on the target task. For example, a subset of accelerators 2216 may connect with memory instances 2204 when the subset of accelerators has the capability to perform computations required to execute the task. By making such a separation, it is possible to assure that dedicated accelerators get the bandwidth (BW) needed to access memory blocks 2202 and 2204. Moreover, this configuration with dedicated buses may allow splitting a large memory into smaller instances or blocks because connecting memory instances to memory controller 2210 allows quick access to data in different memories even with high row latency time. To achieve the parallelization of connection, memory controller 2210 may be connected to each of the memory instances with data, address, and/or control buses.

The above-discussed inclusion of memory controller 2210 may eliminate the requirement of a cache hierarchy or complex register file in the processing device. Although a cache hierarchy can be added to give added capabilities, the architecture in processing device 2200 may allow a designer to add enough memory blocks or instances based on the processing operations and manage the instances accordingly without a cache hierarchy.
For example, the architecture in processing device 2200 may eliminate requirements of a cache hierarchy by implementing a pipelined memory access. In the pipelined memory access, processing units may receive a sustained flow of data: in every cycle, certain data lines may be opened (or activated) while other data lines receive or transmit data. The sustained flow of data using independent communication lines may allow an improved execution speed and minimal latency due to line changes.

Moreover, because the disclosed architecture in FIG. 22 enables a pipelined memory access, it may be possible to organize data in a low number of memory blocks and save power losses caused by line switching. In some embodiments, a compiler may communicate to host 2230 the organization of, or a method to organize, data in memory banks to facilitate access to data during a given task. Then, configuration manager 2212 may define which memory banks, and in some cases which ports of the memory banks, may be accessed by the accelerators. This synchronization between the location of data in memory banks and the method of accessing the data improves computing tasks by feeding data to the accelerators with minimum latency. For example, in embodiments in which configuration manager 2212 includes a RISC/CPU, the method may be implemented in offline software (SW) and then configuration manager 2212 may be programmed to execute the method. The method may be developed in any language executable by RISC/CPU computers and may be executed on any platform. The inputs of the method may include the configuration of the memories behind the memory controller and the data itself, along with the pattern of memory accesses. In addition, the method may be implemented in a language or machine language specific to the embodiment and may also be just a series of configuration values in binary or text.

As discussed above, in some embodiments, a compiler may provide instructions to host 2230 for organizing data in memory blocks 2202 and 2204 in preparation for a pipelined memory access. The pipelined memory access may generally include steps of: receiving a plurality of addresses of a plurality of memory banks or memory blocks 2202 and 2204; accessing the plurality of memory banks according to the received addresses using independent data lines; supplying data from a first address through a first communication line to at least one of the plurality of processing units and opening a second communication line to a second address, the first address being in a first memory bank of the plurality of memory banks, the second address being in second memory bank 2204 of the plurality of memory banks; and supplying data from the second address through the second communication line to the at least one of the plurality of processing units and opening a third communication line to a third address in the first memory bank, in the first line, within a second clock cycle.

In some embodiments, the pipelined memory access may be executed with two memory blocks being connected to a single port. In such embodiments, memory controller 2210 may hide the two memory blocks behind a single port but transmit data to the processing units with the pipelined memory access approach.

In some embodiments, a compiler can run on host 2230 before executing a task. In such embodiments, the compiler may be able to determine a configuration of data flow based on the architecture of the memory device since the configuration would be known to the compiler.
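The pipelined memory access steps outlined above can be approximated in software as a simple simulation. The sketch below is illustrative only and assumes hypothetical names (pipelined_read, banks); it alternates between two banks so that the line for the next address is opened while the current word is supplied.

```python
# Minimal simulation sketch of a pipelined memory access: while data is
# supplied from an already-open line in one bank, the line for the next
# address is opened in the other bank, so one word is delivered per cycle.

def pipelined_read(addresses, banks):
    """addresses: list of (bank_name, address); banks: bank -> {addr: value}.
    Yields (cycle, value) pairs, overlapping line opening with data transfer."""
    open_line = {}                      # bank -> address whose line is open
    if addresses:
        bank, addr = addresses[0]
        open_line[bank] = addr          # open the first line up front
    for cycle, (bank, addr) in enumerate(addresses):
        # Open the line for the *next* access while this word is supplied.
        if cycle + 1 < len(addresses):
            nxt_bank, nxt_addr = addresses[cycle + 1]
            open_line[nxt_bank] = nxt_addr
        yield cycle, banks[bank][addr]

if __name__ == "__main__":
    banks = {"A": {0: "a0", 1: "a1"}, "B": {0: "b0", 1: "b1"}}
    # Addresses alternate between the two banks, as a compiler might arrange.
    for cycle, word in pipelined_read([("A", 0), ("B", 0), ("A", 1), ("B", 1)], banks):
        print(cycle, word)
```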
In other embodiments, if the configuration of memory blocks 2204 and 2202 is unknown at offline time, the pipelined method can run on host 2230, which may arrange data in memory blocks before starting calculations. For example, host 2230 may directly write data in memory blocks 2204 and 2202. In such embodiments, processing units, such as configuration manager 2212 and memory controller 2210, may not have information regarding required hardware until run time. Then, it may be necessary to delay the selection of an accelerator 2216 until a task starts running. In these situations, the processing units or memory controller 2210 may randomly select an accelerator 2216 and create a test data access pattern, which may be modified as the task is executed.

Nonetheless, when the task is known in advance, a compiler may organize data and instructions in memory banks for host 2230 to provide to a processing unit, such as configuration manager 2212, to set signal connections that minimize access latency. For example, in some cases n words may be needed at the same time by accelerators 2216. However, each memory instance supports retrieving only m words at a time, where "m" and "n" are integers and m<n. Thus, the compiler may place the needed data across different memory instances or blocks, facilitating data access. Also, to avoid line miss latencies, a host may split data across different lines of different memory instances if processing device 2200 includes multiple memory instances. The division of data may allow accessing the next line of data in the next instance while still using data from the current instance.

For example, accelerator 2216(a) may be configured to multiply two vectors. Each one of the vectors may be stored in independent memory blocks, such as memory blocks 2202 and 2204, and each vector may include multiple words. Therefore, to complete a task requiring a multiplication by accelerator 2216(a), it may be necessary to access the two memory blocks and retrieve multiple words. However, in some embodiments, memory blocks only allow access of one word per clock cycle. For instance, memory blocks may have a single port. In these cases, to expedite data transmittal during an operation, a compiler may organize the words composing the vectors in different memory blocks, allowing parallel and/or simultaneous reading of the words. In these situations, a compiler may store words in memory blocks that have a dedicated line. For instance, if each vector includes two words and the memory controller has direct access to four memory blocks, a compiler may arrange data in four memory blocks, each one transmitting a word and expediting data delivery. Moreover, in embodiments in which memory controller 2210 has more than a single connection to each memory block, the compiler may instruct configuration manager 2212 (or another processing unit) to access specific ports. In this way, processing device 2200 may perform a pipelined memory access, continuously providing data to processing units by simultaneously loading words in some lines and transmitting data in other lines. Thus, this pipelined memory access may avoid latency issues.

FIG. 23 is a block diagram of an exemplary processing device 2300, consistent with disclosed embodiments.
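Before turning to FIG. 23, the word-placement idea described above may be sketched as follows. The example is purely illustrative, with hypothetical names (place_across_blocks), and assumes two two-word vectors spread across four single-port memory blocks so that all four words can be read in the same cycle.

```python
# Illustrative sketch only: spread the words of two vectors across four
# single-port memory blocks so each block supplies one word in parallel.

def place_across_blocks(vectors, num_blocks):
    """Assign each word of each vector to its own memory block (round robin)."""
    placement = {block: [] for block in range(num_blocks)}
    block = 0
    for vec_name, words in vectors.items():
        for index, word in enumerate(words):
            placement[block].append((vec_name, index, word))
            block = (block + 1) % num_blocks
    return placement

if __name__ == "__main__":
    vectors = {"x": [1, 2], "y": [3, 4]}
    placement = place_across_blocks(vectors, num_blocks=4)
    for block, contents in placement.items():
        print(f"block {block}: {contents}")
    # Each single-port block holds exactly one word, so x and y can be read
    # in parallel and multiplied without waiting on a shared port.
```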
The block diagram shows a simplified processing device 2300 displaying a single accelerator in the form of MAC unit 2302, configuration manager 2304 (equivalent or similar to configuration manager 2212), memory controller 2306 (equivalent or similar to memory controller 2210), and a plurality of memory blocks 2308(a)-(d).

In some embodiments, MAC unit 2302 may be a specific accelerator for processing a particular task. By way of example, the processing device 2300 may be tasked with 2D-convolutions. Then, configuration manager 2304 can signal an accelerator that has the appropriate hardware to perform calculations associated with the task. For instance, MAC unit 2302 may have four internal incrementing counters (logical adders and registers to manage the four loops needed by a convolution calculation) and a multiply accumulate unit. Configuration manager 2304 may signal MAC unit 2302 to process incoming data and execute the task. Configuration manager 2304 may transmit an indication to MAC unit 2302 to execute the task. In these situations, MAC unit 2302 may iterate over calculated addresses, multiply the numbers, and accumulate them to an internal register.

In some embodiments, configuration manager 2304 may configure the accelerators while memory controller 2306 grants access to blocks 2308 and MAC unit 2302 using dedicated buses. In other embodiments, however, memory controller 2306 can directly configure the accelerators based on instructions received from configuration manager 2304 or an external interface. Alternatively, or additionally, configuration manager 2304 can pre-load a few configurations and allow the accelerator to iteratively run on different addresses with different sizes. In such embodiments, configuration manager 2304 may include a cache memory that stores a command before it is transmitted to at least one of the plurality of processing units, such as accelerators 2216. However, in other embodiments configuration manager 2304 may not include a cache.

In some embodiments, configuration manager 2304 or memory controller 2306 may receive addresses that need to be accessed for a task. Configuration manager 2304 or memory controller 2306 may check a register to determine whether the address is already in a loaded line of one of memory blocks 2308. If so, memory controller 2306 may read the word from memory block 2308 and pass it to the MAC unit 2302. If the address is not in a loaded line, configuration manager 2304 may request that memory controller 2306 load the line and signal MAC unit 2302 to delay until the line is retrieved.

In some embodiments, as shown in FIG. 23, memory controller 2306 may include two inputs for two independent addresses. But if more than two addresses should be accessed simultaneously, and these addresses are in a single memory block (for example, only in memory block 2308(a)), memory controller 2306 or configuration manager 2304 may raise an exception. Alternatively, configuration manager 2304 may return an invalid data signal when the two addresses can only be accessed through a single line. In other embodiments, the unit may delay the process execution until it is possible to retrieve all needed data. This may diminish the overall performance. Nonetheless, a compiler may be able to find a configuration and data placement that would prevent delays.
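As a purely illustrative sketch (not the disclosed hardware), the behavior of a MAC accelerator iterating with four loop counters over a 2D-convolution, multiplying values and accumulating them into an internal register, could be modeled as follows; the function name and toy inputs are assumptions.

```python
# Illustrative sketch only: four nested counters (output row/col and kernel
# row/col) drive a multiply-accumulate over calculated addresses, with the
# accumulator playing the role of the internal register.

def mac_2d_convolution(image, kernel):
    """Return the valid 2D convolution of `image` with `kernel`."""
    out_rows = len(image) - len(kernel) + 1
    out_cols = len(image[0]) - len(kernel[0]) + 1
    output = [[0] * out_cols for _ in range(out_rows)]
    for i in range(out_rows):                      # counter 1
        for j in range(out_cols):                  # counter 2
            accumulator = 0                        # internal register
            for ki in range(len(kernel)):          # counter 3
                for kj in range(len(kernel[0])):   # counter 4
                    accumulator += image[i + ki][j + kj] * kernel[ki][kj]
            output[i][j] = accumulator
    return output

if __name__ == "__main__":
    image = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    kernel = [[1, 0], [0, 1]]
    print(mac_2d_convolution(image, kernel))   # [[6, 8], [12, 14]]
```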
In some embodiments, a compiler may create a configuration or instruction set for processing device 2300 that may configure configuration manager 2304, memory controller 2306, and accelerator 2302 to handle situations in which multiple addresses need to be accessed from a single memory block but the memory block has one port. For instance, a compiler may re-arrange data in memory blocks 2308 such that processing units may access multiple lines in memory blocks 2308.

In addition, memory controller 2306 may also work on more than one input at the same time. For example, memory controller 2306 may allow accessing one of memory blocks 2308 through one port and supplying the data while receiving a request from a different memory block in another input. Therefore, this operation may result in an accelerator 2216 tasked with the exemplary 2D-convolutions receiving data from dedicated lines of communication with the pertinent memory blocks.

Additionally, or alternatively, memory controller 2306 or a logic block may hold refresh counters for every memory block 2308 and handle the refresh of all lines. Having such counters allows memory controller 2306 to slip in the refresh cycles between dead access times of the devices.

Furthermore, memory controller 2306 may be configurable to perform the pipelined memory access, receiving addresses and opening lines in memory blocks before supplying the data. The pipelined memory access may provide data to processing units without interruption or delayed clock cycles. For example, while memory controller 2306 or one of the logic blocks accesses data with the right line in FIG. 23, it may be transmitting data in the left line. These methods will be explained in greater detail in connection with FIG. 26.

In response to the required data, processing device 2300 may use multiplexers and/or other switching devices to choose which device gets serviced to perform a given task. For example, configuration manager 2304 may configure multiplexers so at least two data lines reach the MAC unit 2302. In this way, a task requiring data from multiple addresses, such as 2D-convolutions, may be performed faster because the vectors or words requiring multiplication during convolution can reach the processing unit simultaneously, in a single clock. This data transferring method may allow the processing units, such as accelerators 2216, to quickly output a result.

In some embodiments, configuration manager 2304 may be configurable to execute processes based on priority of tasks. For example, configuration manager 2304 can be configured to let a running process finish without any interruptions. In that case, configuration manager 2304 may provide an instruction or configurations of a task to accelerators 2216, let them run uninterrupted, and switch multiplexers only when the task is finished. However, in other embodiments, configuration manager 2304 may interrupt a task and reconfigure data routing when it receives a priority task, such as a request from an external interface. Nevertheless, with enough memory blocks 2308, memory controller 2306 may be configurable to route data, or grant access, to processing units with dedicated lines that do not have to be changed until a task is completed. Moreover, in some embodiments, all devices may be connected by buses to the entries of configuration manager 2304, and the devices may manage access between themselves and the buses (e.g., using the same logic as a multiplexer).
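A rough, assumption-laden sketch of the refresh-counter idea mentioned above: one counter per memory block, with a refresh slipped into any cycle in which that block sees no access. The constant REFRESH_INTERVAL and all other names are hypothetical.

```python
# Illustrative sketch only: per-block refresh counters that slip refreshes
# into dead access times.

REFRESH_INTERVAL = 4   # refresh a block at least every 4 cycles (toy value)

def schedule(accesses_per_cycle, num_blocks, num_cycles):
    """accesses_per_cycle: dict cycle -> set of busy block ids.
    Returns a log of (cycle, block, action) entries."""
    counters = {block: 0 for block in range(num_blocks)}
    log = []
    for cycle in range(num_cycles):
        busy = accesses_per_cycle.get(cycle, set())
        for block in range(num_blocks):
            counters[block] += 1
            # Slip a refresh into a dead access time once the counter is due.
            if block not in busy and counters[block] >= REFRESH_INTERVAL:
                log.append((cycle, block, "refresh"))
                counters[block] = 0
            elif block in busy:
                log.append((cycle, block, "access"))
    return log

if __name__ == "__main__":
    for entry in schedule({0: {0}, 1: {0}, 2: {1}}, num_blocks=2, num_cycles=6):
        print(entry)
```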
Therefore, memory controller 2306 may be directly connected to a number of memory instances or memory blocks. Alternatively, memory controller 2306 may be connected directly to memory sub-instances. In some embodiments, each memory instance or block can be built from sub-instances (for example, DRAM may be built from mats with independent data lines arranged in multiple sub-blocks). Further, the instances may include at least one of DRAM mats, DRAM banks, flash mats, SRAM mats, or any other type of memory. Then, memory controller 2306 may include dedicated lines to address sub-instances directly to minimize latency during a pipelined memory access.

In some embodiments, memory controller 2306 may also hold the logic needed for a specific memory instance (such as row/column decoders, refresh logic, etc.), and memory blocks 2308 may handle their own logic. Therefore, memory blocks 2308 may get an address and generate commands for returning or writing data.

FIG. 24 depicts exemplary memory configuration diagrams, consistent with disclosed embodiments. In some embodiments, a compiler generating code or configuration for processing device 2200 may perform a method to configure loading from memory blocks 2202 and 2204 by pre-arranging data in each block. For example, a compiler may pre-arrange data so each word required for a task is correlated to a line of a memory instance or memory block(s). But for tasks that require more memory blocks than the ones available in processing device 2200, a compiler may implement methods of fitting data in more than one memory location of each memory block. The compiler may also store data in sequence and evaluate the latency of each memory block to avoid line miss latency.

In some embodiments, the host may be part of a processing unit, such as configuration manager 2212, but in other embodiments the compiler host may be connected to processing device 2200 via an external interface. In such embodiments, the host may run compiling functions, such as the ones described for the compiler.

In some embodiments, configuration manager 2212 may be a CPU or a microcontroller (uC). In such embodiments, configuration manager 2212 may have to access the memory to fetch commands or instructions placed in the memory. A specific compiler may generate the code and place it in the memory in a manner that allows consecutive commands to be stored in the same memory line and across a number of memory banks to allow for the pipelined memory access also on the fetched commands. In these embodiments, configuration manager 2212 and memory controller 2210 may be capable of avoiding row latency in linear execution by facilitating the pipelined memory access.

The previous case of linear execution of a program described a method for a compiler to recognize and place the instructions to allow for pipelined memory execution. However, other software structures may be more complex and would require the compiler to recognize them and act accordingly. For example, in case a task requires loops and branches, a compiler may place all the loop code inside a single line so that the single line can be looped without line opening latency. Then, memory controller 2210 may not need to change lines during an execution.

In some embodiments, configuration manager 2212 may include internal caching or a small memory. The internal caching may store commands that are executed by configuration manager 2212 to handle branches and loops. For example, commands in the internal caching memory may include instructions to configure accelerators for accessing memory blocks.
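The loop-placement strategy described above can be sketched, under stated assumptions, as a compiler-style check that a loop body fits in a single memory line; WORDS_PER_LINE and the helper name place_instructions are illustrative only, not the disclosed compilation method.

```python
# Illustrative sketch only: keep a loop body inside one memory line so the
# loop can repeat without line-opening latency; pack the rest sequentially.

WORDS_PER_LINE = 8   # assumed line width in instruction words

def place_instructions(loop_body, straight_line_code):
    """Return a mapping line_number -> list of instructions."""
    lines = {}
    if len(loop_body) <= WORDS_PER_LINE:
        lines[0] = list(loop_body)           # whole loop kept in one line
        next_line = 1
    else:
        # Loop does not fit; split it across consecutive lines.
        next_line = 0
        for i in range(0, len(loop_body), WORDS_PER_LINE):
            lines[next_line] = loop_body[i:i + WORDS_PER_LINE]
            next_line += 1
    # Remaining straight-line code is packed sequentially after the loop.
    for i in range(0, len(straight_line_code), WORDS_PER_LINE):
        lines[next_line] = straight_line_code[i:i + WORDS_PER_LINE]
        next_line += 1
    return lines

if __name__ == "__main__":
    loop = [f"loop_op_{i}" for i in range(6)]        # fits in one line
    rest = [f"op_{i}" for i in range(10)]
    for line, ops in place_instructions(loop, rest).items():
        print(line, ops)
```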
FIG.25is an exemplary flowchart illustrating a possible memory configuration process2500, consistent with disclosed embodiments. Where convenient in describing memory configuration process2500, reference may be made to the identifiers of elements depicted inFIG.22and described above. In some embodiments, process2500may be executed by a compiler that provides instructions to a host connected through an external interface. In other embodiments, process2500may be executed by components of processing device2200, such as configuration manager2212. In general, process2500may include determining a number of words required simultaneously to perform the task; determining a number of words that can be accessed simultaneously from each one of the plurality of memory banks; and dividing the number of words required simultaneously between multiple memory banks when the number of words required simultaneously is greater than the number of words that can be accessed simultaneously. Moreover, dividing the number of words required simultaneously may include executing a cyclic organization of words and sequentially assigning one word per memory bank. More specifically, process2500may begin with step2502, in which a compiler may receive a task specification. The specification may include required computations and/or a priority level. In step2504, a compiler may identify an accelerator, or group of accelerators, that may perform the task. Alternatively, the compiler may generate instructions so the processing units, such as configuration manager2212, may identify an accelerator to perform the task. For example, using the required computation, configuration manager2212may identify accelerators in the group of accelerators2216that may process the task. In step2506, the compiler may determine a number of words that need to be simultaneously accessed to execute the task. For example, the multiplication of two vectors requires access to at least two vectors, and the compiler may therefore determine that vector words must be simultaneously accessed to perform the operation. In step2508, the compiler may determine a number of cycles necessary to execute the task. For example, if the task requires a convolution operation of four by-products, the compiler may determine that at least four cycles will be necessary to perform the task. In step2510, the compiler may place words that need to be accessed simultaneously in different memory banks. In that way, memory controller2210may be configured to open lines to different memory instances and access the required memory blocks within a clock cycle, without any required cached data. In step2512, the compiler may place words that are accessed sequentially in the same memory banks. For example, in the case that four cycles of operations are required, the compiler may generate instructions to write needed words in sequential cycles in a single memory block to avoid changing lines between different memory blocks during execution. In step2514, the compiler may generate instructions for programming processing units, such as configuration manager2212. The instructions may specify conditions to operate a switching device (such as a multiplexor) or configure a data bus. With such instructions, configuration manager2212may configure memory controller2210to route data from, or grant access to, memory blocks to processing units using dedicated lines of communication according to a task. FIG.26is an exemplary flowchart illustrating a memory read process2600, consistent with disclosed embodiments. 
Where convenient in describing memory read process2600, reference may be made to the identifiers of elements depicted inFIG.22and described above. In some embodiments, as described below, process2600may be implemented by memory controller2210. In other embodiments, however, process2600may be implemented by other elements in the processing device2200, such as configuration manager2212. In step2602, memory controller2210, configuration manager2212, or other processing units may receive an indication to route data from, or grant access to, a memory bank. The request may specify an address and a memory block. In some embodiments, the request may be received via a data bus specifying a read command in line2218and an address in line2220. In other embodiments, the request may be received via demultiplexers connected to memory controller2210. In step2604, configuration manager2212, a host, or other processing units, may query an internal register. The internal register may include information regarding opened lines to memory banks, opened addresses, opened memory blocks, and/or upcoming tasks. Based on the information in the internal register, it may be determined whether there are lines opened to the memory bank and/or memory block associated with the request received in step2602. Alternatively, or additionally, memory controller2210may directly query the internal register. If the internal register indicates that the memory bank is not loaded in an opened line (step2606: no), process2600may continue to step2616and a line may be loaded to a memory bank associated with the received address. In addition, memory controller2210or a processing unit, such as configuration manager2212, may signal a delay to the element requesting information from the memory address in step2618. For example, if accelerator2216is requesting the memory information that is located in an already occupied memory block, memory controller2210may send a delay signal to the accelerator in step2618. In step2620, configuration manager2212or memory controller2210may update the internal register to indicate a line has been opened to a new memory bank or a new memory block. If the internal register indicates that the memory bank is loaded in an opened line (step2606: yes), process2600may continue to step2608. In step2608, it may be determined whether the line loaded in the memory bank is being used for a different address. If the line is being used for a different address (step2608: yes), it would indicate that there are two instances in a single block and, therefore, they cannot be accessed simultaneously. Thus, an error or exception signal may be sent to the element requesting information from the memory address in step2616. But, if the line is not being used for a different address (step2608: no), a line may be opened for the address, data may be retrieved from the target memory bank, and process2600may continue to step2614to transmit the data to the element requesting information from the memory address. With process2600, processing device2200has the ability to establish direct connections between processing units and the memory blocks or memory instances that contain the required information to perform a task. This organization of data would enable reading information from organized vectors in different memory instances, as well as allow the retrieval of information simultaneously from different memory blocks when a device requests a plurality of these addresses. 
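As a simplified, non-limiting sketch of the decision flow of process2600(the dictionary-based internal register, signal names, and example values below are illustrative assumptions rather than the disclosed hardware), the handling of a read request may be modeled as follows:

    # Hypothetical model of process 2600; all names are illustrative only.
    MEMORY = {("bank0", 0x10): 42, ("bank1", 0x20): 7}   # (block, address) -> word
    OPEN_LINES = {}   # internal register: block -> address of the currently open line

    def handle_read(block, address):
        """Returns (signal, data): 'delay' while a new line is opened, 'error'
        when the open line serves a different address, or 'ok' with the word."""
        if block not in OPEN_LINES:
            OPEN_LINES[block] = address          # open a line and update the register
            return ("delay", MEMORY.get((block, address)))
        if OPEN_LINES[block] != address:
            return ("error", None)               # two addresses contend for one open line
        return ("ok", MEMORY.get((block, address)))

    print(handle_read("bank0", 0x10))   # ('delay', 42) - line opened first
    print(handle_read("bank0", 0x10))   # ('ok', 42)    - line already open
    print(handle_read("bank0", 0x30))   # ('error', None) - different address, same block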
FIG.27is an exemplary flowchart illustrating an execution process2700, consistent with disclosed embodiments. Where convenient in describing execution process2700, reference may be made to the identifiers of elements depicted inFIG.22and described above. In step2702, a compiler or a local unit, such as configuration manager2212, may receive an indication of a task that needs to be performed. The task may include a single operation (e.g., multiplication) or a more complex operation (e.g., convolution between matrixes). The task may also indicate a required computation. In step2704, the compiler or configuration manager2212may determine a number of words that is required simultaneously to perform the task. For example, a compiler may determine that two words are required simultaneously to perform a multiplication between vectors. In another example, for a 2D convolution task, configuration manager2212may determine that “n” times “m” words are required for a convolution between matrices, where “n” and “m” are the matrix dimensions. Moreover, in step2704, configuration manager2212may also determine a number of cycles necessary to perform the task. In step2706, depending on the determinations in step2704, a compiler may write words that need to be accessed simultaneously in a plurality of memory banks disposed on the substrate. For instance, when a number of words that can be accessed simultaneously from one of the plurality of memory banks is lower than the number of words that are required simultaneously, a compiler may organize data in multiple memory banks to facilitate access to the different required words within a clock cycle. Moreover, when configuration manager2212or the compiler determines that a number of cycles is necessary to perform the task, the compiler may write words that are needed in sequential cycles in a single memory bank of the plurality of memory banks to prevent switching of lines between memory banks. In step2708, memory controller2210may be configured to read or grant access to at least one first word from a first memory bank from the plurality of memory banks or blocks using a first memory line. In step2710, a processing unit, for example one of accelerators2216, may process the task using the at least one first word. In step2712, memory controller2210may be configured to open a second memory line in a second memory bank. For example, based on the tasks and using the pipelined memory access approach, memory controller2210may be configured to open a second memory line in a second memory block where information required for the tasks was written in step2706. In some embodiments, the second memory line may be opened when the task in step2710is about to be completed. For example, if a task requires 100 clock cycles, the second memory line may be opened in the 90th clock cycle. In some embodiments, steps2708-2712may be executed within one line access cycle. In step2714, memory controller2210may be configured to grant access to data from at least one second word from the second memory bank using the second memory line opened in step2712. In step2716, a processing unit, for example one of accelerators2216, may process the task using the at least one second word. In step2718, memory controller2210may be configured to open a second memory line in the first memory bank. For example, based on the tasks and using the pipelined memory access approach, memory controller2210may be configured to open a second memory line to the first memory block. 
In some embodiments, the second memory line to the first block may be opened when the task in step2716is about to be completed. In some embodiments, steps2714-2718may be executed within one line access cycle. In step2720, memory controller2210may read or grant access to at least one third word from the first memory bank from the plurality of memory banks or blocks, using the second memory line in the first bank or a first line in a third bank, and continuing in different memory banks.
Partial Refreshes
Some memory chips, such as dynamic random access memory (DRAM) chips, use refreshes to keep stored data (e.g., using capacitance) from being lost due to voltage decay in capacitors or other electric components of the chips. For example, in DRAM each cell has to be refreshed from time to time (based on the specific process and design) to restore the charge in the capacitors so that data is not lost or damaged. As the memory capacities of a DRAM chip increase, the amount of time required to refresh the memory becomes significant. During the time periods when a certain line of memory is being refreshed, the bank containing the line being refreshed cannot be accessed. This can result in reductions in performance. Additionally, the power associated with the refresh process may also be significant. Prior efforts have attempted to reduce the rate at which refreshes are performed to reduce adverse effects associated with refreshing memory, but most of these efforts have focused on the physical layers of the DRAM. Refreshing is similar to reading and writing back a row of the memory. Using this principle and focusing on the access pattern to the memory, embodiments of the present disclosure include software and hardware techniques, as well as modifications to the memory chips, to use less power for refreshing and to reduce the amount of time during which memory is refreshed. For example, as an overview, some embodiments may use hardware and/or software to track line access timing and skip recently accessed rows within a refresh cycle (e.g., based on a timing threshold). In another example, some embodiments may rely on software executed by the memory chip's refresh controller to assign reads and writes such that access to the memory is non-random. Accordingly, the software may control the refresh more precisely to avoid wasted refresh cycles and/or lines. These techniques may be used alone or combined with a compiler that encodes commands for the refresh controller along with machine code for a processor such that access to the memory is again non-random. Using any combination of these techniques and configurations, which are described in detail below, the disclosed embodiments may reduce memory refresh power requirements and/or increase system performance by reducing an amount of time during which a memory unit is refreshed. FIG.28depicts an example memory chip2800with a refresh controller2803, consistent with the present disclosure. For example, memory chip2800may include a plurality of memory banks (e.g., memory bank2801aand the like) on a substrate. In the example ofFIG.28, the substrate includes four memory banks, each with four lines. A line may refer to a wordline within one or more memory banks of memory chip2800or any other collection of memory cells within memory chip2800, such as a portion of or an entire row along a memory bank or a group of memory banks. In other embodiments, the substrate may include any number of memory banks, and each memory bank may include any number of lines. 
Some memory banks may include a same number of lines (as shown inFIG.28) while other memory banks may include different numbers of lines. As further depicted inFIG.28, memory chip2800may include a controller2805to receive input to memory chip2800and transmit output from memory chip2800(e.g., as described above in “Division of Code”). In some embodiments, the plurality of memory banks may comprise dynamic random access memory (DRAM). However, the plurality of memory banks may comprise any volatile memory that stores data requiring periodic refreshes. As will be discussed in more detail below, the presently disclosed embodiments may employ counters or resistor-capacitor circuits to time refresh cycles. For example, a counter or timer may be used to count time from the last full refresh cycle and then, when the counter reaches its target value, another counter may be used to iterate over all rows. Embodiments of the present disclosure may additionally track accesses to segments of memory chip2800and reduce the refresh power required. For example, although not depicted inFIG.28, memory chip2800may further include a data storage configured to store access information indicative of access operations for one or more segments of the plurality of memory banks. For example, the one or more segments may comprise any portions of lines, columns, or any other groupings of memory cells within memory chip2800. In one particular example, the one or more segments may include at least one row of memory structures within the plurality of memory banks. Refresh controller2803may be configured to perform a refresh operation of the one or more segments based, at least in part, on the stored access information. For example, the data storage may comprise one or more registers, static random access memory (SRAM) cells, or the like associated with segments of memory chip2800(e.g., lines, columns, or any other groupings of memory cells within memory chip2800). Further, the data storage may be configured to store bits indicative of whether the associated segment was accessed in one or more previous cycles. A “bit” may comprise any data structure storing at least one bit, such as a register, an SRAM cell, a nonvolatile memory, or the like. Moreover, a bit may be set by setting a corresponding switch (or switching element, such as a transistor) of the data structure to ON (which may be equivalent to “1” or “true”). Additionally or alternatively, a bit may be set by modifying any other property within the data structure (such as charging a floating gate of a flash memory, modifying a state of one or more flip-flops in an SRAM, or the like) in order to write a “1” to the data structure (or any other value indicating the setting of a bit). If a bit is determined to be set as part of the memory controller's refresh operation, refresh controller2803may skip a refresh cycle for the associated segment and clear the register(s) associated with that portion. In another example, the data storage may comprise one or more nonvolatile memories (e.g., a flash memory or the like) associated with segments of memory chip2800(e.g., lines, columns, or any other groupings of memory cells within memory chip2800). The nonvolatile memory may be configured to store bits indicative of whether the associated segment was accessed in one or more previous cycles. 
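For illustration only, the following minimal sketch models the bit-based skip logic described above; the flag array, row count, and callback are hypothetical stand-ins for the access bits, row counter, and refresh signal of refresh controller2803, and are not the disclosed hardware itself:

    # Illustrative sketch; data structures and names are assumptions.
    NUM_ROWS = 8
    accessed = [False] * NUM_ROWS     # set when a row is accessed (e.g., by the sense amplifier)

    def note_access(row):
        accessed[row] = True          # access bit set on a read or write to the row

    def refresh_cycle(refresh_row):
        """Walk all rows once; refresh only the rows whose access bit is clear."""
        for row in range(NUM_ROWS):
            if accessed[row]:
                accessed[row] = False  # skip, but clear so the row refreshes next cycle
            else:
                refresh_row(row)       # issue the refresh for this row

    note_access(3)
    refresh_cycle(lambda r: print("refresh row", r))   # rows 0-2 and 4-7 are refreshed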
Some embodiments may additionally or alternatively add a timestamp register on each row or group of rows (or other segment of memory chip2800) holding the last tick within the current refresh cycle at which the line was accessed. This means that with each row access, the refresh controller may update the row timestamp register. Thus, when a next time to refresh occurs (e.g., at the end of a refresh cycle), the refresh controller may compare the stored timestamp against the current time, and if the associated segment was previously accessed within a certain period of time (e.g., within a certain threshold as applied to the stored timestamp), the refresh controller may skip to the next segment. This saves the system from expending refresh power on segments that have been recently accessed. Moreover, the refresh controller may continue to track access to make sure each segment is accessed or refreshed at the next cycle. Accordingly, in yet another example, the data storage may comprise one or more registers or nonvolatile memories associated with segments of memory chip2800(e.g., lines, columns, or any other groupings of memory cells within memory chip2800). Rather than using bits to indicate whether an associated segment has been accessed, the registers or nonvolatile memories may be configured to store timestamps or other information indicative of a most recent access of the associated segments. In such an example, refresh controller2803may determine whether to refresh or access the associated segments based on whether an amount of time between the timestamps stored in the associated registers or memories and a current time (e.g., from a timer, as explained below inFIGS.29A and29B) exceeds a predetermined threshold (e.g., 8 ms, 16 ms, 32 ms, 64 ms, or the like). Accordingly, the predetermined threshold may comprise an amount of time for a refresh cycle to ensure that the associated segments are refreshed (if not accessed) at least once per refresh cycle. Alternatively, the predetermined threshold may comprise an amount of time shorter than that required for a refresh cycle (e.g., to ensure that any required refresh or access signals may reach the associated segments before the refresh cycle is complete). For example, the predetermined time may comprise 7 ms for a memory chip with an 8 ms refresh period such that, if a segment has not been accessed in 7 ms, the refresh controller will send a refresh or access signal that reaches the segment by the end of the 8 ms refresh period. In some embodiments, the predetermined threshold may depend on the size of an associated segment. For example, the predetermined threshold may be smaller for smaller segments of memory chip2800. Although described above with respect to a memory chip, the refresh controllers of the present disclosure may also be used in distributed processor architectures, like those described in the sections above and throughout the present disclosure. One example of such an architecture is depicted inFIG.7A. In such embodiments, the same substrate as memory chip2800may include, disposed thereon, a plurality of processing groups, e.g., as depicted inFIG.7A. As explained above with respect toFIG.3A, a “processing group” may refer to two or more processor subunits and their corresponding memory banks on the substrate. The group may represent a spatial distribution on the substrate and/or a logical grouping for the purposes of compiling code for execution on memory chip2800. 
Accordingly, the substrate may include a memory array that includes a plurality of banks, such as bank2801aand other banks shown inFIG.28. Furthermore, the substrate may include a processing array that may include a plurality of processor subunits (such as subunits730a,730b,730c,730d,730e,730f,730g, and730hshown inFIG.7A). As further explained above with respect toFIG.7A, each processing group may include a processor subunit and one or more corresponding memory banks dedicated to the processor subunit. Moreover, to allow each processor subunit to communicate with its corresponding, dedicated memory bank(s), the substrate may include a first plurality of buses connecting one of the processor subunits to its corresponding, dedicated memory bank(s). In such embodiments, as shown inFIG.7A, the substrate may include a second plurality of buses to connect each processor subunit to at least one other processor subunit (e.g., an adjacent subunit in the same row, an adjacent processor subunit in the same column, or any other processor subunit on the substrate). The first and/or second plurality of buses may be free of timing hardware logic components such that data transfers between processor subunits and across corresponding ones of the plurality of buses are uncontrolled by timing hardware logic components, as explained above in the “Synchronization Using Software” section. In embodiments where the same substrate as memory chip2800includes, disposed thereon, a plurality of processing groups (e.g., as depicted inFIG.7A), the processor subunits may further include an address generator (e.g., address generator450as depicted inFIG.4). Moreover, each processing group may include a processor subunit and one or more corresponding memory banks dedicated to the processor subunit. Accordingly, each one of the address generators may be associated with a corresponding, dedicated one of the plurality of memory banks. In addition, the substrate may include a plurality of buses, each connecting one of the plurality of address generators to its corresponding, dedicated memory bank. FIG.29Adepicts an example refresh controller2900consistent with the present disclosure. Refresh controller2900may be incorporated in a memory chip of the present disclosure, such as memory chip2800ofFIG.28. As depicted inFIG.29A, refresh controller2900may include a timer2901, which may comprise an on-chip oscillator or any other timing circuit for refresh controller2900. In the configuration depicted inFIG.29A, timer2901may trigger a refresh cycle periodically (e.g., every 8 ms, 16 ms, 32 ms, 64 ms, or the like). The refresh cycle may use a row counter2903to cycle through all rows of a corresponding memory chip and generate a refresh signal for each row using adder2907combined with an active bit2905. As shown inFIG.29A, bit2905may be fixed at 1 (“true”) to ensure that each row is refreshed during a cycle. In embodiments of the present disclosure, refresh controller2900may include a data storage. As described above, the data storage may comprise one or more registers or nonvolatile memories associated with segments of memory chip2800(e.g., lines, columns, or any other groupings of memory cells within memory chip2800). The registers or nonvolatile memories may be configured to store timestamps or other information indicative of a most recent access of the associated segments. Refresh controller2900may use the stored information to skip refreshes for segments of memory chip2800. 
For example, refresh controller2900may skip a segment in a current refresh cycle if the information indicates it was refreshed during one or more previous refresh cycles. In another example, refresh controller2900may skip a segment in a current refresh cycle if a difference between the stored timestamp for the segment and a current time is below a threshold. Refresh controller2900may further continue to track accesses and refreshes of the segments of memory chip2800through multiple refresh cycles. For example, refresh controller2900may update stored timestamps using timer2901. In such embodiments, refresh controller2900may be configured to use an output of the timer in clearing the access information stored in the data storage after a threshold time interval. For example, in embodiments where the data storage stores timestamps of a most recent access or refresh for an associated segment, refresh controller2900may store a new timestamp in the data storage whenever an access command or refresh signal is sent to the segment. If the data storage stores bits rather than timestamps, timer2901may be configured to clear bits that are set for longer than a threshold period of time. For example, in embodiments where the data storage stores bits indicating that associated segments were accessed in one or more previous cycles, refresh controller2900may clear bits (e.g., setting them to 0) in the data storage whenever timer2901triggers a new refresh cycle that is a threshold number of cycles (e.g., one, two, or the like) after the associated bits were set (e.g., set to 1). Refresh controller2900may track access of the segments of memory chip2800in cooperation with other hardware of memory chip2800. For example, memory chips use sense amplifiers to perform read operations (e.g., as shown above inFIGS.9and10). The sense amplifiers may comprise a plurality of transistors configured to sense low-power signals from a segment of the memory chip2800storing data in one or more memory cells and amplify the small voltage swing to higher voltage levels such that the data can be interpreted by logic, such as external CPUs or GPUs or integrated processor subunits as explained above. Although not depicted inFIG.29A, refresh controller2900may further communicate with a sense amplifier configured to access the one or more segments and change the state of the at least one bit register. For example, when the sense amplifier accesses the one or more segments, it may set (e.g., set to 1) bits associated with the segments indicating that the associated segments were accessed in a previous cycle. In embodiments where the data storage stores timestamps of a most recent access or refresh for an associated segment, when the sense amplifier accesses the one or more segments, it may trigger a write of a timestamp from timer2901to the registers, memories, or other elements comprising the data storage. In any of the embodiments described above, refresh controller2900may be integrated with a memory controller for the plurality of memory banks. For example, similar to the embodiments depicted inFIG.3A, refresh controller2900may be incorporated into a logic and control subunit associated with a memory bank or other segment of memory chip2800. FIG.29Bdepicts another example refresh controller2900′ consistent with the present disclosure. Refresh controller2900′ may be incorporated in a memory chip of the present disclosure, such as memory chip2800ofFIG.28. 
Similar to refresh controller2900, refresh controller2900′ includes timer2901, row counter2903, active bit2905, and adder2907. Additionally, refresh controller2900′ may include data storage2909. As shown inFIG.29B, data storage2909may comprise one or more registers or nonvolatile memories associated with segments of memory chip2800(e.g., lines, columns, or any other groupings of memory cells within memory chip2800), and states within the data storage may be configured to be changed (e.g., by a sense amplifier and/or other elements of refresh controller2900′, as described above) in response to the one or more segments being accessed. Accordingly, refresh controller2900′ may be configured to skip a refresh of the one or more segments based on the states within the data storage. For example, if a state associated with a segment is activated (e.g., set to 1 by being switched on, having a property altered in order to store a “1,” or the like), refresh controller2900′ may skip a refresh cycle for the associated segment and clear the state associated with that portion. The state may be stored with at least a one-bit register or any other memory structure configured to store at least one bit of data. In order to ensure segments of the memory chip are refreshed or accessed during each refresh cycle, refresh controller2900′ may reset or otherwise clear the states in order to trigger a refresh signal during the next refresh cycle. In some embodiments, after a segment is skipped, refresh controller2900′ may clear the associated state in order to ensure that the segment is refreshed on the next refresh cycle. In other embodiments, refresh controller2900′ may be configured to reset the states within the data storage after a threshold time interval. For example, refresh controller2900′ may clear states (e.g., setting them to 0) in the data storage whenever timer2901exceeds a threshold time since the associated states were set (e.g., set to 1 by being switched on, having a property altered in order to store a “1,” or the like). In some embodiments, refresh controller2900′ may use a threshold number of refresh cycles (e.g., one, two, or the like) or a threshold number of clock cycles (e.g., two, four, or the like) rather than a threshold time. In other embodiments, the state may comprise a timestamp of a most recent refresh or access of an associated segment such that, if an amount of time between the timestamp and a current time (e.g., from timer2901ofFIGS.29A and29B) exceeds a predetermined threshold (e.g., 8 ms, 16 ms, 32 ms, 64 ms, or the like), refresh controller2900′ may send an access command or a refresh signal to the associated segment and update the timestamp associated with that portion (e.g., using timer2901). Additionally or alternatively, refresh controller2900′ may be configured to skip a refresh operation relative to the one or more segments of the plurality of memory banks if the refresh time indicator indicates a last refresh time within a predetermined time threshold. In such embodiments, refresh controller2900′, after skipping a refresh operation relative to the one or more segments, may be configured to alter the stored refresh time indicator associated with the one or more segments such that, during a next operation cycle, the one or more segments will be refreshed. For example, as described above, refresh controller2900′ may use timer2901to update the stored refresh time indicator. 
Accordingly, the data storage may include a timestamp register configured to store a refresh time indicator indicative of a time at which the one or more segments of the plurality of memory banks were last refreshed. Moreover, refresh controller2900′ may use an output of the timer in clearing the access information stored in the data storage after a threshold time interval. In any of the embodiments described above, access to the one or more segments may include a write operation associated with the one or more segments. Additionally or alternatively, access to the one or more segments may include a read operation associated with the one or more segments. Moreover, as depicted inFIG.29B, refresh controller2900′ may comprise a row counter2903and an adder2907configured to assist in updating the data storage2909based, at least in part, on the states within the data storage. Data storage2909may comprise a bit table associated with the plurality of memory banks. For example, the bit table may comprise an array of switches (or switching elements such as transistors) or registers (e.g., SRAM or the like) configured to hold bits for associated segments. Additionally or alternatively, data storage2909may store timestamps associated with the plurality of memory banks. Moreover, refresh controller2900′ may include a refresh gate2911configured to control whether a refresh to the one or more segments occurs based on a corresponding value stored in the bit table. For example, refresh gate2911may comprise a logic gate (such as an “and” gate) configured to nullify a refresh signal from row counter2903if a corresponding state of data storage2909indicates that the associated segment was refreshed or accessed during one or more previous clock cycles. In other embodiments, refresh gate2911may comprise a microprocessor or other circuit configured to nullify a refresh signal from row counter2903if a corresponding timestamp from data storage2909indicates that the associated segment was refreshed or accessed within a predetermined threshold time value. FIG.30is an example flowchart of a process3000for partial refreshes in a memory chip (e.g., memory chip2800ofFIG.28). Process3000may be executed by a refresh controller consistent with the present disclosure, such as refresh controller2900ofFIG.29Aor refresh controller2900′ ofFIG.29B. At step3010, the refresh controller may access information indicative of access operations for one or more segments of a plurality of memory banks. For example, as explained above with respect toFIGS.29A and29B, the refresh controller may include a data storage associated with segments of memory chip2800(e.g., lines, columns, or any other groupings of memory cells within memory chip2800) and configured to store timestamps or other information indicative of a most recent access of the associated segments. At step3020, the refresh controller may generate refresh and/or access commands based, at least in part, on the accessed information. For example, as explained above with respect toFIGS.29A and29B, the refresh controller may skip a refresh operation relative to the one or more segments of the plurality of memory banks if the accessed information indicates a last refresh or access time within a predetermined time threshold and/or if the accessed information indicates a last refresh or access occurred during one or more previous clock cycles. 
Additionally or alternatively, the refresh controller may generate commands to refresh or access the associated segments based on whether the accessed information indicates a last refresh or access time that exceeds a predetermined threshold and/or if the accessed information indicates a last refresh or access did not occur during one or more previous clock cycles. At step3030, the refresh controller may alter the stored refresh time indicator associated with the one or more segments such that, during a next operation cycle, the one or more segments will be refreshed. For example, after skipping a refresh operation relative to the one or more segments, the refresh controller may alter the information indicative of access operations for the one or more segments such that, during a next clock cycle, the one or more segments will be refreshed. Accordingly, the refresh controller may clear (e.g., set to 0) states for the segments after skipping a refresh cycle. Additionally or alternatively, the refresh controller may set (e.g., set to 1) states for the segments that are refreshed and/or accessed during the current cycle. In embodiments where the information indicative of access operations for the one or more segments includes timestamps, the refresh controller may update any stored timestamps associated with segments that are refreshed and/or accessed during the current cycle. Method3000may further include additional steps. For example, in addition to or as an alternative to step3030, a sense amplifier may access the one or more segments and may change the information associated with the one or more segments. Additionally or alternatively, the sense amplifier may signal to the refresh controller when the access has occurred such that the refresh controller may update the information associated with the one or more segments. As explained above, a sense amplifier may comprise a plurality of transistors configured to sense low-power signals from a segment of the memory chip storing data in one or more memory cells and amplify the small voltage swing to higher voltage levels such that the data can be interpreted by logic, such as external CPUs or GPUs or integrated processor subunits as explained above. In such an example, whenever the sense amplifier accesses the one or more segments, it may set (e.g., set to 1) bits associated with the segments indicating that the associated segments were accessed in a previous cycle. In embodiments where the information indicative of access operations for the one or more segments includes timestamps, whenever the sense amplifier accesses the one or more segments, it may trigger a write of a timestamp from a timer of the refresh controller to the data storage to update any stored timestamps associated with the segments. FIG.31is an example flowchart of a process3100for determining refreshes for a memory chip (e.g., memory chip2800ofFIG.28). Process3100may be implemented within a compiler consistent with the present disclosure. As explained above, a “compiler” refers to any computer program that converts a higher-level language (e.g., a procedural language, such as C, FORTRAN, BASIC, or the like; an object-oriented language, such as Java, C++, Pascal, Python, or the like; etc.) to a lower-level language (e.g., assembly code, object code, machine code, or the like). The compiler may allow a human to program a series of instructions in a human-readable language, which is then converted to a machine-executable language. 
The compiler may comprise software instructions executed by one or more processors. At step3110, the one or more processors may receive higher-level computer code. For example, the higher-level computer code may be encoded in one or more files on a memory (e.g., a non-volatile memory such as a hard disk drive or the like, a volatile memory such as DRAM, or the like) or received over a network (e.g., the Internet or the like). Additionally or alternatively, the higher-level computer code may be received from a user (e.g., using an input device such as a keyboard). At step3120, the one or more processors may identify a plurality of memory segments distributed over a plurality of memory banks associated with a memory chip to be accessed by the higher-level computer code. For example, the one or more processors may access a data structure defining the plurality of memory banks and a corresponding structure of the memory chip. The one or more processors may access the data structure from a memory (e.g., a non-volatile memory such as a hard disk drive or the like, a volatile memory such as DRAM, or the like) or receive the data structure over a network (e.g., the Internet or the like). In such embodiments, the data structure may be included in one or more libraries accessible by the compiler to permit the compiler to generate instructions for the particular memory chip to be accessed. At step3130, the one or more processors may assess the higher-level computer code to identify a plurality of memory read commands to occur over a plurality of memory access cycles. For example, the one or more processors may identify each operation within the higher-level computer code requiring one or more read commands from memory and/or one or more write commands to memory. Such instructions may include variable initialization, variable re-assignment, logic operations on variables, input-output operations, or the like. At step3140, the one or more processors may cause a distribution of data, associated with the plurality of memory access commands, across each of the plurality of memory segments such that each of the plurality of memory segments is accessed during each of the plurality of memory access cycles. For example, the one or more processors may identify the memory segments from the data structure defining the structure of the memory chip and then assign variables from the higher-level code to various ones of the memory segments such that each memory segment is accessed (e.g., via a write or a read) at least once during each refresh cycle (which may comprise a particular number of clock cycles). In such an example, the one or more processors may access information indicative of how many clock cycles each line of higher-level code requires in order to assign variables from the lines of higher-level code such that each memory segment is accessed (e.g., via a write or a read) at least once during the particular number of clock cycles. In another example, the one or more processors may first generate machine code or other lower-level code from the higher-level code. The one or more processors may then assign variables from the lower-level code to various ones of the memory segments such that each memory segment is accessed (e.g., via a write or a read) at least once during each refresh cycle (which may comprise a particular number of clock cycles). In such an example, each line of lower-level code may require a single clock cycle. 
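As a non-limiting sketch of the assignment performed at step3140(the segment identifiers, variable names, and round-robin policy below are illustrative assumptions rather than a required implementation), a compiler might map variables to memory segments as follows; when there are at least as many accesses as segments in a refresh cycle, every segment is touched:

    # Illustrative sketch only; names and the round-robin policy are assumptions.
    def distribute(variables, segments):
        """Map each variable to a segment, cycling through all segments so that
        accesses are spread across the memory chip during a refresh cycle."""
        assignment = {}
        for index, name in enumerate(variables):
            assignment[name] = segments[index % len(segments)]
        return assignment

    segments = ["bank0.row0", "bank0.row1", "bank1.row0", "bank1.row1"]
    print(distribute(["x", "y", "z", "tmp", "acc"], segments))
    # {'x': 'bank0.row0', 'y': 'bank0.row1', 'z': 'bank1.row0',
    #  'tmp': 'bank1.row1', 'acc': 'bank0.row0'}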
In any of the examples given above, the one or more processors may further assign logic operations or other commands that use temporary output to various ones of the memory segments. Such temporary outputs may still result in read and/or write commands such that the assigned memory segment is still being accessed during that refresh cycle even though a named variable has not been assigned to that memory segment. Method3100may further include additional steps. For example, the one or more processors may, in embodiments where the variables are assigned prior to compiling, generate machine code or other lower-level code from the higher-level code. Moreover, the one or more processors may transmit the compiled code for execution by the memory chip and corresponding logic circuits. The logic circuits may comprise conventional circuits such as GPUs or CPUs or may comprise processing groups on the same substrate as the memory chip, e.g., as depicted inFIG.7A. Accordingly, as described above, the substrate may include a memory array that includes a plurality of banks, such as bank2801aand other banks shown inFIG.28. Furthermore, the substrate may include a processing array that may include a plurality of processor subunits (such as subunits730a,730b,730c,730d,730e,730f,730g, and730hshown inFIG.7A). FIG.32is another example flowchart of a process3200for determining refreshes for a memory chip (e.g., memory chip2800ofFIG.28). Process3200may be implemented within a compiler consistent with the present disclosure. Process3200may be executed by one or more processors executing software instructions comprising the compiler. Process3200may be implemented separately from or in combination with process3100ofFIG.31. At step3210, similar to step3110, the one or more processors may receive higher-level computer code. At step3220, similar to step3120, the one or more processors may identify a plurality of memory segments distributed over a plurality of memory banks associated with a memory chip to be accessed by the higher-level computer code. At step3230, the one or more processors may assess the higher-level computer code to identify a plurality of memory read commands each implicating one or more of the plurality of memory segments. For example, the one or more processors may identify each operation within the higher-level computer code requiring one or more read commands from memory and/or one or more write commands to memory. Such instructions may include variable initialization, variable re-assignment, logic operations on variables, input-output operations, or the like. In some embodiments, the one or more processors may simulate an execution of the higher-level code using logic circuits and the plurality of memory segments. For example, the simulation may comprise a line-by-line step-through of the higher-level code similar to that of a debugger or other instruction set simulator (ISS). The simulation may further maintain internal variables which represent the addresses of the plurality of memory segments, similar to how a debugger may maintain internal variables which represent registers of a processor. At step3240, the one or more processors may, based on analysis of the memory access commands and for each memory segment among the plurality of memory segments, track an amount of time that would accrue from a last access to the memory segment. 
For example, using the simulation described above, the one or more processors may determine lengths of time between each access (e.g., a read or a write) to one or more addresses within each of the plurality of memory segments. The lengths of time may be measured in absolute time, clock cycles, or refresh cycles (e.g., determined by a known refresh rate of the memory chip). At step3250, in response to a determination that an amount of time since a last access for any particular memory segment would exceed a predetermined threshold, the one or more processors may introduce into the higher-level computer code at least one of a memory refresh command or a memory access command configured to cause an access to the particular memory segment. For example, the one or more processors may include a refresh command for execution by a refresh controller (e.g., refresh controller2900ofFIG.29Aor refresh controller2900′ ofFIG.29B). In embodiments where the logic circuits are not embedded on the same substrate as the memory chip, the one or more processors may generate the refresh commands for sending to the memory chip separate from the lower-level code for sending to the logic circuits. Additionally or alternatively, the one or more processors may include an access command for execution by a memory controller (which may be separate from the refresh controller or incorporated into the same). The access command may comprise a dummy command configured to trigger a read operation on the memory segment but without having the logic circuits perform any further operation on the read or written variable from the memory segment. In some embodiments, the compiler may include a combination of steps from process3100and from process3200. For example, the compiler may assign variables according to step3140and then run the simulation described above to add in any additional memory refresh commands or memory access commands according to step3250. This combination may allow the compiler to distribute the variables across as many memory segments as possible and to generate refresh or access commands for any memory segments that cannot be accessed within the predetermined threshold amount of time. In another combinatory example, the compiler may simulate the code according to step3230and assign variables according to step3140based on any memory segments that the simulation indicates will not be accessed within the predetermined threshold amount of time. In some embodiments, this combination may further include step3250to allow the compiler to generate refresh or access commands for any memory segments that cannot be accessed within the predetermined threshold amount of time, even after assignments according to step3140are complete. Refresh controllers of the present disclosure may allow software executed by logic circuits (whether conventional logic circuits such as CPUs and GPUs or processing groups on the same substrate as the memory chip, e.g., as depicted inFIG.7A) to disable an automatic refresh executed by the refresh controller and control the refresh via the executed software instead. Accordingly, some embodiments of the present disclosure may provide software with a known access pattern to a memory chip (e.g., if the compiler has access to a data structure defining a plurality of memory banks and a corresponding structure of the memory chip). 
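For illustration of the tracking and insertion described in steps3240and3250, the following sketch walks a simulated access trace, tracks the time since each segment was last accessed, and records a refresh command whenever a segment would otherwise exceed the threshold; the trace format, threshold value, and command label are hypothetical and not part of the disclosed compiler:

    # Illustrative sketch; timing units and names are assumptions.
    THRESHOLD = 64            # e.g., clock cycles allowed between accesses

    def insert_refreshes(trace, segments):
        """trace: list of (cycle, segment) access events, sorted by cycle.
        Returns a list of (cycle, command, segment) refresh commands to add."""
        last_access = {seg: 0 for seg in segments}
        inserted = []
        end_cycle = trace[-1][0] if trace else 0
        events = iter(trace)
        event = next(events, None)
        for cycle in range(end_cycle + 1):
            while event and event[0] == cycle:     # consume accesses at this cycle
                last_access[event[1]] = cycle
                event = next(events, None)
            for seg in segments:
                if cycle - last_access[seg] >= THRESHOLD:
                    inserted.append((cycle, "REFRESH", seg))
                    last_access[seg] = cycle       # the inserted command counts as an access
        return inserted

    commands = insert_refreshes([(0, "seg0"), (10, "seg0"), (200, "seg0")], ["seg0", "seg1"])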
In embodiments where the software's access pattern to the memory chip is known, a post-compiling optimizer may disable automatic refresh and manually set refresh controls only for segments of the memory chip not accessed within threshold amounts of time. Thus, similar to step3250described above but after compilation, the post-compiling optimizer may generate refresh commands to ensure each memory segment is accessed or refreshed within the predetermined threshold amount of time. Another example of reducing refresh cycles may include using predefined patterns of access to the memory chip. For example, if software executed by the logic circuits can control its access pattern for the memory chip, some embodiments may create access patterns for refresh beyond conventional linear line refreshes. For example, if a controller determines that software executed by the logic circuits regularly accesses every second row of memory, then a refresh controller of the present disclosure may use an access pattern that does not refresh every second line in order to speed up the memory chip and reduce power usage. An example of such a refresh controller is shown inFIG.33. FIG.33depicts an example refresh controller3300configured by stored patterns consistent with the present disclosure. Refresh controller3300may be incorporated in a memory chip of the present disclosure, e.g., having a plurality of memory banks and a plurality of memory segments included in each of the plurality of memory banks, such as memory chip2800ofFIG.28. Refresh controller3300includes a timer3301(similar to timer2901ofFIGS.29A and29B), a row counter3303(similar to row counter2903ofFIGS.29A and29B), and an adder3305(similar to adder2907ofFIGS.29A and29B). Moreover, refresh controller3300includes a data storage3307. Unlike data storage2909ofFIG.29B, data storage3307may store at least one memory refresh pattern to be implemented in refreshing the plurality of memory segments included in each of the plurality of memory banks. For example, as depicted inFIG.33, data storage3307may include Li (e.g., L1, L2, L3, and L4 in the example ofFIG.33) and Hi (e.g., H1, H2, H3, and H4 in the example ofFIG.33) that define segments in the memory banks by row and/or column. Moreover, each segment may be associated with an Inci variable (e.g., Inc1, Inc2, Inc3, and Inc4 in the example ofFIG.33) which defines how the rows associated with the segment are incremented (e.g., whether each row is accessed or refreshed, whether every other row is accessed or refreshed, or the like). Thus, as shown inFIG.33, the refresh pattern may comprise a table including a plurality of memory segment identifiers assigned by the software to identify ranges of the plurality of memory segments in a particular memory bank that are to be refreshed during a refresh cycle and ranges of the plurality of memory segments in the particular memory bank that are not to be refreshed during the refresh cycle. Thus, data storage3307may define a refresh pattern which the software executed by logic circuits (whether conventional logic circuits such as CPUs and GPUs or processing groups on the same substrate as the memory chip, e.g., as depicted inFIG.7A) may select for use. The memory refresh pattern may be configurable using software to identify which of the plurality of memory segments in a particular memory bank are to be refreshed during a refresh cycle and which of the plurality of memory segments in the particular memory bank are not to be refreshed during the refresh cycle. 
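By way of a hedged, non-limiting sketch of such a stored pattern (the table layout, field names, and example values are illustrative assumptions rather than the contents of data storage3307), each pattern entry may define a row range (Li..Hi) and an increment Inci determining which rows inside the range receive a refresh during a cycle:

    # Illustrative sketch; entries and values are assumptions.
    pattern = [
        {"L": 0,  "H": 15, "Inc": 1},   # refresh every row in rows 0-15
        {"L": 16, "H": 31, "Inc": 2},   # refresh every second row in rows 16-31
    ]

    def rows_to_refresh(pattern):
        """Expand the pattern table into the list of rows refreshed this cycle."""
        rows = []
        for entry in pattern:
            rows.extend(range(entry["L"], entry["H"] + 1, entry["Inc"]))
        return rows

    print(rows_to_refresh(pattern))    # rows 0-15, then 16, 18, ..., 30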
Thus, refresh controller3300may refresh some or all rows within the defined segments that are not accessed during a current cycle, according to Inci. Refresh controller3300may skip other rows of the defined segments that are set for access during the current cycle. In embodiments where data storage3307of refresh controller3300includes a plurality of memory refresh patterns, each may represent a different refresh pattern for refreshing the plurality of memory segments included in each of the plurality of memory banks. The memory refresh patterns may be selectable for use on the plurality of memory segments. Accordingly, refresh controller3300may be configured to allow selection of which of the plurality of memory refresh patterns to implement during a particular refresh cycle. For example, the software executed by logic circuits (whether conventional logic circuits such as CPUs and GPUs or processing groups on the same substrate as the memory chip, e.g., as depicted inFIG.7A) may select different memory refresh patterns for use during one or more different refresh cycles. Alternatively, the software executed by logic circuits may select one memory refresh pattern for use throughout some or all of the different refresh cycles. The memory refresh patterns may be encoded using one or more variables stored in data storage3307. For example, in embodiments where the plurality of memory segments are arranged in rows, each memory segment identifier may be configured to identify a particular location within a row of memory where a memory refresh should either begin or end. For example, in addition to Li and Hi, one or more additional variables may define which portions of the rows defined by Li and Hi are within the segment. FIG.34is an example flowchart of a process3400for determining refreshes for a memory chip (e.g., memory chip2800ofFIG.28). Process3400may be implemented by software within a refresh controller (e.g., refresh controller3300ofFIG.33) consistent with the present disclosure. At step3410, the refresh controller may store at least one memory refresh pattern to be implemented in refreshing a plurality of memory segments included in each of a plurality of memory banks. For example, as explained above with respect toFIG.33, the refresh pattern may comprise a table including a plurality of memory segment identifiers assigned by the software to identify ranges of the plurality of memory segments in a particular memory bank that are to be refreshed during a refresh cycle and ranges of the plurality of memory segments in the particular memory bank that are not to be refreshed during the refresh cycle. In some embodiments, the at least one refresh pattern may be encoded onto the refresh controller (e.g., onto a read-only memory associated with or at least accessible by the refresh controller) during manufacture. Accordingly, the refresh controller may access the at least one memory refresh pattern but not store the same. At steps3420and3430, the refresh controller may use software to identify which of the plurality of memory segments in a particular memory bank are to be refreshed during a refresh cycle and which of the plurality of memory segments in the particular memory bank are not to be refreshed during the refresh cycle. 
For example, as explained above with respect toFIG.33, software executed by logic circuits (whether conventional logic circuits such as CPUs and GPUs or processing groups on the same substrate as the memory chip, e.g., as depicted inFIG.7A) may select the at least one memory refresh pattern. Moreover, the refresh controller may access the selected at least one memory refresh pattern to generate corresponding refresh signals during each refresh cycle. The refresh controller may refresh some or all portions within the defined segments that are not accessed during a current cycle according to the at least one memory refresh pattern and may skip other portions of the defined segments that are set for access during the current cycle. At step3440, the refresh controller may generate corresponding refresh commands. For example, as depicted inFIG.33, an adder3305may comprise a logic circuit configured to nullify refresh signals for particular segments that are not to be refreshed according to the at least one memory refresh pattern in data storage3307. Additionally or alternatively, a microprocessor (not shown inFIG.33) may generate particular refresh signals based on which segments are to be refreshed according to the at least one memory refresh pattern in data storage3307. Method3400may further include additional steps. For example, in embodiments where the at least one memory refresh pattern is configured to change (e.g., moving from L1, H1, and Inc1 to L2, H2, and Inc2 as shown inFIG.33) every one, two, or other number of refresh cycles, the refresh controller may access a different portion of the data storage for a next determination of refresh signals according to steps3430and3440. Similarly, if the software executed by logic circuits (whether conventional logic circuits such as CPUs and GPUs or processing groups on the same substrate as the memory chip, e.g., as depicted inFIG.7A) selects a new memory refresh pattern from the data storage for use in one or more future refresh cycles, the refresh controller may access a different portion of the data storage for a next determination of refresh signals according to steps3430and3440.
Selectable Sized Memory Chips
When designing a memory chip and aiming for a certain capacity of memory, changes in memory capacity to a larger size or a smaller size may require a redesign of the product and a redesign of a full mask set. Often, the product design is done in parallel with market research and, in some cases, the product design is completed before the market research is available. Thus, there is the potential for disconnects between product designs and actual demands of the market. The present disclosure proposes a way to flexibly provide memory chips with memory capacities commensurate with market demands. The design method may include designing dies on a wafer along with appropriate interconnect circuitry such that memory chips containing one or more dies can selectively be cut from the wafer, providing an opportunity to produce memory chips of variable memory capacities from a single wafer. The present disclosure relates to systems and methods for fabricating memory chips by cutting them from a wafer. The method may be used for producing selectable sized memory chips from the wafer. An example embodiment of a wafer3501containing dies3503is shown inFIG.35A. 
Wafer3501may be formed from a semiconductor material (e.g., silicon (Si), silicon-germanium (SiGe), silicon on insulator (SOI), gallium nitride (GaN), aluminum nitride (AlN), aluminum gallium nitride (AlGaN), boron nitride (BN), gallium arsenide (GaAs), gallium aluminum arsenide (AlGaAs), indium nitride (InN) combination of thereof, and the like). Dies3503may include any suitable circuit elements (e.g., transistors, capacitors, resistors, and/or the like) which may include any suitable semiconductor, dielectric or metallic components. Dies3503may be formed from a semiconductor material which may be the same or different as the material of wafer3501. In addition to dies3503, wafer3501may include other structures and/or circuitry. In some embodiments, one or more coupling circuits may be provided and may couple together one or more of the dies. In an example embodiment, such a coupling circuit may include a bus shared by two or more dies3503. Additionally, the coupling circuit may include one or more logic circuits designed to control circuitry associated with dies3503and/or to direct information to/from dies3503. In some cases, the coupling circuit may include a memory access management logic. Such logic may translate logical memory addresses into physical addresses associated with dies3503. It should be noted that the term fabrication, as used herein, may refer collectively to any of the steps for building the disclosed wafers, dies, and/or chips. For example, fabrication may refer to the simultaneous laying out and forming of the various dies (and any other circuitry) included on the wafer. Fabrication may also refer to the cutting of selectable sized memory chips from the wafer to include one die, in some cases, or multiple dies in other cases. Of course, the term fabrication is not intended to be limited to these examples but may include other aspects associated with generation of the disclosed memory chips and any or all of the intermediate structures. Die3503or a group of dies may be used for fabrication of a memory chip. The memory chip may include a distributed processor, as described in other sections of the present disclosure. As shown inFIG.35B, die3503may include a substrate3507and a memory array disposed on the substrate. The memory array may include one or more memory units, such as, for example, memory banks3511A-3511D designed to store data. In various embodiments, memory banks may include semiconductor-based circuit elements such as transistors, capacitors, and the like. In an example embodiment, a memory bank may include multiple rows and columns of storage units. In some cases, such a memory bank may have a capacity greater than one megabyte. The memory banks may include dynamic or static access memory. Die3503may further include a processing array disposed on the substrate, the processing array including a plurality of processor subunits3515A-3515D, as shown inFIG.35B. As described above, each memory bank may include a dedicated processor subunit connected by a dedicated bus. For example, processor subunit3515A is associated with memory bank3511A via bus or connection3512. It should be understood that various connections between memory banks3511A-3511D and processor subunits3515A-3515D are possible, and only some illustrative connections are shown inFIG.35B. 
In an example embodiment, processor subunit may perform read/write operations for an associated memory bank and may further perform refreshing operations or any other suitable operations relative to memory stored in the various memory banks. As noted, die3503may include a first group of buses configured to connect processor subunits with their corresponding memory banks. An example bus may include a set of wires or conductors that connect electrical components and allow transfers of data and addresses to and from each memory bank and its associated processor subunit. In an example embodiment, connection3512may serve as a dedicated bus for connecting processor subunit3515A to memory bank3511A. Die3503may include a group of such buses, each connecting a processor subunit to a corresponding, dedicated memory bank. Additionally, die3503may include another group of buses, each connecting processor subunits (e.g., subunits3515A-3515D) to each other. For example, such buses may include connections3516A-3516D. In various embodiments data for memory banks3511A-3511D may be delivered via input-output bus3530. In an example embodiment, input-output bus3530may carry data-related information, and command related information for controlling the operation of memory units of die3503. Data information may include data for storing in memory banks, data read from memory banks, processing results from one or more of the processor subunits based on operations performed relative to data stored in corresponding memory banks, command related information, various codes, etc. In various cases, data and commands transmitted by input-output bus3530may be controlled by input-output (IO) controller3521. In an example embodiment, IO controller3521may control the flow of data from bus3530to and from processor subunits3515A-3515D. IO controller3521may determine from which one of processor subunits3515A-3515D information is retrieved. In various embodiments, IO controller3521may include a fuse3554configured to deactivate IO controller3521. Fuse3554may be used if multiple dies are combined together to form a larger memory chip (also referred to as a multi-die memory chip, as an alternative to a single die memory chip that contains only one die). The multi-die memory chip may then use one of the IO controllers of one of the die units forming the multi-die memory chip while disabling other IO controllers related to the other die units by using fuses corresponding to the other IO controllers. As noted, each memory chip or predecessor die or group of dies may include distributed processors associated with corresponding memory banks. These distributed processors, in some embodiments, may be arranged in a processing array disposed on the same substrate as a plurality of memory banks. Additionally, the processing array may include one or more logic portions each including an address generator (also referred to as address generator unit (AGU)). In some cases, the address generator may be part of at least one processor subunit. The address generator may generate memory addresses required for fetching data from the one or more memory banks associated with the memory chip. Address-generation calculations may involve integer arithmetic operations, such as addition, subtraction, modulo operations, or bit shifts. The address generator may be configured to operate on multiple operands at a time. Furthermore, multiple address generators may perform more than one address-calculation operation simultaneously. 
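As an illustration of the kind of integer arithmetic such an address generator might perform, the short Python sketch below computes a row and column address from a base address, an element index, and a stride using addition, multiplication, modulo, and bit shifts; the column width, element size, and example operands are assumptions for illustration and are not taken from the disclosure.

COL_BITS = 10                       # assumed: 1024 columns per row
ELEMENT_SIZE = 4                    # assumed element size in memory words

def generate_address(base, index, stride=1):
    # Return (row, column) for element `index` of an array starting at `base`.
    linear = base + index * stride * ELEMENT_SIZE   # addition/multiplication
    row = linear >> COL_BITS                        # bit shift selects the row
    col = linear % (1 << COL_BITS)                  # modulo selects the column
    return row, col

# Two address generators could evaluate such expressions for different operands
# in the same cycle, e.g., one for a source buffer and one for a destination.
print(generate_address(base=0x2000, index=7))
print(generate_address(base=0x8000, index=7, stride=2))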
In various embodiments, an address generator may be associated with a corresponding memory bank. The address generators may be connected with their corresponding memory banks by means of corresponding bus lines. In various embodiments, a selectable sized memory chip may be formed from wafer3501by selectively cutting different regions of the wafer. As noted, the wafer may include a group of dies3503, the group including any group of two or more dies (e.g., 2, 3, 4, 5, 10, or more dies) included on the wafer. As will be discussed further below, in some cases, a single memory chip may be formed by cutting a portion of the wafer that includes just one of the dies of the group of dies. In such cases, the resulting memory chip would include memory units associated with one die. In other cases, however, selectable sized memory chips may be formed to include more than one die. Such memory chips may be formed by cutting regions of the wafer that include two or more dies of a group of dies included on the wafer. In such cases, the dies together with a coupling circuit that couples together the dies provide a multi-die memory chip. Some additional circuit elements may also be wired on board between chips, such as, for example, clock elements, data buses or any suitable logic circuits. In some cases, at least one controller associated with the group of dies may be configured to control the operation of the group of dies as a single memory chip (e.g., a multiple memory unit memory chip). The controller may include one or more circuits that manage the flow of data going to and from the memory chip. A memory controller can be a part of the memory chip, or it can be a part of a separate chip not directly related to the memory chip. In an example embodiment, the controller may be configured to facilitate read and write requests or other commands associated with the distributed processors of the memory chip, and may be configured to control any other suitable aspects of the memory chip (e.g., refreshing the memory chip, interacting with the distributed processors, etc.). In some cases, the controller may be part of die3503, and in other cases the controller may be laid out adjacent to die3503. In various embodiments, the controller may also include at least one memory controller of at least one of the memory units included on the memory chip. In some cases, a protocol used for accessing information on a memory chip may be agnostic to duplicate logic and memory units (e.g., memory banks) that may be present on the memory chip. The protocol may be configured to have different IDs or address ranges for adequate access of data on the memory chip. An example of a chip with such protocol may include a chip with a Joint Electron Device Engineering Council (JEDEC) double data rate (DDR) controller where different memory banks may have different address ranges, a serial peripheral interface (SPI) connection, where different memory units (e.g., memory banks) have different identifications (IDs), and the like. In various embodiments, multiple regions may be cut from the wafer, with various regions including one or more dies. In some cases, each separate region may be used to build a multi-die memory chip. In other cases, each region to be cut from the wafer may include a single die to provide a single die memory chip. In some cases, two or more of the regions may have the same shape and have the same number of dies coupled to the coupling circuit in the same way. 
Alternatively, in some example embodiments, a first group of regions may be used to form a first type of the memory chip, and a second group of regions may be used to form a second type of memory chip. For example, wafer3501, as shown inFIG.35Cmay include a region3505that may include a single die, and a second region3504may include a group of two dies. When region3505is cut from the wafer3501, a single die memory chip will be provided. When region3504is cut from the wafer3501, a multi-die memory chip will be provided. Groups shown inFIG.35Care only illustrative, and various other regions and groups of dies may be cut out from wafer3501. In various embodiments, dies may be formed on wafer3501, such that they are arranged along one or more rows of the wafer, as shown, for example, inFIG.35C. The dies may share input-output bus3530corresponding to one or more rows. In an example embodiment, group of dies may be cut out from wafer3501using various cutting shapes where, when cutting out a group of dies that may be used to form a memory chip, at least a portion of the shared input-output bus3530may be excluded (e.g., only a portion of input-output bus3530may be included as a part of the memory chip formed including a group of dies). As previously discussed, when multiple dies (e.g., dies3506A, and3506B, as shown inFIG.35C) are used to form a memory chip3517, one IO controller corresponding to one of the dies may be enabled and configured to control data flow to all the processor subunits of dies3506A and3506B. For example,FIG.35Dshows memory dies3506A and3506B combined to form memory chip3517that includes memory banks3511A-3511H, processor subunits3515A-3515H, IO controllers3521A, and3521B and fuses3554A and3554B. It should be noted that memory chip3517corresponds to a region3517of wafer3501prior to removal of the memory chip from the wafer. In other words, as used here and elsewhere in the disclosure, regions3504,3505,3517etc. of wafer3501once cut from wafer3501will result in memory chips3504,3505,3517, etc. Additionally, fuses herein are also referred to as disabling elements. In an example embodiment, fuse3554B may be used to deactivate IO controller3521B, and IO controller3521A may be used to control data flow to all memory banks3511A-3511H by communicating data to processor subunits3515A-3515H. In an example embodiment, IO controller3521A may be connected to various processor subunits using any suitable connection. In some embodiments, as further described below, processor subunits3515A-3515H may be interconnected, and IO controller3521A may be configured to control data flow to processor subunits3515A-3515H that form processing logic of memory chip3517. In an example embodiment, IO controllers, such as controller3521A and3521B and corresponding fuses3554A and3554B may be formed on wafer3501together with the formation of memory banks3511A-3511H and processor subunits3515A-3515H. In various embodiments, when forming memory chip3517, one of the fuses (e.g., fuse3554B) may be activated such that dies3506A and3506B are configured to form memory chip3517that functions as a single chip and is controlled by a single input-output controller (e.g., controller3521A). In an example embodiment, activating a fuse may include applying a current to trigger the fuse. In various embodiment, when more than one die is used for forming a memory chip, all but one IO controller may be deactivated via corresponding fuses. 
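A minimal Python sketch of this fuse mechanism is given below, under the assumption that each die exposes one IO controller with a one-time disabling element; when dies are combined into a multi-die memory chip, the fuses of all but one controller are activated and the remaining controller serves every processor subunit. The class and function names are hypothetical.

class IOController:
    def __init__(self, name):
        self.name = name
        self.fuse_blown = False          # disabling element, initially intact

    def blow_fuse(self):
        self.fuse_blown = True           # one-time deactivation of this controller

def build_multi_die_chip(die_names, keep="die0"):
    # Keep one active IO controller; deactivate the rest via their fuses.
    controllers = {name: IOController(name) for name in die_names}
    for name, ctrl in controllers.items():
        if name != keep:
            ctrl.blow_fuse()
    active = [c.name for c in controllers.values() if not c.fuse_blown]
    return controllers, active

controllers, active = build_multi_die_chip(["die0", "die1"], keep="die0")
print(active)   # ['die0'] -- a single controller drives all memory banks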
In various embodiments, as shown inFIG.35C, multiple dies are formed on wafer3501together with a set of input-output buses and/or control buses. An example input-output bus3530is shown inFIG.35C. In an example embodiment, one of the input-output buses (e.g., input-output bus3530) may be connected to multiple dies.FIG.35Cshows an example embodiment of input-output bus3530passing next to dies3506A and3506B. Configuration of dies3506A and3506B and input-output bus3530, as shown inFIG.35C, is only illustrative, and various other configurations may be used. For example,FIG.35Eillustrates dies3540formed on wafer3501and arranged in a hexagonal formation. A memory chip3532that includes four dies3540may be cut out from wafer3501. In an example embodiment, memory chip3532may include a portion of input-output bus3530connected to the four dies by suitable bus lines (e.g., line3533, as shown inFIG.35E). In order to route information to an appropriate memory unit of memory chip3532, memory chip3532may include input/output controllers3542A and3542B placed at branch points for input-output bus3530. Controllers3542A and3542B may receive command data via input-output bus3530and select a branch of bus3530for transmitting information to an appropriate memory unit. For example, if command data includes read/write information from/to memory units associated with die3546, controller3542A may receive a command request and transmit data to a branch3531A of bus3530, as shown inFIG.35E, while controller3542B may receive a command request and transmit data to a branch3531B.FIG.35Eindicates various cuts of different regions that may be made, where cut lines are represented by dashed lines. In an example embodiment, a group of dies and interconnecting circuitry may be designed for inclusion in a memory chip3506as shown inFIG.36A. Such an embodiment may include processor subunits (for in-memory processing) that may be configured to communicate between one another. For example, each die to be included in memory chip3506may include various memory units such as memory banks3511A-3511D, processor subunits3515A-3515D, and IO controllers3521and3522. IO controllers3521and3522may be connected in parallel to input-output bus3530. IO controller3521may have a fuse3554, and IO controller3522may have a fuse3555. In an example embodiment, processor subunits3515A-3515D may be connected by means of, for example, bus3613. In some cases, one of the IO controllers may be disabled using a corresponding fuse. For instance, IO controller3522may be disabled using fuse3555, and IO controller3521may control data flow into memory banks3511A-3511D via processor subunits3515A-3515D connected to each other via bus3613. Configuration of memory units, as shown inFIG.36A, is only illustrative, and various other configurations may be formed by cutting different regions of wafer3501. For example,FIG.36Bshows a configuration with three domains3601-3603containing memory units and connected to input-output bus3530. In an example embodiment, domains3601-3603are connected to input-output bus3530using IO control modules3521-3523that may be disabled by corresponding fuses3554-3556. Another example embodiment of arranging domains containing memory units is shown inFIG.36Cwhere three domains3601,3602and3603are connected to input-output bus3530using bus lines3611,3612and3613.FIG.36Dshows another example embodiment of memory chips3506A-3506D connected to input-output buses3530A and3530B via IO controllers3521-3524.
In an example embodiment, IO controllers may be deactivated using corresponding fuse elements3554-3557, as shown inFIG.36D. FIG.37shows various groups of dies3503, such as group3713and group3715that may include one or more dies3503. In an example embodiment, in addition to forming dies3503on wafer3501, wafer3501may also contain logical circuits3711referred to as glue logic3711. Glue logic3711may take some space on wafer3501resulting in the fabrication of fewer dies per wafer3501as compared to the number of dies that could have been fabricated without the presence of glue logic3711. However, the presence of glue logic3711may allow multiple dies to be configured to function together as a single memory chip. The glue logic, for example, may connect multiple dies, without having to make configuration changes and without having to designate area inside any of the dies themselves for circuitry that is only used for connecting dies together. In various embodiments, glue logic3711provides an interface with other memory controllers, such that a multi-die memory chip functions as a single memory chip. Glue logic3711may be cut together with a group of dies as shown, for example, by group3713. Alternatively, if only one die is required for the memory chip, as, for example, for group3715, glue logic may not be cut. For example, the glue logic may be selectively eliminated where not needed to enable cooperation between different dies. InFIG.37, various cuts of different regions may be made as shown, for example, by the dashed line regions. In various embodiments, as shown inFIG.37, one glue logic element3711may be laid out on the wafer for every two dies3506. In some cases, one glue logic element3711may be used for any suitable number of dies3506forming a group of dies. Glue logic3711may be configured to be connected to all the dies from the group of dies. In various embodiments, dies connected to glue logic3711may be configured to form a multi-die memory chip and may be configured to form separate single die memory chips when they are not connected to glue logic3711. In various embodiments, dies connected to glue logic3711and designed to function together may be cut out from wafer3501as a group and may include glue logic3711as indicated, for example, by group3713. The dies not connected to glue logic3711may be cut out from wafer3501without including glue logic3711as indicated, for example, by group3715to form a single die memory chip. In some embodiments, during manufacturing of multi-die memory chips from wafer3501, one or more cutting shapes (e.g., shapes forming groups3713,3715) may be determined for creating the desired set of the multi-die memory chips. In some cases, as shown by group3715, the cutting shapes may exclude glue logic3711. In various embodiments, glue logic3711may be a controller for controlling multiple memory units of a multi-die memory chip. In some cases, glue logic3711may include parameters that may be modified by various other controllers. For example, a coupling circuit for multi-die memory chips may include a circuit for configuring parameters of glue logic3711or parameters of memory controllers (e.g., processor subunits3515A-3515D, as shown, for example, inFIG.35B). Glue logic3711may be configured to do a variety of tasks. For example, logic3711may be configured to determine which die may need to be addressed. In some cases, logic3711may be used to synchronize multiple memory units.
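These two tasks, deciding which die a request targets and keeping the dies synchronized, can be pictured with the following Python sketch; the two-die layout, the address bit used for die selection, and the broadcast interface are assumptions made only for illustration.

DIES = {0: "die0", 1: "die1"}
DIE_SELECT_SHIFT = 27          # assumed: one high-order address bit selects the die

def glue_route(address):
    # Pick the die from the high-order bit and return the local address within it.
    die_index = (address >> DIE_SELECT_SHIFT) & 0x1
    local_address = address & ((1 << DIE_SELECT_SHIFT) - 1)
    return DIES[die_index], local_address

def glue_broadcast(command):
    # Send the same command (e.g., a refresh) to every die under this glue logic.
    return {die: command for die in DIES.values()}

print(glue_route(0x0A000000))    # -> ('die1', local offset within that die)
print(glue_broadcast("refresh"))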
In various embodiments, logic3711may be configured to control various memory units such that the memory units operate as a single chip. In some cases, amplifiers between an input-output bus (e.g., bus3530, as shown inFIG.35C) and processor subunits3515A-3515D may be added to amplify a data signal from bus3530. In various embodiments, cutting complex shapes from wafer3501may be technologically difficult/expensive, and a simpler cutting approach may be adopted, provided that dies are aligned on wafer3501. For example,FIG.38Ashows dies3506aligned to form a rectangular grid. In an example embodiment, vertical cuts3803and horizontal cuts3801across the entire wafer3501may be made to separate and cut out groups of dies. In an example embodiment, vertical and horizontal cuts3803and3801can lead to a group containing a selected number of dies. For instance, cuts3803and3801can result in regions containing a single die (e.g., region3811A), regions containing two dies (e.g., region3811B), and regions containing four dies (e.g., region3811C). The regions formed by cuts3801and3803are only illustrative, and any other suitable regions may be formed. In various embodiments, depending on the alignment of dies, various cuts may be made. For instance, if dies are arranged in a triangular grid, as shown inFIG.38B, cut lines such as lines3802,3804, and3806may be used to make multi-die memory chips. For example, some regions may include six dies, five dies, four dies, three dies, two dies, one die, or any other suitable number of dies. FIG.38Cshows bus lines3530arranged in a triangular grid, with dies3503aligned in the centers of triangles formed by intersecting bus lines3530. Dies3503may be connected via bus lines3820to all the neighboring bus lines. By cutting a region containing two or more adjacent dies (e.g., region3822, as shown inFIG.38C), at least one bus line (e.g., line3824) remains within region3822, and bus line3824may be used to supply data and commands to a multi-die memory chip formed using region3822. FIG.39shows that various connections may be formed between processor subunits3515A-3515P to allow a group of memory units to act as a single memory chip. For instance, a group3901of various memory units may include a connection3905between processor subunit3515B and subunit3515E. Connection3905may be used as a bus line for transmitting data and commands from subunit3515B to subunit3515E, which can be used to control a respective memory bank3511E. In various embodiments, connections between processor subunits may be implemented during the formation of dies on wafer3501. In some cases, additional connections may be fabricated during a packaging stage of a memory chip formed from several dies. As shown inFIG.39, processor subunits3515A-3515P may be connected to each other using various buses (e.g., connection3905). Connection3905may be free of timing hardware logic components such that data transfers between processor subunits and across connection3905may not be controlled by timing hardware logic components. In various embodiments, buses connecting processor subunits3515A-3515P may be laid out on wafer3501prior to fabricating various circuits on wafer3501. In various embodiments, processor subunits (e.g., subunits3515A-3515P) may be interconnected. For instance, subunits3515A-3515P may be connected by suitable buses (e.g., connections3905). Connections3905may connect any one of subunits3515A-3515P with any other of the subunits3515A-3515P.
In an example embodiment, connected subunits may be on a same die (e.g., subunits3515A and3515B), and in other cases, the connected subunits may be on different dies (e.g., subunits3515B and3515E). Connections3905may include dedicated buses for connecting subunits and may be configured to efficiently transmit data between subunits3515A-3515P. Various aspects of the present disclosure relate to methods for producing selectable sized memory chips from a wafer. In an example embodiment, selectable sized memory chips may be formed from one or more dies. The dies, as noted before, may be arranged along one or more rows, as shown, for example, inFIG.35C. In some cases, at least one shared input-output bus corresponding to one or more rows may be laid out on wafer3501. For example, bus3530may be laid out, as shown inFIG.35C. In various embodiments, bus3530may be electrically connected to memory units of at least two of the dies, and the connected dies may be used to form a multi-die memory chip. In an example embodiment, one or more controllers (e.g., input-output controllers3521and3522, as shown inFIG.35B) may be configured to control the memory units of at least two dies that are used to form a multi-die memory chip. In various embodiments, the dies with memory units connected to bus3530may be cut off the wafer with at least one corresponding portion of the shared input-output bus (e.g., bus3530, as shown inFIG.35B) transmitting information to at least one controller (e.g., controllers3521,3522) to configure the controller to control the memory units of the connected dies to function together as a single chip. In some cases, the memory units located on wafer3501may be tested prior to manufacturing memory chips by cutting regions of wafer3501. The testing may be done using at least one shared input-output bus (e.g., bus3530, as shown inFIG.35C). The memory chip may be formed from a group of dies containing memory units when the memory units pass the testing. The memory units that do not pass the testing may be discarded and not used for manufacturing of a memory chip. FIG.40shows an example process4000of building memory chips from a group of dies. At step4011of process4000, the dies may be laid out on semiconductor wafer3501. At step4015, the dies may be fabricated on wafer3501using any suitable approach. For example, dies may be fabricated by etching wafer3501, depositing various dielectric, metallic or semiconductor layers, and further etching of the deposited layers, etc. For example, multiple layers may be deposited and etched. In various embodiments, layers may be n-type doped or p-type doped using any suitable doping elements. For instance, semiconductor layers may be n-type doped with phosphorus and may be p-type doped with boron. Dies3503, as shown inFIG.35A, may be separated from each other by a space that may be used to cut dies3503out of wafer3501. For example, dies3503may be spaced apart from each other by spacing regions, where the width of the spacing regions may be selected to allow wafer cuts in the spacing regions. At step4017, dies3503may be cut out from wafer3501using any suitable approach. In an example embodiment, dies3503may be cut out using a laser. In an example embodiment, wafer3501may be scribed first, followed by mechanical dicing. Alternatively, a mechanical dicing saw may be used. In some cases, a stealth dicing process may be used. During dicing, wafer3501may be mounted on a dicing tape for holding dies once they are cut out.
In various embodiments, large cuts may be made, as shown for example inFIG.38A, by cuts3801and3803or inFIG.38Bas shown by cuts3802,3804, or3806. Once dies3503are cut out individually or in groups, as shown for example by group3504inFIG.35C, dies3503may be packaged. Packaging of dies may include forming contacts to dies3503, depositing protective layers over contacts, attaching heat managing devices (e.g., heatsinks), and encapsulating dies3503. In various embodiments, depending on how many dies are selected to form a memory chip, appropriate configuration of contacts and buses may be used. In an example embodiment, some of the contacts between different dies forming the memory chip may be made during memory chip packaging. FIG.41Ashows an example process4100for manufacturing memory chips containing multiple dies. Step4011of process4100may be the same as step4011of process4000. At step4111, glue logic3711, as shown inFIG.37, may be laid out on wafer3501. Glue logic3711may be any suitable logic for controlling operations of dies3506, as shown inFIG.37. As described before, the presence of glue logic3711may allow multiple dies to function as a single memory chip. Glue logic3711may provide an interface with other memory controllers, such that a memory chip formed from multiple dies functions as a single memory chip. At step4113of process4100, buses (e.g., input-output buses and control buses) may be laid out on wafer3501. The buses may be laid out such that they are connected with various dies and logic circuits, such as glue logic3711. In some cases, buses may connect memory units. For example, buses may be configured to connect processor subunits of different dies. At step4115, dies, glue logic and buses may be fabricated using any suitable approach. For example, logic elements may be fabricated by etching wafer3501, depositing various dielectric, metallic or semiconductor layers, and further etching of the deposited layers, etc. Buses may be fabricated using, for example, metal evaporation. At step4140, cutting shapes may be used to cut groups of dies connected to a single glue logic3711, as shown, for example, inFIG.37. Cutting shapes may be determined using memory requirements for a memory chip containing multiple dies3503. For instance,FIG.41Bshows a process4101, which may be a variant of process4100, where step4140of process4100may be preceded by steps4117and4119. At step4117, a system for cutting wafer3501may receive instructions describing requirements for a memory chip. For example, requirements may include forming a memory chip including four dies3503. In some cases, a software program may determine a periodic pattern for a group of dies and glue logic3711at step4119. For instance, a periodic pattern may include two glue logic3711elements and four dies3503with every two dies connected to one glue logic3711. Alternatively, at step4119, the pattern may be provided by a designer of memory chips. In some cases, the pattern may be selected to maximize a yield of memory chips from wafer3501. In an example embodiment, memory units of dies3503may be tested to identify dies with faulty memory units (such dies are referred to as faulty or failed dies), and based on the location of faulty dies, groups of dies3503that contain memory units that pass the test can be identified and an appropriate cutting pattern can be determined. For example, if a large number of dies3503fail at edges of wafer3501, a cutting pattern may be determined to avoid dies at the edges of wafer3501.
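The yield-driven choice of a cutting pattern described for steps4117and4119can be sketched in Python as follows; the wafer grid dimensions, the test results, and the grouping rule (pairs of horizontally adjacent passing dies) are illustrative assumptions rather than details taken from the disclosure.

def choose_pairs(passed):
    # passed: dict mapping (row, col) die positions to True/False test results.
    # Return a list of two-die regions whose dies both passed testing.
    regions, used = [], set()
    for (r, c), ok in sorted(passed.items()):
        right = (r, c + 1)
        if ok and passed.get(right) and (r, c) not in used and right not in used:
            regions.append(((r, c), right))
            used.update({(r, c), right})
    return regions

# A 2x4 patch of the wafer; the die at row 0, column 3 (near an edge) failed testing.
test_map = {(r, c): not (r == 0 and c == 3) for r in range(2) for c in range(4)}
print(choose_pairs(test_map))
# -> [((0, 0), (0, 1)), ((1, 0), (1, 1)), ((1, 2), (1, 3))]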
Other steps of process4101, such as steps4011,4111,4113,4115, and4140, may be the same as the same numbered steps of process4100. FIG.41Cshows an example process4102that may be a variation of process4101. Steps4011,4111,4113,4115, and4140of process4102may be the same as the same numbered steps of process4101, step4131of process4102may substitute step4117of process4101, and step4133of process4102may substitute step4119of process4101. At step4131, a system for cutting wafer3501may receive instructions describing requirements for a first set of memory chips and a second set of memory chips. For example, requirements may include forming the first set of memory chips with memory chips consisting of four dies3503, and forming the second set of memory chips with memory chips consisting of two dies3503. In some cases, more than two sets of memory chips may need to be formed from wafer3501. For instance, a third set of memory chips may include memory chips consisting of only one die3503. In some cases, at step4133, a software program may determine a periodic pattern for a group of dies and glue logic3711for forming memory chips for each set of memory chips. For instance, a first set of memory chips may include memory chips containing two glue logic3711elements and four dies3503, with every two dies connected to one glue logic3711. In various embodiments, glue logic units3711for the same memory chip may be linked together to act as a single glue logic. For example, during fabrication of glue logic3711, appropriate bus lines may be formed linking glue logic units3711with one another. The second set of memory chips may include memory chips containing one glue logic3711element and two dies3503, with dies3503connected to glue logic3711. In some cases, when a third set of memory chips is selected, and when it includes a memory chip consisting of a single die3503, no glue logic3711may be needed for these memory chips. Dual Port Functionality When designing memory chips or memory instances within a chip, one important characteristic is the number of words that can be accessed simultaneously during a single clock cycle. The more addresses (e.g., addresses along rows, also called words or word lines, and columns, also called bits or bitlines) that can be accessed at the same time for reading and/or writing, the faster the memory chip. While there has been some activity in developing memories that include multi-way ports that allow access to multiple addresses at the same time, e.g., for building register files, caches, or shared memories, most instances use a memory mat that is larger in size and that supports the multiple address accesses. However, DRAM chips usually include a single bit line and a single row line connected to each capacitor of each memory cell. Accordingly, embodiments of the present disclosure seek to provide multi-port access on existing DRAM chips without modifying this conventional single-port memory structure of DRAM arrays. Embodiments of the present disclosure may clock memory instances or chips at twice the speed of logic circuits using the memory. Any logic circuits using the memory may therefore "correspond" to the memory and any components thereof. Accordingly, embodiments of the present disclosure may retrieve or write to two addresses in two memory array clock cycles, which are equivalent to a single processing clock cycle for the logic circuits.
The logic circuits may comprise circuits such as controllers, accelerators, GPUs, or CPUs or may comprise processing groups on the same substrate as the memory chip, e.g., as depicted inFIG.7A. As explained above with respect toFIG.3A, a "processing group" may refer to two or more processor subunits and their corresponding memory banks on a substrate. The group may represent a spatial distribution on the substrate and/or a logical grouping for the purposes of compiling code for execution on memory chip2800. Accordingly, as described above with respect toFIG.7A, a substrate with the memory chip may include a memory array with a plurality of banks, such as banks2801aand other banks shown inFIG.28. Furthermore, the substrate may also include a processing array that may include a plurality of processor subunits (such as subunits730a,730b,730c,730d,730e,730f,730g, and730hshown inFIG.7A). Accordingly, embodiments of the present disclosure may retrieve data from the array at each one of two consecutive memory cycles in order to handle two addresses for each logic cycle and provide the logic with two results as though the single-port memory array were a two-port memory chip. Additional clocking may allow for memory chips of the present disclosure to function as though the single-port arrays are a two-port memory instance, a three-port memory instance, a four-port memory instance, or any other multi-port memory instance. FIG.42depicts example circuitry4200providing dual-port access along columns of a memory chip in which circuitry4200is used, consistent with the present disclosure. The embodiment depicted inFIG.42may use one memory array4201with two column multiplexers ("muxes")4205aand4205bto access two words on the same row during a same clock cycle for a logic circuit. For example, during a memory clock cycle, RowAddrA is used in row decoder4203, and ColAddrA is used in multiplexer4205ato buffer data from a memory cell with address (RowAddrA, ColAddrA). During the same memory clock cycle, ColAddrB is used in multiplexer4205bto buffer data from a memory cell with address (RowAddrA, ColAddrB). Thus, circuitry4200may allow for dual-port access to data (e.g., DataA and DataB) stored on memory cells at two different addresses along the same row or word line. Thus, the two addresses may share a row such that the row decoder4203activates the same word line for both retrievals. Moreover, embodiments like the example depicted inFIG.42may use column muxes such that two addresses may be accessed during a same memory clock cycle. Similarly,FIG.43depicts example circuitry4300providing dual-port access along rows of a memory chip in which circuitry4300is used, consistent with the present disclosure. The embodiment depicted inFIG.43may use one memory array4301with a row decoder4303coupled with a multiplexer ("mux") to access two words on the same column during a same clock cycle for a logic circuit. For example, on the first of two memory clock cycles, RowAddrA is used in row decoder4303, and ColAddrA is used in column multiplexer4305to buffer data (e.g., to the "Buffered Word" buffer ofFIG.43) from a memory cell with address (RowAddrA, ColAddrA). On the second of two memory clock cycles, RowAddrB is used in row decoder4303, and ColAddrA is used in column multiplexer4305to buffer data from a memory cell with address (RowAddrB, ColAddrA). Thus, circuitry4300may allow for dual-port access to data (e.g., DataA and DataB) stored on memory cells at two different addresses along the same column or bitline.
Thus, the two addresses may share a column such that the column decoder (which may be separate from or combined with one or more column multiplexers, as depicted inFIG.43) activates the same bitline for both retrievals. Embodiments like the example depicted inFIG.43may use two memory clock cycles because row decoder4303may need one memory clock cycle to activate each word line. Accordingly, a memory chip using circuitry4300may function as a dual-port memory if clocked at least twice as fast as a corresponding logic circuit. Accordingly, as explained above, the circuitry ofFIG.43may retrieve DataA and DataB during two memory clock cycles, each of which is faster than a clock cycle for a corresponding logic circuit. For example, the row decoder (e.g., row decoder4303ofFIG.43) and the column decoder (which may be separate from or combined with one or more column multiplexers, as depicted inFIG.43) may be configured to be clocked at a rate at least twice a rate of a corresponding logic circuit generating the two addresses. For example, a clock circuit for circuitry4300(not shown inFIG.43) may clock circuitry4300according to a rate at least twice a rate of a corresponding logic circuit generating the two addresses. The embodiments ofFIGS.42and43may be used separately or combined. Accordingly, circuitry (e.g., circuitry4200or4300) providing dual-port functionality on a single-port memory array or mat may comprise a plurality of memory banks arranged along at least one row and at least one column. The plurality of memory banks are depicted as memory array4201inFIG.42and as memory array4301inFIG.43. The embodiments may further use at least one row multiplexer (as depicted inFIG.43) or at least one column multiplexer (as depicted inFIG.42) configured to receive, during a single clock cycle, two addresses for reading or writing. Moreover, the embodiments may use a row decoder (e.g., row decoder4203ofFIG.42and row decoder4303ofFIG.43) and a column decoder (which may be separate from or combined with one or more column multiplexers, as depicted inFIGS.42and43) to read from or write to the two addresses. For example, the row decoder and column decoder may, during a first cycle, retrieve a first of the two addresses from the at least one row multiplexer or the at least one column multiplexer and decode a word line and a bitline corresponding to the first address. Moreover, the row decoder and column decoder may, during a second cycle, retrieve a second of the two addresses from the at least one row multiplexer or the at least one column multiplexer and decode a word line and a bitline corresponding to the second address. The retrievals may each comprise activating a word line corresponding to an address using the row decoder and activating a bit line on the activated word line corresponding to the address using the column decoder. Although described above for retrievals, the embodiments ofFIGS.42and43, whether implemented separately or in combination, may include write commands. For example, during the first cycle, the row decoder and column decoder may write first data retrieved from the at least one row multiplexer or the at least one column multiplexer to the first of the two addresses. Moreover, during the second cycle, the row decoder and column decoder may write second data retrieved from the at least one row multiplexer or the at least one column multiplexer to the second of the two addresses.
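Before turning to how the examples ofFIGS.42and43differ, the two-memory-cycles-per-logic-cycle behavior described above can be sketched in Python; the single-port array model, the buffering scheme, and the example addresses (which share a bitline, as inFIG.43) are simplifying assumptions made only for illustration.

class SinglePortArray:
    def __init__(self, rows, cols):
        self.cells = [[0] * cols for _ in range(rows)]

    def read(self, row, col):          # one access per memory clock cycle
        return self.cells[row][col]

def dual_port_read(array, addr_a, addr_b):
    # Emulate a two-port read in one logic cycle using two memory clock cycles.
    buffered = array.read(*addr_a)     # memory cycle 1: buffer DataA
    data_b = array.read(*addr_b)       # memory cycle 2: fetch DataB
    return buffered, data_b            # both words presented to the logic circuit

array = SinglePortArray(rows=4, cols=8)
array.cells[1][2], array.cells[3][2] = 0xAA, 0xBB   # two addresses on the same bitline
print(dual_port_read(array, (1, 2), (3, 2)))        # -> (170, 187)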
The example ofFIG.42shows this process when the first and second addresses share a word line address while the example ofFIG.43shows this process when the first and second addresses share a column address. As described further with respect toFIG.47below, the same process may be implemented when the first and second addresses do not share either a word line address or a column address. Accordingly, although the examples above provide dual-port access along at least one of rows or columns, additional embodiments may provide dual-port access along both rows and columns.FIG.44depicts example circuitry4400providing dual-port access along both rows and columns of a memory chip in which circuitry4400is used, consistent with the present disclosure. Accordingly, circuitry4400may represent a combination of circuitry4200ofFIG.42with circuitry4300ofFIG.43. The embodiment depicted inFIG.44may use one memory array4401with a row decoder4403coupled with a multiplexer ("mux") to access two rows during a same clock cycle for a logic circuit. Moreover, the embodiment depicted inFIG.44may use memory array4401with a column decoder (or multiplexer)4405coupled with a multiplexer ("mux") to access two columns during the same clock cycle. For example, on the first of two memory clock cycles, RowAddrA is used in row decoder4403, and ColAddrA is used in column multiplexer4405to buffer data (e.g., to the "Buffered Word" buffer ofFIG.44) from a memory cell with address (RowAddrA, ColAddrA). On the second of two memory clock cycles, RowAddrB is used in row decoder4403, and ColAddrB is used in column multiplexer4405to buffer data from a memory cell with address (RowAddrB, ColAddrB). Thus, circuitry4400may allow for dual-port access to data (e.g., DataA and DataB) stored on memory cells at two different addresses. Embodiments like the example depicted inFIG.44may use the additional buffer because row decoder4403may need one memory clock cycle to activate each word line. Accordingly, a memory chip using circuitry4400may function as a dual-port memory if clocked at least twice as fast as a corresponding logic circuit. Although not depicted inFIG.44, circuitry4400may further include the additional circuitry ofFIG.46(described further below) along the rows or word lines and/or similar additional circuitry along the columns or bitlines. Accordingly, circuitry4400may activate corresponding circuitry (e.g., by opening one or more switching elements, such as one or more of switching elements4613a,4613b, and the like ofFIG.46) to activate disconnected portions including the addresses (e.g., by connecting voltages or allowing current to flow to the disconnected portions). Accordingly, the circuitry may "correspond" when elements of the circuitry (such as lines or the like) include locations identified by the addresses and/or when elements of the circuitry (such as the switching elements) control a supply of voltage and/or a flow of current to memory cells identified by the addresses. Circuitry4400may then use row decoder4403and column multiplexer4405to decode corresponding word lines and bitlines to retrieve data from or write data to the addresses, which are located in the activated disconnected portions.
As further depicted inFIG.44, circuitry4400may further use at least one row multiplexer (depicted separate from row decoder4403but which may be incorporated therein) and/or at least one column multiplexer (e.g., depicted separate from column multiplexer4405but which may be incorporated therein) configured to receive, during a single clock cycle, two addresses for reading or writing. Accordingly, the embodiments may use a row decoder (e.g., row decoder4403) and a column decoder (which may be separate from or combined with column multiplexer4405) to read from or write to the two addresses. For example, the row decoder and column decoder may, during a memory clock cycle, retrieve a first of the two addresses from the at least one row multiplexer or the at least one column multiplexer and decode a word line and a bitline corresponding to the first address. Moreover, the row decoder and column decoder may, during the same memory cycle, retrieve a second of the two addresses from the at least one row multiplexer or the at least one column multiplexer and decode a word line and a bitline corresponding to the second address. FIGS.45A and45Bdepict existing duplication techniques for providing dual-port functionality on a single-port memory array or mat. As shown inFIG.45A, dual-port reading may be provided by keeping duplicate copies of data in sync across memory arrays or mats. Accordingly, reading may be performed from both copies of the memory instance, as depicted inFIG.45A. Moreover, as shown inFIG.45B, dual-port writing may be provided by duplicating all writes across the memory arrays or mats. For example, the memory chip may require that logic circuits using the memory chip send write commands in duplicate, one for each duplicate copy of the data. Alternatively, in some embodiments, as shown inFIG.45A, additional circuitry may allow for the logic circuits using the memory instance to send single write commands that are automatically duplicated by the additional circuitry to generate duplicate copies of the written data across the memory arrays or mats in order to keep the copies in sync. The embodiments ofFIGS.42,43, and44may reduce the redundancy from these existing duplication techniques either by using multiplexers to access two bitlines in a single memory clock cycle (e.g., as depicted inFIG.42) and/or by clocking the memory faster than a corresponding logic circuit (e.g., as depicted inFIGS.43and44) and providing additional multiplexers to handle additional addresses rather than duplicating all data in the memory. In addition to the faster clocking and/or additional multiplexers described above, embodiments of the present disclosure may use circuitry that disconnects the bitlines and/or word lines at some points within the memory array. Such embodiments may allow for multiple simultaneous accesses to the array as long as the row and column decoders access different locations that are not coupled to the same portions of the disconnect circuitry. For example, locations with different word lines and bitlines may be accessed simultaneously because the disconnecting circuitry may allow the row and column decoders to access the different addresses without electrical interference. The granularity of the disconnected regions within the memory array may be weighed against the additional area required by the disconnect circuitry during design of the memory chip. An architecture for implementing such simultaneous access is depicted inFIG.46.
In particular,FIG.46depicts example circuitry4600providing dual-port functionality on a single-port memory array or mat. As depicted inFIG.46, circuitry4600may include a plurality of memory mats (e.g., memory mat4609a, mat4609b, and the like) arranged along at least one row and at least one column. The layout of circuitry4600further includes a plurality of word lines, such as word lines4611aand4611bcorresponding to rows and bitlines4615aand4615bcorresponding to columns. The example ofFIG.46includes twelve memory mats, each with two lines and eight columns. In other embodiments, the substrate may include any number of memory mats, and each memory mat may include any number of lines and any number of columns. Some memory mats may include a same number of lines and columns (as shown inFIG.46) while other memory mats may include different numbers of lines and/or columns. Although not depicted inFIG.46, circuitry4600may further use at least one row multiplexer (either separate from or incorporated with row decoder4601aand/or4601b) or at least one column multiplexer (e.g., column multiplexer4603aand/or4603b) configured to receive, during a single clock cycle, two (or three or any plurality of) addresses for reading or writing. Moreover, the embodiments may use a row decoder (e.g., row decoder4601aand/or4601b) and a column decoder (which may be separate from or combined with column multiplexer4603aand/or4603b) to read from or write to the two (or more) addresses. For example, the row decoder and column decoder may, during a memory clock cycle, retrieve a first of the two addresses from the at least one row multiplexer or the at least one column multiplexer and decode a word line and a bitline corresponding to the first address. Moreover, the row decoder and column decoder may, during the same memory cycle, retrieve a second of the two addresses from the at least one row multiplexer or the at least one column multiplexer and decode a word line and a bitline corresponding to the second address. As explained above, as long as the two addresses are in different locations that are not coupled to the same portions of the disconnect circuitry (e.g., switching elements such as4613a,4613b, and the like), the access may occur during the same memory clock cycle. Additionally, circuitry4600may access a first two addresses simultaneously during a first memory clock cycle and then a second two addresses simultaneously during a second memory clock cycle. In such embodiments, a memory chip using circuitry4600may function as a four-port memory if clocked at least twice as fast as a corresponding logic circuit. FIG.46further includes at least one row circuit and at least one column circuit configured to function as switches. For example, corresponding switching elements such as4613a,4613b, and the like may comprise transistors or any other electrical element configured to allow or stop current to flow and/or connect or disconnect voltages from the word line or bitline connected to switching elements such as4613a,4613b, and the like. Thus, the corresponding switching elements may divide circuitry4600into disconnected portions. Although depicted as comprising single rows and sixteen columns of each row, the disconnected regions within the circuitry4600may include differing levels of granularity depending on design of the circuitry4600. 
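To illustrate when two addresses could be served in the same memory clock cycle with such disconnect circuitry, the Python sketch below assigns each address to a disconnected region defined by a word-line group and a bitline group and allows simultaneous access only when the regions differ; the region granularity chosen here is an assumption, not a value from the disclosure.

ROWS_PER_REGION = 2      # assumed granularity of the row switching elements
COLS_PER_REGION = 8      # assumed granularity of the column switching elements

def region_of(address):
    row, col = address
    return (row // ROWS_PER_REGION, col // COLS_PER_REGION)

def can_access_same_cycle(addr_a, addr_b):
    # Two addresses may share a memory cycle only if they do not rely on the
    # same switching elements, i.e., they fall in different disconnected regions.
    return region_of(addr_a) != region_of(addr_b)

print(can_access_same_cycle((0, 3), (5, 20)))   # different regions -> True
print(can_access_same_cycle((0, 3), (1, 5)))    # same region       -> False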
Circuitry4600may use a controller (e.g., row control4607) to activate corresponding ones of the at least one row circuit and the at least one column circuit in order to activate corresponding disconnected regions during the address operations described above. For example, circuitry4600may transmit one or more control signals to close corresponding ones of the switching elements (e.g., switching elements4613a,4613b, and the like). In embodiments where switching elements4613a,4613b, and the like comprise transistors, the control signals may comprise voltages to open the transistors. Depending on the disconnected regions including the addresses, more than one of the switching elements may be activated by circuitry4600. For example, to reach an address within memory mat4609bofFIG.46, the switching element allowing access to memory mat4609amust be opened as well as the switching element allowing access to memory mat4609b. Row control4607may determine the switching elements to activate in order to retrieve a particular address within circuitry4600according to the particular address. FIG.46represents an example of circuitry4600used to divide word lines of a memory array (e.g., comprising memory mat4609a, mat4609b, and the like). However, other embodiments may use similar circuitry (e.g., switching elements dividing circuitry4600into disconnected regions) to divide bitlines of the memory array. Accordingly, the architecture of circuitry4600may be used in dual-column access like that depicted inFIG.42orFIG.44as well as dual-row access like that depicted inFIG.43orFIG.44. A process for multi-cycle access to memory arrays or mats is depicted inFIG.47A. In particular,FIG.47Ais an example flowchart of a process4700for providing dual-port access on a single-port memory array or mat (e.g., using circuitry4300ofFIG.43or circuitry4400ofFIG.44). Process4700may be executed using row and column decoders consistent with the present disclosure, such as row decoder4303or4403ofFIG.43or44, respectively, and a column decoder (which may be separate from or combined with one or more column multiplexers, such as column multiplexer4305or4405depicted inFIG.43or44, respectively). At step4710, during a first memory clock cycle, the circuitry may use at least one row multiplexer and at least one column multiplexer to decode a word line and a bitline corresponding to a first of two addresses. For example, the at least one row decoder may activate a word line, and the at least one column multiplexer may amplify a voltage from a memory cell along the activated word line and corresponding to the first address. The amplified voltage may be provided to a logic circuit using a memory chip including the circuitry or buffered according to step4720described below. The logic circuits may comprise circuits such as GPUs or CPUs or may comprise processing groups on the same substrate as the memory chip, e.g., as depicted inFIG.7A. Although described above as a read operation, method4700may similarly process a write operation. For example, the at least one row decoder may activate a word line, and the at least one column multiplexer may apply a voltage to a memory cell along the activated word line and corresponding to the first address to write new data to the memory cell. In some embodiments, the circuitry may provide confirmation of the write to the logic circuit using the memory chip including the circuitry or buffer the confirmation according to step4720below.
At step4720, the circuitry may buffer the retrieved data of the first address. For example, as depicted inFIGS.43and44, the buffer may allow the circuitry to retrieve a second of the two addresses (as described in step4730below) and return the results of both retrievals together. The buffer may comprise a register, an SRAM, a nonvolatile memory, or any other data storage device. At step4730, during a second memory clock cycle, the circuitry may use the at least one row multiplexer and the at least one column multiplexer to decode a word line and a bitline corresponding to a second address of the two addresses. For example, the at least one row decoder may activate a word line, and the at least one column multiplexer may amplify a voltage from a memory cell along the activated word line and corresponding to the second address. The amplified voltage may be provided to a logic circuit using a memory chip including the circuitry, whether individually or together with a buffered voltage, e.g., from step4720. The logic circuits may comprise circuits such as GPUs or CPUs or may comprise processing groups on the same substrate as the memory chip, e.g., as depicted inFIG.7A. Although described above as a read operation, method4700may similarly process a write operation. For example, the at least one row decoder may activate a word line, and the at least one column multiplexer may apply a voltage to a memory cell along the activated word line and corresponding to the second address to write new data to the memory cell. In some embodiments, the circuitry may provide confirmation of the write to the logic circuit using the memory chip including the circuitry, whether individually or together with a buffered voltage, e.g., from step4720. At step4740, the circuitry may output the retrieved data of the second address with the buffered data of the first address. For example, as depicted inFIGS.43and44, the circuitry may return the results of both retrievals (e.g., from steps4710and4730) together. The circuitry may return the results to a logic circuit using a memory chip including the circuitry. The logic circuits may comprise circuits such as GPUs or CPUs or may comprise processing groups on the same substrate as the memory chip, e.g., as depicted inFIG.7A. Although described with reference to multiple cycles, if the two addresses share a word line, as depicted inFIG.42, method4700may allow for single-cycle access to the two addresses. For example, steps4710and4730may occur during a same memory clock cycle since multiple column multiplexers may decode different bitlines on a same word line during the same memory clock cycle. In such embodiments, the buffering step4720may be skipped. A process for simultaneous access (e.g., using circuitry4600described above) is depicted inFIG.47B. Accordingly, although shown in sequence, the steps ofFIG.47Bmay all occur during a same memory clock cycle, and at least some steps (e.g., steps4760and4780or steps4770and4790) may be executed simultaneously.
In particular,FIG.47Bis an example flowchart of a process4750for providing dual-port access on a single-port memory array or mat (e.g., using circuitry4200ofFIG.42or circuitry4600ofFIG.46). Process4750may be executed using row and column decoders consistent with the present disclosure, such as row decoder4203or row decoders4601aand4601bofFIG.42or46, respectively, and a column decoder (which may be separate from or combined with one or more column multiplexers, such as column multiplexers4205aand4205bor column multiplexers4603aand4603bdepicted inFIG.42or46, respectively). At step4760, during a memory clock cycle, the circuitry may activate corresponding ones of at least one row circuit and at least one column circuit based on a first of two addresses. For example, the circuitry may transmit one or more control signals to close corresponding ones of switching elements comprising the at least one row circuit and the at least one column circuit. Accordingly, the circuitry may access a corresponding disconnected region including the first of the two addresses. At step4770, during the memory clock cycle, the circuitry may use at least one row multiplexer and at least one column multiplexer to decode a word line and a bitline corresponding to the first address. For example, the at least one row decoder may activate a word line, and the at least one column multiplexer may amplify a voltage from a memory cell along the activated word line and corresponding to the first address. The amplified voltage may be provided to a logic circuit using a memory chip including the circuitry. For example, as described above, the logic circuits may comprise circuits such as GPUs or CPUs or may comprise processing groups on the same substrate as the memory chip, e.g., as depicted inFIG.7A. Although described above as a read operation, process4750may similarly process a write operation. For example, the at least one row decoder may activate a word line, and the at least one column multiplexer may apply a voltage to a memory cell along the activated word line and corresponding to the first address to write new data to the memory cell. In some embodiments, the circuitry may provide confirmation of the write to the logic circuit using the memory chip including the circuitry. At step4780, during the same cycle, the circuitry may activate corresponding ones of the at least one row circuit and the at least one column circuit based on a second of the two addresses. For example, the circuitry may transmit one or more control signals to close corresponding ones of switching elements comprising the at least one row circuit and the at least one column circuit. Accordingly, the circuitry may access a corresponding disconnected region including the second of the two addresses. At step4790, during the same cycle, the circuitry may use the at least one row multiplexer and the at least one column multiplexer to decode a word line and a bitline corresponding to the second address. For example, the at least one row decoder may activate a word line, and the at least one column multiplexer may amplify a voltage from a memory cell along the activated word line and corresponding to the second address. The amplified voltage may be provided to a logic circuit using the memory chip including the circuitry. For example, as described above, the logic circuits may comprise conventional circuits such as GPUs or CPUs or may comprise processing groups on the same substrate as the memory chip, e.g., as depicted inFIG.7A.
Although described above as a read operation, process4750may similarly process a write operation. For example, the at least one row decoder may activate a word line, and the at least one column multiplexer may apply a voltage to a memory cell along the activated word line and corresponding to the second address to write new data to the memory cell. In some embodiments, the circuitry may provide confirmation of the write to the logic circuit using the memory chip including the circuitry. Although described with reference to a single cycle, if the two addresses are in disconnected regions sharing word lines or bitlines (or otherwise sharing switching elements in the at least one row circuit and the at least one column circuit), process4750may allow for multi-cycle access to the two addresses. For example, steps4760and4770may occur during a first memory clock cycle in which a first row decoder and a first column multiplexer may decode the word line and bitline corresponding to the first address while steps4780and4790may occur during a second memory clock cycle in which a second row decoder and a second column multiplexer may decode the word line and bitline corresponding to the second address. A further example of architecture for dual-port access along both rows and columns is depicted inFIG.48. In particular,FIG.48depicts example circuitry4800providing dual-port access along both rows and columns using multiple row decoders in combination with multiple column multiplexers. InFIG.48, row decoder4801amay access a first word line, and column multiplexer4803amay decode data from one or more memory cells along the first word line while row decoder4801bmay access a second word line, and column multiplexer4803bmay decode data from one or more memory cells along the second word line. As described with respect toFIG.47B, this access may be simultaneous during one memory clock cycle. Accordingly, similar to the architecture ofFIG.46, the architecture ofFIG.48(including the memory mats described inFIG.49below) may allow for multiple addresses to be accessed in a same clock cycle. For example, the architecture ofFIG.48may include any number of row decoders and any number of column multiplexers such that a number of addresses corresponding to the number of row decoders and column multiplexers may be accessed all within a single memory clock cycle. In other embodiments, this access may be sequential across two memory clock cycles. By clocking circuitry4800faster than a corresponding logic circuit, two memory clock cycles may be equivalent to one clock cycle for the logic circuit using the memory. For example, as described above, the logic circuits may comprise conventional circuits such as GPUs or CPUs or may comprise processing groups on the same substrate as the memory chip, e.g., as depicted inFIG.7A. Other embodiments may allow for simultaneous access. For example, as described with respect toFIG.42, multiple column decoders (which may comprise column multiplexers such as4803aand4803bas shown inFIG.48) may read multiple bitlines along a same word line during a single memory clock cycle. Additionally or alternatively, as described with respect toFIG.46, circuitry4800may incorporate additional circuitry such that this access may be simultaneous.
For example, row decoder4801amay access a first word line, and column multiplexer4803amay decode data from a memory cell along the first word line during a same memory clock cycle in which row decoder4801baccesses a second word line, and column multiplexer4803bdecodes data from a memory cell along the second word line. The architecture ofFIG.48may be used with modified memory mats forming the memory banks as shown inFIG.49. InFIG.49, each memory cell (depicted as a capacitor similar to DRAM but which may also comprise a number of transistors arranged in a manner similar to SRAM or any other memory cell) is accessed by two word lines and by two bit lines. Accordingly, memory mat4900ofFIG.49allows for access of two different bits simultaneously or even access to a same bit by two different logic circuits. However, the embodiment ofFIG.49uses a modification to the memory mats rather than implementing a dual-port solution on standard DRAM memory mats, which are wired for single-port access, as the embodiments above do. Although described with two ports, any of the embodiments described above may be extended to more than two ports. For example, the embodiments ofFIGS.42,46,48, and49may include additional column or row multiplexers, respectively, to provide access to additional columns or rows, respectively, during a single clock cycle. As another example, the embodiments ofFIGS.43and44may include additional row decoders and/or column multiplexers to provide access to additional rows or columns, respectively, during a single clock cycle. Variable Word Length Access in Memory As used above and further below, the term "coupled" may include directly connected, indirectly connected, in electrical communication with, and the like. Moreover, terms like "first," "second," and the like are used to distinguish between elements or method steps having a same or similar name or title and do not necessarily indicate a spatial or temporal order. Typically, a memory chip may include memory banks. The memory banks may be coupled to a row decoder and a column decoder configured to choose a specific word (or other fixed size data unit) to be read or written. Each memory bank may include memory cells to store the data units, sense amplifiers to amplify voltages from the memory cells selected by the row and column decoders, and any other appropriate circuits. Each memory bank usually has a specific I/O width. For example, the I/O width may comprise a word. While some processes executed by logic circuits using the memory chip may benefit from using very long words, some other processes may require only a part of the word. Indeed, in-memory computing units (such as processor subunits disposed on the same substrate as the memory chip, e.g., as depicted and described inFIG.7A) frequently perform memory access operations that require only a part of the word. To reduce latency associated with accessing an entire word when only a portion is used, embodiments of the present disclosure may provide a method and a system for fetching only one or more parts of a word, thereby reducing data losses associated with transferring unneeded parts of the word and allowing power saving in a memory device.
Furthermore, embodiments of the present disclosure may also reduce power consumption in the interaction between the memory chip and other entities (such as logic circuits, whether separate like CPUs and GPUs or included on the same substrate as the memory chip, such as the processor subunits depicted and described inFIG.7A) that access the memory chip, which may receive or write only a part of the word. A memory access command (e.g., from a logic circuit using the memory) may include an address in the memory. For example, the address may include a row address and a column address or may be translated to a row address and a column address, e.g., by a memory controller of the memory. In many volatile memories, such as DRAMs, the row address is sent (e.g., directly by the logic circuit or using the memory controller) to the row decoder, which activates the entire row (also called the word line) and loads all of the bitlines included in the row. The column address identifies the bitline(s) on the activated row that are transferred outside a memory bank including the bitline(s) and to next level circuitry. For example, the next level circuitry may comprise an I/O bus of the memory chip. In embodiments using in-memory processing, the next level circuitry may comprise a processor subunit of the memory chip (e.g., as depicted inFIG.7A). Accordingly, the memory chip described below may be included in or otherwise comprise the memory chip as illustrated in any one ofFIG.3A,3B,4-6,7A-7D,11-13,16-19,22, or23. The memory chip may be manufactured by a first manufacturing process optimized for memory cells rather than logic cells. For example, the memory cells manufactured by the first manufacturing process may exhibit a critical dimension that is smaller (for example, by a factor that exceeds 2, 3, 4, 5, 6, 7, 8, 9, 10, and the like), than the critical dimension of a logic circuit manufactured by the first manufacturing process. For example, the first manufacturing process may comprise an analog manufacturing process, a DRAM manufacturing process, and the like. Such a memory chip may comprise an integrated circuit that may include a memory unit. The memory unit may include memory cells, an output port, and read circuitry. In some embodiments, the memory unit may further include a processing unit, such as a processor subunit as described above. For example, the read circuitry may include a reduction unit and a first group of memory read paths for outputting up to a first number of bits through the output port. The output port may connect to an off-chip logic circuit (such as an accelerator, CPU, GPU, or the like) or to an on-chip processor subunit, as described above. In some embodiments, the processing unit may include the reduction unit, may be a part of the reduction unit, may differ from the reduction unit, or may otherwise comprise the reduction unit. An in-memory read path may be included in the integrated circuit (for example, may in the memory unit) and may include any circuit and/or link configured for reading from and/or writing to a memory cell. For example, the in-memory read path may include a sense amplifier, a conductor coupled to the memory cell, a multiplexer, and the like. The processing unit may be configured to send to the memory unit a read request for reading a second number of bits from the memory unit. Additionally or alternatively, the read request may originate from an off-chip logic circuit (such as an accelerator, CPU, GPU, or the like). 
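As a simple illustration of the address handling described above, the sketch below shows how a flat request address might be translated into a row address and a column address before reaching the row decoder and column multiplexer. It is only an illustrative Python sketch; the field widths and helper name are assumptions and do not correspond to any particular memory controller of the present disclosure.

# Hypothetical translation of a flat address into row and column parts,
# as a memory controller might do before driving the row decoder and
# column multiplexer. ROW_BITS and COL_BITS are illustrative widths.
ROW_BITS = 10   # selects the word line
COL_BITS = 6    # selects bitline(s) on the activated word line

def split_address(addr):
    row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)
    col = addr & ((1 << COL_BITS) - 1)
    return row, col

# Example: address 0x1A3 -> word line 6, column 0x23.
print(split_address(0x1A3))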
The reduction unit may be configured to assist in reducing power consumption related to an access request, e.g., by using any of the partial word accesses described herein. The reduction unit may be configured to control the memory read paths, during a read operation triggered by the read request, based on the first number of bits and the second number of bits. For example, the control signal from the reduction unit may affect the power consumption of the read paths to reduce energy consumption of memory read paths not relevant to the requested second number of bits. For example, the reduction unit may be configured to control irrelevant memory read paths when the second number is smaller than the first number. As explained above, the integrated circuit may be included in, may include, or otherwise comprise a memory chip as illustrated in any one ofFIG.3A,3B,4-6,7A-7D,11-13,16-19,22, or23. The irrelevant in-memory read paths may be associated with irrelevant bits of the first number of bits, such as bits of the first number of bits not included in the second number of bits. FIG.50illustrates an example integrated circuit5000including memory cells5001-5008of an array5050of memory cells, an output port5020that includes bits5021-5028, read circuitry5040that includes memory read paths5011-5018, and reduction unit5030. When a second number of bits are read using corresponding memory read paths, the irrelevant bits of the first number of bits may correspond to bits that should not be read (e.g., bits that are not included in the second number of bits). During the read operation, the reduction unit5030may be configured to activate memory read paths corresponding to the second number of bits such that the activated memory read paths may be configured to convey the second number of bits. In such embodiments, only the memory read paths corresponding to the second number of bits may be activated. During the read operation, the reduction unit5030may be configured to shut down at least a portion of each irrelevant memory read path. For example, the irrelevant memory read paths may correspond to the irrelevant bits of the first number of bits. It should be noted that instead of shutting down at least one portion of an irrelevant memory path, the reduction unit5030may instead ensure that the irrelevant memory path is not activated. Additionally or alternatively, during the read operation, the reduction unit5030may be configured to maintain the irrelevant memory read paths in a low power mode. For example, a low power mode may comprise a mode in which the irrelevant memory paths are supplied with voltage or current lower than a normal operating voltage or current, respectively. The reduction unit5030may be further configured to control bitlines of the irrelevant memory read paths. Accordingly, the reduction unit5030may be configured to load bitlines of relevant memory read paths and maintain bitlines of the irrelevant memory read paths in the low power mode. For example, only the bitlines of the relevant memory read paths may be loaded. Additionally or alternatively, the reduction unit5030may be configured to load bitlines of the relevant memory read paths while maintaining bitlines of the irrelevant memory read paths deactivated. In some embodiments, the reduction unit5030may be configured to utilize portions of the relevant memory read paths during the read operation and to maintain in the low power mode a portion of each irrelevant memory read path, wherein the portion differs from a bitline.
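A minimal sketch of the path-selection decision described for reduction unit 5030 follows. It assumes, purely for illustration, that the requested bits are contiguous and that path states can be represented as simple labels; it is a behavioral sketch, not the reduction unit itself.

# Illustrative model of reduction-unit behavior: given a read path group
# wide enough for first_bits, activate only the paths needed for the
# requested second_bits and hold the rest in a low-power (or off) state.
def plan_read_paths(first_bits, second_bits, low_power=True):
    assert second_bits <= first_bits
    idle_state = "low_power" if low_power else "off"
    states = []
    for path in range(first_bits):
        if path < second_bits:      # relevant path: conveys a requested bit
            states.append("active")
        else:                       # irrelevant path: not loaded or sensed
            states.append(idle_state)
    return states

# Example: a 32-bit wide bank serving an 8-bit request.
print(plan_read_paths(32, 8))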
As explained above, memory chips may use sense amplifiers to amplify voltages from memory cells included therein. Accordingly, the reduction unit5030may be configured to utilize portions of the relevant memory read paths during the read operation and to maintain in the low power mode a sense amplifier associated with at least some of the irrelevant memory read paths. In such embodiments, the reduction unit5030may be configured to utilize portions of the relevant memory read paths during the read operation and to maintain in the low power mode one or more sense amplifiers associated with all of the irrelevant memory read paths. Additionally or alternatively, the reduction unit5030may be configured to utilize portions of the relevant memory read paths during the read operation and to maintain in the low power mode portions of the irrelevant memory read paths that follow (e.g., spatially and/or temporally) one or more sense amplifiers associated with the irrelevant memory read paths. In any of the embodiments described above, the memory unit may include a column multiplexer (not shown). In such embodiments, the reduction unit5030may be coupled between the column multiplexer and the output port. Additionally or alternatively, the reduction unit5030may be embedded in the column multiplexer. Additionally or alternatively, the reduction unit5030may be coupled between the memory cells and the column multiplexer. The reduction unit5030may comprise reduction subunits that may be independently controllable. For example, different reduction subunits may be associated with different memory unit columns. Although described above with respect to read operations and read circuitry, any of the embodiments above may similarly be applied for write operations and write circuitry. For example, an integrated circuit according to the present disclosure may include a memory unit comprising memory cells, an output port, and write circuitry. In some embodiments, the memory unit may further include a processing unit, such as a processor subunit as described above. The write circuitry may include a reduction unit and a first group of memory write paths for outputting up to a first number of bits through the output port. The processing unit may be configured to send to the memory unit a write request for writing a second number of bits to the memory unit. Additionally or alternatively, the write request may originate from an off-chip logic circuit (such as an accelerator, CPU, GPU, or the like). The reduction unit5030may be configured to control the memory write paths, during a write operation triggered by the write request, based on the first number of bits and the second number of bits. FIG.51illustrates a memory bank5100that includes an array5111of memory cells that are addressed using row and column addresses (e.g., from an on-chip processor subunit or an off-chip logic circuit, such as an accelerator, CPU, GPU, or the like). As shown inFIG.51, the memory cells are coupled to bitlines (vertical) and word lines (horizontal—many omitted for simplicity). Moreover, row decoder5112may be fed with a row address (e.g., from the on-chip processor subunit, the off-chip logic circuit, or a memory controller not shown inFIG.51), column multiplexer5113may be fed with a column address (e.g., from the on-chip processor subunit, the off-chip logic circuit, or a memory controller not shown inFIG.51), and column multiplexer5113may receive outputs from up to an entire line and output up to a word over output bus5115.
InFIG.51, the output bus5115of the column multiplexer5113is coupled to a main I/O bus5114. In other embodiments, the output bus5115may be coupled to a processor subunit of the memory chip (e.g., as depicted inFIG.7A) sending the row and column addresses. The division of the memory bank into memory mats is not shown for simplicity. FIG.52illustrates a memory bank5101. InFIG.52, the memory bank is also illustrated as including a PIM (processing in memory) logic5116that has inputs coupled to output bus5115. PIM logic5116may generate addresses (e.g., comprising row addresses and column addresses) and output the addresses via PIM address buses5118to access the memory bank. PIM logic5116is an example of a reduction unit (e.g., unit5030) that also comprises a processing unit. The PIM logic5116may control other circuits not shown inFIG.52that assist in the reduction of power. PIM logic5116may further control the memory paths of a memory unit including memory bank5101. As explained above, the word length (e.g., the number of bitlines chosen to be transferred at a time) may be large in some cases. In those cases, each word for reading and/or writing may be associated with a memory path that may consume power at various stages of the reading and/or writing operation, for example: a. loading the bitline—to avoid loading the bitline to the needed value (either from a capacitor on the bitline in a read cycle or to the new value to be written to the capacitor in a write cycle), there is a need to disable a sense amplifier located at the end of the memory array and make sure the capacitor holding the data is not discharged or charged (otherwise the data stored thereon would be destructed); and b. moving the data from the sense amplifier through a column multiplexer that chooses the bitlines and to the rest of the chip (either to the I/O bus that transfers data in and out of the chip or to the embedded logic, such as a processor subunit on the same substrate as the memory, that would use the data). To achieve power saving, integrated circuits of the present disclosure may determine, at row activation time, that some parts of a word are irrelevant and then send a disable signal to one or more sense amplifiers for the irrelevant parts of the word. FIG.53illustrates a memory unit5102that includes an array5111of memory cells, a row decoder5112, a column multiplexer5113that is coupled to output bus5115, and PIM logic5116. Memory unit5102also includes switches5201that enable or disable the passage of bits to the column multiplexer5113. Switches5201may comprise analog switches, transistors configured to function as switches, or any other circuitry configured to control a supply of voltage and/or a flow of current to part of memory unit5102. The sense amplifiers (not shown) may be located at the end of the memory cell array, e.g., before (spatially and/or temporally) switches5201. The switches5201may be controlled by enable signals sent over bus5117from PIM logic5116. The switches are configured, when disconnected, to disconnect the sense amplifiers (not shown) of the memory unit5102and therefore not discharge or charge bitlines disconnected from the sense amplifiers. Switches5201and PIM logic5116may form a reduction unit (e.g., reduction unit5030). In yet another example, PIM logic5116may send enable signals to the sense amplifiers (e.g., when the sense amplifiers have an enable input) instead of sending them to switches5201.
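The enable-signal scheme of FIG.53 can be pictured with the short sketch below. The bit-per-switch encoding and the helper names are assumptions made for illustration only and do not describe the actual control interface of bus 5117.

# Illustrative sketch of PIM logic driving enable signals to switches 5201:
# one enable bit per switch (or per group of bitlines), so only the parts
# of the word that are needed reach the column multiplexer.
def build_enable_mask(needed_bits, total_bits):
    # needed_bits: iterable of bit positions the request actually uses.
    mask = 0
    for b in needed_bits:
        if 0 <= b < total_bits:
            mask |= 1 << b
    return mask

def drive_switches(mask, total_bits):
    # Returns a per-switch open/closed picture; a closed switch lets the
    # sensed bit pass to the column multiplexer, an open one blocks it.
    return ["closed" if (mask >> b) & 1 else "open" for b in range(total_bits)]

# Example: request only the low byte of a 32-bit word.
mask = build_enable_mask(range(8), 32)
print(drive_switches(mask, 32)[:10])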
The bitlines may additionally or alternatively be disconnected at other points, e.g., not at the end of the bitlines and after the sense amplifiers. For example—a bitline may be disconnected before entering the array5111. In these embodiments, power may also be saved on data transfer from the sense amplifiers and forwarding hardware (such as output bus5115). Other embodiments (that may save less power but may be easier to implement) focus on saving the power of the column multiplexer5113and transfer losses from the column multiplexer5113to a next level circuitry. For example, as explained above, the next level circuitry may comprise an I/O bus of the memory chip (such as bus5115). In embodiments using in-memory processing, the next level circuitry may additionally or alternatively comprise a processor subunit of the memory chip (such as PIM logic5116). FIG.54Aillustrates a column multiplexer5113segmented to segments5202. Each segment5202of the column multiplexer5113may be individually enabled or disabled by enable and/or disable signals sent over bus5119from PIM logic5116. Column multiplexer5113may also be fed by address columns bus5118. The embodiment ofFIG.54Amay provide better control over different portions of the output from column multiplexer5113. It should be noted that the control of different memory paths may be of different resolutions, e.g., ranging from a bit resolution and to a resolution of multiple bits. The former may be more effective in sense of power savings. The latter may be simpler to implement and require fewer control signals. FIG.54Billustrates an example method5130. For example, method5130may be implemented using any of the memory units described above with respect toFIG.50,51,52,53, or54A. Method5130may include steps5132and5134. Step5132may include sending, by a processing unit (e.g., PIM logic5116) of the integrated circuit and to a memory unit of the integrated circuit, an access request for reading a second number of bits from the memory unit. The memory unit may include memory cells (e.g., memory cells of array5111), an output port (e.g., output bus5115), and read/write circuitry that may include a reduction unit (e.g., reduction unit5030) and a first group of memory read/write paths for outputting and/or inputting up to a first number of bits through the output port. An access request may comprise a read request and/or a write request. A memory input/output path may comprise a memory read path, a memory write path, and/or a path used for both reading and writing. Step5134may include responding to the access request. For example, step5134may include controlling, by the reduction unit (e.g., unit5030), the memory read/write paths, during an access operation triggered by the access request, based on the first number of bits and the second number of bits. Step5134may further include any one of the following and/or any combination of any one of the following. Any of the listed below operations may be executed during the responding to the access request but may also be executed before and/or after responding to the access request. Thus, step5134may include at least one of:a. controlling irrelevant memory read paths when the second number is smaller than the first number, wherein the irrelevant memory read paths are associated with bits of the first number of bits not included in the second number of bits;b. activating, during a read operation, relevant memory read paths, wherein the relevant memory read paths are configured to convey the second number of bits;c. 
shutting down, during the read operation, at least a portion of each one of the irrelevant memory read paths;d. maintaining, during the read operation, the irrelevant memory read paths in a low power mode;e. controlling bitlines of the irrelevant memory read paths;f. loading bitlines of the relevant memory read paths and maintaining bitlines of the irrelevant memory read paths in a low power mode;g. loading bitlines of the relevant memory read paths, while maintaining bitlines of the irrelevant memory read paths deactivated;h. utilizing portions of the relevant memory read paths during the read operation and maintaining in a low power mode a portion of each irrelevant memory read path, wherein the portion differs from a bitline;i. utilizing portions of the relevant memory read paths during a read operation and maintaining in a low power mode a sense amplifier for at least some of the irrelevant memory read paths;j. utilizing portions of the relevant memory read paths during a read operation and maintaining in a low power mode a sense amplifier of at least some of the irrelevant memory read paths; andk. utilizing portions of the relevant memory read paths during a read operation and maintaining in a low power mode portions of the irrelevant memory read paths that follow sense amplifiers of the irrelevant memory read paths. A low power mode or an idle mode may comprise a mode in which power consumption of a memory access path is lower than power consumption of the same when the memory access path is used for an access operation. In some embodiments, a low power mode may even involve shutting down the memory access path. A low power mode may additionally or alternatively include not activating the memory-access path. It should be noted that power reductions that occur during the bitline phase may require that the relevancy or irrelevancy of the memory access paths should be known prior to opening the word line. Power reductions that occur elsewhere (for example, in the column multiplexer) may instead allow for deciding the relevancy or irrelevancy of the memory access paths on every access. Fast and Low Power Activation and Fast Access Memory DRAM and other memory types (such as SRAM, Flash, or the like) are often built from memory banks, which are usually built to allow for row and column access schemes. FIG.55illustrates an example of a memory chip5140that includes multiple memory mats and associated logic (such as row and column decoders—depicted as RD and COL inFIG.55, respectively). In the example ofFIG.55, the mats are grouped into banks and have word lines and bitlines through them. The memory mats and associated logic are denoted5141,5142,5143,5144,5145and5146inFIG.55and share at least one bus5147. Memory chip5140may be included in, may include, or otherwise comprise a memory chip as illustrated in any one ofFIG.3A,3B,4-6,7A-7D,11-13,16-19,22, or23. In DRAM, for example, there is a lot overhead associated with activation of a new row (e.g., preparing a new line for access). Once a line is activated (also referred to as being opened), the data within that row may be available for much faster access. In DRAM, this access may occur in a random manner. Two problems associated with activating a new line are power and time:c. The power rises due to a rush of current caused by accessing all capacitors on the line together and having to load the line (e.g., the power can reach several Amperes when opening a line with just a few memory banks); andd. 
the time delay problem is mostly associated with the time it takes to load the row (word) line and then the bit (column) lines. Some embodiments of the present disclosure may include a system and method to reduce peak power consumption during activation of a line and reduce activation time of the line. Some embodiments may sacrifice full random access within a line, at least to some extent, to reduce these power and time costs. For example, in one embodiment, a memory unit may include a first memory mat, a second memory mat, and an activation unit configured to activate a first group of memory cells included in the first memory mat without activating a second group of memory cells included in the second memory mat. The first group of memory cells and the second group of memory cells may both belong to a single row of the memory unit. Alternatively, the activation unit may be configured to activate the second group of memory cells included in the second memory mat without activating the first group of memory cells. In some embodiments, the activation unit may be configured to activate the second group of memory cells after activation of the first group of memory cells. For example, the activation unit may be configured to activate the second group of memory cells following expiration of a delay period initiated after activation of the first group of memory cells has been completed. Additionally or alternatively, the activation unit may be configured to activate the second group of memory cells based on a value of a signal developed on a first word line segment coupled to the first group of memory cells. In any of the embodiments described above, the activation unit may include an intermediate circuit disposed between a first word line segment and a second word line segment. In such embodiments, the first word line segment may be coupled to the first memory cells and the second word line segment may be coupled to the second memory cells. Non-limiting examples of intermediate circuits include switches, flip-flops, buffers, inverters, and the like—some of which are illustrated throughoutFIGS.56-61. In some embodiments, the second memory cells may be coupled to a second word line segment. In such embodiments, the second word line segment may be coupled to a bypass word line path that passes through at least the first memory mat. An example of such bypass paths is illustrated inFIG.61. The activation unit may comprise a control unit configured to control a supply of voltage (and/or a flow of current) to the first group of memory cells and to the second group of memory cells based on an activation signal from a word line associated with the single row. In another example embodiment, a memory unit may include a first memory mat, a second memory mat, and an activation unit configured to supply an activation signal to a first group of memory cells of the first memory mat and delay a supply of the activation signal to a second group of memory cells of the second memory mat at least until activation of the first group of memory cells has been completed. The first group of memory cells and the second group of memory cells may belong to a single row of the memory unit. For example, the activation unit may include a delay unit that may be configured to delay the supply of the activation signal.
Additionally or alternatively, the activation unit may include a comparator that may be configured to receive the activation signal at an input of the comparator and to control the delay unit based on at least one characteristic of the activation signal. In another example embodiment, a memory unit may include a first memory mat, a second memory mat, and an isolation unit configured to: isolate first memory cells of the first memory mat from second memory cells of the second memory mat during an initial activation period in which the first memory cells are activated; and couple the first memory cells to the second memory cells following the initial activation period. The first and second memory cells may belong to a single row of the memory unit. In the following examples, no modifications to the memory mats themselves may be required. In certain examples, embodiments may rely on minor modifications to the memory bank. The diagrams below depict a mechanism to shorten the word line signal added to memory banks, thereby splitting a word line into a number of shorter portions. In the following figures, various memory bank components were omitted for clarity. FIGS.56-61illustrate portions (denoted5140(1),5140(2),5140(3),5140(4),5140(5), and5140(6), respectively) of memory banks that include row decoder5112and multiple memory mats (such as5150(1),5150(2),5150(3),5150(4),5150(5),5150(6),5151(1),5151(2),5151(3),5151(4),5151(5),5151(6),5152(1),5152(2),5152(3),5152(4),5152(5), and5152(6)) that are grouped within different groups. Memory mats that are arranged in a row may be included in different groups. FIGS.56-59and61illustrate nine groups of memory mats, where each group includes a pair of memory mats. Any number of groups, each with any number of memory mats, may be used. Memory mats5150(1),5150(2),5150(3),5150(4),5150(5), and5150(6) are arranged in a row, share multiple memory lines and are divided into three groups—a first upper group includes memory mats5150(1) and5150(2), a second upper group includes memory mats5150(3) and5150(4), and a third upper group includes memory mats5150(5) and5150(6). Similarly, memory mats5151(1),5151(2),5151(3),5151(4),5151(5), and5151(6) are arranged in a row, share multiple memory lines and are divided into three groups—a first intermediate group includes memory mats5151(1) and5151(2), a second intermediate group includes memory mats5151(3) and5151(4), and a third intermediate group includes memory mats5151(5) and5151(6). Moreover, memory mats5152(1),5152(2),5152(3),5152(4),5152(5) and5152(6) are arranged in a row, share multiple memory lines and are grouped into three groups—a first lower group includes memory mats5152(1) and5152(2), a second lower group includes memory mats5152(3) and5152(4), and a third lower group includes memory mats5152(5) and5152(6). Any number of memory mats may be arranged in a row and share memory lines and may be divided into any number of groups. For example, the number of memory mats per group may be one, two, or may exceed two. As explained above, an activation circuit may be configured to activate one group of memory mats without activating another group of memory mats that share the same memory lines—or at least are coupled to different memory line segments that have a same line address. FIGS.56-61illustrate different examples of activation circuits.
In some embodiments, at least a portion of the activation circuit (such as intermediate circuits) may be located between groups of memory mats to allow memory mats of one group to be activated while another group of memory mats of the same row is not activated. FIG.56illustrates intermediate circuits, such as delay or isolation circuits5153(1)-5153(3), as positioned between different lines of the first upper group of memory mats and of the second upper group of memory mats. FIG.56also illustrates intermediate circuits, such as delay or isolation circuits5154(1)-5154(3), as positioned between different lines of the second upper group of memory mats and of the third upper group of memory mats. Additionally, some delay or isolation circuits are positioned between groups formed from memory mats of the intermediate groups. Moreover, some delay or isolation circuits are positioned between groups formed from memory mats of the lower groups. The delay or isolation circuits may delay or stop a word line signal from the row decoder5112from propagating along a row to another group. FIG.57illustrates intermediate circuits, such as delay or isolation circuits, that comprise flip-flops (such as5155(1)-5155(3) and5156(1)-5156(3)). When an activation signal is injected to a word line, one of the first groups of mats (depending on the word line) is activated while the other groups along the word line remain deactivated. The other groups may be activated at the next clock cycle. For example, second groups of the other groups may be activated at the next clock cycle, and third groups of the other groups may be activated after yet another clock cycle. The flip-flops may comprise D-type flip-flops or any other type of flip-flop. The clock fed to the D-type flip-flop is omitted from the drawing for simplicity. Thus, access to the first groups may use power to charge only the part of the word line associated with the first group, which is faster than charging the entire word line and requires less current. More than one flip-flop may be used between groups of memory mats, thereby increasing the delay between opening parts. Additionally or alternatively, embodiments may use a slower clock to increase the delay. Moreover, the groups that are activated may still contain data from the previous line value that was used. For example, the method may allow activating a new line segment while still accessing data of the previous line, thereby reducing the penalty associated with activating a new line. Accordingly, some embodiments may have a first group that is activated and allow other groups of the previously activated line to remain active with the signals of the bitlines not interfering with each other. Additionally, some embodiments may include switches and control signals. The control signals may be controlled by the bank controller or by adding flip-flops between control signals (e.g., generating the same timing effect that the mechanism described above had). FIG.58illustrates intermediate circuits, such as delay or isolation circuits, that are switches (such as5157(1)-5157(3) and5158(1)-5158(3)) and positioned between one group and another. A set of switches positioned between groups may be controlled by a dedicated control signal. InFIG.58, the control signal may be sent by a row control unit5160(1) and delayed by a sequence of one or more delay units (e.g., units5160(2) and5160(3)) between different sets of switches.
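A behavioral sketch of the staged activation shown in FIGS.56-58 follows. The per-clock propagation of the activation signal from one group of mats to the next is modeled with a simple list; the group count and cycle count are assumptions, and the sketch is illustrative only, not the flip-flop or switch circuitry itself.

# Illustrative model of staged word-line activation: the activation signal
# reaches the first group of mats immediately and propagates to each
# following group one clock cycle later (as with flip-flops between groups).
def staged_activation(num_groups, num_cycles):
    history = []
    for cycle in range(num_cycles):
        active = [group <= cycle for group in range(num_groups)]
        history.append(active)
    return history

# Example with three groups over three cycles:
# cycle 0 -> [True, False, False]
# cycle 1 -> [True, True, False]
# cycle 2 -> [True, True, True]
for cycle, state in enumerate(staged_activation(3, 3)):
    print(cycle, state)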
FIG.59illustrates intermediate circuits, such as delay or isolation circuits, that are sequences of inverter gates or buffers (such as5159(1)-5159(3) and5159′(1)-5159′(3)) and positioned between groups of memory mats. Instead of switches, buffers may be used between groups of memory mats. Buffers may avoid dropping voltage along the word line from switch to switch, which is an effect that sometimes occurs when using the single transistor structure. Other embodiments may allow for more random access and still provide very low activation power and time by using added area to the memory bank. An example is shown inFIG.60, which illustrates using global word lines (such as5152(1)-5152(8)) positioned in proximity to the memory mats. These word lines may or may not pass through the memory mats and are coupled via intermediate circuits, such as switches (such as5157(1)-5157(8)), to word lines within the memory mats. The switches may control which memory mat will be activated and allow a memory controller to activate, at each point in time, only the relevant line part. Unlike embodiments using a sequential activation of line portions described above, the example ofFIG.60may provide greater control. Enable signals, such as row part enable signals5170(1) and5170(2), may originate from logic, such as a memory controller, that is not shown. FIG.61illustrates that the global word lines5180pass through the memory mats and form bypass paths for the word line signals, which may not need to be routed outside the mat. Accordingly, the embodiments shown inFIG.61may reduce the area of the memory bank at a cost of some memory density. InFIG.61, the global word line may pass uninterrupted through a memory mat and may not be connected to memory cells. A local word line segment may be controlled by one of the switches and connected to memory cells in the mat. When the groups of memory mats provide a substantial partition of the word lines, the memory bank may virtually support full random access. Another embodiment for slowing the spreading of the activation signal along a word line, which also may save some wiring and logic, uses switches and/or other buffering or isolating circuits between memory mats without using dedicated enable signals and dedicated lines for conveying the enable signals. For example, a comparator may be used to control switches or other buffering or isolating circuits. The comparator may activate the switch or other buffering or isolating circuit when the level of signal on the word line segment monitored by the comparator reaches a certain level. For example, the certain level may indicate that the previous word line segment was fully loaded. FIG.62illustrates a method5190for operating a memory unit. For example, method5190may be implemented using any of the memory banks described above with respect toFIGS.56-61. Method5190may include steps5192and5194. Step5192may include activating, by an activation unit, a first group of memory cells included in a first memory mat of the memory unit without activating a second group of memory cells included in a second memory mat of the memory unit. The first group of memory cells and the second group of memory cells may both belong to a single row of the memory unit. Step5194may include activating, by an activation unit, the second group of memory cells, e.g., after step5192.
Step5194may be executed while the first group of memory cells are activated, after a full activation of the first group of memory cells, following expiration of a delay period initiated after activation of the first group of memory cells has been completed, after the first group of memory cells are deactivated, and the like. The delay period may be fixed or may be adjusted. For example, the duration of the delay period may be based on an expected access pattern of the memory unit or may be set regardless of the expected access pattern. The delay period may range between less than one millisecond and more than one second. In some embodiments, step5194may be initiated based on a value of a signal developed on a first word line segment coupled to the first group of memory cells. For example, when a value of the signal exceeds a first threshold, it may indicate that the first group of memory cells are fully activated. Either one of steps5192and5194may involve using an intermediate circuit (e.g., of the activation unit) disposed between a first word line segment and a second word line segment. The first word line segment may be coupled to the first memory cells and the second word line segment may be coupled to the second memory cells. Examples of an intermediate circuit are illustrated throughoutFIGS.56-61. Steps5192and5194may further include controlling, by a control unit, a supply to the first group of memory cells and to the second group of memory cells of an activation signal from a word line associated with the single row. Using Memory Parallelism to Speed Up Testing Times and Testing Logic in Memory Using Vectors Some embodiments of the present disclosure may speed up testing using in-chip testing units. Generally, memory chip testing requires significant testing time. Reducing testing time can reduce cost of production and also allow for more testing, leading to a more reliable product. FIGS.63and64illustrate a tester5200and a chip (or a wafer of chips)5210. The tester5200may include software that manages the testing. The tester5200may run different sequences of data to all of memory5210and then read the sequences back to identify where failed bits of memory5210are located. Once recognized, the tester5200may issue a command to fix the bits, and if it was able to fix the problem, tester5200may declare memory5210as passed. In other cases, some chips may be declared as failed. The tester5200may write test sequences and then read back the data to compare it to expected results. FIG.64shows a test system with a tester5200and a full wafer5202of chips (such as5210) being tested in parallel. For example, the tester5200may connect to each of the chips with a bus of wires. As shown inFIG.64, the tester5200has to read and write all of the memory chips a few times, and that data must be passed through the external chip interface. Moreover, it may be beneficial to test both logic and memory banks of an integrated circuit, e.g., using programmable configuration information that may be provided using regular I/O operations. The testing may also benefit from the presence of testing units within the integrated circuit. The testing units may belong to the integrated circuit and may analyze results of the test and find, for example, failures in logic (e.g., processor subunits as depicted inFIG.7Aand described above) and/or memory (e.g., across a plurality of memory banks). Memory testers are usually very simple and exchange test vectors with integrated circuits according to a simple format.
For example, there may be write vectors that include pairs of addresses of memory entries to be written and the values to be written to the memory entries. There may also be a read vector that includes addresses of memory entries to be read. At least some of the addresses of the write vectors may be the same as at least some addresses of the read vectors. At least some other addresses of the write vectors may differ from at least some other addresses of the read vectors. When programmed, the memory testers may also receive an expected result vector that may include the addresses of memory entries to be read and the expected values to be read. The memory tester may compare the expected values to the values it reads. According to an embodiment, the logic (e.g., processor subunits) of an integrated circuit (with or without the memory of the integrated circuit) may be tested by a memory tester using the same protocol/format. For example, some of the values in the write vector may be commands to be executed by the logic (and may, for example, involve calculations and/or memory access) of the integrated circuit. The memory tester may be programmed with the read vector and the expected result vector that may include memory entry addresses—at least some of which store expected values of the calculations. Thus, the memory tester may be used for testing the logic as well as the memory. Memory testers are usually much simpler and cheaper than logic testers, and the proposed methods allow for performing complex logic tests using a simple memory tester. In some embodiments, logic within the memory may enable testing of the logic by using only vectors (or other data structures) and not more complex mechanisms common in logic testing (such as communicating with the controller, for example, through an interface, telling the logic which circuit to test). Instead of using testing units, the memory controllers may be configured to receive instructions to access memory entries included in configuration information and execute the access instructions and output results. Any of the integrated circuits illustrated inFIGS.65-69may execute the tests—even in the absence of testing units—or in the presence of testing units not capable of performing tests. Embodiments of the present disclosure may include a method and system that use the parallelism of the memory and the internal chip bandwidth to speed up and improve test times. The method and system may be based on a memory chip testing itself (as opposed to a tester running the test, reading results of the test, and analyzing the results), saving the results, and eventually allowing the tester to read them (and, if needed, to program the memory chip back, e.g., to activate redundancy mechanisms). The testing may include testing the memory or testing the memory banks and the logic (in case of a computational memory that has functional logic portions to test, such as that described above inFIG.7A). In one embodiment, the method may include reading and writing data within the chip such that external bandwidth does not limit the test. In embodiments where the memory chip includes processor subunits, each processor subunit may be programmed with a test code or configuration.
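The vector format just described can be illustrated with a short sketch. The dictionary-based representation of memory and the names write_vector, read_vector, and expected are assumptions used only for illustration and are not the actual tester interface.

# Illustrative test-vector flow: write (address, value) pairs, read back a
# list of addresses, and compare against an expected-result vector.
def run_vector(memory, write_vector, read_vector, expected):
    for addr, value in write_vector:          # input series
        memory[addr] = value
    failures = []
    for addr in read_vector:                  # output series
        observed = memory.get(addr)
        if expected.get(addr) != observed:
            failures.append((addr, expected.get(addr), observed))
    return failures                           # empty list means the vector passed

# Example: one entry is written correctly, one address was never written.
mem = {}
print(run_vector(mem,
                 write_vector=[(0x10, 0xAA)],
                 read_vector=[0x10, 0x20],
                 expected={0x10: 0xAA, 0x20: 0x55}))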
In embodiments where the memory chip has processor subunits that cannot execute a test code or is without processor subunits but has memory controllers, then the memory controllers may be configured to read and write patterns (e.g., programmed to the controllers externally) and mark locations of faults (for example, writing a value to a memory entry, reading the entry, and receiving a value that differs from the written value) for further analysis. It should be noted that the testing of a memory may require testing a vast number of bits, for example, testing each bit of the memory and verifying that the tested bits are functional. Moreover, sometimes the memory testing may be repeated under different voltage and temperature conditions. For some defects, one or more redundancy mechanisms may be activated (e.g., by programming flash or OTP or burning fuses). In addition, the logic and analog circuits of the memory chips (e.g., controllers, regulators, I/Os) may also have to be tested. In one embodiment, an integrated circuit may include a substrate, a memory array disposed on the substrate, a processing array disposed on the substrate, and an interface disposed on the substrate. The integrated circuits described herein may be included in, may include, or otherwise comprise a memory chip as illustrated in any one ofFIG.3A,3B,4-6,7A-7D,11-13,16-19,22, or23. FIGS.65-69illustrate various integrated circuits5210and tester5200. The integrated circuit is illustrated as including memory banks5212, a chip interface5211(such as I/O controller5214and bus5213shared by the memory banks), and logic unit (hereinafter "logic")5215.FIG.66illustrates a fuse interface5216and a bus5217coupled to the fuse interface and the different memory banks. FIGS.65-70also illustrate various steps in a testing process—such as: a. write test sequence5221(FIGS.65,67,68and69); b. read back test results5222(FIGS.67,68and69); c. write expected results sequence5223(FIG.65); d. read faulty addresses to fix5224(FIG.66); and e. program fuses5225(FIG.66). Each memory bank may be coupled to and/or controlled by its own logic unit5215. However, as described above, any allocation of memory banks to logic unit5215may be provided. Thus, the number of logic units5215may differ from the number of memory banks, a logic unit may control more than a single memory bank or a fraction of a memory bank, and the like. The logic unit5215may include one or more testing units.FIG.65illustrates a testing unit (TU)5218within logic5215. A TU may be included in all or some of the logic units5215. It should be noted that the testing unit may be separate from the logic unit or integrated with the logic unit. FIG.65also illustrates a test pattern generator (denoted GEN)5219within TU5218. A test pattern generator may be included in all or some of the testing units. For simplicity, test pattern generators and testing units are not illustrated inFIGS.66-70but may be included in such embodiments. The memory array may include multiple memory banks. Moreover, the processing array may include a plurality of testing units. The plurality of testing units may be configured to test the multiple memory banks to provide test results. The interface may be configured to output, to a device external to the integrated circuit, information indicative of the test results. The plurality of testing units may include at least one test pattern generator configured to generate at least one test pattern for use in testing one or more of the multiple memory banks.
In some embodiments, as explained above, each of the plurality of testing units may include a test pattern generator configured to generate a test pattern for use by a particular one of the plurality of testing units to test at least one of the multiple memory banks. As indicated above,FIG.65illustrates a test pattern generator (GEN)5219within a testing unit. One or more or even all logic units may include the test pattern generator. The at least one test pattern generator may be configured to receive instructions from the interface for generating the at least one test pattern. A test pattern may include memory entries that should be accessed (e.g., read and/or written) during a test and/or values to be written to the entries, and the like. The interface may be configured to receive, from an external unit that may be external to the integrated circuit, configuration information including the instructions for generating the at least one test pattern. The at least one test pattern generator may be configured to read configuration information including instructions for generating the at least one test pattern from the memory array. In some embodiments, the configuration information may include a vector. The interface may be configured to receive, from a device that may be external to the integrated circuit, configuration information that may include instructions that may be the at least one test pattern. For example, at least one test pattern may include memory array entries to be accessed during the testing of the memory array. The at least one test pattern further may include input data to be written to the memory arrays entries accessed during the testing of the memory array. Additionally or alternatively, at least one test pattern further may include input data to be written to the memory array entries accessed during the testing of the memory array and expected values of output data to be read from the memory array entries accessed during the testing of the memory array. In some embodiments, the plurality of testing units may be configured to retrieve, from the memory array, test instructions that once executed by the plurality of testing units cause the plurality of testing units to test the memory array. For example, the test instructions may be included in configuration information. The configuration information may include expected results of the testing of the memory array. Additionally or alternatively, the configuration information may include values of output data to be read from memory array entries accessed during the testing of the memory array. Additionally or alternatively, the configuration information may include a vector. In some embodiments, the plurality of testing units may be configured to retrieve, from the memory array, test instructions that once executed by the plurality of testing units cause the plurality of testing units to test the memory array and to test the processing array. For example, the test instructions may be included in configuration information. The configuration information may include a vector. Additionally or alternatively, the configuration information may include expected results of the testing of the memory array and of the processing array. In some embodiments, as described above, the plurality of testing units may lack a test pattern generator for generating a test pattern used during the testing of the multiple memory banks. 
In such embodiments, at least two of the plurality of testing units may be configured to test in parallel at least two of the multiple memory banks. Alternatively, at least two of the plurality of testing units may be configured to test in series at least two of the multiple memory banks. In some embodiments, the information indicative of the test results may include identifiers of faulty memory array entries. In some embodiments, the interface may be configured to retrieve multiple times, during the testing of the memory array, partial test results obtained by the plurality of testing circuits. In some embodiments, the integrated circuit may include an error correction unit configured to correct at least one error detected during the testing of the memory array. For example, the error correction unit may be configured to fix memory errors using any appropriate technique, for example, by disabling some memory words and replacing them with redundant words. In any of the embodiments described above, the integrated circuit may be a memory chip. For example, the integrated circuit may include a distributed processor, wherein the processing array may include a plurality of subunits of the distributed processor, as depicted inFIG.7A. In such embodiments, each one of the processor subunits may be associated with a corresponding, dedicated one of multiple memory banks. In any of the embodiments described above, the information indicative of the test results may indicate a status of at least one memory bank. The status of a memory bank may be provided in one or more granularities—per memory word, per a group of entries, or per the entire memory bank.
FIGS.65-66illustrate four steps in a tester testing phase. In the first step, the tester writes (5221) the test sequence and the logic units of the banks write the data to their memories. The logic may also be complex enough to receive a command from the tester and generate the sequence on its own (as explained below). In the second step, the tester writes (5223) to the tested memory the expected results and the logic units compare the expected results to data read from their memory banks, saving a list of errors. Writing the expected results may be simplified if the logic is complex enough to generate on its own the sequence of expected results (as explained below). In the third step, the tester reads (5224) from the logic units the faulty addresses. In the fourth step, the tester acts (5225) upon the results and can fix the errors. For example, it may connect to a specific interface to program fuses in the memory but can also use any other mechanism that allows for programming an error correction mechanism within the memory. In such embodiments, the memory testers may use vectors to test the memory. For example, each vector may be built from an input series and an output series. The input series may include pairs of address and data to write to the memory (in many embodiments, this series could be modeled as a formula that allows a program, such as one executed by the logic units, to generate it when needed). In some embodiments, a test pattern generator may generate such vectors. It should be noted that a vector is an example data structure but some embodiments may use other data structures. The data structures may be compliant with other test data structures generated by testers located outside the integrated circuit. 
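As a non-limiting sketch of the four tester-side steps described above, the following C fragment shows one way an external tester could sequence the write of a test sequence (5221), the write of expected results (5223), the read-back of faulty addresses (5224), and the programming of fuses (5225). The stub functions are hypothetical placeholders; a real tester would drive the chip interface (e.g., I/O controller5214) instead.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical tester-side primitives; these stubs stand in for transactions
 * over the chip interface. */
static void write_test_sequence(const uint32_t *seq, uint32_t n)    { (void)seq; (void)n; }
static void write_expected_results(const uint32_t *exp, uint32_t n) { (void)exp; (void)n; }
static uint32_t read_faulty_addresses(uint32_t *out, uint32_t max)  { (void)out; (void)max; return 0; }
static void program_fuse_for(uint32_t addr) { printf("program fuse for 0x%08X\n", (unsigned)addr); }

int main(void)
{
    uint32_t sequence[4] = { 0 }, expected[4] = { 0 }, faulty[16];

    write_test_sequence(sequence, 4);        /* step 1: logic units write data to their banks */
    write_expected_results(expected, 4);     /* step 2: logic units compare reads to expected */
    uint32_t n = read_faulty_addresses(faulty, 16); /* step 3: collect the list of errors     */
    for (uint32_t i = 0; i < n; ++i)         /* step 4: act on the results, e.g. program fuses */
        program_fuse_for(faulty[i]);
    return 0;
}
```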
The output series may include address and data pairs comprising expected data to be read back from the memory (in some embodiments, the series could additionally or alternatively be generated by a program at runtime, e.g., by the logic units). Memory testing usually includes executing a list of vectors, each vector writing data to the memory according to the input series and then reading data back according to the output series and comparing it to its expected data. In case of a mismatch, the memory may be either classified as faulty or, if the memory includes mechanisms for redundancy, may have the redundancy mechanisms activated such that the vectors are tested again on the activated redundancy mechanisms. In embodiments where memories include processor subunits (as described above with respect toFIG.7A) or contain many memory controllers, the entire test may be handled by the logic units of the banks. Thus, a memory controller or processor subunit may perform the tests. The memory controller may be programmed from the tester, and the results of the test may be saved in the controller itself to later be read by the tester. To configure and test the operation of the logic unit, the tester may configure the logic unit for memory access and confirm that the results can be read by the memory access. For example, an input vector may contain programming sequences for the logic unit, and the output vector may contain expected results of such testing. For example, if a logic unit such as a processor subunit comprises a multiplier or adder configured to perform computations on two addresses in the memory, an input vector may include a set of commands that writes data to the memory and a set of commands to the adder/multiplier logic. As long as the adder/multiplier results can be read back to an output vector, the results may be sent to the tester. The testing may further include loading the logic configuration from the memory and having the logic output sent to the memory. In embodiments where the logic unit loads its configuration from the memory (e.g., if the logic is a memory controller), the logic unit may run its code from the memory itself. Accordingly, the input vector may include a program for the logic unit, and the program itself may test various circuits in the logic unit. Thus, the testing may not be limited to receiving vectors in formats used by external testers. If the commands that are loaded to the logic unit instruct the logic unit to write back results into the memory bank, then the tester may read those results and compare them to an expected output series. For example, the vector written to the memory may be or may include a test program for the logic unit (e.g., the testing may assume the memory is valid, but even if not, the test program written would not work, and the test would fail, which is an acceptable result since the chip is invalid anyway) and/or how the logic unit ran the code and wrote back the results to the memory. Since all testing of the logic unit may be done through the memory (e.g., writing logic test inputs to the memory and writing test results back to the memory), the tester may run a simple vector test with an input sequence and expected output sequence. Logic configuration and results may be accessed as read and/or write commands.
FIG.68illustrates a tester5200that sends a write test sequence5221that is a vector. Parts of the vector include test code5232that is split between memory banks5212that are coupled to logic5215of a processing array. 
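For illustration only, the following C sketch models the vector-based test loop described above: the input series is written to a (here, simulated) memory bank, the output series is read back and compared against expected values, and mismatching addresses are recorded as faults. The data-structure names are assumptions for this sketch.

```c
#include <stdint.h>

#define MEM_SIZE   256      /* stand-in for one memory bank */
#define MAX_FAULTS 32

static uint32_t memory_bank[MEM_SIZE];   /* simulated memory bank */

typedef struct { uint32_t address; uint32_t data; } pair_t;

typedef struct {
    const pair_t *input;    uint32_t input_len;    /* pairs to write              */
    const pair_t *expected; uint32_t expected_len; /* pairs expected on read-back */
} vector_t;

/* Execute one test vector and record mismatching addresses.
 * Returns the number of faults found. */
static uint32_t run_vector(const vector_t *v, uint32_t *faults, uint32_t max_faults)
{
    uint32_t n = 0;

    for (uint32_t i = 0; i < v->input_len; ++i)              /* input series  */
        memory_bank[v->input[i].address % MEM_SIZE] = v->input[i].data;

    for (uint32_t i = 0; i < v->expected_len; ++i) {          /* output series */
        uint32_t read = memory_bank[v->expected[i].address % MEM_SIZE];
        if (read != v->expected[i].data && n < max_faults)
            faults[n++] = v->expected[i].address;
    }
    return n;
}

int main(void)
{
    pair_t in[2]  = { {0, 0xAAAAAAAAu}, {1, 0x55555555u} };
    pair_t exp[2] = { {0, 0xAAAAAAAAu}, {1, 0x55555555u} };
    vector_t v = { in, 2, exp, 2 };
    uint32_t faults[MAX_FAULTS];

    uint32_t n = run_vector(&v, faults, MAX_FAULTS);
    return (int)n;   /* zero faults means the tested entries passed */
}
```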
Each logic5215may execute the code5232stored in its associated memory bank, and the execution may include accessing one or more memory banks, performing calculations, and storing the results (e.g., test results5231) in the memory banks5212. The test results may be read back (e.g., read back results5222) by tester5200. This may allow logic5215to be controlled by commands received by the I/O controller5214. InFIG.68, the I/O controller5214is connected to the memory banks and to the logic. In other embodiments, logic may be connected between the I/O controller5214and the memory banks.
FIG.70illustrates a method5300for testing memory banks. For example, method5300may be implemented using any of the memory banks described above with respect toFIGS.65-69. Method5300may include steps5302,5310, and5320. Step5302may include receiving a request to test memory banks of an integrated circuit. The integrated circuit may include a substrate, a memory array that is disposed on the substrate and comprises the memory banks, a processing array disposed on the substrate, and an interface disposed on the substrate. The processing array may include a plurality of testing units, as described above. In some embodiments, the request may include configuration information, one or more vectors, commands, and the like. In such embodiments, the configuration information may include expected results of the testing of the memory array, instructions, data, values of output data to be read from memory array entries accessed during the testing of the memory array, a test pattern, and the like. The test pattern may include at least one out of (i) memory array entries to be accessed during the testing of the memory array, (ii) input data to be written to the memory array entries accessed during the testing of the memory array, or (iii) expected values of output data to be read from the memory array entries accessed during the testing of the memory array.
Step5302may include at least one of the following and/or may be followed by at least one of the following:
a. receiving by the at least one test pattern generator instructions from the interface for generating the at least one test pattern;
b. receiving by the interface and from an external unit that is external to the integrated circuit, configuration information including the instructions for generating the at least one test pattern;
c. reading, by the at least one test pattern generator, configuration information including instructions for generating the at least one test pattern from the memory array;
d. receiving, by the interface and from an external unit that is external to the integrated circuit, configuration information that comprises instructions that are the at least one test pattern;
e. retrieving, by a plurality of testing units and from the memory array, test instructions that, once executed by the plurality of testing units, cause the plurality of testing units to test the memory array; and
f. receiving by the plurality of testing units and from the memory array, test instructions that, once executed by the plurality of testing units, cause the plurality of testing units to test the memory array and to test the processing array.
Step5302may be followed by step5310. Step5310may include testing, by the plurality of testing units and in response to the request, the multiple memory banks to provide test results. 
Method5300may further include receiving, by the interface, a plurality of times, during the testing of the memory array, partial test results obtained by the plurality of testing circuits.
Step5310may include at least one of the following and/or may be followed by at least one of the following:
a. generating, by one or more test pattern generators (e.g., included in one, some, or all of the plurality of testing units) test patterns for use by one or more testing units to test at least one of the multiple memory banks;
b. testing in parallel, by at least two of the plurality of testing units, at least two of the multiple memory banks;
c. testing in series, by at least two of the plurality of testing units, at least two of the multiple memory banks;
d. writing values to memory entries, reading the memory entries, and comparing the results; and
e. correcting, by an error correction unit, at least one error detected during the testing of the memory array.
Step5310may be followed by step5320. Step5320may include outputting, by the interface and outside the integrated circuit, information indicative of the test results. The information indicative of the test results may include identifiers of faulty memory array entries. This may save time by not sending read data regarding each memory entry. Additionally or alternatively, the information indicative of the test results may indicate a status of at least one memory bank. Accordingly, in some embodiments, the information indicative of the test results may be much smaller than the aggregate size of data units written to the memory banks or read from the memory banks during the testing and may be much smaller than the input data that may be sent from a tester that tests the memory without the assistance of the testing unit. The tested integrated circuit may comprise a memory chip and/or a distributed processor as illustrated in any of the previous figures. For example, the integrated circuits described herein may be included in, may include, or otherwise comprise a memory chip as illustrated in any one ofFIG.3A,3B,4-6,7A-7D,11-13,16-19,22, or23.
FIG.71illustrates an example of method5350for testing memory banks of an integrated circuit. For example, method5350may be implemented using any of the memory banks described above with respect toFIGS.65-69. Method5350may include steps5352,5355, and5358. Step5352may include receiving by an interface of an integrated circuit, configuration information that comprises instructions. The integrated circuit that includes the interface may also include a substrate, a memory array that comprises memory banks and is disposed on the substrate, a processing array disposed on the substrate, and an interface disposed on the substrate. The configuration information may include expected results of the testing of the memory array, instructions, data, values of output data to be read from memory array entries accessed during the testing of the memory array, a test pattern, and the like. Additionally or alternatively, the configuration information may include the instructions, addresses of memory entries to write the instructions, input data, and may also include addresses of memory entries to receive output values calculated during the execution of the instructions. 
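Purely as an illustrative, non-limiting sketch, the configuration information just described for method5350 might be laid out roughly as in the following C structure; every field name and size below is an assumption made for this example and is not part of the disclosed embodiments.

```c
#include <stdint.h>

#define MAX_INSTR   64
#define MAX_INPUTS  64
#define MAX_OUTPUTS 64

/* Hypothetical layout of configuration information received through the
 * interface in step5352: instructions, where to place them, input data,
 * and where results produced in step5355 should be written. */
typedef struct {
    uint32_t instructions[MAX_INSTR];       /* code for the processing array      */
    uint32_t instr_addresses[MAX_INSTR];    /* memory entries to hold the code    */
    uint32_t instr_count;

    uint32_t input_data[MAX_INPUTS];        /* input values for the computation   */
    uint32_t input_addresses[MAX_INPUTS];   /* memory entries to hold the inputs  */
    uint32_t input_count;

    uint32_t output_addresses[MAX_OUTPUTS]; /* entries to receive computed output */
    uint32_t output_count;
} method5350_config_t;

int main(void)
{
    method5350_config_t cfg = {0};   /* populated by the external entity before upload */
    return (int)cfg.instr_count;
}
```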
The test pattern may include at least one out of (i) memory array entries to be accessed during the testing of the memory array, (ii) input data to be written to the memory array entries accessed during the testing of the memory array, or (iii) expected values of output data to be read from the memory array entries accessed during the testing of the memory array. Step5352may be followed by step5355. Step5355may include executing, by the processing array, the instructions by accessing the memory array, performing computational operations, and providing results. Step5355may be followed by step5358. Step5358may include outputting, by the interface and outside the integrated circuit, information indicative of the results.
Cyber-Security and Tamper Detection Techniques
Memory chips and/or processors can be targeted by malicious actors and may be subjected to various types of cyber-attacks. In some cases, such attacks may attempt to change data and/or code stored in one or more memory resources. Cyber-attacks may be especially problematic relative to trained neural networks or other types of artificial intelligence (AI) models that depend on significant quantities of data stored in memory. If the stored data is manipulated or even obscured, such manipulation can be harmful. For example, an autonomous vehicle system reliant upon data intensive AI models to identify other vehicles or pedestrians, etc. may incorrectly assess the environment of a host vehicle if the data on which the models rely is corrupted or obscured. As a result, accidents may occur. As AI models become more prevalent across a wide array of technologies, cyber-attacks against the data associated with such models have the potential for major disruptions. In other cases, cyber-attacks may include one or more actors tampering with or attempting to tamper with the operational parameters associated with a processor or other types of integrated circuit based logic elements. For example, a processor is usually designed to operate within certain operational specifications. Cyber attacks involving tampering may seek to change one or more of the operational parameters of processors, memory units, or other circuits such that they exceed their designed operational specifications (e.g., clock speed, bandwidth specifications, temperature limitations, operation rate, etc.). Such tampering may result in the malfunction of the targeted hardware. Conventional techniques for defending against cyber-attacks may include computer programs operating at the processor level (e.g., anti-virus or anti-malware software). Other techniques may include the use of software-based firewalls associated with routers or other hardware. While these techniques may counter cyber-attacks using software programs executed outside of memory units, there remains a need for additional or alternative techniques for efficiently protecting data stored in memory units, especially where the accuracy and availability of that data is critical to the operation of a memory intensive application, such as a neural network, etc. Embodiments of the present disclosure can provide various integrated circuit designs comprising memory that is resistant to cyber-attacks against the memory. 
Retrieving sensitive information and commands into the integrated circuit in a secure manner (for example during a boot process when interfaces to the outside of the chip/integrated circuit are not yet active), maintaining the sensitive information and commands within the integrated circuit without exposing them outside the integrated circuit, and completing computations within the integrated circuit can increase the security of the sensitive information and commands. CPUs and other types of processing units are vulnerable to cyber attacks, especially when those CPUs/processing units operate with external memory. The disclosed embodiments, including distributed processor subunits disposed on a memory chip amongst a memory array including a plurality of memory banks, may be less susceptible to cyber attacks and tampering (e.g., because the processing occurs within the memory chip). Including any combination of the disclosed safety measures, discussed in more detail below, may further decrease the susceptibility of the disclosed embodiments to cyber attack and/or tampering.
FIG.72Ais a diagrammatic representation of an integrated circuit7200including a memory array and a processing array, consistent with embodiments of the present disclosure. For example, integrated circuit7200may include any of the distributed processor-on-a-memory chip architectures (and features) described in the sections above and throughout the disclosure. The memory array and the processing array may be formed on a common substrate, and in certain disclosed embodiments, integrated circuit7200can constitute a memory chip. For example, as discussed above, integrated circuit7200may include a memory chip including a plurality of memory banks and a plurality of processor subunits spatially distributed on the memory chip, where each of the plurality of memory banks is associated with a dedicated one or more of the plurality of processor subunits. In some cases, each processor subunit may be dedicated to one or more memory banks. In some embodiments, the memory array may include a plurality of discrete memory banks7210_1,7210_2, . . .7210_J,7210_Jn, as shown inFIG.72A. According to embodiments of the present disclosure, memory array7210may comprise one or more types of memory including, e.g., volatile memory (such as RAM, DRAM, SRAM, phase-change RAM (PRAM), magnetoresistive RAM (MRAM), resistive RAM (ReRAM), or the like) or non-volatile memory (such as flash or ROM). According to some embodiments of the present disclosure, memory banks7210_1to7210_Jn can include a plurality of MOS memory structures. As noted above, the processing array may include a plurality of processor subunits7220_1to7220_K. In some embodiments, each of processor subunits7220_1to7220_K may be associated with one or more discrete memory banks among the plurality of discrete memory banks7210_1to7210_Jn. While the example embodiment ofFIG.72Aillustrates each processor subunit associated with two discrete memory banks7210, it is appreciated that each processor subunit may be associated with any number of discrete, dedicated memory banks. And vice versa, each memory bank may be associated with any number of processor subunits. According to embodiments of the present disclosure, the number of discrete memory banks included in the memory array of integrated circuit7200may be equal to, less than, or greater than the number of processor subunits included in the processing array of integrated circuit7200. 
Integrated circuit7200can further include a plurality of first buses7260, consistent with embodiments of the present disclosure (and as described in the sections above). Each bus7260can connect a processor subunit7220_kto a corresponding, dedicated memory bank7210_j. According to some embodiments of the present disclosure, integrated circuit7200can further include a plurality of second buses7261. Each bus7261can connect a processor subunit7220_kto another processor subunit7220_k+1. As shown inFIG.72A, a plurality of processor subunits7220_1to7220_K can be connected to one another via bus7261. WhileFIG.72Aillustrates a plurality of processor subunits7220_1to7220_K forming a loop as they are serially connected via bus7261, it is appreciated that processor units7220can be connected in any other manner. For example, in some cases a particular processor subunit may not be connected to other processor subunits via bus7261. In other cases, a particular processor subunit may be connected to only one other processor subunit, and in still other cases, a particular processor subunit may be connected to two or more other processor subunits via one or more buses7261(e.g., forming series connection(s), parallel connection(s), branched connection(s), etc.). It should be noted that the embodiments of integrated circuit7200described herein are exemplary only. In some cases, integrated circuit7200may have different internal components and connections, and in other cases one or more of the internal components and described connections may be omitted (e.g., depending on the needs of a particular application).
Referring back toFIG.72A, integrated circuit7200may include one or more structures for implementing at least one security measure relative to integrated circuit7200. In some cases, such structures may be configured to detect cyber-attacks that manipulate or obscure (or attempt to manipulate or obscure) data stored in one or more of the memory banks. In other cases, such structures may be configured to detect tampering with operational parameters associated with integrated circuit7200or tampering with one or more hardware elements (whether included within integrated circuit7200or outside of integrated circuit7200) that directly or indirectly affect one or more operations associated with integrated circuit7200. In some cases, a controller7240can be included in integrated circuit7200. Controller7240may be connected, for example, to one or more of the processor subunits7220_1. . .7220_kvia one or more buses7250. Controller7240may also be connected to one or more of the memory banks7210_1. . .7210_Jn. While the example embodiment ofFIG.72Ashows one controller7240, it should be understood that controller7240may include multiple processor elements and/or logic circuits. In the disclosed embodiments, controller7240may be configured to implement the at least one security measure relative to at least one operation of the integrated circuit7200. Further, in the disclosed embodiments, controller7240may be configured to take (or cause) one or more remedial actions if the at least one security measure is triggered. According to some embodiments of the present disclosure, the at least one security measure can include a controller implemented process for locking access to certain aspects of integrated circuit7200. The locking of access involves having the controller prevent access (for read and/or write) to certain regions of the memory from outside of the chip. 
The access control may be applied at an address resolution, a partial memory bank resolution, a memory bank resolution, and the like. In some cases, one or more physical locations in memory associated with integrated circuit7200(e.g., one or more memory banks or any portion of one or more of the memory banks of integrated circuit7200) may be locked. In some embodiments, controller7240may lock access to certain portions of integrated circuit7200associated with execution of an artificial intelligence model (or other type of software-based system). For example, in some embodiments, controller7240may lock access to weights of a neural network model stored in memory associated with integrated circuit7200. It is noted that a software program (i.e., model) may include three components, including: input data to the program, code data of the program, and output data from executing the program. Such components may also be applicable to a neural network model. During operation of such a model, input data may be generated and fed to the model, and executing the model may generate output data for reading. The program code and data values (e.g., predetermined model weights, etc.) associated with executing the model using the received input data, however, may remain fixed. Locking, as described herein, may refer to an operation of a controller, for example, not allowing a read or write operation relative to certain regions of memory initiated from outside of a chip/integrated circuit. The controller, through which the I/O of the chip/integrated circuit may pass, may lock not just full memory banks, but may also lock any range of memory addresses within the memory banks, from a single memory address to a range of addresses including all of the addresses of the available memory banks (or any range of addresses in between). Because memory locations associated with receiving the input data and storing the output data are associated with changing values and interaction with components outside of integrated circuit7200(e.g., components that supply the input data or receive the output data), locking access to those memory locations may be impractical in some cases. On the other hand, restricting access to memory locations associated with the model code and fixed data values may be effective against certain types of cyber-attack. Thus, in some embodiments, memory associated with program code and data values (e.g., memory not used for writing/receiving input data and for reading/providing output data) can be locked as a security measure. The restricted access may include locking certain memory locations such that no changes can be made to certain program code and/or data values (e.g., those associated with executing a model based on received input data). Additionally, memory areas associated with intermediate data (e.g., data generated during execution of the model) may also be locked against external access. Thus, while various computational logic, whether onboard integrated circuit7200or located outside of integrated circuit7200, may provide data to or receive data from memory locations associated with receiving input data or retrieving generated output data, such computational logic will not have the ability to access or modify memory locations storing the program code and data values associated with program execution based on received input data. 
In addition to locking memory locations on integrated circuit7200to provide a security measure, other security measures may be implemented by restricting access to certain computational logic elements (and to the memory regions they access) configured to execute code associated with a particular program or model. In some cases, such access restriction may be accomplished relative to computational logic (and their associated memory regions) located on integrated circuit7200(e.g., a computational memory (e.g., a memory including computational abilities, such as the distributed processor on a memory chip disclosed herein), etc.). Access to computational logic (and associated memory locations) associated with any execution of code stored in a locked memory portion of integrated circuit7200or with any access to data values stored in a locked memory portion of integrated circuit7200can also be locked/restricted regardless of whether that computational logic is located onboard integrated circuit7200. Restricting access to the computational logic responsible for executing a program/model may further ensure that the code and data values associated with operating on received input data remain protected from manipulation, being obscured, etc. Controller-implemented security measures including locking or restricting access to hardware-based regions associated with certain portions of the memory array of integrated circuit7200may be accomplished in any suitable manner. In some embodiments, such locking may be implemented by adding or supplying a command to controller7240configured to cause controller7240to lock certain memory portions. In some embodiments, the hardware-based memory portions to be locked can be designated by particular memory addresses (e.g., addresses associated with any memory elements of memory banks7210_1. . .7210_Jn, etc.). In some embodiments, a locked region of memory may remain fixed during program or model execution. In other cases, the locked region may be configurable. That is, in some cases, controller7240may be supplied with commands such that during execution of a program or model, a locked region may change. For example, at particular times, certain memory locations may be added to the locked region of memory. Or, at particular times, certain memory locations (e.g., previously locked memory locations) may be excluded from the locked region of memory. Locking of certain memory locations may be accomplished in any suitable manner. In some cases, a record of locked memory locations (e.g., a file, database, data structure, etc. that stores and identifies locked memory addresses) may be accessible by controller7240such that controller7240may determine whether a certain memory request relates to a locked memory location. In some cases, controller7240maintains a database of locked addresses to use in controlling access to certain memory locations. In other cases, the controller may have a table or a set of one or more registers that are configurable until locking and may include fixed, predetermined values identifying memory locations to lock (e.g., to which memory access from outside the chip should be restricted). For example, when a memory access is requested, controller7240may compare the memory address associated with the memory access request to the locked memory addresses. 
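By way of a non-limiting illustration, the following C sketch shows one possible form of such a comparison: a record of locked address ranges is kept (here as a simple array), and an externally initiated access is denied when its address falls inside a locked range. The range values and names are hypothetical and are used only for this example.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical record of locked address ranges kept by the controller; the
 * disclosure also contemplates registers, tables, or a database serving the
 * same purpose. */
typedef struct { uint32_t start; uint32_t end; } locked_range_t;

static const locked_range_t locked[] = {
    { 0x00010000u, 0x0001FFFFu },   /* e.g., program code of the model        */
    { 0x00030000u, 0x00033FFFu },   /* e.g., fixed weights / unchangeable data */
};
static const uint32_t num_locked = sizeof(locked) / sizeof(locked[0]);

/* Return true if an externally initiated read/write request may proceed. */
static bool access_allowed(uint32_t address)
{
    for (uint32_t i = 0; i < num_locked; ++i)
        if (address >= locked[i].start && address <= locked[i].end)
            return false;   /* request targets a locked region: deny */
    return true;
}

int main(void)
{
    printf("0x00031000 -> %s\n", access_allowed(0x00031000u) ? "allowed" : "denied");
    printf("0x00050000 -> %s\n", access_allowed(0x00050000u) ? "allowed" : "denied");
    return 0;
}
```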
If the memory address associated with the memory access request is determined to be within a list of locked memory addresses, then the memory access request (e.g., whether a read or write operation) can be denied. As discussed above, the at least one security measure can include locking access to certain memory portions of memory array7210that are not used for receiving input data or for providing access to generated output data. In some cases, the memory portions within a locked region may be adjusted. For example, locked memory portions may be unlocked, and non-locked memory portions may be locked. Any suitable method may be used for unlocking a locked memory portion. For example, an implemented security measure can include requiring a passphrase for unlocking one or more portion of a locked memory region. Triggering of an implemented security measure may occur upon detection of any action counter to the implemented security measure. For example, an attempted access (whether a read or a write request) to a locked memory portion may trigger a security measure. Additionally, if an entered passphrase (e.g., seeking to unlock a locked memory portion) does not match a predetermined passphrase, the security measure can be triggered. In some cases, a security measure can be triggered if a correct passphrase is not provided within an allowable threshold number of passphrase entry attempts (e.g., 1, 2, 3, etc.). Memory portions may be locked at any suitable times. For example, in some cases, memory portions can be locked at various times during a program execution. In other cases, memory portions may be locked upon startup or prior to program/model execution. For example, memory addresses to be locked can be determined and identified along with programming of a program/model code or upon generation and storage of data to be accessed by a program/model. Thereby, vulnerability to attacks on memory array7210may be reduced or eliminated during times when or after a program/model execution begins, after data to be used by a program/model has been generated and stored, etc. Unlocking of locked memory may be accomplished by any suitable method or at any suitable times. As described above, a locked memory portion may be unlocked after receipt of a correct passphrase or passcode, etc. In other cases, locked memory may be unlocked by restarting (by a command or by powering off and on) or deleting the entire memory array7210. Additionally or alternatively, a release command sequence can be implemented to unlock one or more memory portions. According to embodiments of the present disclosure, and as described above, controller7240can be configured to control traffic both to and from the integrated circuit7200, especially from sources external to the integrated circuit7200. For example, as shown inFIG.72A, traffic between components external to the integrated circuit7200and components internal to integrated circuit7200(e.g., memory array7210or processor subunit7220) can be controlled by controller7240. Such traffic can pass through controller7240or through one or more buses (e.g.,7250,7260, or7261) that are controlled or monitored by controller7240. According to some embodiments of the present disclosure, integrated circuit7200can receive unchangeable data (e.g., fixed data; e.g., model weights, coefficients, etc.), and certain commands (e.g., code; e.g., identifying memory portions to be locked) during a boot process. 
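As a non-limiting illustration of the boot-time flow described above, the following C sketch models unchangeable data being loaded into a memory region during boot and the region then being locked so that later externally initiated writes are refused; all function and variable names are hypothetical placeholders for this example.

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define REGION_SIZE 64

/* Simulated memory region that will hold unchangeable data (e.g., weights). */
static uint32_t unchangeable_region[REGION_SIZE];
static bool region_locked = false;

/* Hypothetical boot-time upload: copy fixed data into the region, then lock
 * it so that later externally initiated writes are refused. */
static void boot_load_and_lock(const uint32_t *data, uint32_t count)
{
    if (count > REGION_SIZE)
        count = REGION_SIZE;
    memcpy(unchangeable_region, data, count * sizeof(uint32_t));
    region_locked = true;                  /* lock once the boot upload is done */
}

/* External write request handler: denied once the region is locked. */
static bool external_write(uint32_t offset, uint32_t value)
{
    if (region_locked || offset >= REGION_SIZE)
        return false;                      /* locked or invalid: refuse the write */
    unchangeable_region[offset] = value;
    return true;
}

int main(void)
{
    uint32_t weights[4] = { 11, 22, 33, 44 };
    boot_load_and_lock(weights, 4);
    return external_write(0, 99) ? 1 : 0;  /* 0 indicates the write was refused */
}
```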
Here, the unchangeable data may refer to data that is to remain fixed during execution of a program or model and which may remain unchanged until a subsequent boot process. During program execution, integrated circuit7200may interact with changeable data, which may include input data to be processed and/or output data generated by processing associated with integrated circuit7200. As discussed above, access to memory array7210or processing array7220can be restricted during program or model execution. For example, access can be limited to certain portions of memory array7210or to certain processor subunits associated with processing or interaction with incoming input data to be written or with processing or interaction with generated output data to be read. During program or model execution, memory portions containing unchangeable data may be locked and, thereby, made inaccessible. In some embodiments, unchangeable data and/or commands associated with memory portions to be locked may be included in any appropriate data structure. For example, such data and/or commands may be made available to controller7240via one or more configuration files accessible during or after a bootup sequence.
Referring back toFIG.72A, integrated circuit7200can further include a communication port7230. As shown inFIG.72A, controller7240can be coupled between communication port7230and bus7250that is shared between processing subunits7220_1to7220_K. In some embodiments, communication port7230can be indirectly or directly coupled to a host computer7270associated with host memory7280that may include, e.g., a non-volatile memory. In some embodiments, host computer7270can retrieve changeable data7281(e.g., input data to be used during execution of a program or model), unchangeable data7282, and/or commands7283from its associated host memory7280. The changeable data7281, unchangeable data7282, and commands7283can be uploaded from host computer7270to controller7240via communication port7230during a boot process.
FIG.72Bis a diagrammatic representation of a memory region inside an integrated circuit, consistent with embodiments of the present disclosure. As shown,FIG.72Bdepicts examples of data structures included in host memory7280. Reference is now made toFIG.73A, which illustrates another example of an integrated circuit, consistent with embodiments of the present disclosure. As shown inFIG.73A, a controller7240may include a cyberattack detector7241and a response module7242. In some embodiments of the present disclosure, controller7240can be configured to store or have access to access control rules7243. According to some embodiments of the present disclosure, access control rules7243can be included in a configuration file accessible to controller7240. In some embodiments, access control rules7243can be uploaded to controller7240during a booting process. Access control rules7243can comprise information indicating access rules associated with any of changeable data7281, unchangeable data7282, and commands7283and their corresponding memory locations. As explained above, access control rules7243or configuration file can include information identifying certain memory addresses among memory array7210. In some embodiments, controller7240can be configured to provide a locking mechanism and/or function that locks various addresses of memory array7210, e.g., addresses for storing commands or unchangeable data. Controller7240can be configured to enforce access control rules7243, e.g., to prevent unauthorized entities from changing unchangeable data or commands. 
In some embodiments, reading of unchangeable data or commands may be forbidden according to access control rules7243. According to some embodiments of the present disclosure, controller7240can be configured to determine whether an access attempt is made to at least a portion of certain commands or unchangeable data. Controller7240(e.g., including cyberattack detector7241) can compare memory addresses associated with an access request to the memory addresses for unchangeable data and commands to detect whether an unauthorized access attempt has been made to one or more locked memory locations. In this way, for example, cyberattack detector7241of controller7240may be configured to determine whether a suspected cyber attack occurs, e.g., a request to alter one or more commands or to change or obscure unchangeable data associated with one or more locked memory portions. Response module7242can be configured to determine how to respond and/or to implement a response to a detected cyberattack. For example, in some cases, in response to a detected attack on data or commands in one or more locked memory locations, response module7242of controller7240may implement or cause to be implemented a response that may include, for example, halting of one or more operations, such as a memory access operation associated with the detected attack. A response to a detected attack may also include halting one or more operations associated with execution of a program or model, raising of a warning or other indicator of an attempted attack, asserting an indication line to the host or deleting the entire memory, etc.
In addition to locking memory portions, other techniques for protecting against cyber attack may also be implemented to provide the described security measures associated with integrated circuit7200. For example, in some embodiments, controller7240may be configured to duplicate a program or model within different memory locations and processor subunits associated with integrated circuit7200. In this way, the program/model and the duplicate of the program/model may be independently executed, and the results of the independent program/model executions can be compared. For example, a program/model may be duplicated in two different memory banks7210and executed in different processor subunits7220in the integrated circuit7200. In other embodiments, a program/model can be duplicated in two different integrated circuits7200. In either case, the results of the program/model execution can be compared to determine whether any differences exist between the duplicate program/model executions. A detected difference in execution results (e.g., intermediate execution results, final execution results, etc.) may indicate the presence of a cyberattack having altered one or more aspects of a program/model or its associated data. In some embodiments, different memory banks7210and processor subunits7220can be assigned to execute two duplicated models based on the same input data. In some embodiments, intermediate results can be compared during execution of the two duplicated models based on the same input data and, if there is a mismatch between two intermediate results at the same stage, execution can be suspended as a potential remedial action. In the case where processor subunits of the same integrated circuit execute the two duplicated models, that integrated circuit may also compare the results. 
This can be done without informing any entities outside the integrated circuit about the execution of the two duplicated models; in other words, entities outside of the chip are unaware that duplicate models are running in parallel on the integrated circuit.
FIG.73Bis a diagrammatic representation of a configuration for executing duplicate models simultaneously, consistent with embodiments of the present disclosure. While a single program/model duplication is described as one example for detection of a possible cyber attack, any number of duplications (e.g., 1, 2, 3, or more) may be used to detect possible cyber attacks. As the number of duplications and independent program/model executions increases, a confidence level in detection of a cyber attack may also increase. A larger number of duplications can also decrease a potential success rate of a cyber attack, as it may be more difficult for an attacker to impact multiple program/model duplicates. The number of program or model duplicates may be determined at runtime to further increase the difficulty of a cyber attacker in successfully impacting a program or model execution. In some embodiments, duplicated models can be non-identical, differing from each other in one or more aspects. In this example, code associated with two programs/models may be made different from one another, yet the programs/models may be designed such that both return the same output results. In at least this way, the two programs/models may be considered duplicates of one another. For example, two neural network models may have a different ordering of neurons in a layer relative to one another. Yet, despite this change in the model code, both may return the same output results. Duplicating programs/models in this manner may make it more difficult for a cyber attacker to identify these effective duplicates of programs or models to compromise, and as a result, the duplicate models/programs may provide not only a way to provide redundancy to minimize cyber attack impact, but can also enhance cyber attack detection (e.g., by highlighting tampering or unauthorized access where a cyber attacker alters one program/model or its data, but fails to make corresponding changes to the program/model duplicates). In many cases, duplicate programs/models (especially including duplicate programs/models that exhibit code differences) may be designed such that their outputs do not exactly match, but rather constitute soft values (e.g., approximately the same output values) as opposed to exact, fixed values. In such embodiments, the output results from two or more effective program/model duplicates can be compared (e.g., using a dedicated module or with a host processor) to determine whether a difference between their output results (whether intermediate or final results) falls within a predetermined range. Differences in the outputted soft values that do not exceed a predetermined threshold or range may be considered as evidence of no tampering, unauthorized access, etc. On the other hand, if differences in the outputted soft values exceed the predetermined threshold or range, such differences may be considered as evidence that a cyber attack in the form of tampering, unauthorized access to memory, etc. has occurred. 
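For illustration only, the C sketch below shows one possible comparison of "soft" outputs from two independently executed duplicates of a model: differences within an assumed tolerance are treated as benign, while larger differences are treated as evidence of possible tampering. The tolerance value and names are assumptions made for this example.

```c
#include <stdbool.h>
#include <math.h>
#include <stdio.h>

#define OUTPUT_LEN 4
#define TOLERANCE  1e-3   /* assumed allowable difference between soft outputs */

/* Compare the outputs of two independently executed duplicates of a model. */
static bool outputs_consistent(const double *a, const double *b, int len)
{
    for (int i = 0; i < len; ++i)
        if (fabs(a[i] - b[i]) > TOLERANCE)
            return false;   /* difference exceeds the allowed range */
    return true;
}

int main(void)
{
    double run_a[OUTPUT_LEN] = { 0.1200, 0.8800, 0.3300, 0.4400 };
    double run_b[OUTPUT_LEN] = { 0.1201, 0.8799, 0.3301, 0.4400 };

    if (!outputs_consistent(run_a, run_b, OUTPUT_LEN))
        printf("mismatch: suspend execution as a potential remedial action\n");
    else
        printf("duplicates agree within tolerance\n");
    return 0;
}
```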
In such cases, the duplicate program/model security measure would be triggered and one or more remedial actions may be taken (e.g., halting execution of a program or model, shutting down one or more operations of integrated circuit7200, operating in a safe mode with limited functionality, among many others). Security measures associated with integrated circuit7200may also involve quantitative analysis of data associated with an executing or executed program or model. For example, in some embodiments, controller7240may be configured to calculate one or more checksum/hash/CRC (cyclic redundancy check)/parity values with respect to data stored in at least a portion of memory array7210. The calculated value(s) can be compared to a predetermined value or predetermined values. If there is a discrepancy between compared values, such a discrepancy may be interpreted as evidence of tampering with the data stored in the at least a portion of memory array7210. In some embodiments, a checksum/hash/CRC/parity value can be calculated for all memory locations associated with memory array7210to identify changes in data. In this example, the entire memory (or memory bank) in question can be read by, e.g., host computer7270or a processor associated with integrated circuit7200, for calculating a checksum/hash/CRC/parity value. In other cases, a checksum/hash/CRC/parity value can be calculated for a predetermined subset of memory locations associated with memory array7210to identify changes in data associated with the subset of memory locations. In some embodiments, controller7240can be configured to calculate checksum/hash/CRC/parity values associated with a predetermined data path (e.g., associated with a memory access pattern), and the calculated values may be compared to one another or to predetermined values to determine whether tampering or another form of cyber attack has occurred. Integrated circuit7200may be made even more secure against a cyber attack by safeguarding one or more predetermined values (e.g., expected checksum/hash/CRC/parity value, expected difference values in intermediate or final output results, expected difference ranges associated with certain values, etc.) within integrated circuit7200or in a location accessible to integrated circuit7200. For example, in some embodiments, one or more predetermined values can be stored in a register of memory array7210and can be used (e.g., by controller7240of integrated circuit7200) to evaluate intermediate or final output results, checksums, etc., during or after each run of the model. In some cases, the register values may be updated using a "save the last result data" command to calculate predetermined values on the fly, and the calculated value can be saved in the register or in another memory location. In this way, valid output values may be used to update the predetermined values used for comparison after each program or model execution or partial execution. Such a technique may increase the difficulty a cyber attacker may experience in attempting to modify or otherwise tamper with one or more predetermined reference values designed to expose cyber attacker activities. In operation, a CRC calculator may be used to track memory accesses. For example, such a calculation circuit can be disposed at the memory bank level, in the processor subunits, or at the controller, where each may be configured to accumulate a CRC value upon each memory access made. 
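The disclosure refers to checksum/hash/CRC/parity values generically; purely as one concrete, non-limiting example, the following C sketch computes a standard CRC-32 (reflected form, polynomial 0xEDB88320) over a protected memory region and compares it against a previously computed reference value.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Simple CRC-32 (reflected, polynomial 0xEDB88320) over a memory region. */
static uint32_t crc32_region(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; ++i) {
        crc ^= data[i];
        for (int b = 0; b < 8; ++b)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t protected_region[16] = { 1, 2, 3, 4, 5, 6, 7, 8 };

    /* Reference value computed while the data is known to be valid. */
    uint32_t expected = crc32_region(protected_region, sizeof(protected_region));

    /* Later, e.g. during or after each run of the model, recompute and compare. */
    uint32_t actual = crc32_region(protected_region, sizeof(protected_region));
    if (actual != expected)
        printf("discrepancy: possible tampering with the protected region\n");
    return 0;
}
```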
Reference is now made toFIG.74Aproviding a diagrammatic representation of another embodiment of integrated circuit7200. In the example embodiment represented byFIG.74A, controller7240may include a tamper detector7245and a response module7246. Similar to other disclosed embodiments, tamper detector7245can be configured to detect evidence of a potential tampering attempt. According to some embodiments of the present disclosure, a security measure associated with integrated circuit7200and implemented by controller7240, for example, can include a comparison of actual program/model operational patterns to predetermined/allowed operational patterns. The security measure may be triggered if the actual program/model operational patterns are different in one or more aspects from the predetermined/allowed operational patterns. And if the security measure is triggered, a response module7246of controller7240can be configured to implement one or more remedial measures in response.
FIG.74Cis a diagrammatic representation of detection elements that may be located at various points within a chip, according to exemplary disclosed embodiments. Detection of cyber attacks and tampering, as described above, may be performed using detection elements located at various points within a chip, as shown, for example, inFIG.74C. For example, certain code may be associated with an expected number of processing events within a certain time period. The detector shown inFIG.74Ccan count the number of events (monitored by the event counter) that the system experiences during a certain time period (monitored by the time counter). If the number of events exceeds a certain predetermined threshold (e.g., a number of expected events during a predefined time period), then tampering may be indicated. Such detectors may be included at multiple points of the system to monitor various types of events, as shown inFIG.74C. More specifically, in some embodiments, controller7240can be configured to store or have access to expected program/model operational patterns7244. For example, in some cases, an operational pattern may be represented as a graph7247indicating allowed load per time patterns and forbidden or illegal load per time patterns. Tampering attempts may cause memory array7210or processing array7220to operate outside of certain operational specifications. This may cause memory array7210or processing array7220to generate heat or to malfunction and may enable changes in data or code related to the memory array7210or processing array7220. Such changes may result in operational patterns outside of the allowed operational patterns, as indicated by graph7247. According to some embodiments of the present disclosure, controller7240can be configured to monitor an operational pattern associated with memory array7210or processing array7220. The operational pattern may be associated with the number of access requests, types of access requests, timing of access requests, etc. Controller7240can be further configured to detect tampering attacks if the operation patterns are different from allowable operation patterns. It should be noted that the disclosed embodiments may be used not only to protect against cyber attacks, but also to protect against non-malicious errors in operation. 
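As a non-limiting illustration of the event-counter/time-counter arrangement described above with respect toFIG.74C, the following C sketch flags possible tampering when more events than expected occur within a monitored time window; the window length, threshold, and names are assumptions for this example.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical detector: count events within a time window and flag
 * tampering when the count exceeds an expected threshold. */
typedef struct {
    uint32_t window_ticks;   /* length of the monitored time window       */
    uint32_t max_events;     /* expected upper bound of events per window */
    uint32_t tick_count;     /* time counter                              */
    uint32_t event_count;    /* event counter                             */
    bool     tamper_flag;
} event_detector_t;

static void detector_tick(event_detector_t *d)
{
    if (++d->tick_count >= d->window_ticks) {   /* window elapsed: reset counters */
        d->tick_count = 0;
        d->event_count = 0;
    }
}

static void detector_event(event_detector_t *d)
{
    if (++d->event_count > d->max_events)
        d->tamper_flag = true;   /* more events than expected in the window */
}

int main(void)
{
    event_detector_t d = { 100, 10, 0, 0, false };

    for (int i = 0; i < 12; ++i)   /* simulated burst of monitored events */
        detector_event(&d);
    detector_tick(&d);
    return d.tamper_flag ? 1 : 0;  /* 1 indicates tampering was flagged */
}
```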
For example, the disclosed embodiments may also be effective for protecting a system, such as integrated circuit7200, against errors resulting from environmental factors such as temperature or voltage changes or levels, especially where such levels are outside of operational specifications for integrated circuit7200. In response to detection of a suspected cyber attack (e.g., as a response to a triggered security measure), any suitable remedial actions may be implemented. For example, remedial actions may include halting one or more operations associated with program/model execution, operating one or more components associated with integrated circuit7200in a safe mode, locking one or more components of integrated circuit7200to additional inputs or access, etc.
FIG.74Bprovides a flowchart representation of a method7450of securing an integrated circuit against tampering, according to exemplary disclosed embodiments. For example, step7452may include implementing, using a controller associated with the integrated circuit, at least one security measure with respect to an operation of the integrated circuit. At step7454, one or more remedial actions may be taken if the at least one security measure is triggered. The integrated circuit includes: a substrate; a memory array disposed on the substrate, the memory array including a plurality of discrete memory banks; and a processing array disposed on the substrate, the processing array including a plurality of processor subunits, each one of the plurality of processor subunits being associated with one or more discrete memory banks among the plurality of discrete memory banks. In some embodiments, the disclosed security measures may be implemented in multiple memory chips, and at least one or more of the disclosed security mechanisms can be implemented for each memory chip/integrated circuit. In some cases, each memory chip/integrated circuit may implement the same security measure, but in some cases, different memory chips/integrated circuits may implement different security measures (e.g., when different security measures may be more suitable to a certain type of operation associated with a particular integrated circuit). In some embodiments, more than one security measure may be implemented by a particular controller of an integrated circuit. For example, a particular integrated circuit may implement any number or types of the disclosed security measures. Additionally, a particular integrated circuit controller may be configured to implement multiple different remedial measures in response to a triggered security measure. It should also be noted that two or more of the above described security mechanisms may be combined to improve security against cyber-attacks or tamper attacks. Additionally, security measures may be implemented across different integrated circuits and such integrated circuits may coordinate security measure implementation. For example, model duplication can be performed within one memory chip or may be performed across different memory chips. In such an example, results from one memory chip or results from two or more memory chips can be compared to detect potential cyber-attacks or tamper attacks. In some embodiments, the duplicated security measures applied across multiple integrated circuits may include one or more of the disclosed access locking mechanisms, hash protection mechanisms, model duplication, program/model execution pattern analysis, or any combination of these or other disclosed embodiments. 
Multi-Port Processor Sub-Units in a DRAM
As described above, the presently disclosed embodiments may include a distributed processor memory chip that includes an array of processor subunits and an array of memory banks where each of the processor subunits may be dedicated to at least one of the array of memory banks. As discussed in the section below, the distributed processor memory chip may serve as the basis for a scalable system. That is, in some cases, the distributed processor memory chip may include one or more communication ports configured to transfer data from one distributed processor memory chip to another. In this way, any desired number of distributed processor memory chips may be linked together (e.g., in series, in parallel, in a loop, or any combination thereof) to form a scalable array of distributed processor memory chips. Such an array may provide a flexible solution for efficiently performing memory intensive operations and for scaling the computing resources associated with performance of the memory intensive operations. Because the distributed processor memory chips may include clocks having differing timing patterns, the presently disclosed embodiments include features to accurately control data transfers between distributed processor memory chips even in the presence of clock timing differences. Such embodiments may enable efficient data sharing among different distributed processor memory chips.
FIG.75Ais a diagrammatic representation of a scalable processor memory system including a plurality of distributed processor memory chips, consistent with embodiments of the present disclosure. According to embodiments of the present disclosure, a scalable processor memory system can include a plurality of distributed processor memory chips such as a first distributed processor memory chip7500, a second distributed processor memory chip7500′, and a third distributed processor memory chip7500″. Each of the first distributed processor memory chip7500, second distributed processor memory chip7500′, and third distributed processor memory chip7500″ can include any of the configurations and/or features associated with any of the distributed processor embodiments described in the present disclosure. In some embodiments, each of first distributed processor memory chip7500, second distributed processor memory chip7500′, and third distributed processor memory chip7500″ can be implemented similar to the integrated circuit7200shown inFIG.72A. As shown inFIG.75A, first distributed processor memory chip7500can comprise a memory array7510, a processing array7520, and a controller7540. Memory array7510, processing array7520, and controller7540can be configured similarly to memory array7210, processing array7220, and controller7240inFIG.72A. According to embodiments of the present disclosure, first distributed processor memory chip7500may include a first communication port7530. In some embodiments, first communication port7530can be configured to communicate with one or more external entities. For example, communication port7530may be configured to establish a communication connection between distributed processor memory chip7500and an external entity other than another distributed processor memory chip, such as distributed processor memory chips7500′ and7500″. For example, communication port7530can be indirectly or directly coupled to a host computer, e.g., as illustrated inFIG.72A, or any other computing device, communications module, etc. 
According to embodiments of the present disclosure, first distributed processor memory chip7500can further comprise one or more additional communication ports configured to communicate with other distributed processor memory chips, e.g.,7500′ or7500″. In some embodiments, one or more additional communication ports can include a second communication port7531and a third communication port7532, as shown inFIG.75A. Second communication port7531can be configured to communicate with second distributed processor memory chip7500′ and to establish a communication connection between first distributed processor memory chip7500and second distributed processor memory chip7500′. Similarly, third communication port7532can be configured to communicate with third distributed processor memory chip7500″ and to establish a communication connection between first distributed processor memory chip7500and third distributed processor memory chip7500″. In some embodiments, first distributed processor memory chip7500(and any of the memory chips disclosed herein) may include a plurality of communication ports, including any appropriate number of communication ports (e.g., 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 50, 100, 1000, etc.). In some embodiments, the first communication port, the second communication port, and the third communication port are associated with a corresponding bus. The corresponding bus may be a bus common to each of the first communication port, the second communication port, and the third communication port. In some embodiments, the corresponding buses associated with each of the first communication port, the second communication port, and the third communication port are all connected to a plurality of discrete memory banks. In some embodiments, the first communication port is connected to at least one of a main bus internal to the memory chip or at least one processor subunit included in the memory chip. In some embodiments, the second communication port is connected to at least one of a main bus internal to the memory chip or at least one processor subunit included in the memory chip. While configurations of the disclosed distributed processor memory chips are explained relative to first distributed processor memory chip7500, it is noted that second processor memory chip7500′ and third processor memory chip7500″ may be configured similarly to first distributed processor memory chip7500. For example, second distributed processor memory chip7500′ may also comprise a memory array7510′, a processing array7520′, a controller7540′, and/or a plurality of communication ports such as ports7530′,7531′, and7532′. Similarly, third distributed processor memory chip7500″ may comprise a memory array7510″, a processing array7520″, a controller7540″, and/or a plurality of communication ports such as ports7530″,7531″, and7532″. In some embodiments, second communication port7531′ and third communication port7532′ of second distributed processor memory chip7500′ may be configured to communicate with third distributed processor memory chip7500″ and first distributed processor memory chip7500, respectively. Similarly, the second communication port7531″ and third communication port7532″ of third distributed processor memory chip7500″ may be configured to communicate with first distributed processor memory chip7500and second distributed processor memory chip7500′, respectively. 
This similarity in configurations among the distributed processor memory chips may facilitate scaling of a computational system based on the disclosed distributed processor memory chips. Further, the disclosed arrangement and configuration of the communication ports associated with each distributed processor memory chip may enable a flexible arrangement of an array of distributed processor memory chips (e.g., including connections in series, in parallel, or in looped, starred, or webbed arrangements, etc.). According to embodiments of the present disclosure, distributed processor memory chips, e.g., first to third distributed processor memory chips7500,7500′, and7500″, may communicate with each other via bus7533. In some embodiments, bus7533can connect two communication ports of two different distributed processor memory chips. For example, second communication port7531of first processor memory chip7500may be connected to third communication port7532′ of second processor memory chip7500′ via bus7533. According to embodiments of the present disclosure, distributed processor memory chips, e.g., first to third distributed processor memory chips7500,7500′, and7500″, may also communicate with external entities (e.g., a host computer) via a bus, such as bus7534. For example, first communication port7530of first distributed processor memory chip7500can be connected to one or more external entities via bus7534. The distributed processor memory chips may be connected to each other in various ways. In some cases, the distributed processor memory chips may exhibit serial connectivity in which each distributed processor memory chip is connected to a pair of adjacent distributed processor memory chips. In other cases, the distributed processor memory chips may exhibit a higher degree of connectivity, where at least one distributed processor memory chip is connected to two or more other distributed processor memory chips. In some cases, all distributed processor memory chips within a plurality of memory chips may be connected to all other distributed processor memory chips in the plurality. As shown inFIG.75A, bus7533(or any other bus associated with the embodiment ofFIG.75A) may be unidirectional. WhileFIG.75Aillustrates bus7533as unidirectional and having a certain data transfer flow (as indicated by the arrows shown inFIG.75A), bus7533(or any other bus inFIG.75A) may be implemented as a bidirectional bus. According to some embodiments of the present disclosure, a bus connected between two distributed processor memory chips may be configured to have a higher communication speed than that of a bus connected between a distributed processor memory chip and an external entity. In some embodiments, communication between distributed processor memory chips and an external entity may occur during limited times, e.g., during an execution preparation (loading a program code, input data, weight data, etc. from a host computer), or during a period of outputting results (e.g., results generated by execution of a neural network model) to a host computer. During execution of one or more programs associated with the distributed processors of chips7500,7500′, and7500″ (e.g., during memory intensive operations associated with artificial intelligence applications, etc.), communication between distributed processor memory chips may occur over bus7533,7533′, etc. In some embodiments, communication between a distributed processor memory chip and an external entity may occur less frequently than communication between two processor memory chips. 
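To make the port-and-bus arrangement easier to follow, the short Python sketch below models a small array of chips whose chip-to-chip ports are joined by buses and chained into a loop, one of the serial/looped arrangements mentioned above. It is an illustrative software model only; the class names, port labels, and the three-chip loop are assumptions made for the example.

```python
# Illustrative sketch only: modeling an array of distributed processor memory
# chips whose chip-to-chip ports are joined by (unidirectional) buses, as in
# the series/loop arrangements discussed above. Names are assumptions.

from dataclasses import dataclass, field

@dataclass
class Chip:
    name: str
    host_port: str = "host_port"                      # port toward a host computer
    chip_ports: dict = field(default_factory=dict)     # port label -> (peer chip, peer port)

def connect(src: Chip, src_port: str, dst: Chip, dst_port: str) -> None:
    """Join one chip port to another chip's port with a bus."""
    src.chip_ports[src_port] = (dst.name, dst_port)

chips = [Chip(f"chip_{i}") for i in range(3)]

# Chain the chips into a loop: each "second" port feeds the next chip's "third" port.
for i, chip in enumerate(chips):
    nxt = chips[(i + 1) % len(chips)]
    connect(chip, "second_port", nxt, "third_port")

for chip in chips:
    print(chip.name, "->", chip.chip_ports)
```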
According to communication requirements and embodiments, a bus between a distributed processor memory chip and an external entity may be configured to have a communication speed equal to, greater than, or less than that of a bus between distributed processor memory chips. In some embodiments, as represented byFIG.75A, a plurality of distributed processor memory chips such as first to third distributed processor memory chips7500,7500′, and7500″ may be configured to communicate with one another. As noted, this capability may facilitate assembly of a scalable distributed processor memory chip system. For example, memory arrays7510,7510′, and7510″ and processing arrays7520,7520′, and7520″ from first to third processor memory chips7500,7500′, and7500″, when linked by communication channels (such as the busses shown inFIG.75A, for example), may be considered as virtually belonging to a single distributed processor memory chip. According to embodiments of the present disclosure, communication between a plurality of distributed processor memory chips and/or communication between a distributed processor memory chip and one or more external entities can be managed in any suitable manner. In some embodiments, such communications may be managed by processing resources such as processing array7520in distributed processor memory chip7500. In some other embodiments, e.g., to relieve the processing resources provided by the arrays of distributed processors from computing loads imposed by communication management, a controller, such as controller7540,7540′,7540″, etc. of a distributed processor memory chip may be configured to manage communications between distributed processor memory chips and/or communications between distributed processor memory chip(s) and one or more external entities. For example, each controller7540,7540′, and7540″ of first to third processor memory chips7500,7500′, and7500″ may be configured to manage communications related to its corresponding distributed processor memory chip relative to other distributed processor memory chips. In some embodiments, controllers7540,7540′, and7540″ may be configured to control such communications through corresponding communication ports, such as ports7531,7531′,7531″,7532,7532′, and7532″, etc. Controllers7540,7540′, and7540″ may also be configured to manage communications between the distributed processor memory chips while taking into account timing differences that may exist among the distributed processor memory chips. For example, a distributed processor memory chip (e.g.,7500) may be fed by an internal clock, which may be different (e.g., have a different timing pattern) relative to clocks of other distributed processor memory chips (e.g.,7500′ and7500″). Therefore, in some embodiments, controller7540may be configured to implement one or more strategies for accounting for different clock timing patterns among distributed processor memory chips and to manage communications between the distributed processor memory chips by considering possible time deviations between distributed processor memory chips. For example, in some embodiments, a controller7540of a first distributed processor memory chip7500may be configured to enable a data transfer from first distributed processor memory chip7500to a second processor memory chip7500′ under certain conditions. In some cases, controller7540may withhold the data transfer if one or more processor subunits of the first distributed processor memory chip7500are not ready to transfer the data. 
Alternatively or additionally, controller7540may withhold the data transfer if a receiving processor subunit of the second distributed processor memory chip7500′ is not ready to receive the data. In some cases, controller7540may initiate the data transfer from the sending processor subunit (in chip7500, for example) to the receiving processor subunit (in chip7500′, for example) after establishing that both the sending processor subunit is ready to send the data, and the receiving processor subunit is ready to receive the data. In other embodiments, controller7540may initiate the data transfer based solely on whether the sending processor subunit is ready to send the data, especially if the data may be buffered in controller7540or7540′, for example, until the receiving processor subunit is ready to receive the transferred data. According to embodiments of the present disclosure, controller7540may be configured to determine whether one or more other timing constraints are fulfilled in order to enable data transfers. Such time constraints may be related to a time difference between a transfer time from a sending processor subunit and a receipt time in a receiving processor subunit, to an access request from an external entity (e.g., host computer) to data that is processed, to a refresh operation performed on memory resources (e.g., memory array) associated with sending or receiving processor subunits, among others. FIG.75Eis an example timing diagram, consistent with embodiments of the present disclosure, and illustrates the following example. In some embodiments, controller7540, and other controllers associated with the distributed processor memory chips, may be configured to manage data transfers between chips using a clock enable signal. For example, a processing array7520may be fed by a clock. In some embodiments, whether or not one or more processor subunits responds to a supplied clock signal may be controlled, e.g., by controller7540, using a clock enable signal (e.g., shown as "to CE" inFIG.75A). Each processor subunit, e.g.,7520_1to7520_K, may execute a program code, and the program code may include communication commands. According to some embodiments of the present disclosure, controller7540can control timing of the communication commands by controlling a clock enable signal to processor subunits7520_1to7520_K. For example, when a sending processor subunit (e.g., in first processor memory chip7500) is programmed to transfer data at a certain cycle (e.g., the 1000th clock cycle) and a receiving processor subunit (e.g., in second processor memory chip7500′) is programmed to receive the data at the certain cycle (e.g., the 1000th clock cycle), controller7540of first processor memory chip7500and controller7540′ of second processor memory chip7500′ may not allow data transfer until both sending and receiving processor subunits are ready to perform the data transfer, according to some embodiments. For example, controller7540may "hold" a data transfer from the sending processor subunit in chip7500by supplying the sending processor subunit with a certain clock enable signal (e.g., a logic low) that may prevent the sending processor subunit from sending the data in response to the received clock signal. The certain clock enable signal may "freeze" the entire distributed processor memory chip or any part of the distributed processor memory chip. 
On the other hand, controller7540may cause the sending processor subunit to initiate the data transfer by supplying the sending processor subunit with an opposite clock enable signal (e.g., a logic high) that causes the sending processor subunit to respond to the received clock signal. A similar operation, e.g., receiving or not receiving data by a receiving processor subunit in chip7500′, may be controlled using a clock enable signal issued by controller7540′. In some embodiments, clock enable signals can be sent to all processor subunits (e.g.,7520_1to7520_K) in a processor memory chip (e.g.,7500). The clock enable signals, in general, may have the effect of causing the processor subunits to either respond to their respective clock signals or to disregard those clock signals. For example, in some cases, when the clock enable signal is high (depending on the convention of a particular application), a processor subunit may respond to its clock signal and may execute one or more instructions according to its clock signal timing. On the other hand, when the clock enable signal is low, the processor subunit is prevented from responding to its clock signal such that it does not execute instructions in response to clock timing. In other words, when the clock enable signal is low, a processor subunit may disregard received clock signals. Returning to the example ofFIG.75A, any of the controllers7540,7540′, or7540″ may be configured to use a clock enable signal to control operations of the respective distributed processor memory chips by causing one or more processor subunits in the respective arrays to respond or not to respond to received clock signals. In some embodiments, controllers7540,7540′, or7540″ may be configured to selectively advance code execution, e.g., when such code relates to or includes data transfer operations and timing thereof. In some embodiments, controllers7540,7540′, or7540″ may be configured to use clock enable signals to control timing of data transmission through any one of communication ports7531,7531′,7531″,7532,7532′, and7532″, etc. between two different distributed processor memory chips. In some embodiments, controllers7540,7540′, or7540″ may be configured to use clock enable signals to control times of data receipt through any one of communication ports7531,7531′,7531″,7532,7532′, and7532″, etc. between two different distributed processor memory chips. In some embodiments, data transfer timing between two different distributed processor memory chips may be arranged based on compilation optimization steps. The compilation may allow for building of processing routines in which tasks may be efficiently assigned to processing subunits without being affected by transmission delays over buses connected between two different processor memory chips. The compilation may be performed by a compiler in a host computer, or the compilation results may be transmitted to the host computer. Normally, transfer delays over a bus between two different processor memory chips would result in data bottlenecks for processing subunits requiring the data. The disclosed compilation may schedule data transmission in a way that enables processing units to continuously receive data even with disadvantageous transmission delays over buses. While the embodiment ofFIG.75Aincludes three ports per distributed processor memory chip (e.g., ports7530,7531, and7532), any number of ports may be included in the distributed processor memory chips according to the disclosed embodiments. 
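The clock-enable gating described above can be summarized with a small behavioral simulation. The Python sketch below is an illustration only (not RTL and not the disclosed circuit): a controller-style policy holds a subunit's clock enable low until both the sending and receiving sides report ready, so the subunit's data-transfer instruction executes only once the enable goes high. The class, the tiny program, and the readiness timing are assumptions made for the example.

```python
# Behavioral sketch of clock-enable gating for an inter-chip data transfer.
# Not the disclosed hardware; names and the toy program are assumptions.

class ProcessorSubunit:
    def __init__(self, name: str):
        self.name = name
        self.pc = 0                                  # program counter
        self.program = ["compute", "compute", "send_data"]

    def tick(self, clock_enable: bool):
        # With clock enable low, the subunit disregards the clock edge.
        if not clock_enable or self.pc >= len(self.program):
            return None
        op = self.program[self.pc]
        self.pc += 1
        return op

def controller_gate(sender_ready: bool, receiver_ready: bool) -> bool:
    # Controller-style policy: enable the clock only when both sides are ready.
    return sender_ready and receiver_ready

subunit = ProcessorSubunit("sending_subunit")
receiver_ready_at_cycle = 4                          # assumed readiness of the remote subunit

for cycle in range(8):
    enable = controller_gate(sender_ready=True,
                             receiver_ready=(cycle >= receiver_ready_at_cycle))
    op = subunit.tick(enable)
    print(f"cycle {cycle}: enable={enable}, executed={op}")
```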
In some cases, for example, the distributed processor memory chips may include more or fewer ports than the three shown inFIG.75A. In the embodiment ofFIG.75B, each distributed processor memory chip (e.g.,7500A-7500I) may be configured with multiple ports. These ports may be substantially the same as one another or may be different. In the example shown, each distributed processor memory chip includes five ports, including a host communication port7570and four chip ports7572. Host communication port7570may be configured to enable communication (via bus7534) between any of the distributed processor memory chips in an array, as shown inFIG.75B, and a host computer, for example, located remotely relative to the array of distributed processor memory chips. Chip ports7572may be configured to enable communication between distributed processor memory chips via busses7535. Any number of distributed processor memory chips may be connected to one another. In the example shown inFIG.75B, including four chip ports per distributed processor memory chip may enable an array in which each distributed processor memory chip is connected to two or more other distributed processor memory chips, and in some cases, certain chips may be connected to four other distributed processor memory chips. Including more chip ports in the distributed processor memory chips may enable more interconnectivity between distributed processor memory chips. Additionally, while distributed processor memory chips7500A-7500I are shown inFIG.75Bwith two different types of communication ports7570and7572, in some cases a single type of communication port may be included in each distributed processor memory chip. In other cases, more than two different types of communication ports may be included in one or more of the distributed processor memory chips. In the example ofFIG.75C, each of distributed processor memory chips7500A′-7500C′ includes two (or more) of the same type of communication port7570. In this embodiment, communication port7570may be configured to enable communication with an external entity, such as a host computer via bus7534, and may also be configured to enable communication between distributed processor memory chips (e.g., between distributed processor memory chips7500B′ and7500C′ over bus7535). In some embodiments, the ports provided on one or more distributed processor memory chips may be used to provide access to more than one host. For example, in the embodiment shown inFIG.75D, a distributed processor memory chip includes two or more ports7570. Ports7570may constitute host ports, chip ports, or a combination of host and chip ports. In the example shown, two ports7570and7570′ may provide two different hosts (e.g., host computers or computational elements, or other types of logic units) with access to distributed processor memory chip7500A via buses7534and7534′. Such an embodiment may provide two (or more) different host computers with access to distributed processor memory chip7500A. In other embodiments, however, both buses7534and7534′ may be connected to the same host entity, for example, where that host entity requires additional bandwidth or parallel access to one or more of the processor subunits\memory banks of distributed processor memory chip7500A. In some cases, as shown inFIG.75D, more than one controller7540and7540′ may be used to control accesses to the distributed processor subunits/memory banks of distributed processor memory chip7500A. 
In other cases, a single controller may be used to handle communications from one or more external host entities. Additionally, one or more buses internal to distributed processor memory chip7500A may enable parallel access to the distributed processor subunits/memory banks of distributed processor memory chip7500A. For example, distributed processor memory chip7500A may include a first bus7580and a second bus7580′ enabling parallel access to, for example, distributed processor subunits7520_1to7520_6and their corresponding, dedicated memory banks7510_1to7510_6. Such an arrangement may allow for simultaneous access to two different locations in distributed processor memory chip7500A. Further, in cases where not all ports are simultaneously used, they can share hardware resources within distributed processor memory chip7500A (e.g., a common bus and/or common controller) and can constitute IOs muxed to that hardware. In some embodiments, some of the computational units (e.g., processor subunits7520_1to7520_6) may be connected to an extra port (7570′) or controller, while others are not. Nevertheless, data from computational units that are not connected to the extra port7570′ may flow through an inner grid of connections to computational units that are connected to port7570′. In this way, communication can be performed at both ports7570and7570′ simultaneously without the need to add additional buses. While communication ports (e.g.,7530to7532) and controllers (e.g.,7540) have been illustrated as separate elements, it is appreciated that communication ports and controllers (or any other components) can be implemented as integrated units according to embodiments of the present disclosure.FIG.76provides a diagrammatic representation of a distributed processor memory chip7600having an integrated controller and interface module, consistent with embodiments of the present disclosure. As shown inFIG.76, processor memory chip7600may be implemented with an integrated controller and interface module7547that is configured to perform functions of controller7540and communication ports7530,7531, and7532inFIG.75A. As shown inFIG.76, controller and interface module7547is configured to communicate with multiple different entities such as an external entity, one or more distributed processor memory chips, etc. through interfaces7548_1to7548_N similar to communication ports (e.g.,7530,7531, and7532). Controller and interface module7547can be further configured to control communication between distributed processor memory chips or between distributed processor memory chip7600and an external entity, such as a host computer. In some embodiments, controller and interface module7547may include communication interfaces7548_1to7548_N configured to communicate in parallel with one or more other distributed processor memory chips and with an external entity such as a host computer, communication module, etc. FIG.77provides a flow diagram representing a process for transferring data between distributed processor memory chips in a scalable processor memory system shown inFIG.75A, consistent with embodiments of the present disclosure. For illustrative purposes, a flow for transferring data will be described referring toFIG.75Aand assuming that data is transferred from first processor memory chip7500to second processor memory chip7500′. At step S7710, a data transfer request may be received. It should be noted, however, and as described above, in some embodiments a data transfer request may not be necessary. 
For example, in some cases, timing of a data transfer may be predetermined (e.g., by a particular software code). In such cases, the data transfer may proceed without a separate data transfer request. Step S7710can be performed by, for example, controller7540, among others. In some embodiments, the data transfer request may include a request for transferring data from one processor subunit of a first distributed processor memory chip7500to another processor subunit of a second distributed processor memory chip7500′. At step S7720, data transfer timing may be determined. As noted, data transfer timing may be predetermined and may depend on the order of execution of a particular software program. Step S7720can be performed by, for example, controller7540, among others. In some embodiments, data transfer timing may be determined by considering (1) whether a sending processor subunit is ready to transfer the data and/or (2) whether a receiving processor subunit is ready to receive the data. According to embodiments of the present disclosure, whether one or more other timing constraints are fulfilled to enable such data transfer can also be considered. The one or more time constraints may be related to a time difference between a transfer time from a sending processor subunit and a receipt time at a receiving processor subunit, to an access request from an external entity (e.g., host computer) to data that is processed, to a refresh operation performed on memory resources (e.g., memory array) associated with sending or receiving processor subunits, etc. According to embodiments of the present disclosure, processing subunits may be fed by a clock. In some embodiments, the clock supplied to the processing subunits can be controlled, e.g., using a clock enable signal. According to some embodiments of the present disclosure, controller7540can control timing of the communication commands by controlling a clock enable signal to processor subunits7520_1to7520_K. At step S7730, data transmission can be performed based on the data transfer timing determined at step S7720. Step S7730can be managed by, for example, controller7540, among others. For example, a sending processor subunit of first processor memory chip7500can transfer data to a receiving processor subunit of second processor memory chip7500′ according to the data transfer timing determined at step S7720. The disclosed architectures may be useful in a variety of applications. For example, in some cases, the architectures above may facilitate sharing, among different distributed processor memory chips, of data such as the weights, neuron values, or partial neuron values associated with neural networks (especially large neural networks). Additionally, certain operations, such as SUM, AVG, etc., may require data from multiple different distributed processor memory chips. In such cases, the disclosed architectures may facilitate sharing of this data from multiple distributed processor memory chips. Further still, the disclosed architectures may facilitate sharing of records between distributed processor memory chips to support join operations of queries, for example. It should also be noted that while the embodiments above have been described relative to distributed processor memory chips, the same principles and techniques may be applied to regular memory chips not including distributed processor sub-units, for example. 
For example, in some cases, multiple memory chips may be combined together into a multi-port memory chip to form an array of memory chips even without the array of processor subunits. In another embodiment, multiple memory chips may be combined together to form an array of connected memories, providing the host with virtually one larger memory comprised of the multiple memory chips. Internal connections of the ports may be to a main bus or to one of the internal processor sub-units included in the processing array. In-Memory Zero Detection Some embodiments of the present disclosure are directed to memory units for detecting a zero value stored in one or more particular addresses of a plurality of memory banks. This zero value detection feature of the disclosed memory units may be useful in reducing power consumption of a computing system and may, additionally or alternatively, also reduce processing time required for retrieving zero values from memory. This feature may be especially relevant in a system where much of the data read actually consists of zero values, and also for calculation operations, such as multiplication, addition, subtraction, and other operations, for which retrieval of a zero value from memory may not be necessary (e.g., a multiplication of a zero value by any other value is zero) and for which the computation circuit can use the fact that one of the operands is zero to calculate the result more efficiently in time or energy. In such cases, detection of the presence of a zero value may be used in place of a memory access and retrieval of the zero value from memory. Throughout this section, the disclosed embodiments are described relative to a read function. It should be noted, however, that the disclosed architectures and techniques are equally applicable to zero-value write operations, or to operations involving other specific predetermined non-zero values in cases where such other values may be likely to appear more often. In the disclosed embodiments, rather than retrieving a zero value from memory, when such a value is detected at a particular address, the memory unit may return a zero value indicator to one or more circuits outside the memory unit (e.g., one or more processors, CPUs, etc. located outside the memory unit). The zero value may be a multiple-bit zero value (for example, a zero-value byte, a zero-value word, a multi-bit zero value that is less than a byte, more than a byte, and the like). The zero value indicator may be a 1-bit signal indicating a zero value stored in the memory; thus, it is more beneficial to transfer the 1-bit zero value indicating signal than to transfer the n bits of data stored in the memory. The transmitted zero indication may reduce energy consumption for the transfer to roughly 1/n of that otherwise required and may speed up computations, for example, where multiplication operations are involved in calculations of input by weights of neurons, convolutions, applying a kernel on input data, and many other calculations associated with trained neural networks, artificial intelligence, and a wide array of other types of computations. To provide this functionality, the disclosed memory units may include one or more zero value detection logic units that may detect the presence of a zero value in a particular location in memory, prevent retrieval of the zero value (e.g., via a read command), and cause a zero value indicator instead to be transmitted to circuitry outside the memory unit (for example, using one or more control lines of the memory, one or more buses associated with the memory unit, etc.). 
The zero value detection may be performed at a memory mat level, at a bank level, at a sub-bank level, at a chip level, etc. It should be noted that while the disclosed embodiments are described relative to the delivery of a zero indicator to a location external to a memory chip, the disclosed embodiments and features may also provide significant benefits in systems where processing may be done inside a memory chip. For example, in embodiments, such as the distributed processor memory chips disclosed herein, processing may be performed on the data in various memory banks by corresponding processor subunits. In many cases, such as the execution of neural networks or data analytics, for which the associated data may include many zeros, the disclosed techniques may speed processing and/or reduce power consumption associated with processing performed by the processor subunits in a distributed processor memory chip. FIG.78Aillustrates a system7800for detecting a zero value stored in one or more particular addresses of a plurality of memory banks implemented in a memory chip7810at a chip level, consistent with embodiments of the present disclosure. System7800may include memory chip7810and host7820. Memory chip7810may include a plurality of control units and each control unit may have a dedicated memory bank. For example, a control unit may be operably connected to a dedicated memory bank. In some cases, for example relative to the distributed processor memory chips disclosed herein, which include processor subunits spatially distributed among an array of memory banks, processing within the memory chip may involve memory accesses (whether for reading or writing). Even in the case of processing internal to a memory chip, the disclosed techniques of detecting a zero value associated with a read or write command may allow the internal processor unit or subunits to forego transfer of an actual zero value. Instead, in response to a zero value detection and transmission of a zero value indicator (e.g., to one or more internal processing subunits), the distributed processor memory chip may save energy that would have otherwise been used for transmission of a zero data value within the memory chip. In another example, each of memory chip7810and host7820may include input/output (IO) to enable communications between memory chip7810and host7820. Each IO can be coupled with zero value indicator line7830A and bus7840A. Zero value indicator line7830A may transfer a zero value indicator from memory chip7810to host7820, wherein the zero value indicator may include a 1-bit signal generated by memory chip7810upon detecting a zero value stored in a particular address of a memory bank requested by host7820. Host7820, upon receiving the zero value indicator via zero value indicator line7830A, may perform one or more predefined actions associated with the zero value indicator. For example, if host7820requested memory chip7810to retrieve an operand for a multiplication, host7820may calculate the multiplication more efficiently because host7820will have confirmation from the received zero value indicator (without receiving the actual memory value) that one of the operands is zero. Host7820may also provide instructions, data, and other input to memory chip7810and read output from memory chip7810via bus7840. Upon receiving communications from host7820, memory chip7810may retrieve data associated with the received communication and transfer the retrieved data to host7820via bus7840. 
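The host-side benefit described above can be sketched as follows. This is an illustrative software analogy with assumed names and bank contents, not an API of the disclosed system: the host requests a multiplication operand, and when the 1-bit zero value indicator arrives in place of data, the product is produced without transferring the operand bits.

```python
# Host-side sketch (illustrative assumptions only): a host like host7820 asks
# the memory chip for a multiplication operand; when the 1-bit zero value
# indicator arrives instead of data, the host completes the multiplication
# without waiting for (or transferring) the operand bits.

def memory_chip_read(address: int):
    """Stand-in for the memory chip: returns (zero_indicator, data)."""
    stored = {0x10: 0, 0x14: 7}[address]            # assumed bank contents
    return (1, None) if stored == 0 else (0, stored)

def multiply_with_operand_from_memory(address: int, other_operand: int) -> int:
    zero_indicator, operand = memory_chip_read(address)
    if zero_indicator:
        return 0            # zero times anything is zero: skip the data transfer and multiply
    return operand * other_operand

print(multiply_with_operand_from_memory(0x10, 12345))   # -> 0, no operand bits transferred
print(multiply_with_operand_from_memory(0x14, 3))       # -> 21
```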
In some embodiments, the host may send a zero value indicator to the memory chip rather than a zero data value. In this way, the memory chip (e.g., a controller disposed on the memory chip) may store or refresh a zero value in memory without having to receive the zero data value. Such an update may occur based on receipt of a zero value indicator (e.g., as part of a write command). FIG.78Billustrates a memory chip7810for detecting a zero value stored in one or more particular addresses of a plurality of memory banks7811A-B at a memory bank level, consistent with embodiments of the present disclosure. Memory chip7810may include a plurality of memory banks7811A-B and IO bus7812. AlthoughFIG.78Bdepicts two memory banks7811A-B implemented in memory chip7810, memory chip7810may include any number of memory banks. IO bus7812may be configured to transfer data to/from an external chip (e.g., host7820inFIG.78A) via bus7840B. Bus7840B may function similarly to bus7840A inFIG.78A. IO bus7812may also transmit a zero value indicator via zero value indicator line7830B, wherein zero value indicator line7830B may function similarly to zero value indicator line7830A inFIG.78A. IO bus7812may also be configured to communicate with memory banks7811A-B via internal zero value indicator line7831and bus7841. IO bus7812may transmit the received data from the external chip to one of memory banks7811A-B. For example, IO bus7812may transfer data comprising instructions to read data stored in a particular address of memory bank7811A via bus7841. A mux can be included between IO bus7812and memory banks7811A-B and may be connected by internal zero value indicator line7831and bus7841A. The mux can be configured to transmit received data from IO bus7812to a particular memory bank and may be further configured to transmit received data or a received zero value indicator from the particular memory bank to IO bus7812. In some cases, a host entity may be configured only to receive regular data transmissions and may be unequipped to interpret or respond to the disclosed zero value indicator. In such a case, the disclosed embodiments (e.g., controller/chip IO, etc.) may re-generate a zero value on the data line to the host IO in place of the zero value indicator signal, and thus may save data transmission power internally within the chip. Each of memory banks7811A-B may include a control unit. The control unit may detect a zero value stored in a requested address of a memory bank. Upon detecting a stored zero value, the control unit may generate a zero value indicator and transmit the generated zero value indicator via an internal zero value indicator line7831to IO bus7812, wherein the zero value indicator is further transferred to an external chip via zero value indicator line7830B. 
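A corresponding memory-side sketch is shown below. It is a behavioral analogy only, with assumed names: a bank control unit raises a 1-bit zero value indicator instead of driving the data bits on a read, and accepts a zero value indicator on a write so that a zero can be stored without the data value ever being transferred.

```python
# Behavioral sketch only (assumed names, not the disclosed circuit) of a bank
# control unit that raises a 1-bit zero value indicator instead of driving data
# onto the internal bus, and that accepts a zero value indicator on writes.

class BankControlUnit:
    def __init__(self, rows: int):
        self.bank = [0] * rows                      # dedicated memory bank contents

    def read(self, address: int):
        """Return (zero_indicator, data); data bits are suppressed for zeros."""
        value = self.bank[address]
        if value == 0:
            return 1, None                          # only the indicator travels to the IO bus
        return 0, value

    def write(self, address: int, data=None, zero_indicator: int = 0):
        # A write command carrying only the zero value indicator stores a zero
        # without the host ever sending the zero data value.
        self.bank[address] = 0 if zero_indicator else data

unit = BankControlUnit(rows=4)
unit.write(0, data=123)
unit.write(1, zero_indicator=1)        # zero stored; no data bits transferred
print(unit.read(0))                    # (0, 123)
print(unit.read(1))                    # (1, None)
```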
Each of memory mats7912A-B may include a plurality of memory cells. Each of the plurality of memory cells may store one bit of binary information. For example, any of the memory cells may individually store a zero value. If all memory cells in a particular memory mat store zero values, then a zero value may be associated with the entire memory mat. Each of memory mat controllers7913A-B can be configured to access a dedicated memory mat, and read data stored in the dedicated memory mat or write data in the dedicated mat. In some embodiments, zero value detection logic unit7914A or7914B can be implemented in a memory bank7911. One or more zero value detection logic units7914A-B may be associated with memory banks, memory sub-banks, memory mats, and a set of one or more memory cells. Zero value detection logic unit7914A or7914B may detect that a requested particular address (e.g., memory mat7912A or7912B) stores a zero value. The detection can be performed using several methods. A first method may include using a digital comparator against zero. The digital comparator can be configured to take two numbers as input in binary form and determine whether a first number (retrieved data) is equal to a second number (zero). If the digital comparator determines that the two numbers are equal, a zero value detection logic unit may generate a zero value indicator. The zero value indicator may be a 1-bit signal and may disable amplifiers (e.g., local sense amplifiers7915A-B), transmitters, and buffers that may send data bits to the next level (e.g., IO bus7812inFIG.78B). The zero value indicator may be further transmitted via zero value indicator line7931A or7931B to a global sense amplifier7916, but in some cases, may bypass the global sense amplifier. A second method for zero detection may include using an analog comparator. The analog comparator may function similar to the digital comparator except for using voltages of two analog inputs for a comparison. For example, all of the bits may be sensed, and a comparator can act as a logical OR function between the signals. A third method for zero value detection may include using a transferred signal from local sense amplifiers7915A-B into global sense amplifier7916, wherein global sense amplifier7916is configured to sense if any of the inputs is high (non-zero) and use that logic signal to control the next level of amplifiers. Local sense amplifiers7915A-B and global sense amplifier7916may include a plurality of transistors configured to sense low-power signals from the plurality of memory banks and amplify a small voltage swing to higher voltage levels such that data stored in the plurality of memory banks can be interpreted by the at least one controller, such as memory mat controller7913A or7913B. For example, memory cells may be laid out in rows and columns on memory bank7911. Each line may be attached to each memory cell in its row. The lines which run along the rows are called wordlines, which are activated by selectively applying a voltage to the wordlines. The lines which run along the columns are called bitlines, and two such complementary bitlines may be attached to a sense amplifier at the edge of the memory array. The number of sense amplifiers may correspond to the number of bitlines (columns) on memory bank7911. To read a bit from a particular memory cell, the wordline along the cell's row is turned on, activating all the memory cells in the row. The stored value (0 or 1) from each cell is then available on the bitline associated with the particular cell. 
The sense amplifier at the end of the two complementary bitlines may amplify the small voltages to a normal logic level. The bit from the desired cell may then be latched from the cell's sense amplifier into a buffer, and put on the output bus. A fourth method for zero value detection may include using an extra bit per word saved to memory, set at write time if the value is 0, and using that extra bit when reading the data out to determine whether the value is zero. The method may avoid a write of all zeroes to the memory, thus saving more energy. As described above and throughout the disclosure, some embodiments may include a memory unit (such as memory unit7800) that includes a plurality of processor subunits. These processor subunits may be distributed spatially on a single substrate (e.g., a substrate of a memory chip such as memory unit7800). Moreover, each of the plurality of processor subunits may be dedicated to a corresponding memory bank from among a plurality of memory banks of memory unit7800. And, these memory banks dedicated to corresponding processor subunits may also be spatially distributed on the substrate. In some embodiments, a memory unit7800may be associated with a particular task (e.g., performing one or more operations associated with running a neural network, etc.), and each one of the processor subunits of the memory unit7800may be responsible for performing a portion of this task. For example, each processor subunit may be equipped with instructions that may include data handling and memory operations, arithmetic and logic operations, etc. In some cases, the zero value detection logic may be configured to provide a zero value indicator to one or more of the described processor subunits spatially distributed on the memory unit7800. Reference is now made toFIG.80, which is a flow chart illustrating an exemplary method8000of detecting a zero value stored in a particular address of a plurality of memory banks, consistent with embodiments of the present disclosure. Method8000may be performed by a memory chip (e.g., memory chip7810ofFIG.78B). In particular, a controller (e.g., controller7913A ofFIG.79) and a zero value detection logic unit (e.g., zero value detection logic unit7914A) of the memory unit may perform method8000. In step8010, a read or write operation may be initiated by any suitable technique. In some cases, a controller may receive a request to read data stored in a particular address of a plurality of discrete memory banks (e.g., memory banks depicted inFIG.78). The controller can be configured to control at least one aspect of read/write operations relative to the plurality of discrete memory banks. In step8020, one or more zero value detection circuits may be used to detect the presence of a zero value associated with the read or write command. For example, a zero value detection logic unit (e.g., zero value detection logic unit7914A ofFIG.79) may detect a zero value associated with a particular address associated with the read or write. In step8030, the controller may transmit a zero value indicator to one or more circuits outside the memory unit in response to a zero value detection by the zero value detection logic unit in step8020. 
For example, a zero value detection logic may detect that a requested address stores a zero value and may transmit an indication that the value is zero to an entity (e.g., one or more circuits) outside of the memory chip (or within the memory chip, e.g., in the case of the disclosed distributed processor memory chips including processor subunits distributed among an array of memory banks). If a zero value is not detected as associated with the read or write command, then the controller may transmit a data value instead of a zero value indicator. In some embodiments, the one or more circuits to which the zero value indicator is returned may be inside the memory unit. While the disclosed embodiments have been described with respect to zero-value detection, the same principles and techniques would be applicable to detection of other memory values (e.g., 1, etc.). In some cases, in addition to a zero value indicator, the detection logic may return one or more indicators of other values associated with a read or write command (e.g., 1, etc.), and these indicators may be returned/transmitted in the event of a detection of any of the values corresponding to the value indicators. In some cases, the values may be adjusted by a user (e.g., through updating of one or more registers). Such updates may be especially useful where characteristics may be known about data sets, and there is an understanding (e.g., on the part of the user) that certain values may be more prevalent in the data than others. In such cases, one, two, three or more value indicators may be associated with the most prevalent data values associated with a data set. Compensating for DRAM Activation Penalties In certain types of memory (e.g., DRAM), memory cells may be arranged in arrays within a memory bank, and values included in the memory cells may be accessed and retrieved (read) one line of memory cells in the array at a time. This reading process may involve first opening (activating) a line (or row) of memory cells to make the data values stored by the memory cells available. Next, the values of the memory cells in the open line may be sensed simultaneously, and column addresses can be used to cycle through the individual memory cell values or groups of memory cell values (i.e., words) and connect each memory cell value to an external data bus in order to read the memory cell values. These processes take time. In some cases, opening a memory line for reading may require 32 cycles of compute time, and reading the values from the open line may require another 32 cycles. Significant latency may result if a next line to be read is opened only after completing a read operation of a current open line. In this example, during the 32 cycles required to open the next line, no data is being read, and reading each line effectively requires a total of 64 cycles instead of just the 32 it takes to iterate over the line data. Conventional memory systems do not allow opening a second line in the same bank while a first line is being read or written. To save the latency, the next line to open may thus be in a different bank or in a special bank for dual line access, as discussed below in further detail. The current line may all be sampled to flip-flops or latches prior to opening the next line, and all processing may then be done on the flip-flops/latches while the next line is opened. If the next predicted line is in the same bank (and none of the above options exists), then the latency may not be avoided and the system may need to wait. 
These mechanisms are relevant both to standard memories and, especially, to memory processing devices. The presently disclosed embodiments may reduce this latency by, for example, making a prediction of the next memory line to be opened before a read operation of a current, open memory line has been completed. That is, if the next line to be opened can be predicted, then the process for opening the next line may begin before a read operation of the current line has been completed. Depending on when in the process the next line prediction is made, the latency associated with opening the next line may be reduced from 32 cycles (in the particular example described above) to less than 32 cycles. In one particular example, if the next line opening is predicted 20 cycles in advance, then the additional latency is only 12 cycles. In another example, if the next line opening is predicted 32 cycles in advance, then there is no latency at all. As a result, rather than requiring a total of 64 cycles to serially open and read each row, by opening the next row while reading the current row, the effective time to read each row may be reduced. The following mechanisms may require the current and predicted lines not to be in the same bank, but if there is a bank that can support activating one line while simultaneously working on another line, then the mechanisms can also be used for lines within that same bank. In the disclosed embodiments, the next row prediction may be performed using various techniques (discussed in more detail below). For example, the next row prediction may be based on pattern recognition, based on a predetermined row access schedule, based on the output of an artificial intelligence model (e.g., a trained neural network to analyze row accesses and make a prediction of a next row to open), or based on any other suitable prediction technique. In some embodiments, 100% successful predictions can be achieved by using a delayed address generator or formulas as described below or other methods. The prediction may comprise building a system with the capacity to predict a next line to open sufficiently prior to the access needed to it. In some cases, the next row prediction may be performed by a next row predictor that may be implemented in various ways. For example, a predicted address generator may duplicate the logic used to generate the current addresses for reading from and/or writing to memory rows. An entity that generates addresses for access in the memory (either reads or writes) may be based on any logic circuit or a controller/CPU executing software instructions. The predicted address generator may include a pattern learning model that observes the accessed rows, identifies one or more patterns associated with the accesses (e.g., sequential line access, access to every second line, access to every third line, etc.), and estimates the next row to be accessed based on the observed patterns. In other examples, the predicted address generator may include a unit that applies a formula/algorithm to predict the next row to be accessed. In still further embodiments, the predicted address generator may include a trained neural network that outputs a predicted next row to access (including one or more addresses associated with the predicted row) based on inputs such as the current address/row being accessed, the last 2, 3, 4 or more addresses/rows that were accessed, etc. Predicting the next memory line to access using any of the described predicted address generators may significantly reduce latency associated with memory accesses. 
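The cycle arithmetic above can be captured in a few lines. The sketch below simply restates the 32-cycle activation and 32-cycle read example from the text; the helper name and scenario values are illustrative only.

```python
# Small worked example of the cycle arithmetic above. Assumes, as in the text,
# a 32-cycle line activation and a 32-cycle read of an open line.

ACTIVATE_CYCLES = 32
READ_CYCLES = 32

def effective_cycles_per_line(prediction_lead: int) -> int:
    """Cycles per line when the next line's activation starts `prediction_lead`
    cycles before the current read finishes (0 = purely serial)."""
    exposed_activation = max(0, ACTIVATE_CYCLES - prediction_lead)
    return READ_CYCLES + exposed_activation

print(effective_cycles_per_line(0))    # 64: serial open-then-read
print(effective_cycles_per_line(20))   # 44: only 12 cycles of activation remain exposed
print(effective_cycles_per_line(32))   # 32: activation fully hidden, no added latency
```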
The described predicted address/row generators may be useful in any system involving accesses to memory to retrieve data. In some cases, the described predicted address/row generators and the associated techniques for predicting the next memory line access may be especially suited to systems that execute artificial intelligence models, as AI models may be associated with repetitive memory access patterns that may facilitate next row prediction. FIG.81Aillustrates a system8100for activating a next row associated with a memory bank8180based on a next row prediction, consistent with embodiments of the present disclosure. System8100may include a current and predicted address generator8192, bank controller8191, and memory banks8180A-B. Address generator8192may be an entity generating addresses for access in memory banks8180A-B and may be based on any logic circuits, a controller, or a microprocessor executing software programs. Bank controller8191may be configured to access a current row of memory bank8180A (e.g., using a current row identifier generated by address generator8192). Bank controller8191may also be configured to activate a predicted next row to be accessed within memory bank8180B based on a predicted row identifier generated by address generator8192. The following example describes two banks. In other examples, more banks may be used. In some embodiments, there may be memory banks that allow accessing more than one row at a time (as discussed below), and hence the same process may be done on a single bank. As described above, the activation of the predicted next row to be accessed may begin before a completion of a read operation performed relative to the current row being accessed. Thus, in some cases, address generator8192may predict the next row to access and may send an identifier (e.g., one or more addresses) of the predicted next row to bank controller8191at any time before an access to a current row has been completed. Such timing may allow the bank controller to initiate activation of the predicted next row at any point in time during which the current row is being accessed and before access to the current row is completed. In some cases, bank controller8191may initiate activation of the predicted next row of memory bank8180at the same time (or within a few clock cycles) of when the activation of the current row to be accessed is completed and/or when the reading operation relative to the current row has begun. In some embodiments, the operation relative to the current row associated with the current address may be a read or write operation. In some embodiments, the current row and the next row may be in the same memory bank. In some embodiments, the same memory bank may allow the next row to be accessed while the current row is being accessed. The current row and the next row may also be in different memory banks. In some embodiments, the memory unit may include a processor configured to generate the current address and the predicted address. In some embodiments, the memory unit may include a distributed processor. The distributed processor may include a plurality of processor subunits of a processing array that are spatially distributed among the plurality of discrete memory banks of the memory array. In some embodiments, the predicted address may be generated by a chain of flip flops sampling the generated address with a delay. The delay may be configurable via a mux that selects between flip flops storing the sampled address. 
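One possible reading of the flip-flop-chain arrangement just mentioned is sketched below, as an assumption rather than a definitive implementation: the address generator runs ahead of actual use, its newest output serves as the predicted row to activate, and a shift register (standing in for the flip-flop chain, with the mux modeled by the configurable depth) delays the same stream to drive the actual access, so the prediction always leads by the configured number of cycles.

```python
# Hedged sketch of a delayed-address-generator arrangement (one interpretation
# of the flip-flop chain above; names and structure are assumptions).

from collections import deque

def address_stream():
    """Stand-in current address generator: sequential row accesses."""
    row = 0
    while True:
        yield row
        row += 1

def run(delay_stages: int, cycles: int) -> None:
    chain = deque([None] * delay_stages, maxlen=delay_stages)  # flip-flop chain
    gen = address_stream()
    for cycle in range(cycles):
        newest = next(gen)           # used immediately as the predicted row to activate
        delayed = chain[0]           # oldest sample: the row actually accessed this cycle
        chain.append(newest)
        print(f"cycle {cycle}: activate predicted row {newest}, access row {delayed}")

run(delay_stages=3, cycles=6)        # prediction leads the actual access by 3 cycles
```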
It should be noted that the predicted next row may become the current row to be accessed upon confirmation (e.g., after completion of a read operation relative to the current row) that the predicted next row is actually the next row that executing software requests to access. In the disclosed embodiments, because the process for activating the predicted next row can be initiated prior to completing a current row read operation, upon confirmation that the predicted next row is the correct next row to access, the next row to access may already be fully or partially activated. This can significantly reduce latency associated with line activation. Power reduction may be obtained if the next row is activated so that the activation ends before or at the same time the reading of the current row ends. Current and predicted address generator8192may include any suitable logic components, compute units, memory units, algorithms, trained models, etc. configured to identify rows to be accessed in memory bank8180(e.g., based on program execution) and to predict a next row to access (e.g., based on observed patterns in row access, based on predetermined patterns (n+1, n+2), etc.). For example, in some embodiments, current and predicted address generator8192may include a counter8192A, current address generator8192B, and predicted address generator8192C. Current address generator8192B may be configured to generate a current address of a current row to be accessed in memory bank8180based on an output of counter8192A, for example, or based on requests from a compute unit. The address associated with the current row to be accessed may be provided to bank controller8191. Predicted address generator8192C can be configured to determine a predicted address of a next row to be accessed in memory bank8180based on an output of counter8192A, based on a predetermined access pattern (e.g., in conjunction with counter8192A), or based on the output of a trained neural network or other type of pattern prediction algorithm that observes line accesses and predicts the next line to access based, for example, on patterns associated with the observed line accesses. Address generator8192may provide to bank controller8191the predicted next row address from predicted address generator8192C. In some embodiments, current address generator8192B and predicted address generator8192C can be implemented inside or outside of system8100. An external host may also be implemented outside of system8100and further connected to system8100. For example, current address generator8192B may be software at an external host executing a program, and, to avoid latencies, predicted address generator8192C can be implemented inside system8100or outside of system8100. As noted, the predicted next row address may be determined using a trained neural network that predicts the next row to access based on input(s) that may include one or more previously accessed row addresses. The trained neural network or other type of model may run within logic associated with the predicted address generator8192C. In some cases, the trained neural network, etc., may be executed by one or more compute units outside of, but in communication with, predicted address generator8192C. In some embodiments, the predicted address generator8192C may include a duplicate or substantial duplicate of the current address generator8192B. 
Further, the timing of the operations of current address generator8192B and predicted address generator8192C may be fixed or adjustable relative to one another. For example, in some cases, predicted address generator8192C may be configured to output an address identifier associated with the predicted next row at a fixed time (e.g., a fixed number of clock cycles) relative to when the current address generator8192B issues an address identifier associated with the next row to be accessed. In some cases, the predicted next row identifier may be generated before or after the activation of the current row to be accessed begins, before or after a read operation associated with the current row to be accessed begins, or at any time before the read operation associated with the current row being accessed is completed. In some cases, the predicted next row identifier may be generated at the same time that the activation of the current row to be accessed begins or at the same time that a read operation associated with the current row to be accessed begins. In other cases, the time between generation of the predicted next row identifier and either the activation of the current row to be accessed or the initiation of a read operation associated with the current row may be adjustable. For example, in some cases, this time may be lengthened or shortened during operation of the memory unit8100based on values associated with one or more operational parameters. In some cases, a current temperature (or any other parameter value) associated with the memory unit or another component of a computing system may cause the current address generator8192B and the predicted address generator8192C to change their relative timing of operations. In embodiments in which processing is performed in memory, the prediction mechanisms may be part of that in-memory logic. Current and predicted address generator8192may generate a confidence level associated with a predicted next row to access determination. Such a confidence level (which may be determined by predicted address generator8192C as part of the prediction process) may be used in determining, for example, whether to initiate activation of the predicted next row during a read operation of the current row (i.e., before the current row read operation has been completed and before the identity of the next row to access has been confirmed). For example, in some cases, the confidence level associated with a predicted next row to access may be compared to a threshold level. If the confidence level falls below the threshold level, for example, memory unit8100may forego activation of the predicted next row. On the other hand, if the confidence level exceeds the threshold level, then memory unit8100may initiate activation of the predicted next row in memory bank8180. The mechanics of testing the confidence level of the predicted next row relative to a threshold level and the subsequent initiation or non-initiation of activation of the predicted next row may be accomplished in any suitable manner. In some cases, for example, if the confidence level associated with the predicted next row falls below the threshold, then predicted address generator8192C may forego outputting its predicted next row result to downstream logic components. 
Alternatively, in such a case, the current and predicted address generator8192may withhold the predicted next row identifier from bank controller8191, or bank controller (or another logic unit) may be equipped to use the confidence level in the predicted next row to determine whether to begin activation of the predicted next row before completion of a read operation associated with the current row being read. The confidence level associated with the predicted next row may be generated in any suitable manner. In some cases, such as where the predicted next row is identified based on a predetermined, known access pattern, the predicted address generator8192C may generate a high confidence level or may forego generation of a confidence level altogether in view of the predetermined pattern of row accesses. On the other hand, in cases where predicted address generator8192C executes one or more algorithms to monitor row accesses and output a predicted row based on patterns calculated relative to the monitored row accesses or where one or more trained neural networks or other models are configured to output a predicted next row based on inputs including recent row accesses, the confidence level in the predicted next row may be determined based on any relevant parameters. For example, in some cases, the confidence level may depend on whether one or more prior next row predictions proved to be accurate (e.g., a past performance indicator). The confidence level may also be based on one or more characteristics of inputs to the algorithm/model. For example, inputs including actual row accesses that follow a pattern may result in higher confidence levels than actual row accesses that exhibit less patterning. And, in some cases, where randomness is detected relative to a stream of inputs including recent row accesses, for example, a generated confidence may be low. Further, in cases where randomness is detected, the next row prediction process may be aborted altogether, a next row prediction may be ignored by one or more of the components of memory unit8100, or any other action may be taken to forego activation of the predicted next row. In some cases, a feedback mechanism may be included relative to operation of memory8100. For example, periodically or even after each next row prediction, an accuracy of the predicted address generator8192C in predicting the actual next row to be accessed may be determined. In some cases, if there is an error (or after a predetermined number of errors) in predicting the next row to access, the next row prediction operation of predicted address generator8192C may be suspended. In other cases, predicted address generator8192C may include a learning element, such that one or more aspects of its prediction operation may be adjusted based on received feedback regarding its accuracy in predicting the next row to access. Such a capability may refine the operation of predicted address generator8192C, such that address generator8192C can adapt to changing access patterns, etc. In some embodiments, the timing of generation of a predicted next row and/or activation of a predicted next row may depend on the overall operation of memory unit8100. 
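Before turning to power-up behavior, the confidence-based gating described above can be sketched as follows. This is an illustrative Python fragment only; the threshold value, the history-based scoring rule, and the function names are assumptions made for the example:

```python
def prediction_confidence(history: list) -> float:
    """Crude confidence score: fraction of recent predictions that were correct."""
    if not history:
        return 0.0
    return sum(history) / len(history)

def maybe_activate_predicted_row(predicted_row: int,
                                 history: list,
                                 threshold: float = 0.75) -> bool:
    """Return True if the bank controller should speculatively activate the row."""
    confidence = prediction_confidence(history)
    if confidence < threshold:
        return False          # forego activation; wait for the confirmed address
    print(f"speculatively activating row {predicted_row} (confidence {confidence:.2f})")
    return True

# A run of mostly correct predictions enables speculation ...
assert maybe_activate_predicted_row(42, [True, True, True, False])
# ... while random-looking behavior suppresses it.
assert not maybe_activate_predicted_row(42, [True, False, False, True, False])
```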
For example, after powering on or after a reset of memory unit8100a prediction of a next row to access (or forwarding of a predicted next row to bank controller8191) may be suspended (e.g., for a predetermined amount of time or clock cycles, until a predetermined number of row accesses/reads have been completed, until a confidence level in a predicted next row exceeds a predetermined threshold, or based on any other suitable criteria). FIG.81Billustrates another configuration of memory unit8100, according to exemplary disclosed embodiments. In system8100B ofFIG.81B, a cache8193may be associated with bank controller8191. Cache8193may be configured to store one or more rows of data after they were accessed and prevent the need to activate them again, for example. Thus, cache8193may enable bank controller8191to access row data from cache8193rather than accessing memory bank8180. For example, cache8193may store last X row data (or any other cache saving strategy) and bank controller8191may fill in cache8193according to predicted rows. Moreover, if a predicted row is already in cache8193, the predicted row does not need to be opened again and bank controller (or a cache controller implemented in cache8193) may protect the predicted row from swapping. Cache8193may provide several benefits. First, since cache8193loads a row to cache8193and bank controller can access cache8193to retrieve row data, special banks or more than one bank for a next row prediction is not required. Second, reading and writing from cache8193may save energy since a physical distance from bank controller8191to cache8193is smaller than a physical distance from bank controller8191to memory bank8180. Third, latencies caused by cache8193are usually lower compare to that of memory bank8180since cache8193is smaller and closer to controller8191. In some cases, an identifier of a predicted next row generated by predicted address generator, for example, may be stored in cache8193as the predicted next row is activated in memory bank8180by bank controller8191. Based on program execution, etc., current address generator8192B may identify an actual next row to access in memory bank8191. An identifier associated with the actual next row to access may be compared to the identifier of the predicted next row stored in cache8193. If the actual next row to access and the predicted next row to access are the same, then bank controller8191may commence a read operation relative to the actual next row to access (which may be fully or partially activated as a result of the next row prediction process) after activation of that row has completed. On the other hand, if the actual next row to access (determined by current address generator8192B) does not match the predicted next row identifier stored in cache8193, then a read operation will not commence with respect to the fully or partially activated predicted next row, but instead, the system will begin activation of the actual next row to be accessed. Dual Activation Bank As discussed, it is valuable to describe several mechanisms that allows building a bank that is capable of activating a row while another is still being processed. Several embodiments can be provided for a bank that activates an additional row while another row is being accessed. While the embodiments only describe two row activations, it is appreciated that it can be applied for more rows. 
In the first suggested embodiment, a memory bank may be divided into memory sub-banks, and the described embodiments may be used to perform a read operation relative to a line in one sub-bank while activating a predicted or needed next row in another sub-bank. For example, as shown inFIG.81C, memory bank8180may be arranged to include multiple memory sub-banks8181. Further, bank controller8191associated with memory bank8180may include a plurality of sub-bank controllers associated with corresponding sub-banks. A first sub-bank controller of the plurality of sub-bank controllers can be configured to enable access to data included in a current row of a first sub-bank of the plurality of sub-banks while a second sub-bank controller of the plurality of sub-bank controllers may activate a next row in a second sub-bank of the plurality of sub-banks. Only one column decoder may be used when accessing only words in one sub-bank at a time. Two banks may be tied to the same output bus to appear as a single bank. The new single bank input can also be a single address and an additional row-address for opening the next row. FIG.81Cillustrates first and second sub-bank row controllers (8183A,8183B) per each memory sub-bank8181. Memory bank8180may include a plurality of sub-banks8181, as shown inFIG.81C. Further, bank controller8191may include a plurality of sub-bank controllers8183A-B each associated with a corresponding sub-bank8181. A first sub-bank controller8183A of the plurality of sub-bank controllers can be configured to enable access to data included in a current row of a first portion of sub-bank8181while a second sub-bank controller8183B may activate a next row in a second portion of sub-bank8181. Because, activation of a row directly adjacent to a row being accessed may distort the accessed row and/or corrupt data being read from the accessed row, the disclosed embodiments may be configured such that the predicted next row to be activated may be spaced apart, by at least two rows (for example), from the current row in the first sub-bank from which data is being accessed. In some embodiments, rows to be activated may be spaced apart by at least a mat that the activations may be executed in different mats. The second sub-bank controller can be configured to cause access to data included in a current row of the second sub-bank while the first sub-bank controller activates a next row in the first sub-bank. The activated next row of the first sub-bank may be spaced apart, by at least two rows, from the current row in the second sub-bank from which data is being accessed. This predefined distance between rows being read/accessed and rows being activated may be determined by hardware—for example coupling different parts of the memory bank to different row decoders and the software may maintain it in order not to destroy the data. The spacing between the current row may exceed two (for example may be 3, 4, 5, and even more than five). The distance can be changed over time—for example based on evaluations about distortions introduced in stored data. The distortions may be evaluated in various manners—for example by calculating signal to noise ratio, error rates, error codes required to repair distortions, and the like. Two rows can actually be activated if they are far enough and two bank controllers are implemented on the same bank. The new architecture (implementing two controllers on the same bank) may protect from opening lines in the same mat. 
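The spacing rule described for the two controllers sharing one bank can be illustrated with a short check. The minimum distance of two rows is taken from the text above, while the class structure and names are assumptions made only for this sketch:

```python
MIN_ROW_SPACING = 2   # rows; the text notes this may be larger (3, 4, 5, ...)

class DualSubBankController:
    """Two row controllers sharing one bank: one reads while the other activates."""

    def __init__(self):
        self.reading_row = None

    def begin_read(self, row: int):
        self.reading_row = row

    def may_activate(self, candidate_row: int) -> bool:
        """Allow activation only if the candidate is far enough from the row being read."""
        if self.reading_row is None:
            return True
        return abs(candidate_row - self.reading_row) >= MIN_ROW_SPACING

bank = DualSubBankController()
bank.begin_read(row=100)
assert not bank.may_activate(101)   # adjacent row could distort the data being read
assert bank.may_activate(103)       # sufficiently distant row may be activated
```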
FIG.81Dillustrates an embodiment for a next row prediction, consistent with embodiments of the present disclosure. The embodiment may include an additional pipeline of flip-flops (address registers A-C). The pipeline can be implemented with any number of flip-flop stages providing the delay needed for activation after the address generator, with the overall execution delayed so that the delayed address is the one used for access. The prediction can then be the newly generated address (at the beginning of the pipeline, below address register C), while the current address is taken from the end of the pipeline. In this embodiment, a duplicate address generator is not needed. A selector (a mux shown inFIG.81D) can be added to configure the delay while the address registers provide the delay. FIG.81Eillustrates an embodiment for a memory bank, consistent with embodiments of the present disclosure. The memory bank may be implemented such that if a newly activated line is far enough from a current line, activating the new line will not destroy the current line. As shown inFIG.81E, the memory bank may include additional memory mats (black) between every two lines of mats. Thus, a control unit (such as row decoders) may activate lines that are spaced apart by a mat. In some embodiments, the memory unit may be configured to receive a first address for processing and a second address to activate and access at a predetermined time. FIG.81Fillustrates another embodiment for a memory bank, consistent with embodiments of the present disclosure. The memory bank may be implemented such that if a newly activated line is far enough from a current line, activating the new line will not destroy the current line. The embodiment depicted inFIG.81Fmay allow row decoders to open lines n and n+1 by ensuring that all even lines are implemented at the upper half of the memory bank and all odd lines are implemented at the lower half of the memory bank. The implementation may allow accessing consecutive lines that are always far enough apart. A dual control memory bank, according to the disclosed embodiments, may allow accessing and activating different parts of a single memory bank, even when the dual control memory bank is configured to output one data unit at a time. For example, as described, the dual control may enable the memory bank to access a first row while activating a second row (e.g., a predicted next row or a predetermined next row to access). FIG.82illustrates dual control memory bank8280for reducing a memory row activation penalty (e.g., latency), consistent with embodiments of the present disclosure. Dual control memory bank8280may include inputs including a data input (DIN)8290, row address (ROW)8291, column address (COLUMN)8292, first command input (COMMAND_1)8293, and a second command input (COMMAND_2)8294. Memory bank8280may include a data output (Dout)8295. It is assumed that addresses may include row addresses and column addresses, and that there are two row decoders. Other arrangements of addresses may be provided, the number of row decoders may exceed two, and there may be more than a single column decoder. Row address (ROW)8291may identify a row that is associated with a command such as an activate command. Because a row activation may be followed by reading from the row or writing to the row, while the row is open (following its activation) there may be no need to send the row address for writing to or reading from the open row. First command input (COMMAND_1)8293may be used to send commands (such as but not limited to an activate command) to rows accessed by a first-row decoder. 
Second command (COMMAND_2) input8294may be used to send commands (such as but not limited to an activate command) to rows accessed by a second-row decoder. Data input (DIN)8290may be used to feed data when executing a write operation. Because the entire row may not be read at once, single row segments may be read sequentially, and column address (COLUMN)8292may indicate which segment of the row (which columns) is to be read. It may be assumed, for simplicity of explanation, that there are 2Q segments and that the column input has Q bits; Q being a positive integer that exceeds one. Dual control memory bank8280may operate with or without address prediction described above with respect toFIGS.81A-B. Of course, to reduce latency in operation, the dual control memory bank may operate with address prediction, according to the disclosed embodiments. FIGS.83A,83B, and83Cillustrate examples of accessing and activating rows of memory bank8180. It is assumed in one example, as noted above, that reading and activating a row both require 32 cycles (segments). Further, in order to reduce the activation penalty (of a length denoted Delta), it may be beneficial to know in advance (at least Delta before the need to access a next row) that a next row should be opened. In some cases, Delta may equal four cycles. Each memory bank depicted inFIGS.83A,83B, and83Cmay include two or more sub-banks within which, in some embodiments, only one row may be open at any given time. In some cases, even rows may be associated with a first sub-bank, and odd rows may be associated with a second sub-bank. In such an example, the use of the disclosed predictive addressing embodiments may enable initiation of activation of one row of a certain memory sub-bank before (a Delay period before) reaching an end of a read operation relative to a row of another memory sub-bank. In this way, a sequential memory access (e.g., a predefined memory access sequence where rows 1, 2, 3, 4, 5, 6, 7, 8 . . . are to be read, and rows 1, 3, 5, . . . etc. are associated with a first memory sub-bank and rows 2, 4, 6 . . . etc. are associated with a second, different memory sub-bank) can be done in a highly efficient manner. FIG.83Amay illustrate a state for accessing memory rows included in two different memory sub-banks. In the state shown inFIG.83A:a. Row A may be accessible by a first-row decoder. A first segment (leftmost segment marked in gray) may be accessed after the first-row decoder activates row A.b. Row B may be accessible by a second-row decoder. In these state shown inFIG.83A, Row B is closed and has not yet been activated. The state illustrated inFIG.83Amay be preceded by sending an activate command and an address of row A to the first-row decoder. FIG.83Billustrates a state for accessing row B after accessing row A. According to this example: Row A that may be accessible by a first-row decoder. In the state shown inFIG.83B, the first-row decoder activated row A and has accessed all but the four rightmost segments (the four segments not yet marked in gray). Because Delta (white four segments in row A) equals four cycles, a bank controller may enable a second-row decoder to activate row B before accessing the rightmost segments in row A. In some cases, activation of row B may be in response to a predetermined access pattern (e.g., a sequential row access where odd rows are designated in the first sub-bank, and even rows are designated in the second sub-bank). 
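The overlap between reading row A and activating row B can be worked through numerically. The 32-segment read and the Delta of four cycles are the example values given above; the scheduling function itself is an illustrative Python sketch, not the disclosed controller logic:

```python
SEGMENTS_PER_ROW = 32   # cycles needed to read a full row, per the example
ACTIVATION_DELTA = 4    # cycles needed to activate (open) a row, per the example

def schedule(rows):
    """Print when each row's activation is issued so it completes as the previous read ends."""
    read_start = 0
    for i, row in enumerate(rows):
        read_end = read_start + SEGMENTS_PER_ROW
        if i + 1 < len(rows):
            # Issue the next activation Delta cycles before the current read finishes.
            activate_at = read_end - ACTIVATION_DELTA
            print(f"row {row}: read cycles {read_start}-{read_end - 1}, "
                  f"activate row {rows[i + 1]} at cycle {activate_at}")
        else:
            print(f"row {row}: read cycles {read_start}-{read_end - 1}")
        read_start = read_end   # no activation stall between consecutive rows

schedule(["A", "B", "C"])
```

With these numbers, row B's activation is issued at cycle 28, while the last four segments of row A are still being read, so the reads of rows A, B, and C proceed back to back.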
In other cases, activation of row B may be in response to any of the row prediction techniques described above. The bank controller may enable the second-row decoder to activate row B in advance so that when row B is accessed, row B is already activated (opened) instead of waiting for activating row B to open row B. The state illustrated inFIG.83Bmay be preceded by:a. Sending an activate command and an address of row A to the first-row decoder.b. Writing or reading the first twenty-eight segments of row A.c. Following the read or write operations on the twenty-eight segments of row, sending an activate command relative to an address of row B to the second-row decoder. In some embodiments, the even numbered rows are located in one half of the one or more memory banks. In some embodiments, the odd numbered rows are located in one half of the one or more memory banks. In some embodiments, a line of extra redundant mats are placed between each of two mat lines to create distance for allowing activation. In some embodiments, lines in proximity to each other may not be activated at the same time. FIG.83Cmay illustrate a state for accessing row C after accessing row A (e.g., the next odd row included in the first sub-bank). As shown inFIG.83C, row B may be accessible by a second-row decoder. As shown, the second-row decoder has activated row B and has accessed all but the four rightmost segments (the four remaining segments not marked in gray). Because in this example, Delta equals four cycles, a bank controller may enable a first-row decoder to activate row C before accessing the rightmost segments in row B. The bank controller may enable the first-row decoder to activate row C in advance so that when row C is accessed, row C is already activated instead of waiting for activating row C. Operating in this way can reduce or completely eliminate latency associate with memory read operations. Memory Mat as a Register File In computer architecture, processor registers constitute storage locations that are quickly accessible to a computer's processors (e.g., a central processing unit (CPU)). Registers normally include memory units closest to a processor core (L0). Registers may provide the fastest way to access certain types of data. A computer may have several types of registers, each being classified according to the type of information they store or based on the types of instructions that operate on the information in a certain type of register. For example, a computer may include data registers that hold numeric information, operands, intermediate results, and configurations, address registers that store address information that are used by instructions to access primary memory, general purpose registers that store both data and address information, and status registers, among others. A register file includes a logical group of registers available for use by a computer processing unit. In many cases, a computer's register file is located within the processing unit (e.g., the CPU) and is implemented by logic transistors. In the disclosed embodiments, however, computational processing units may not reside in a traditional CPU. Instead, such processing elements (e.g., processor subunits) may be spatially distributed (as described in the sections above) within a memory chip as a processing array. Each processor subunit may be associated with one or more corresponding and dedicated memory units (e.g., memory banks). 
Through this architecture, each processor subunit may be spatially located proximate to the one or more memory elements that store the data on which a particular processor subunit is to operate. Such an architecture, as described herein, may significantly speed up the operation of certain memory intensive operations by, for example, eliminating the memory access bottleneck experienced by typical CPU and external memory architectures. The distributed processor memory chip architecture described herein, however, may still take advantage of register files including various types of registers for operating on data from memory elements dedicated to corresponding processor subunits. As the processor subunits may be distributed among the memory elements of a memory chip, however, it may be possible to add one or more memory elements (which may benefit from the specific fabrication process compared to logic elements in that same process) in the corresponding processor subunits, to function as register files or caches for the corresponding processor subunits, rather than to serve as primary memory storage. Such an architecture may offer several advantages. For example, as the register files are part of the corresponding processor subunits, the processor subunits may be spatially located close to the relevant register files. Such an arrangement may significantly increase operating efficiency. A conventional register file is implemented by logic transistors. For example, each bit of a conventional register file is made of about 12 logic transistors, and thus a register file of 16 bits is made of 192 logic transistors. Such a register file may require a large number of logic components to access the logic transistors, and therefore, may occupy a large space. As compared to register files implemented by logic transistors, the register files of the presently disclosed embodiments may require significantly less space. This reduction in size may be realized by implementing the register files of the disclosed embodiments using memory mats including memory cells, which are manufactured by processes optimized for fabricating memory structures rather than for fabricating logic structures. The reduction in size may also allow for larger register files or caches. In some embodiments, a distributed processor memory chip may be provided. The distributed processor memory chip may include a substrate, a memory array disposed on the substrate and including a plurality of discrete memory banks, and a processing array disposed on the substrate and including a plurality of processor subunits. Each one of the processor subunits may be associated with a corresponding, dedicated one of the plurality of discrete memory banks. The distributed processor memory chip may also include a first plurality of buses and a second plurality of buses. Each one of the first plurality of buses may connect one of the plurality of processor subunits to its corresponding, dedicated memory bank. Each one of the second plurality of buses may connect one of the plurality of processor subunits to another of the plurality of processor subunits. In some cases, the second plurality of buses may connect one or more of the plurality of processor subunits to two or more other processor subunits among the plurality of processor subunits. One or more of the processor subunits may also include at least one memory mat disposed on the substrate. 
The at least one memory mat may be configured to act as at least one register of a register file for one or more of the plurality of processing subunits. In some cases, the register file may be associated with one or more logic components to enable the memory mat to serve as one or more registers of the register file. For example, such logic components may include switches, amplifiers, inverters, or sense amplifiers, among others. In an example where a register file is implemented by a dynamic random access memory (DRAM) mat, logic components may be included to perform refresh operations to keep stored data from being lost. Such logic components may include row and column multiplexers ("muxes"). In addition, the register file implemented by DRAM mats may include redundancy mechanisms to counter drops in yield. FIG.84illustrates a traditional computer architecture8400, which includes a CPU8402and an external memory8406. During operation, values from memory8406may be loaded into registers associated with a register file8504included in CPU8402. FIG.85Aillustrates an exemplary distributed processor memory chip8500a, consistent with disclosed embodiments. In contrast to the architecture ofFIG.84, the distributed processor memory chip8500aincludes memory elements and processor elements disposed on the same substrate. That is, chip8500amay include a memory array and a processing array including a plurality of processor subunits each associated with one or more dedicated memory banks included in the memory array. In the architecture ofFIG.85A, registers used by the processor subunits are provided by one or more memory mats disposed on the same substrate on which the memory array and the processing array are formed. As depicted inFIG.85A, distributed processor memory chip8500amay be formed by a plurality of processing groups8510a,8510b, and8510cdisposed on a substrate8502. More specifically, distributed processor memory chip8500amay include a memory array8520and a processing array8530disposed on substrate8502. Memory array8520may include a plurality of memory banks, such as memory banks8520a,8520b, and8520c. Processing array8530may include a plurality of processor subunits, such as processor subunits8530a,8530b, and8530c. Furthermore, each of processing groups8510a,8510b, and8510cmay include a processor subunit and one or more corresponding memory banks dedicated to the processor subunit. In the embodiment depicted inFIG.85A, each one of the processor subunits8530a,8530b, and8530cmay be associated with a corresponding, dedicated memory bank8520a,8520b, or8520c. That is, processor subunit8530amay be associated with memory bank8520a; processor subunit8530bmay be associated with memory bank8520b; and processor subunit8530cmay be associated with memory bank8520c. To allow each processor subunit to communicate with its corresponding, dedicated memory bank(s), distributed processor memory chip8500amay include a first plurality of buses8540a,8540b, and8540cconnecting one of the processor subunits to its corresponding, dedicated memory bank(s). In the embodiment depicted inFIG.85A, bus8540amay connect processor subunit8530ato memory bank8520a; bus8540bmay connect processor subunit8530bto memory bank8520b; and bus8540cmay connect processor subunit8530cto memory bank8520c. 
Moreover, to allow each processor subunit to communicate with other processor subunits, distributed processor memory chip8500amay include a second plurality of buses8550aand8550bconnecting one of the processor subunits to at least one other processor subunit. In the embodiment depicted inFIG.85A, bus8550amay connect processor subunit8530ato processor subunit8530b, and bus8550bmay connect processor subunit8530ato processor subunit8530c, etc. Each one of the discrete memory banks8520a,8520b, and8520cmay include a plurality of memory mats. In the embodiment depicted inFIG.85A, memory bank8520amay include memory mats8522a,8524a, and8526a; memory bank8520bmay include memory mats8522b,8524b, and8526b; and memory bank8520cmay include memory mats8522c,8524c, and8526c. As previously disclosed with respect toFIG.10, a memory mat may include a plurality of memory cells, and each cell may comprise a capacitor, a transistor, or other circuitry that stores at least one bit of data. A conventional memory mat may comprise, for example, 512 bits by 512 bits, but the embodiments disclosed herein are not limited thereto. At least one of processor subunits8530a,8530b, and8530cmay include at least one memory mat, such as memory mats8532a,8532b, and8532c, that is configured to act as a register file for the corresponding processor subunits8530a,8530b, and8530c. That is, the at least one memory mat8532a,8532b, and8532cprovides at least one register of a register file used by one or more of the processor subunits8530a,8530b, and8530c. The register file may include one or more registers. In the embodiment depicted inFIG.85A, memory mat8532ain processor subunit8530amay serve as a register file (also referred to as "register file8532a") for processor subunit8530a(and/or any other processor subunits included in the distributed processor memory chip8500a); memory mat8532bin processor subunit8530bmay serve as a register file for processor subunit8530b; and memory mat8532cin processor subunit8530cmay serve as a register file for processor subunit8530c. At least one of processor subunits8530a,8530b, and8530cmay also include at least one logic component, such as logic components8534a,8534b, and8534c. Each logic component8534a,8534b, or8534cmay be configured to enable the corresponding memory mat8532a,8532b, or8532cto serve as a register file for the corresponding processor subunits8530a,8530b, or8530c. In some embodiments, at least one memory mat may be disposed on the substrate, and the at least one memory mat may contain at least one redundant memory bit configured to provide at least one redundant register for one or more of the plurality of processor subunits. In some embodiments, at least one of the processor subunits may include a mechanism to halt a current task and to trigger a memory refresh operation at certain times to refresh the memory mat. FIG.85Billustrates an exemplary distributed processor memory chip8500b, consistent with disclosed embodiments. Memory chip8500billustrated inFIG.85Bis substantially the same as memory chip8500aillustrated inFIG.85A, except that memory mats8532a,8532b, and8532cinFIG.85Bare not included in the corresponding processor subunits8530a,8530b, and8530c. Instead, memory mats8532a,8532b, and8532cinFIG.85Bare disposed outside of, but spatially near, the corresponding processor subunits8530a,8530b, and8530c. In this manner, memory mats8532a,8532b, and8532cmay still serve as register files for the corresponding processor subunits8530a,8530b, and8530c. 
FIG.85Cillustrates a device8500c, consistent with the disclosed embodiments. Device8500cincludes a substrate8560, a first memory bank8570, a second memory bank8572, and a processing unit8580. First memory bank8570, second memory bank8572, and processing unit8580are disposed on substrate8560. Processing unit8580includes a processor8584and a register file8582implemented by a memory mat. During operation of processing unit8580, processor8584may access register file8582to read or write data. The distributed processor memory chip8500a,8500b, or device8500cmay provide a variety of functions based on the access of the processor subunits to registers provided by memory mats. For example, in some embodiments, distributed processor memory chip8500aor8500bmay include a processor subunit that functions as an accelerator coupled to memory allowing it to use more memory bandwidth. In the embodiment depicted inFIG.85A, processor subunit8530amay function as an accelerator (also referred to as “accelerator8530a”). Accelerator8530amay use memory mat8532adisposed in accelerator8530ato provide one or more registers of a register file. Alternatively, in the embodiment depicted inFIG.85B, accelerator8530amay use memory mat8532adisposed outside of accelerator8530aas a register file. Still further, accelerator8530amay use any one of memory mats8522b,8524b, and8526bin memory bank8520b, or any one of memory mats8522c,8524c, and8526cin memory bank8520cto provide one or more registers. The disclosed embodiments may be especially useful for certain types of image processing, neural networks, database analysis, compression and decompression, and more. For example, in the embodiment ofFIG.85A or85B, a memory mat may provide one or more registers of a register file for one or more processor subunits included on the same chip as the memory mat. The one or more registers may be used to store data that are frequently accessed by the processor subunit(s). For example, during convolutional image processing, a convolution accelerator may use the same coefficients again and again on an entire image held in the memory. A suggested implementation for such a convolution accelerator may be to hold all of these coefficients in a “close” register file—that is within one or more registers included within a memory mat dedicated to one or more processor subunits located on the same chip as the register file memory mat. Such an architecture may place the registers (and the stored coefficient values) in close proximity to the processor subunits that operate on the coefficient values. Because the register file implemented by a memory mat may serve as a spatially close, efficient cache, significantly lower losses on data transfer and lower latencies in access can be achieved. In another example, the disclosed embodiments may include an accelerator that may input words into registers provided by a memory mat. The accelerator may handle the registers as a cyclic buffer to multiply vectors in a single cycle. For example, in device8500cillustrated inFIG.85C, processor8584in processing unit8580functions as an accelerator, which uses register file8582, implemented by a memory mat, as a cyclic buffer to store data A1, A2, A3, . . . . First memory bank8570stores data B1, B2, B3, . . . to be multiplied with data A1, A2, A3, . . . . Second memory bank8572stores multiplication results C1, C2, C3, . . . . That is, Ci=Ai×Bi. 
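A small Python sketch of the cyclic-buffer use of register file8582described above; the buffer length, the wrap-around indexing, and the concrete numbers are assumptions chosen only to make the Ci=Ai×Bi example concrete:

```python
class CyclicRegisterFile:
    """Register file (memory-mat backed) treated as a cyclic buffer of coefficients."""

    def __init__(self, coefficients):
        self.regs = list(coefficients)   # data A1, A2, A3, ... held close to the processor
        self.index = 0

    def next(self):
        value = self.regs[self.index]
        self.index = (self.index + 1) % len(self.regs)   # wrap around
        return value

# A* values live in the register file; B* values are streamed from the first memory bank.
register_file = CyclicRegisterFile([2, 3, 5])          # A1, A2, A3
bank_b = [10, 20, 30, 40, 50, 60]                      # B1..B6 from the memory bank
bank_c = [register_file.next() * b for b in bank_b]    # Ci = Ai * Bi, one bank read per product
print(bank_c)   # [20, 60, 150, 80, 150, 300]
```

Because the A values never leave the local register file, each product requires only a single read from the external bank, which is the bandwidth saving discussed below.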
If there is no register file in processing unit8580, then processor8584would need more memory bandwidth and more cycles to read both data A1. A2, A3, . . . and data B1, B2, B3, . . . from an external memory bank, such as memory bank8570or8572, which can cause significant delays. On the other hand, in the present embodiment, data A1, A2, A3, . . . is stored in a register file8582formed within processing unit8580. Thus, processor8584would only need to read data B1, B2, B3, . . . from the external memory bank8570. Thus, the memory bandwidth can be significantly reduced. In a memory process, usually the memory mats allow for one-way access (i.e., single access). In one-way access, there is one port to the memory. As a result, only one access operation, e.g., read or write, from a specific address can be performed at a certain time. However, if the memory mat itself allows for a two-way access, then the two-way access may be a valid option. In two-way access, two different addresses can be accessed at a certain time. The method of accessing the memory mats may be determined based on the area and the requirements. In some cases, register files implemented by memory mats may allow for four-way access, if they are connected to a processor which needs to read two sources and has one destination register. In some cases, when register files are implemented by DRAM mats to store configuration or cache data, the register files may only allow for one-way access. Standard CPUs may include many-way access mats, while one-way access mats may be more preferred for DRAM applications. When a controller or an accelerator is designed in a manner that it requires only single access to registers (in a few instances may be), the memory mat implemented registers may be used instead of a traditional register file. In the single access, only one word can be accessed at a time. For example, a processing unit may access two words from two register files at a certain time. Each one of the two register files may be implemented by a memory mat (e.g., DRAM mat) which allows for only single access. In most technologies, a memory mat IP, which is a closed block (IP) obtained from a manufacturer, would come with wirings, such as word lines and row lines, in place for row and column access. But the memory mat IP does not include the surrounding logic components. Hence the register files implemented by memory mats disclosed in the present embodiments may include logic components. The size of the memory mat may be selected based on the required size of the register file. Certain challenges may arise when using memory mats to provide registers of a register file, and these challenges may depend on the particular memory technology used to form the memory mats. For example, in memory production, not all fabricated memory cells may operate properly after production. This is a known problem, especially where there is a high density of SRAM or DRAM on a chip. To address this issue in memory technology, one or more redundancy mechanisms may be used in order to maintain yield at a reasonable level. In the disclosed embodiments, because the number of memory instances (e.g., memory banks) used to provide registers of a register file may be fairly small, redundancy mechanisms may not be as important as in normal memory applications. On the other hand, the same production issues that affect memory functionality may also impact whether a particular memory mat may function properly in providing one or more registers. 
As a result, redundancy elements may be included in the disclosed embodiments. For example, at least one redundant memory mat may be disposed on the substrate of the distributed processor memory chip. The at least one redundant memory mat may be configured to provide at least one redundant register for one or more of the plurality of processor subunits. In another example, a mat may be larger than required (e.g., 620×620 rather than 512×512), and the redundancy mechanisms may be built into the region of the memory mat outside of a 512×512 region or its equivalent. Another challenge may relate to timing. Usually the timing of loading the word and bit lines is determined by the size of the memory. Since the register file may be implemented by a single memory mat (e.g., 512×512 bits) which is fairly small, the time required for loading a word from the memory mat would be small, the timing may be sufficient to run fairly fast compared to the logic. Refresh—Some memory types, like DRAM, require refresh periodically. The refresh may be performed when pausing the processor or the accelerator. For a small memory mat, the refreshing time may be a small percentage of the time. Therefore, even if the system stops for a short period of time, the gain obtained by using memory mats as registers may be worth the downtime when looking at overall performance. In one embodiment, a processing unit may include a counter that counts backwards from a predefined number. When the counter reaches “0”, the processing unit may halt a current task performed by the processor (e.g., an accelerator) and trigger a refresh operation in which the memory mat is refreshed line by line. When the refresh operation is finished, the processor may resume its task, and the counter may be reset to count backwards from the predefined number. FIG.86provides a flowchart8600representative of an exemplary method for executing at least one instruction in a distributed processor memory chip, consistent with disclosed embodiments. For example, at step8602, at least one data value may be retrieved from a memory array on a substrate of a distributed processor memory chip. At step8604, the retrieved data value may be stored in a register provided by a memory mat of the memory array on the substrate of the distributed processor memory chip. At step8606, a processor element, such as one or more of the distributed processor subunits onboard the distributed processor memory chip, may operate on the stored data value from the memory mat register. Here and throughout, it should be understood that all references to register files should refer equally to caches, as a register file may be a lowest level cache. Processing Bottlenecks The terms “first” “second”, “third” and the like are merely used to differentiate between different terms. These terms may not indicate an order and/or timing and/or importance of the elements. For example a first process may be preceded by a second process, and the like. The term “coupled” may mean connected directly and/or connected indirectly. The terms “memory/processing”, “memory and processing” and “memory processing” are used in an interchangeable manner. There may be provided multiple methods, computer readable media, memory/processing units, and/or systems, that may be memory/processing units. A memory/processing unit is a hardware unit that has memory and processing capabilities. 
A memory/processing unit may be a memory processing integrated circuit, may be included in a memory processing integrated circuit or may include one or more memory processing integrated circuits. A memory/processing unit may be a distributed processor as illustrated in PCT patent application publication WO2019025892. A memory/processing unit may include a distributed processor as illustrated in PCT patent application publication WO2019025892. A memory/processing unit may belong to a distributed processor as illustrated in PCT patent application publication WO2019025892. A memory/processing unit may be a memory chip as illustrated in PCT patent application publication WO2019025892. A memory/processing unit may include a memory chip as illustrated in PCT patent application publication WO2019025892. A memory/processing unit may belong to a memory chip as illustrated in PCT patent application publication WO2019025892. A memory/processing unit may be a distributed processor as illustrated in PCT patent application serial number PCT/IB2019/001005. A memory/processing unit may include a distributed processor as illustrated in PCT patent application serial number PCT/IB2019/001005. A memory/processing unit may belong to a distributed processor as illustrated in PCT patent application serial number PCT/IB2019/001005. A memory/processing unit may be a memory chip as illustrated in PCT patent application serial number PCT/IB2019/001005. A memory/processing unit may include a memory chip as illustrated in PCT patent application serial number PCT/IB2019/001005. A memory/processing unit may belong to a memory chip as illustrated in PCT patent application serial number PCT/IB2019/001005. A memory/processing unit may be implemented as integrated circuits that are connected to each other using a wafer to wafer bond and multiple conductors. Any reference to a distributed processor memory chip, distributed memory processing integrated circuit, a memory chip, or a distributed processor may be implemented as a pair of integrated circuits connected to each other by a wafer to wafer bond and multiple conductors. A memory/processing unit may be manufactured by a first manufacturing process that better fits memory cells than logic cells. Thus—the first manufacturing process may be regarded as a memory flavored manufacturing process. A memory cell may include one or more transistors. A logic cell may include one or more transistors. The first manufacturing process may be applied to manufacture memory banks. A logic cell may include one or more transistors implementing together a logic function and may be used as a basic building block of a bigger logic circuit. A memory cell may include one or more transistors implementing together a memory function and may be used as a basic building block of a bigger memory circuit. Corresponding logic cells may implement the same logic function. The memory/processing unit may differ from any one of a processor, a processing integrated circuit and/or a processing unit that are manufactured by a second manufacturing process that better fits logic cells than memory cells. Thus—the second manufacturing process may be regarded as a logic flavored manufacturing process. The second manufacturing process may be used to manufacture central processing units, graphic processing units, and the like. The memory/processing unit may be more fit to perform less arithmetically intense operations than the processor, processing integrated circuit and/or processing unit. 
For example—memory cells manufactured by the first manufacturing process may exhibit a critical dimension that exceeds, and even greatly exceeds (for example by a factor that exceeds 2, 3, 4, 5, 6, 7, 8, 9, 10, and the like) the critical dimension of a logic circuit manufactured by the first manufacturing process. The first manufacturing process may be an analog manufacturing process, the first manufacturing process may be a DRAM manufacturing process, and the like. A size of a logic cell manufactured by the first manufacturing process may exceed, by a factor of at least two, the size of a corresponding logic cell manufactured by the second manufacturing process. The corresponding logic cell may have the same functionality as the logic cell manufactured by the first manufacturing process. The second manufacturing process may be a digital manufacturing process. The second manufacturing process may be any one of complementary metal-oxide-semiconductor (CMOS), bipolar, bipolar-CMOS (BiCMOS), double-diffused metal-oxide-semiconductor (DMOS), silicon on oxide manufacturing process, and the like. A memory/processing unit may include multiple processor subunits. Processor subunits of one or more memory/processing units may operate independently from each other and/or may cooperate with each other and/or perform a distributed processing. The distributed processing may be executed in various manners—for example in a flat manner or in a hierarchical manner. A flat manner may involve having processor subunits perform the same operations (and may or may not output the results of the processing between them). A hierarchical manner may involve executing a sequence of processing operations of different levels, in which a processing operation of a certain level follows a processing operation of another level. The processor subunits may be allocated (dynamically or statically) to different layers and participate in a hierarchical processing. The distributed processing may also involve other units—for example a controller of a memory/processing unit and/or units that do not belong to the memory/processing unit. The terms logic and processor subunit are used in an interchangeable manner. Any processing mentioned in the application may be executed in any manner—distributed and/or non-distributed, and the like. In the following application various references and/or incorporations by reference are made in relation to PCT patent application publication WO2019025892 and to PCT patent application serial number PCT/IB2019/001005, filed Sep. 9, 2019. PCT patent application publication WO2019025892 and/or PCT patent application serial number PCT/IB2019/001005 provide non-limiting examples of various methods, systems, processors, memory chips, and the like. Other methods, systems, and processors may be provided. There may be provided a processing system (system) in which a processor is preceded by one or more memory/processing units, each memory and processing unit (memory/processing unit) having processing resources and storage resources. The processor may request or instruct the one or more memory/processing units to perform various processing tasks. The execution of the various processing tasks may offload the processor, reduce the latency, and in some cases reduce the overall bandwidth of information between the one or more memory/processing units and the processor, and the like. 
The processor may provide instructions and/or requests at different granularity—for example the processor may send instructions aimed to certain processing resources or may send higher level instructions aimed to the memory/processing unit without specifying any processing resources. A memory/processing unit may manage its processing and/or memory resources in any manner—dynamic, static, distributed, centralized, offline, online, and the like. The management of resources may be executed autonomously, under the control of the processor, following a configuration by the processor, and the like. For example—a task may be partitioned into sub-tasks that may require an execution of one or more instructions by one or more processing resources and/or memory resources of the one or more memory/processing units. Each processing resource may be configured to execute (for example independently or not) at least one instruction. See, for example, the execution of sub-series of instructions by processing resources such as processor subunits of PCT patent application publication WO2019025892. At least the allocation of memory resources may also be provided to entities other than the one or more memory/processing units—for example a direct memory access (DMA) unit that may be coupled to the one or more memory/processing units. The compiler may prepare a configuration file per type of task executed by a memory/processing unit. The configuration file includes the memory allocation and the processing resource allocations associated with the type of task. The configuration file may include instructions that may be executed by different processing resources and/or may define memory allocations. For example—a configuration file related to a task of matrix multiplication (multiplying matrix A by matrix B, where A*B=C) may indicate where to store elements of matrix A, where to store elements of matrix B, where to store elements of matrix C, where to store intermediate results generated during the matrix multiplication, and may include instructions aimed to the processing resources for performing any mathematical operation related to the matrix multiplication. The configuration file is an example of a data structure—other data structures may be provided. The matrix multiplication may be executed in any manner by the one or more memory/processing units. The one or more memory/processing units may multiply a matrix A by a vector V. This can be done in any manner. For example—this may involve maintaining a row or column of the matrix per processing resource (different rows or columns per different processing resources), circulating between the different processing resources the outcomes of the multiplications of the rows or columns of the matrix by the vector (during the first iteration), and circulating the outcomes of a previous multiplication (during the second till last iterations). Assume that matrix A is a 4×4 matrix, the vector V is a 1×4 vector, and there are four processing resources. Under these assumptions the first row of matrix A is stored at the first processor subunit, the second row of matrix A is stored at the second processor subunit, the third row of matrix A is stored at the third processor subunit, and the fourth row of matrix A is stored at the fourth processor subunit. 
The multiplication starts by sending the first through fourth elements of vector V to the first through fourth processing resources and multiplying the elements of vector V by the different rows of A to provide first intermediate results. The multiplication continues by circulating the first intermediate results—each processing resource sends the first intermediate result it calculated to its neighbor processing resource. Each processing resource then operates on the circulated intermediate result to provide a second intermediate result. This is repeated multiple times until the multiplication of matrix A by vector V ends. FIG.90Ais an example of a system10900that includes one or more memory/processing units (collectively denoted10910) and processor10920. Processor10920may send requests or instructions (via link10931) to the one or more memory/processing units10910that in turn fulfill (or selectively fulfill) the requests and/or the instructions and send results (via link10932) to the processor10920, as illustrated above. The processor10920may further process the results to provide (via link10933) one or more outputs. The one or more memory/processing units may include J (J being a positive integer) memory resources10912(1,1)-10912(1,J) and K (K being a positive integer) processing resources10911(1,1)-10911(1,K). J may equal K, or may differ from K. The processing resources10911(1,1)-10911(1,K) may be, for example, processing groups, or processor subunits as illustrated in PCT patent application publication WO2019025892. The memory resources10912(1,1)-10912(1,J) may be memory instances, memory mats, or memory banks as illustrated in PCT patent application publication WO2019025892. There may be any connectivity and/or any functional relationships between any of the resources (memory or processing) of the one or more memory/processing units. FIG.90Bis an example of a memory/processing unit10910(1). InFIG.90B, the K (K being a positive integer) processing resources10911(1,1)-10911(1,K) form a loop as they are serially connected to each other (see link10915). Each processing resource is also coupled to its own pair of dedicated memory resources (for example—processing resource10911(1) is coupled to memory resources10912(1) and10912(2), and processing resource10911(K) is coupled to memory resources10912(J−1) and10912(J)). The processing resources may be connected to each other in any other manner. The number of memory resources allocated per processing resource may differ from two. Examples of connectivity between different resources are illustrated in PCT patent application publication WO2019025892. FIG.90Cis an example of a system10901that includes N (N being a positive integer) memory/processing units10910(1)-10910(N) and processor10920. Processor10920may send requests or instructions (via links10931(1)-10931(N)) to the memory/processing units10910(1)-10910(N) that in turn fulfill the requests and/or the instructions and send results (via links10932(1)-10932(N)) to the processor10920, as illustrated above. The processor10920may further process the results to provide (via link10933) one or more outputs. FIG.90Dis an example of a system10902that includes N (N being a positive integer) memory/processing units10910(1)-10910(N) and processor10920.FIG.90Dillustrates a preprocessor10909that precedes memory/processing units10910(1)-10910(N). 
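Returning to the matrix-vector example above, a minimal Python sketch of a ring-style distributed multiply follows. Note one simplification made for clarity: here the vector elements (with their indices) circulate between neighboring processing resources while each resource accumulates its own output element, rather than the intermediate results themselves circulating as the text describes; either direction of circulation yields the same order of neighbor-to-neighbor transfers:

```python
def ring_matvec(matrix, vector):
    """Matrix-vector multiply on a ring of n processing resources.

    Resource i holds row i of the matrix and accumulates output element i.
    The vector elements hop to the neighboring resource on every iteration,
    so after n iterations every resource has seen every vector element once.
    """
    n = len(matrix)
    held = [(j, vector[j]) for j in range(n)]   # (index, value) held by resource j
    acc = [0] * n
    for _ in range(n):
        for i in range(n):
            j, v = held[i]
            acc[i] += matrix[i][j] * v                   # local multiply-accumulate
        held = [held[(i + 1) % n] for i in range(n)]     # pass to the neighbor resource
    return acc

A = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12],
     [13, 14, 15, 16]]
V = [1, 0, 2, 1]
print(ring_matvec(A, V))   # [11, 27, 43, 59]
```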
The preprocessor may perform various preprocessing operations such as frame extraction, header detection, and the like. FIG.90Eis an example of a system10903that includes one or more memory/processing units10910and processor10920.FIG.90Eillustrates a preprocessor10909that precedes the one or more memory/processing units10910and a DMA controller10908. FIG.90Fillustrates method10800for distributed processing of at least one information stream. Method10800may start by step10810of receiving, over first communication channels, the at least one information stream by one or more memory processing integrated circuits; wherein each memory processing integrated unit comprises a controller, multiple processor subunits, and multiple memory units. Step10810may be followed by steps10820and10830. Step10820may include buffering the information streams by the one or more memory processing integrated circuits. Step10830may include performing, by the one or more memory processing integrated circuits, first processing operations on the at least one information streams to provide first processing results. Step10830may involve compression or decompression. Accordingly—an aggregate size of the information streams may exceed an aggregate size of the first processing results. An aggregate size of the information streams may reflect the amount of information received during a period of a given duration. An aggregate size of the first processing results may reflect the amount of first processing results outputted any during period of the same given duration. Alternatively—an aggregate size of the information streams (or any other information entity mentioned in the specification) is smaller than the aggregate size of the first processing results. In this case a compression is obtained. Step10830may be followed by step10840of sending the first processing results to one or more processing integrated circuits. The one or more memory processing integrated circuits may be manufactured by a memory flavored manufacturing process. The one or more processing integrated circuit may be manufactured by a logic flavored manufacturing process. In the memory processing integrated unit each of the memory units may be coupled to a processor subunit. Step10840may be followed by step10850of performing, by the one or more processing integrated circuits, second processing operations on the first processing results to provide second processing results. Step10820and/or step10830may be instructed by the one or more processing integrated circuit, may be requested by the one or more processing integrated circuit, may be executed following a configuration of the one or more memory processing integrated circuits by the one or more processing integrated circuits, or may be executed independently—without intervention from the one or more processing integrated circuit. The first processing operations may be of lower arithmetic intensity than the second processing operations. Step10830and/or step10850may be at least one out of (a) cellular network processing operations, (b) other network related processing operations (processing of networks that differ from cellular networks, (c) database processing operations, (d) database analytics processing operations, (e) artificial intelligence processing operations, or any other processing operations. 
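As a rough illustration of the ring-based matrix-by-vector scheme described above (four processing resources, each storing one row of matrix A), the following Python fragment is a minimal software simulation under one possible reading of that scheme: each processing resource holds one row of A, receives one element of V, accumulates a partial dot product, and forwards its element of V to its neighbor at every step. The rotation direction, function names and data layout are illustrative assumptions, not the only way the memory/processing units may implement the multiplication.

```python
# Minimal simulation (illustrative, not hardware code) of the circulation scheme:
# resource r stores row r of A; elements of V circulate around the ring.
def ring_matvec(A, V):
    n = len(A)                       # number of processing resources / rows of A
    held = list(V)                   # element of V currently held by each resource
    idx = list(range(n))             # column of A that the held element corresponds to
    acc = [0.0] * n                  # per-resource accumulator (one output element each)
    for _ in range(n):
        for r in range(n):
            acc[r] += A[r][idx[r]] * held[r]
        # circulate: every resource forwards its V element to its neighbour
        held = held[-1:] + held[:-1]
        idx = idx[-1:] + idx[:-1]
    return acc                       # acc[r] == dot(A[r], V)

A = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
V = [1, 0, 2, 1]
assert ring_matvec(A, V) == [sum(a * v for a, v in zip(row, V)) for row in A]
```

After n circulation steps every processing resource holds one element of the product A*V, so only n result elements, rather than whole rows of the matrix, need to leave the unit.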
Disaggregated System Memory/Processing Units and a Method for Distributed Processing
There may be provided a disaggregated system, a method for distributed processing, a processing/memory unit, a method for operating the disaggregated system, a method for operating the processing/memory unit, and a computer readable medium that is non-transitory and stores instructions for executing any of the methods. A disaggregated system allocates different subsystems to perform different functions. For example—storage may be implemented mainly in one or more storage subsystems while computing may be implemented mainly in one or more computing subsystems. The disaggregated system may be a disaggregated server, one or more disaggregated servers, and/or may differ from one or more servers. The disaggregated system may include one or more switching subsystems, one or more computing subsystems, one or more storage subsystems, and one or more processing/memory subsystems. The one or more processing/memory subsystems, the one or more computing subsystems and the one or more storage subsystems are coupled to each other via the one or more switching subsystems. The one or more processing/memory subsystems may be included in one or more subsystems of the disaggregated system. FIG. 87A illustrates various examples of disaggregated systems. Any number of any type of subsystems may be provided. A disaggregated system may include one or more additional subsystems of types not included in FIG. 87A, may include fewer types of subsystems, and the like. Disaggregated system 7101 includes two storage subsystems 7130, a computing subsystem 7120, switching subsystem 7140, and processing/memory subsystem 7110. Disaggregated system 7102 includes two storage subsystems 7130, a computing subsystem 7120, switching subsystem 7140, processing/memory subsystem 7110 and accelerator subsystem 7150. Disaggregated system 7103 includes two storage subsystems 7130, a computing subsystem 7120, and a switching subsystem 7140 that includes processing/memory subsystem 7110. Disaggregated system 7104 includes two storage subsystems 7130, a computing subsystem 7120, a switching subsystem 7140 that includes processing/memory subsystem 7110, and accelerator subsystem 7150. The inclusion of the processing/memory subsystem 7110 in the switching subsystem 7140 may reduce the traffic relative to disaggregated systems 7101 and 7102, may reduce the latency of the switching, and the like. The different subsystems of the disaggregated system may communicate with each other using various communication protocols. It has been found that using Ethernet and even RDMA over Ethernet communication protocols may increase the throughput and may even reduce the complexity of various control and/or storage operations related to the exchange of information units between the elements of the disaggregated system. The disaggregated system may perform distributed processing by allowing the processing/memory subsystems to participate in calculations—especially by executing memory intense calculations. For example—assuming that N computing units should share information units between them (all to all sharing)—then (a) the N information units may be sent to one or more processing/memory units of the one or more processing/memory subsystems, (b) the one or more processing/memory units may perform the calculation that required the all to all sharing, and (c) send N updated information units to the N computing units. This will require on the order of N transfer operations.
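The traffic pattern just described can be sketched in a few lines of Python. This is only an illustration under stated assumptions (an element-wise sum as the shared calculation, three computing units, illustrative names); the neural network model update of FIG. 87B, discussed next, is one concrete instance of this pattern.

```python
# Instead of an all-to-all exchange between N computing units (order N^2
# transfers), each unit sends its partial update to a processing/memory
# subsystem, which aggregates them and returns one combined result to every
# unit (order N transfers each way).  The aggregation rule is illustrative.
def aggregate_partial_updates(partial_updates):
    """Runs on the processing/memory subsystem: element-wise sum of the
    partial model updates received from the N computing units."""
    length = len(partial_updates[0])
    return [sum(update[i] for update in partial_updates) for i in range(length)]

# Each of N computing units produces a partial update of the shared model.
partial_updates = [
    [0.1, 0.2, 0.3],   # from computing unit 1
    [0.0, 0.1, 0.1],   # from computing unit 2
    [0.2, 0.0, 0.1],   # from computing unit 3
]
updated_model = aggregate_partial_updates(partial_updates)
# The single aggregated result is then sent back to all N computing units.
```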
For example—FIG. 87B illustrates a distributed processing that updates a model of a neural network (the model includes the weights assigned to the nodes of the neural network). Each one of N computing units PU(1)-PU(N) 7120(1)-7120(N) may belong to computing subsystem 7120 of any one of the disaggregated systems 7101, 7102, 7103 and 7104. The N computing units calculate N partial model updates (N different parts of an update) 7121(1)-7121(N) and send them (over the switching subsystem 7140) to the processing/memory subsystem 7110. The processing/memory subsystem 7110 calculates the updated model 7122 and sends (via the switching subsystem 7140) the updated model to the N computing units PU(1)-PU(N) 7120(1)-7120(N). FIGS. 87C, 87D and 87E illustrate examples of memory/processing units 7011, 7012 and 7013 respectively, and FIGS. 87F and 87G illustrate integrated circuits 7014 and 7015 that include a memory/processing unit 9010 and one or more communication modules such as an Ethernet module and an RDMA over Ethernet module 7022. The memory/processing units include controller 9020, an internal bus 9021, and multiple pairs of logic 9030 and memory banks 9040. The controller is configured to operate as a communication module or may be coupled to a communication module. The connectivity between the controller 9020 and the multiple pairs of logic 9030 and memory banks 9040 may be implemented in other manners. The memory banks and the logic may be arranged in other manners (not in pairs). One or more memory/processing units 9010 of processing/memory subsystem 7110 may process the model update in parallel (using different logics, and retrieving in parallel different parts of the model from different memory banks) and, benefiting from the large amount of memory resources and the very high bandwidth of the connections between the memory banks and the logic, may perform such calculations in a highly efficient manner. Memory/processing units 7011, 7012 and 7013 of FIGS. 87C-87E and integrated circuits 7014 and 7015 of FIGS. 87F and 87G include one or more communication modules such as an Ethernet module 7023 (in FIGS. 87C-87G) and an RDMA over Ethernet module 7022 (in FIGS. 87E and 87G). Having such RDMA and/or Ethernet modules (either within the memory/processing unit or within the same integrated circuit as the memory/processing unit) greatly speeds the communication between the different elements of the disaggregated system and, in the case of RDMA, greatly simplifies the communication between different elements of the disaggregated system. It should be noted that a memory/processing unit that includes RDMA and/or Ethernet modules may be beneficial in other environments—even when the memory/processing units are not included in a disaggregated system. It should also be noted that the RDMA and/or Ethernet modules may be allocated per a group of memory/processing units—for example—for cost reduction reasons. It should be noted that a memory/processing unit, a group of memory/processing units and even a processing/memory subsystem may include other communication ports—for example a PCIe communication port. Using an RDMA and/or Ethernet module may be cost effective as it may eliminate the need to connect a memory/processing unit to a bridge that is connected to a network interface card (NIC) that may have an Ethernet port. Using an RDMA and/or Ethernet module may cause the Ethernet (or an RDMA over Ethernet) to be native in the memory/processing units. It should be noted that the Ethernet is merely an example of a local area network (LAN) protocol.
PCIe is merely an example of another communication protocol that may be used over larger distances than the Ethernet. FIG.87Hillustrates method7000for distributed processing. Method7000may include one or more processing iterations. A processing iteration may be executed by one or more memory processing integrated circuits of a disaggregated system. A processing iteration may be executed by one or more processing integrated circuits of a disaggregated system. A processing iteration executed by the more memory processing integrated circuits may be followed by a processing iteration executed by the one or more processing integrated circuits. A processing iteration executed by the more memory processing integrated circuits may be preceded by a processing iteration executed by the one or more processing integrated circuits. Yet another processing iteration may be executed by other circuits of the disaggregated system. For example—one or more preprocessing circuits may perform any type of preprocessing—including preparing the information units for a processing iteration by the one or more memory processing integrated circuits. Method7000may include step7020of receiving information units by one or more memory processing integrated circuits of a disaggregated system. Each memory processing integrated unit may include a controller, multiple processor subunits, and multiple memory units. The one or more memory processing integrated circuits may be manufactured by a memory flavored manufacturing process. The information units may convey parts of a model of a neural network. The information units may convey partial results of at least one database query. The information units may convey partial results of at least one aggregative database query. Step7020may include receiving the information units from one or more storage subsystems of the disaggregated system. Step7020may include receiving the information units from one or more computing subsystems of the disaggregated system, the one or more computing subsystems may include multiple processing integrated circuits that are manufactured by a logic flavored manufacturing process. Step7020may be followed by step7030of performing, by the one or more memory processing integrated circuits, processing operations on the information units to provide processing results. The aggregate size of the information units may exceed, may equal or may be smaller than an aggregate size of the processing results. Step7030may be followed by step7040of outputting the processing results by the one or more memory processing integrated circuits. Step7040may include outputting the processing results to one or more computing subsystems of the disaggregated system, the one or more computing subsystems may include multiple processing integrated circuits that are manufactured by a logic flavored manufacturing process. Step7040may include outputting the processing results to one or more storage subsystems of the disaggregated system. The information units may be sent from different groups of processing units of the multiple processing integrated circuits and may be different parts of an intermediate result of a process that is executed in the distributed manner by the multiple processing integrated circuits. A group of processing units may include at least one processing integrated circuit. Step7030may include processing the information units to provide a result of the entire process. 
Step 7040 may include sending the result of the entire process to each one of the multiple processing integrated circuits. The different parts of the intermediate result may be different parts of an updated neural network model, and the result of the entire process may be the updated neural network model. Step 7040 may include sending the updated neural network model to each one of the multiple processing integrated circuits. Step 7040 may be followed by step 7050 of performing another processing by the multiple processing integrated circuits based, at least in part, on the processing results sent to the multiple processing integrated circuits. Step 7040 may include outputting the processing results using switching subsystems of the disaggregated system. Step 7020 may include receiving information units that are preprocessed information units. FIG. 87I illustrates method 7001 for distributed processing. Method 7001 differs from method 7000 by including step 7010 of preprocessing information to provide the preprocessed information units by the multiple processing integrated circuits. Step 7010 may be followed by steps 7020, 7030 and 7040.
Database Analytics Acceleration
There is provided a device, a method and a computer readable medium that stores instructions for performing at least the filtering by filtering units that belong to the same integrated circuit as the memory unit, wherein the filters may indicate which entries are relevant to a certain database query. An arbitrator or any other flow control manager may send the relevant entries to the processor and not send irrelevant entries to the processor—thus saving most of the traffic to and from the processor. See, for example, FIG. 91A that shows a processor (CPU 9240) and an integrated circuit that includes a memory and filtering system 9220. The memory and filtering system 9220 may include filtering units 9224 that are coupled to memory unit entries 9222 and to one or more arbitrators—such as arbitrator 9229 for sending relevant entries to the processor. Any arbitration process may be applied. There may be any relationship between the number of entries, the number of filtering units and the number of arbitrators. The arbitrator may be replaced by any unit capable of controlling the flow of information—for example a communication interface, a flow controller, and the like. Referring to the filtering—it is based on one or more relevancy/filtering criteria. The relevancy may be set per database query and may be indicated in any manner—for example—the memory units may store relevancy flags 9224′ indicating which entry is relevant. There is also a storage device 9210 that stores K database segments 9220(k), where k ranges between 1 and K. It should be noted that the entire database may be stored in memory units and not in a storage device (said solution is also referred to as a volatile memory stored database). The memory unit entries may be too small for storing the entire database—and thus may receive one segment at a time. The filtering units may perform filtering operations such as comparing a value of a field to a threshold, comparing a value of a field to a predefined value, determining if a value of a field is within a predefined range, and the like. Thus—the filtering unit may perform known database filtering operations and may be a compact and cheap circuit. The outcome of the filtering operation (for example—content of relevant database entries) 9101 is sent to CPU 9240 for processing.
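To make the filtering role more concrete, the following Python sketch mimics what a filtering unit plus arbitrator might do for the predicate forms mentioned above (threshold, equality, range). The criterion encoding, the function names and the dictionary-shaped entries are illustrative assumptions; the actual filtering units are compact hardware circuits, not software.

```python
# Sketch of near-memory filtering: each filtering unit applies the query's
# relevancy criterion to the entries of its memory unit, and only relevant
# entries are forwarded toward the processor.
def make_predicate(criterion):
    kind, field, *args = criterion
    if kind == "greater_than":
        return lambda entry: entry[field] > args[0]
    if kind == "equals":
        return lambda entry: entry[field] == args[0]
    if kind == "in_range":
        lo, hi = args
        return lambda entry: lo <= entry[field] <= hi
    raise ValueError(f"unknown criterion {kind!r}")

def filter_memory_unit(entries, criterion):
    """Runs next to one memory unit; returns only the relevant entries."""
    relevant = make_predicate(criterion)
    return [entry for entry in entries if relevant(entry)]

entries = [{"id": 1, "price": 30}, {"id": 2, "price": 75}, {"id": 3, "price": 50}]
# Only the relevant entries cross the bus to the CPU for further processing.
to_cpu = filter_memory_unit(entries, ("greater_than", "price", 40))
```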
The memory and filtering system 9220 may be replaced by a memory and processing system—as illustrated in FIG. 91B. A memory and processing system 9229 includes processing units 9225 that are coupled to memory unit entries 9222. The processing units 9225 may perform filtering operations and may participate, at least in part, in executing the one or more additional operations on the relevant records. A processing unit may be tailored to perform specific operations and/or may be a programmable unit configured to perform multiple operations. For example—the processing unit may be a pipelined processing unit, may include an ALU, may include multiple ALUs, and the like. The processing units 9225 may perform the entire one or more additional operations. Alternatively—a part of the one or more additional operations may be executed by the processing units, and the processor (CPU 9240) may execute another part of the one or more additional operations. The outcome of the processing operation (for example—partial response 9102 to the database query or a full response 9103) is sent to CPU 9240. A partial response requires further processing. FIG. 92A illustrates memory/processing system 9228 that includes memory/processing units 9227 that are configured to perform the filtering and additional processing. The memory/processing system 9228 implements the processing units and the memory units of FIG. 91B by memory/processing units 9227. The role of the processor may include controlling the processing units, executing at least a part of the one or more additional operations, and the like. The combination of the memory entries and the processing units may be implemented, at least in part, by one or more memory/processing units. FIG. 92B illustrates an example of a memory/processing unit 9010. The memory/processing unit 9010 includes controller 9020, an internal bus 9021, and multiple pairs of logic 9030 and memory banks 9040. The controller is configured to operate as a communication module or may be coupled to a communication module. The connectivity between the controller 9020 and the multiple pairs of logic 9030 and memory banks 9040 may be implemented in other manners. The memory banks and the logic may be arranged in other manners (not in pairs). Multiple memory banks may be coupled to and/or managed by a single logic. A database query 9100 is received, by a memory/processing system, via an interface 9211. Interface 9211 may be a bus, a port, an input/output interface, and the like. It should be noted that the response to a database query can be generated by at least one of (or a combination of one or more of): one or more memory/processing systems, one or more memory and processing systems, one or more memory and filtering systems, one or more processors located outside these systems, and the like. It should be noted that the response to a database query can be generated by at least one of (or a combination of one or more of): one or more filtering units, one or more memory/processing units, one or more processing units, one or more other processors (such as one or more other CPUs), and the like. Any process may include finding the relevant database entries and then processing the relevant database entries. The processing may be executed by one or more processing entities.
A processing entity may be at least one out of a processing unit of a memory and processing system (for example processing units 9225 of memory and processing system 9229), a processor subunit (or logic) of a memory/processing unit, another processor (for example—CPU 9240 of FIGS. 91A, 91B and 74), and the like. The processing involved in the generation of the response to a database query may be performed by any one of the following or a combination thereof:
a. Processing units 9225 of memory and processing system 9229.
b. Processing units 9225 of different memory and processing systems 9229.
c. Processor subunits (or logic 9030) of one or more memory/processing units 9227 of a memory/processing system 9228.
d. Processor subunits (or logic 9030) of memory/processing units 9227 of different memory/processing systems 9228.
e. Controllers of one or more memory/processing units 9227 of a memory/processing system 9228.
f. Controllers of one or more memory/processing units 9227 of different memory/processing systems 9228.
Thus, a processing involved in responding to a database query may be executed by any combination or sub-combination of (a) one or more controllers of one or more memory/processing units, (b) one or more processing units of memory processing systems, (c) one or more processor subunits of one or more memory/processing units, and (d) one or more other processors, and the like. A processing executed by more than one processing entity may be referred to as a distributed processing. It should be noted that the filtering may be executed by filtering entities out of one or more filtering units and/or one or more processing units and/or one or more processor subunits. In this sense a processing unit and/or a processor subunit that performs a filtering operation may be referred to as a filtering unit. A processing entity may be a filtering entity or may differ from a filtering entity. A processing entity may perform processing operations on database entries that were deemed to be relevant by another filtering entity. A processing entity may also perform the filtering operation. A response to a database query may utilize one or more filtering entities and one or more processing entities. The one or more filtering entities and the one or more processing entities may belong to the same system (for example—memory/processing system 9228, memory and processing system 9229, memory and filtering system 9220) or to different systems. A memory/processing unit may include multiple processor subunits. The processor subunits may operate independently from each other, may partially collaborate with each other, may participate in a distributed processing, and the like. FIG. 92C illustrates multiple memory and filtering systems 9220, multiple other processors (such as CPU 9240) and a storage device 9210. The multiple memory and filtering systems 9220 may participate (concurrently or not) in the filtering of one or more database entries—based on one or more filtering criteria within one or more database queries. FIG. 92D illustrates multiple memory and processing systems 9229, multiple other processors (such as CPU 9240) and a storage device 9210. The multiple memory and processing systems 9229 may participate (concurrently or not) in the filtering and at least part of the processing involved in responding to one or more database queries. FIG. 92F illustrates multiple memory/processing systems 9228, multiple other processors (such as CPU 9240) and a storage device 9210.
The multiple memory/processing systems 9228 may participate (concurrently or not) in the filtering and at least part of the processing involved in responding to one or more database queries. FIG. 92G illustrates method 9300 for database analytics acceleration. Method 9300 may start by step 9310 of receiving, by a memory processing integrated circuit, a database query that comprises at least one relevancy criterion indicative of database entries of a database that are relevant to the database query. The database entries of a database that are relevant to the database query may be none, one, some or all of the database entries of the database. The memory processing integrated circuit may include a controller, multiple processor subunits, and multiple memory units. Step 9310 may be followed by step 9320 of determining, by the memory processing integrated circuit and based on the at least one relevancy criterion, a group of relevant database entries stored in the memory processing integrated circuit. Step 9320 may be followed by step 9330 of sending the group of relevant database entries to one or more processing entities for further processing without substantially sending irrelevant data entries stored in the memory processing integrated circuit to the one or more processing entities. The phrase "without substantially sending" means either not sending at all (during the responding to the database query) or sending an insignificant number of irrelevant entries. Insignificant may mean up to 1, 2, 3, 4, 5, 6, 7, 8, 9 or 10 percent—or sending any amount that does not have a significant effect on bandwidth. Step 9330 may be followed by step 9340 of processing the group of relevant database entries to provide a response to the database query. FIG. 92H illustrates method 9301 for database analytics acceleration. It is assumed that the filtering and the entire processing required for responding to the database query are executed by a memory processing integrated circuit. Method 9301 may start by step 9310 of receiving, by a memory processing integrated circuit, a database query that comprises at least one relevancy criterion indicative of database entries of a database that are relevant to the database query. Step 9310 may be followed by step 9320 of determining, by the memory processing integrated circuit and based on the at least one relevancy criterion, a group of relevant database entries stored in the memory processing integrated circuit. Step 9320 may be followed by step 9331 of sending the group of relevant database entries to one or more processing entities of the memory processing integrated circuit for full processing, without substantially sending irrelevant data entries stored in the memory processing integrated circuit to the one or more processing entities. Step 9331 may be followed by step 9341 of fully processing the group of relevant database entries to provide a response to the database query. Step 9341 may be followed by step 9351 of outputting the response to the database query from the memory processing integrated circuit. FIG. 92I illustrates method 9302 for database analytics acceleration. It is assumed that the filtering and only a part of the processing required for responding to the database query are executed by a memory processing integrated circuit. The memory processing integrated circuit will output partial results that will be processed by one or more other processing entities located outside the memory processing integrated circuit.
Method 9302 may start by step 9310 of receiving, by a memory processing integrated circuit, a database query that comprises at least one relevancy criterion indicative of database entries of a database that are relevant to the database query. Step 9310 may be followed by step 9320 of determining, by the memory processing integrated circuit and based on the at least one relevancy criterion, a group of relevant database entries stored in the memory processing integrated circuit. Step 9320 may be followed by step 9332 of sending the group of relevant database entries to one or more processing entities of the memory processing integrated circuit for partial processing, without substantially sending irrelevant data entries stored in the memory processing integrated circuit to the one or more processing entities. Step 9332 may be followed by step 9342 of partially processing the group of relevant database entries to provide an intermediate response to the database query. Step 9342 may be followed by step 9352 of outputting the intermediate response to the database query from the memory processing integrated circuit. Step 9352 may be followed by step 9390 of further processing the intermediate response to provide the response to the database query. FIG. 92J illustrates method 9303 for database analytics acceleration. It is assumed that the filtering, and not the processing of the relevant database entries, is executed by a memory processing integrated circuit. The memory processing integrated circuit will output the group of relevant database entries that will be fully processed by one or more other processing entities located outside the memory processing integrated circuit. Method 9303 may start by step 9310 of receiving, by a memory processing integrated circuit, a database query that comprises at least one relevancy criterion indicative of database entries of a database that are relevant to the database query. Step 9310 may be followed by step 9320 of determining, by the memory processing integrated circuit and based on the at least one relevancy criterion, a group of relevant database entries stored in the memory processing integrated circuit. Step 9320 may be followed by step 9333 of sending the group of relevant database entries to one or more processing entities located outside the memory processing integrated circuit without substantially sending irrelevant data entries stored in the memory processing integrated circuit to the one or more processing entities. Step 9333 may be followed by step 9391 of fully processing the group of relevant database entries to provide the response to the database query. FIG. 92K illustrates method 9304 of database analytics acceleration. Method 9304 may start by step 9315 of receiving, by an integrated circuit, a database query that comprises at least one relevancy criterion indicative of database entries of a database that are relevant to the database query; wherein the integrated circuit comprises a controller, filtering units, and multiple memory units. Step 9315 may be followed by step 9325 of determining, by the filtering units and based on the at least one relevancy criterion, a group of relevant database entries stored in the integrated circuit. Step 9325 may be followed by step 9335 of sending the group of relevant database entries to one or more processing entities located outside the integrated circuit for further processing without substantially sending irrelevant data entries stored in the integrated circuit to the one or more processing entities. Step 9335 may be followed by step 9391.
FIG.92Lillustrates method9305of database analytics acceleration. Method9305may start by step9314of receiving, by an integrated circuit, a database query that comprises at least one relevancy criterion indicative of database entries of a database that are relevant to the database query; wherein the integrated circuit comprises a controller, processing units, and multiple memory units. Step9314may be followed by step9324of determining, by the processing units and based on the at least one relevancy criterion, a group of relevant database entries stored in the integrated circuit. Step9324may be followed by step9334of processing the group of relevant database entries by the processing units without processing irrelevant data entries stored in the integrated circuit by the processing units to provide processing results. Step9334may be followed by step9344of outputting the processing results from the integrated circuit. In any one of methods9300,9301,9302,9304and9305the memory processing integrated circuit outputs an output. The output may be the group of relevant database entries, one or more intermediate results or one or more (full) results. The outputting may be preceded by retrieving one or more relevant data base entries and/or one or more results (full or intermediate) from filtering entities and/or processing entities of the memory processing integrated circuit. The retrieving may be controlled in one or more manner and may be controlled by an arbiter and/or one or more controller of the memory processing integrated circuit. The outputting and/or the retrieval may include controlling one or more parameters of the retrieval and/or of the outputting. The parameters may include timing of retrieval, rate of the retrieval, source of the retrieval, bandwidth, order or retrieval, timing of outputting, rate of the outputting, source of the outputting, bandwidth, order or outputting, type of retrieval method, type of arbitration method, and the like. The outputting and/or the retrieving may perform a flow control process. The outputting and/or the retrieving (for example the applying of the flow control process) may be responsive to indicators, outputted from the one or more processing entities, regarding a completion of a processing of database entries of the group. An indicator may indicate whether an intermediate result is ready to be retrieved from a processing entity or not. The outputting may include attempting to match a bandwidth used during the outputting to a maximal allowable bandwidth over a link that coupled the memory processing integrated circuit to a requestor unit. The link may be a link to a recipient of the output of the memory processing integrated circuit. The maximal allowable bandwidth may be dictated by the capacity and/or availability of the link, by the capacity and/or availability of the recipient of the outputted content, and the like. The outputting may include attempting to output the outputted content in an optimal or suboptimal manner. The outputting of the outputted content may include attempting to maintain fluctuations of output traffic rate below a threshold. Any method of methods9300,9301,9302and9305may include generating by the one or more processing entities processing status indicators that may be indicative of a progress of the further processing of the group of relevant database entries. 
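The bandwidth-matching and fluctuation-limiting behavior described above can be illustrated with a very small pacing sketch in Python. The per-tick byte budget, the readiness ordering and all names are illustrative assumptions; real controllers would apply whatever flow control process the link and the requestor require.

```python
# Sketch of paced outputting: retrieve results whose readiness was indicated
# and emit them without exceeding a maximum allowable byte budget per tick.
def paced_output(ready_results, max_bytes_per_tick):
    """Yields (tick, result) pairs, never exceeding max_bytes_per_tick per tick."""
    tick, budget = 0, max_bytes_per_tick
    for result in ready_results:          # only results whose readiness was indicated
        size = len(result)
        if size > budget:                 # would exceed the link budget this tick
            tick, budget = tick + 1, max_bytes_per_tick
        budget -= size
        yield tick, result

results = [b"entry-17", b"entry-42", b"partial-sum-3", b"entry-90"]
schedule = list(paced_output(results, max_bytes_per_tick=16))
# Results are spread over ticks so that the output rate stays near, but below,
# the allowable bandwidth of the link to the requestor unit.
```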
When the processing included in any of the mentioned above methods is executed by more than a single processing entity—then the processing may be regarded as a distributed processing—as it is executed in a distributed manner. As indicated above—the processing may be executed in a hierarchical manner or in a flat manner. Any one of methods9300-9305may be executed by multiple systems—that may respond to one or multiple database queries—concurrently or sequentially. Word Embedding As mentioned above—word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of elements. Conceptually it involves a mathematical embedding from a space with many dimensions per word to a continuous vector space with a much lower dimension. The vectors may be mathematically processed. For example—the vectors that belong to the matrix may be summed to provide a summed vector. Yet for another example—the covariance of the matrix (of the sentence) may be calculated. This may include multiplying the matrix by its transposed matrix. A memory/processing unit may store the vocabulary. Especially—parts of the vocabulary may be stored in multiple memory banks of the memory/processing unit. Thus—the memory/processing unit may be accessed with access information that (such as retrieval keys) will represent the set of words of phrases of a sentence—so that the vectors that represent the words of phrases of the sentence will be retrieved from at least some of the memory banks of the memory/processing unit. Different memory banks of the memory/processing unit may store different parts of the vocabulary—and may be accessed in parallel (depending on the distribution of indexes of a sentence). Even when a more than a single line of a memory bank needs to be sequentially accessed—prediction may reduce the penalty. The allocation of words of the vocabulary between the different memory banks of the memory/processing unit may be optimized or highly beneficial in the senses that it may increase the chances of parallel access to different memory banks of the memory/processing unit per sentence. The allocation can be learnt per user, can be learnt per the general population or may be learnt per a group of persons. In addition—the memory/processing unit may also be used to perform at least some of the processing operations (by its logic) and thereby may reduce bandwidth required from busses outside the memory/processing unit, may calculate in an efficient manner (even in parallel) multiple computations (using multiple processors of the memory/processing unit in parallel. A memory bank may be associated with a logic. At least a part of the processing operations may be executed by one or more additional processors (such as vector processors—including but not limited to vector adders). A memory/processing unit may include one or more additional processor that may be allocated to some or all of the memory bank—logic pairs. Thus—a single additional processor may be allocated to all or some of the memory bank—logic pairs. Yet for another example—the additional processors may be arranged in a hierarchical manner so that an additional processor of some level processes output from additional processors of a lower level. It should be noted that the processing operation may be executed without using any additional processors—but may be executed by the logic of the memory/processing unit. 
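As an illustration of the word-embedding access pattern described above, the following Python sketch spreads a vocabulary of feature vectors over several memory banks, retrieves the vectors selected by the retrieval keys of a sentence (keys that hit different banks could be served in parallel), and lets the in-memory logic reduce them to a single summed vector so that less data leaves the unit. The modulo bank assignment, the summation and all names are illustrative assumptions.

```python
# Sketch of a distributed vocabulary: vectors live in several memory banks,
# retrieval keys select them, and the unit may output a reduced result.
NUM_BANKS = 4

def bank_of(key):                 # which memory bank stores this vocabulary entry
    return key % NUM_BANKS

def store_vocabulary(vectors):
    banks = [dict() for _ in range(NUM_BANKS)]
    for key, vec in vectors.items():
        banks[bank_of(key)][key] = vec
    return banks

def lookup_and_sum(banks, retrieval_keys):
    """Retrieve the vectors for a sentence and return their element-wise sum."""
    picked = [banks[bank_of(k)][k] for k in retrieval_keys]
    return [sum(col) for col in zip(*picked)]

vocabulary = {0: [1.0, 0.0], 1: [0.5, 0.5], 2: [0.0, 1.0], 5: [0.2, 0.8]}
banks = store_vocabulary(vocabulary)
sentence_keys = [1, 2, 5]         # previously recognized words/phrases
summed_vector = lookup_and_sum(banks, sentence_keys)   # [0.7, 2.3]
```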
FIGS. 89A, 89B, 89C, 89D, 89E, 89F and 89G illustrate examples of memory/processing units 9010, 9011, 9012, 9013, 9014, 9015 and 9019 respectively. Memory/processing unit 9010 includes a controller 9020, an internal bus 9021, and multiple pairs of logic 9030 and memory banks 9040. It should be noted that the logic 9030 and memory banks 9040 may be coupled to the controller and/or to each other in other manners—for example multiple buses may be provided between the controller and the logics, the logics may be arranged in multiple layers, a single logic may be shared by multiple memory banks (see for example FIG. 89E), and the like. The length of a page of each memory bank within the memory/processing unit 9010 can be defined in any manner—for example it may be small enough, and the number of memory banks can be big enough, to enable outputting in parallel a large number of vectors without wasting many bits on irrelevant information. Logic 9030 may include a full ALU, a partial ALU, a memory controller, a partial memory controller, and the like. A partial ALU (memory controller) unit is capable of executing only a part of the functions executable by the full ALU (memory controller). Any logic or sub-processor illustrated in the application may include a full ALU, a partial ALU, a memory controller, a partial memory controller, and the like. The connectivity between the controller 9020 and the multiple pairs of logic 9030 and memory banks 9040 may be implemented in other manners. The memory banks and the logic may be arranged in other manners (not in pairs). The memory/processing unit 9010 may not have additional processors—and the processing of the vectors (from the memory banks) is done by the logic 9030. FIG. 89B illustrates an additional processor such as vector processor 9050 that is coupled to internal bus 9021. FIG. 89C illustrates additional processors such as vector processors 9050 that are coupled to internal bus 9021. The one or more additional processors execute (alone or in cooperation with the logic) the processing operations. FIG. 89D illustrates a host 9018 coupled via bus 9022 to the memory/processing unit 9010. FIG. 89D also illustrates the vocabulary 9070 that maps words/phrases 9072 to vectors 9073. The memory/processing unit is accessed using retrieval keys 9071, where each retrieval key represents a previously recognized word or phrase. Host 9018 sends to the memory/processing unit multiple retrieval keys 9071 that represent a sentence, and the memory/processing unit may output the vectors 9073 or an outcome of a processing operation applied to the vectors related to the sentence. The words/phrases themselves are usually not stored in the memory/processing unit 9010. Memory controller functionality for controlling the memory banks may be included (solely or partially) in the logic, may be included (solely or partially) in controller 9020 and/or may be included (solely or partially) in one or more memory controllers (not shown) within the memory/processing unit 9010. The memory/processing unit may be configured to maximize the throughput of vectors/results sent to the host 9018—or may apply any process for controlling internal memory/processing unit traffic and/or controlling traffic between the memory/processing unit and the host computer (or any other entity outside the memory/processing unit). The different logics 9030 are coupled to the memory banks 9040 of the memory/processing unit and may perform (preferably in parallel) mathematical operations on the vectors to generate processed vectors.
One logic 9030 may send a vector to another logic (see, for example, line 38 of FIG. 89G) and the other logic may apply a mathematical operation on the received vector and the vector it calculated. The logic may be arranged in levels, and a logic of a certain level may process vectors or intermediate results (generated from applying mathematical operations) from a previous level logic. The logics may form a tree (binary, trinary, and the like). When the aggregate size of processed vectors exceeds the aggregate size of the results, then a reduction of output bandwidth (out of the memory/processing unit) is obtained. For example—when K vectors are summed by the memory/processing unit to provide a single output vector—then a K:1 reduction in bandwidth is obtained. Controller 9020 may be configured to open multiple memory banks in parallel by broadcasting the addresses of the different vectors to be accessed. The controller may be configured to control the order of retrieving the different vectors from the multiple memory banks (or from any intermediate buffer or storage circuit that stores the different vectors—see buffers 9033 of FIG. 89D) based, at least in part, on the order of the words or phrases in the sentence. The controller 9020 may be configured to manage the retrieval of the different vectors based on one or more parameters related to outputting the vectors outside the memory/processing unit 9010—for example the rate of the retrieval of the different vectors from the memory banks may be set to substantially equal the allowable rate of outputting the different vectors from the memory/processing unit 9010. The controller may output the different vectors outside the memory/processing unit 9010 by applying any traffic shaping process. For example—the controller 9020 may aim to output the different vectors at a rate that is as close as possible to a maximal rate allowable by the host computer or the link coupling the memory/processing unit 9010 to the host computer. Yet for another example—the controller may output the different vectors while minimizing or at least substantially reducing traffic rate fluctuations over time. The controller 9020 belongs to the same integrated circuit as the memory banks 9040 and logic 9030, and thus can easily receive feedback from the different logic/memory banks regarding the status of the retrieval of the different vectors (for example whether a vector is ready, whether a vector is ready but another vector is being retrieved or is about to be retrieved from the same memory bank), and the like. The feedback may be provided in any manner—over a dedicated control line, over a shared control line, using a status bit or status bits, and the like (see status lines 9039 of FIG. 89F). The controller 9020 can independently control the retrieval and outputting of the different vectors—and thus may reduce the involvement of the host computer. Alternatively—the host computer may be unaware of the management capabilities of the controller and may continue to send detailed instructions—and in this case the memory/processing unit 9010 may ignore the detailed instructions, may conceal the management capabilities of the controller, and the like. The above-mentioned solutions may be used based on the protocol that is manageable by the host computer.
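A small sketch may help to illustrate the feedback-driven retrieval just described: the controller watches per-bank status indications and outputs the vectors in sentence order as soon as the vector at the head of the order becomes ready. The status representation and all names are illustrative assumptions; this is a simulation, not hardware code.

```python
# Sketch of status-driven, in-order outputting: banks signal readiness of
# requested vectors, and the controller emits them in the order of the words
# in the sentence, waiting only when the next-needed vector is not ready yet.
def output_in_sentence_order(sentence_order, ready_events):
    """sentence_order: vector ids in the order they must be output.
    ready_events: vector ids in the order their banks report readiness
    (the feedback carried over the status lines)."""
    ready, out, next_idx = set(), [], 0
    for vec_id in ready_events:
        ready.add(vec_id)
        # drain: emit every vector whose turn has come and that is ready
        while next_idx < len(sentence_order) and sentence_order[next_idx] in ready:
            out.append(sentence_order[next_idx])
            next_idx += 1
    return out

# Vectors become ready out of order, but are output in sentence order.
assert output_in_sentence_order([7, 3, 9], ready_events=[3, 9, 7]) == [7, 3, 9]
```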
It has been found that performing processing operations in the memory/processing unit is very beneficial (energy wise) even when these operations are more power consuming than the processing operations in the host—and even when these operations are more power consuming than the transfer operations between the host and the memory/processing unit. For example, assuming large enough vectors—if the energy consumption of transferring a data unit is 4 pJ and the energy consumption of a processing operation of the data unit (by the host) is 0.1 pJ—then the processing of the data unit by the memory/processing unit is more effective when the energy consumption of processing of the data unit by the memory/processing unit is lower than 5 pJ. Each vector (of the matrix that represents the sentence) may be represented by a sequence of words (or other multiple-bit segments). For simplicity of explanation it is assumed that the multiple bit segments are words. Additional power saving may be obtained when a vector includes zero-value words. Instead of outputting the entire zero-value word, a zero value flag (even conveyed over a dedicated control line) that is shorter than a word (for example—a bit) may be outputted—instead of the entire word. Flags may be allocated to other values (for example a 1-valued word). FIG. 88A illustrates method 9400 for embedding—or rather a method for retrieving feature vector related information. The feature vector related information may include the feature vectors and/or a result of processing the feature vectors. Method 9400 may start by step 9410 of receiving, by a memory processing integrated circuit, retrieval information for a retrieval of multiple requested feature vectors that may be mapped to multiple sentence segments. The memory processing integrated circuit may include a controller, multiple processor subunits, and multiple memory units. Each of the memory units may be coupled to a processor subunit. Step 9410 may be followed by step 9420 of retrieving the multiple requested feature vectors from at least some of the multiple memory units. The retrieving may include concurrently requesting, from two or more memory units, requested feature vectors stored in the two or more memory units. The requesting is executed based on a known mapping between sentence segments and locations of feature vectors mapped to the sentence segments. The mapping may be uploaded during a boot process of the memory processing integrated circuit. It may be beneficial to retrieve as many requested feature vectors as possible at once—but this depends on where the requested feature vectors are stored and on the number of different memory units. If more than one requested feature vector is stored in the same memory bank—a predictive retrieval may be applied—for reducing the penalty associated with retrieving information from memory banks. Various methods for reduction of the penalty are illustrated in various sections of the application. The retrieving may include applying a predictive retrieval of at least some requested feature vectors of a set of requested feature vectors stored in a single memory unit. The requested feature vectors may be distributed between the memory units in an optimal manner. The requested feature vectors may be distributed between the memory units based on expected retrieval patterns. The retrieving of the multiple requested feature vectors may be executed according to a certain order. For example—the order of the sentence segments in one or more sentences.
The retrieving of the multiple requested feature vectors may be executed, at least in part, out of order; the retrieving may further include re-ordering the multiple requested feature vectors. The retrieving of the multiple requested feature vectors may include buffering the multiple requested feature vectors before they may be read by the controller. The retrieving of the multiple requested feature vectors may include generating buffer state indicators that indicate when one or more buffers associated with the multiple memory units store one or more requested feature vectors. The method may include conveying the buffer state indicators over dedicated control lines. One dedicated control line may be allocated per memory unit. The buffer state indicators may be status bits stored in one or more of the buffers. The method may include conveying the buffer state indicators over one or more shared control lines. Step 9420 may be followed by step 9430 of processing the multiple requested feature vectors to provide processing results. Additionally or alternatively, step 9420 may be followed by step 9440 of outputting, from the memory processing integrated circuit, an output that may include at least one out of (a) the requested feature vectors and (b) a result of processing the requested feature vectors. The at least one out of (a) the requested feature vectors and (b) a result of processing the requested feature vectors is also referred to as feature vector related information. When step 9430 is executed then step 9440 may include outputting (at least) the result of processing the requested feature vectors. When step 9430 is skipped then step 9440 includes outputting the requested feature vectors and may not include outputting the result of processing the requested feature vectors. FIG. 88B illustrates method 9401 for embedding. It is assumed that the output includes requested feature vectors but not the result of processing the requested feature vectors. Method 9401 may start by step 9410 of receiving, by a memory processing integrated circuit, retrieval information for a retrieval of multiple requested feature vectors that may be mapped to multiple sentence segments. Step 9410 may be followed by step 9420 of retrieving the multiple requested feature vectors from at least some of the multiple memory units. Step 9420 may be followed by step 9431 of outputting, from the memory processing integrated circuit, an output that includes the requested feature vectors and not a result of processing the requested feature vectors. FIG. 88C illustrates method 9402 for embedding. It is assumed that the output includes results of processing the requested feature vectors. Method 9402 may start by step 9410 of receiving, by a memory processing integrated circuit, retrieval information for a retrieval of multiple requested feature vectors that may be mapped to multiple sentence segments. Step 9410 may be followed by step 9420 of retrieving the multiple requested feature vectors from at least some of the multiple memory units. Step 9420 may be followed by step 9430 of processing the multiple requested feature vectors to provide processing results. Step 9430 may be followed by step 9442 of outputting, from the memory processing integrated circuit, an output that may include a result of processing the requested feature vectors. The outputting of the output may include applying traffic shaping on the output.
The outputting of the output may include attempting to match a bandwidth used during the outputting to a maximal allowable bandwidth over a link that couples the memory processing integrated circuit to a requestor unit. The outputting of the output may include attempting to maintain fluctuations of the output traffic rate below a threshold. Any step out of the retrieving and the outputting may be executed under the control of the host and/or independently or partially independently by the controller. The host may send retrieval commands of different granularity—from generally sending retrieval information regardless of the locations of the requested feature vectors within the multiple memory units—till sending detailed retrieval information based on the location of the requested feature vectors within the multiple memory units. The host may control (or attempt to control) the timing of different retrieval operations within the memory processing integrated circuit—but may be indifferent to the timing. The controller may be controlled in various levels by the host—and may even ignore the detailed commands of the host and independently control at least the retrieval and/or the outputting. The processing of the requested feature vectors can be executed by at least one of (or a combination of one or more of): one or more memory/processing units and one or more processors located outside the one or more memory/processing units, and the like. It should be noted that the processing of the requested feature vectors can be executed by at least one of (or a combination of one or more of): one or more processor subunits, a controller, one or more vector processors, and one or more other processors located outside the one or more memory/processing units. The processing of the requested feature vectors may be executed by any one of the following or a combination thereof:
a. Processor subunits (or logic 9030) of a memory/processing unit.
b. Processor subunits (or logic 9030) of multiple memory/processing units.
c. A controller of a memory/processing unit.
d. A controller of multiple memory/processing units.
e. One or more vector processors of a memory/processing unit.
f. One or more vector processors of multiple memory/processing units.
Thus, a processing of the requested feature vectors may be executed by any combination or sub-combination of (a) one or more controllers of one or more memory/processing units, (b) one or more processor subunits of one or more memory/processing units, (c) one or more vector processors of one or more memory/processing units, and (d) one or more other processors located outside the one or more memory/processing units. A processing executed by more than one processing entity may be referred to as a distributed processing. A memory/processing unit may include multiple processor subunits. The processor subunits may operate independently from each other, may partially collaborate with each other, may participate in a distributed processing, and the like. The processing may be executed in a flat manner in which all processor subunits perform the same operations (and may or may not output the results of the processing between them). The processing may be executed in a hierarchical manner in which the processing involves a sequence of processing operations of different levels—whereas a processing operation of a certain level follows a processing operation of another level.
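As a rough illustration of the hierarchical option, the following Python sketch models two levels under stated assumptions: each processor subunit independently reduces its own share of the feature vectors, and a single higher-level step (played here by the controller) combines the per-subunit results. The element-wise summation, the two-level shape and all names are illustrative, not a prescribed partitioning.

```python
# Sketch of two-level (hierarchical) distributed processing of feature vectors.
def subunit_process(vectors):
    """Level-1 work done independently by one processor subunit."""
    return [sum(col) for col in zip(*vectors)]

def controller_combine(partial_results):
    """Level-2 work: the controller combines the subunit outputs."""
    return [sum(col) for col in zip(*partial_results)]

# Each subunit holds a different subset of the requested feature vectors.
per_subunit_vectors = [
    [[1, 2], [3, 4]],     # subunit 0
    [[5, 6]],             # subunit 1
    [[7, 8], [9, 10]],    # subunit 2
]
partials = [subunit_process(v) for v in per_subunit_vectors]
final = controller_combine(partials)   # [25, 30]
```

A flat variant would simply have every subunit perform the same operation and output its result directly, without the combining level.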
The processor subunits may be allocated (dynamically or statically) to different layers and participate in a hierarchical processing. Any processing of the requested feature vectors that is executed by more than one processing entity (processor subunit, controller, vector processor, other processor) may be distributed—in any manner—flat, hierarchical or other. For example—processor subunits may output their processed results to a controller that may further process the results. One or more other processors located outside the one or more memory/processing units may further process the output of the memory processing integrated circuit. It should be noted that the retrieval information may also include information for retrieval of requested feature vectors that are not mapped to the sentence segments. These feature vectors may be mapped to one or more persons, devices or any other entity that may be related to the sentence segments. For example—a user of a device that sensed the sentence segments, a device that sensed the segment, a user that was identified as the source of the sentence segments, a web site that was accessed while the sentence was generated, a location in which the sentence was captured, and the like. Methods 9400, 9401 and 9402 are applicable, mutatis mutandis, to the processing and/or retrieval of requested feature vectors that are not mapped to the sentence segments. Non-limiting examples of processing of feature vectors may include a sum, a weighted sum, an average, a subtraction or an application of any other mathematical function.
Hybrid Device
As processor speeds and memory sizes both continue to increase, a significant limitation on effective processing speeds is the von Neumann bottleneck. The von Neumann bottleneck results from throughput limitations resulting from conventional computer architecture. In particular, data transfer from memory (that is external to the logic die—such as an external DRAM memory) to the processor is often bottlenecked compared to actual computations undertaken by the processor. Accordingly, the number of clock cycles to read and write from memory increases significantly with memory-intensive processes. These clock cycles result in lower effective processing speeds because reading and writing from memory consumes clock cycles that cannot be used for performing operations on data. Moreover, the computational bandwidth of the processor is generally larger than the bandwidth of the buses that the processor uses to access the memory. These bottlenecks are particularly pronounced for memory-intensive processes, such as neural network and other machine learning algorithms; database construction, indexing, searching, and querying; and other tasks that include more reading and writing operations than data processing operations. The present disclosure describes solutions for mitigating or overcoming one or more of the problems set forth above, among other problems in the prior art. There may be provided a hybrid device for memory intense processing. The hybrid device may include a base die, multiple processors, first memory resources of at least one other die and second memory resources of at least one further die. The base die and the at least one other die are connected to each other by wafer on wafer bonding. The multiple processors are configured to perform processing operations, and to retrieve information stored in the first memory resources.
The second memory resources are configured to send additional information from the second memory resources to the first memory resources. An overall bandwidth of first paths between the base die and the at least one other die exceeds an overall bandwidth of second paths between the at least one other die and the at least one further die, and a storage capacity of the first memory resources is a fraction of a storage capacity of the second memory resources. The second memory resources are high bandwidth memory (HBM) resources. The at least one further die is a stack of high bandwidth memory (HBM) chips. At least some of the second memory resources may belong to a further die that is connected to the base die by a connectivity that differs from wafer to wafer bonding. At least some of the second memory resources belong to a further die that is connected to an other die by a connectivity that differs from wafer to wafer bonding. The first memory resources and the second memory resources are cache memories of different levels. The first memory resources are positioned between the base die and the second memory resources. The first memory resources are positioned to the side of the second memory resources. The other die is configured to perform additional processing, wherein the other die comprises a plurality of processor sub-units and the first memory resources. Each processor sub-unit is coupled to a unique portion of the first memory resources allocated to the processor sub-unit. The unique portion of the first memory resources is at least one memory bank. The multiple processors are a plurality of processor subunits included in a memory processing chip that includes the first memory resources. The base die comprises the multiple processors, wherein the multiple processors are a plurality of processor subunits that are coupled, via conductors formed in the wafer to wafer bonding, to the first memory resources. Each processor sub-unit is coupled to a unique portion of the first memory resources allocated to the processor sub-unit. There may be provided a hybrid integrated circuit that may utilize wafer on wafer (WOW) connectivity to couple at least a portion of a base die to second memory resources that are included in one or more further dies and are connected using connectivity that differs from WOW connectivity. An example of the second memory resources may be high bandwidth memory (HBM) memory resources. In various figures the second memory resources are included in a stack of HBM memory units that may be coupled to a controller using through silicon via (TSV) connectivity. The controller may be included in the base die or coupled (for example via micro-bumps) to at least a portion of the base die. The base die may be a logic die but may also be a memory/processing unit. The WOW connectivity is used to couple one or more portions of the base die to one or more portions of another die (WOW-connected die) that may be a memory die or a memory/processing unit. The WOW connectivity is an extremely high throughput connectivity. The stack of high bandwidth memory (HBM) chips may be coupled to the base die (directly or through a WOW-connected die) and may provide a high throughput connection and very extensive memory resources. The WOW-connected die may be coupled between the stack of HBM chips and the base die to form an HBM memory chip stack with TSV connectivity and having a WOW-connected die at its bottom.
The HBM chip stack with TSV connectivity and having a WOW-connected die at it bottom may provide a multi-layer memory hierarchy in which the WOW-connected die may be used as the lower level memory (for example a level 3 cache) that can be accessed to the base die, wherein fetch and/or pre-fetch operations from the higher level HBM memory stack fill the WOW-connected die. The HBM memory chips may be HBM DRAM chips but any other memory technology may be used. Using the combination of WOW connectivity and HMB chips enable to provide a multi-level memory structure that may include multiple memory layers that may provide different tradeoffs between bandwidth and memory density. The suggested solution may serve as an additional, brand new, memory hierarchy between traditional DRAM memory/HBM to internal cache of the logic die, enabling more bandwidth and better management and reuse on DRAM side. This may provide a new memory hierarchy on DRAM side that better manages memory reads in fast manner. FIGS.93A-93Iillustrates hybrid integrated circuits11011′-11019′ respectively. FIG.93Aillustrates an HBM DRAM stack with TSV connectivity and micro-bumps at lowest level (collectively denoted11030) that includes a stack of HDM DRAM memory chips11032coupled to each other and to a first memory controller11031of a base die using TSVs (11039). FIG.93Aalso illustrates a wafer with at least memory resources and coupled using WOW technology (collectively denoted11040) that includes a second memory controller11022of the base die11019that is coupled via one or more WOW intermediate layers (11023) to a DRAM wafer (11021). The one or more WOW intermediate layers can be made of different materials but may differ from pad connectivity and/or may differ from TSV connectivity. Conductors11022′ pass through the one or more WOW intermediate layers are electrically couple the DRAM die to components of the base die. The base die11019is coupled to an interposer11018that in turn is coupled to a package substrate11017using micro-bumps. The package substrate has at its lower surface an array of micro-bumps. The micro-bumps may be replaced by other connectivity. The interposer11018and the package substrate11017may be replaced by other layers. The first and/or second memory controllers (11031and11032respectively) may be positioned (at least in part) outside the base die11019—for example in the DRAM wafer, between the DRAM wafer and the base die, between the stack of HBM memory units and the base die, and the like. The first and second memory controllers (11031and11032respectively) may belong to a same controller or may belong to different controllers. One or more of the HBM memory units may include logic as well as memory—and may be or may include a memory/processing unit. The first and second memory controller are coupled to each other by multiple buses11016—for conveying information between the first and second memory resources.FIG.93Aalso illustrates bus11014from the second memory controller to components (for example—multiple processors) of the base die.FIG.93Afurther illustrates bus11015from the first memory controller to components (for example—multiple processors—as shown inFIG.93C) of the base die. FIG.93Billustrated hybrid integrated circuit11012that differs from hybrid integrated circuit11011ofFIG.93Aby having a memory/processing unit11021′ instead of the DRAM die11021. 
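The role of the WOW-connected die as a lower-level memory that is filled by fetch and pre-fetch operations from the higher-level HBM stack may be sketched, purely as a behavioral model, as follows. The class names (HbmStack, WowDie), the eviction policy and the segment granularity are assumptions made only for illustration and do not correspond to any specific figure or embodiment.

```python
# Behavioral sketch of the two-level hierarchy: a small, high-bandwidth
# WOW-connected die backed by a large HBM stack. Segments requested by the
# base die are served from the WOW-connected die; misses are filled from HBM.
from collections import OrderedDict

class HbmStack:
    def __init__(self, segments):
        self.segments = segments           # large, higher-level storage

    def read(self, key):
        return self.segments[key]

class WowDie:
    def __init__(self, backing, capacity_segments):
        self.backing = backing
        self.capacity = capacity_segments  # a fraction of the HBM capacity
        self.cache = OrderedDict()         # acts as the lower-level (e.g. L3-like) memory

    def prefetch(self, key):
        # Fill operation from the higher-level HBM stack.
        if key not in self.cache:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)   # evict the oldest segment
            self.cache[key] = self.backing.read(key)

    def read(self, key):
        if key not in self.cache:          # miss -> fetch from HBM
            self.prefetch(key)
        self.cache.move_to_end(key)
        return self.cache[key]

hbm = HbmStack({f"seg{i}": bytes(16) for i in range(64)})
wow = WowDie(hbm, capacity_segments=4)
wow.prefetch("seg1")                       # pre-fetch ahead of use by the base die
data = wow.read("seg1")                    # served from the WOW-connected die
```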
FIG.93Cillustrates hybrid integrated circuit11013that differs from hybrid integrated circuit11011ofFIG.93Aby having an HBM memory chip stack with TSV connectivity and having a WOW-connected die at its bottom (collectively denoted11040) that includes a DRAM die11021between the stack of HBM memory units and the base die11019. The DRAM die11021is coupled to the first memory controller11031of the base die11019using WOW technology (see WOW intermediate layers11023). One or more of the HBM memory dies11032may include logic as well as memory—and may be or may include a memory/processing unit. The lowermost DRAM die (denoted DRAM die11021inFIG.93C) may be an HBM memory die or may differ from an HBM die. The lowermost DRAM die (DRAM die11021) may be replaced by a memory/processing unit11021′—as illustrated by hybrid integrated circuit11014ofFIG.93D. FIGS.93E-93Gillustrate hybrid integrated circuits11015,11016and11016′ respectively in which base die11019is coupled to multiple instances of HBM DRAM stack with TSV connectivity and microbumps (11020) at lowest level and wafer with at least memory resources and coupled using WOW technology (11030), and/or to multiple instances of HBM memory chip stack with TSV connectivity and having a WOW-connected die at its bottom (11040). FIG.93Hillustrates hybrid integrated circuit11014′ that differs from hybrid integrated circuit11014ofFIG.93Dby illustrating a memory unit11053, a level two cache memory (L2 cache11052), and multiple processors11051. Multiple processors11051are coupled to the L2 cache11052and can be fed by coefficients and/or data stored in memory unit11053and L2 cache11052. Any of the above-mentioned hybrid integrated circuits may be used for Artificial Intelligence (AI) processing—which is bandwidth intensive. A memory/processing unit11021′ ofFIGS.93D and93H, when coupled to a memory controller using WOW technology, may perform AI calculations and may receive both data and coefficients at a very high rate from the HBM DRAM stack and/or from the WOW-connected die. Any memory/processing unit may include a distributed memory array and processor array. The distributed memory and processor arrays may include multiple memory banks and multiple processors. The multiple processors may form a processing array. Referring toFIGS.93C,93D and93H, it is assumed that the hybrid integrated circuits (11013,11014or11014′) are required to execute general matrix-vector multiplications (GEMV) that include calculating a multiplication of a matrix by a vector. This type of calculation is bandwidth intensive because there is no re-use of retrieved matrix data. Thus, the entire matrix needs to be retrieved and is used only once. GEMV may be a part of a sequence of mathematical operations that involves (i) multiplying a first matrix (A) by a first vector (V1) to provide a first intermediate vector, applying a first non-linear operation (NLO1) on the first intermediate vector to provide a first intermediate result, (ii) multiplying a second matrix (B) by the first intermediate result to provide a second intermediate vector, applying a second non-linear operation (NLO2) on the second intermediate vector to provide a second intermediate result, and so on (until receiving an N'th intermediate result, where N may exceed 2). Assuming that each matrix is large (for example 1 Gb), the calculation will require 1 Tbs of computational power and a bandwidth/throughput of 1 Tbs. The data retrieval and the calculations may be executed in parallel.
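The chained matrix-vector sequence described above (and made specific with N=4 below) may be written, as a non-limiting numerical sketch, using NumPy. The matrix sizes, the choice of ReLU as the non-linear operations NLO1..NLOn, and the helper names are illustrative assumptions; a real embodiment would overlap the retrieval of the next matrix with the current multiplication, which is only hinted at in the comments.

```python
# Illustrative sketch of a chained GEMV: result_i = NLO_i(M_i @ result_{i-1}).
import numpy as np

def nlo(x):
    # An assumed non-linear operation (ReLU); any NLO1..NLOn could be used.
    return np.maximum(x, 0.0)

def chained_gemv(matrices, v1):
    result = v1
    for m in matrices:
        # In a hybrid device, while this multiplication runs on the base die,
        # the memory controller may already retrieve (pre-fetch) the next
        # matrix from the HBM DRAM dies - the two can proceed in parallel.
        result = nlo(m @ result)
    return result

rng = np.random.default_rng(0)
a, b, c, d = (rng.standard_normal((64, 64)) for _ in range(4))
v1 = rng.standard_normal(64)
out = chained_gemv([a, b, c, d], v1)  # Result = NLO4(D @ NLO3(C @ NLO2(B @ NLO1(A @ V1))))
```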
Assuming that the GEMV calculation exhibits N=4 and has the following form: Result=NLO4(D*(NLO3(C*(NLO2(B*(NLO1(A*V1))))))). Also assuming that DRAM die11021(or memory/processing unit11021′) does not have enough memory resources to store A, B, C and D at the same time—then at least some of these matrixes will be stored in the HBM DRAM dies11032. It is assumed that the base die is a logic die that includes calculation units such as but not limited to processors, arithmetic logic units, and the like. While the first die calculates A*V1, the first memory controller11031retrieves from the one or more HBM DRAM dies11032the missing parts of the other matrixes for the next calculations. Referring toFIG.93H—it is assumed that (a) DRAM die11021has a bandwidth of 2 TBs and a capacity of 512 Mb, (b) HBM DRAM die11032has a 0.2 TBs bandwidth and a capacity of 8 Gb, and (c) the L2 cache11052is SRAM that has a bandwidth of 6 TBs and a capacity of 10 Mb. Multiplication of matrixes involves re-using data—segmenting large matrixes into segments (for example 5 Mb segments—to fit the L2 cache—that may be used in a double buffer configuration) and multiplying a fetched first matrix segment by segments of a second matrix—one second matrix segment after the other. While a first matrix segment is multiplied by a second matrix segment—another second matrix segment is fetched from the DRAM die11021(or memory/processing unit11021′) to the L2 cache. Assuming that the matrixes are 1 Gb each—while the fetching and calculation are executed—the DRAM die11021or the memory/processing unit11021′ is fed by matrix segments from the HBM DRAM dies11032. The DRAM die11021or the memory/processing unit11021′ aggregates matrix segments and the matrix segments are then fed over the WOW intermediate layers (11023) to the base die11019. The memory/processing unit11021′ may reduce the amount of information sent through the WOW intermediate layers (11023) to the base die11019—by performing calculations and sending results—instead of sending the intermediate values that are calculated to provide the results. When multiple (Q) intermediate values are processed to provide a result, the compression ratio can be Q to 1. FIG.93Iillustrates an example of a memory processing unit11019′ that is implemented using WOW technology. Logic units9030(which may be processor sub-units), a controller9020and a bus9021are located in one chip11061, the memory banks9040allocated to the different logic units are located in a second chip11062, whereas the first and second chips are connected to each other using conductors11012′ that pass through a WOW bonding11061—that may include one or more WOW intermediate layers. FIG.93Jis an example of a method11100for memory intense processing. Memory intense means that the processing requires or is associated with high bandwidth memory consumption. Method11100may start by steps11110,11120and11130. Step11110includes performing processing operations by multiple processors of a hybrid device that comprises a base die, first memory resources of at least one other die, and second memory resources of at least one further die; wherein the base die and the at least one other die are connected to each other by wafer on wafer bonding. Step11120includes retrieving, by the multiple processors, retrieved information stored in the first memory resources.
Step11130may include sending additional information from the second memory resources to the first memory resources, wherein an overall bandwidth of first paths between the base die and the at least one other die exceed an overall bandwidth of second paths between the at least one other die and the at least one further die, and wherein a storage capacity of the first memory resources is a fraction of a storage capacity of the second memory resource. Method11100may also include step11140of performing additional processing by an other die that includes a plurality of processor sub-units and the first memory resources. Each processor sub-unit may be coupled to a unique portion of the first memory resources allocated to the processor sub-unit. The unique portion of the first memory resources is at least one memory bank. Steps11110,11120,11130and11140may be executed concurrently, in a partly overlapping manner, and the like. The second memory resources may be high bandwidth memory (HBM) memory resources or may differ from HBM memory resources. The at least one further dies are a stack of high bandwidth memory (HBM) memory chips. Communication Chip Databases include many entries that include multiple fields. Database processing usually included executing one or more queries that include one or more filtering parameter (for example identity one or more relevant fields and one or more relevant field values) and also include one or more operation parameters that may determine a type of operation to be executed, a variable or constant to be used when applying the operation, and the like. The data processing may include database analytics or other database processes. For example—a database query may request to perform a statistical operation (operational parameter) on all records of the database in which a certain field has a value within a predefined range (filtering parameter). Yet for another example—a database query may request to delete (operation parameter) records that have a certain field that is smaller than a threshold (filtering parameter). A large database is usually stored in storage devices. In order to respond to a query, the database is sent to a memory unit—usually one database segment after the other. The entries of the database segments are sent from the memory unit to a processor that does not belong to the same integrated circuit as the memory unit. The entries are then processed by the processor. For each database segment of the database stored in the memory unit the processing includes the following steps: (i) selecting a record of the database segment, (ii) sending the record to the processor from the memory unit, (iii) filtering the record by the processor to determine whether the record is relevant, and (iv) performing one or more additional operations (summing, applying any other mathematical and/or statistical operation) on the relevant records. The filtering process ends after all the records were sent to the processor and the processor determined which records were relevant. In case where the relevant entries of a database segment are not stored in the processor—then there is a need to send these relevant records to the processor to be further processed (applying the operation that follows the processing) after the filtering stage. When multiple processing operations follow a single filtering then the results of each operation may be sent to the memory unit and then sent again to the processor. This process is bandwidth and time consuming. 
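The conventional flow just described—sending every record of every database segment to an external processor, filtering there, and only then applying the requested operation—may be sketched as follows. The record layout, the filtering condition and the aggregation below are hypothetical examples used only to make the bandwidth cost of the conventional approach concrete; they are not part of any claimed embodiment.

```python
# Sketch of the conventional (bandwidth-hungry) flow: every record of every
# database segment crosses the memory-to-processor link before filtering.

def process_database_conventionally(segments, field, low, high):
    bytes_transferred = 0
    total = 0                      # the requested operation: sum a field of relevant records
    for segment in segments:
        for record in segment:     # (i) select a record of the segment
            bytes_transferred += len(str(record))    # (ii) send it to the processor
            if low <= record[field] <= high:          # (iii) filter in the processor
                total += record[field]                # (iv) apply the operation
    return total, bytes_transferred

segments = [
    [{"id": 1, "age": 17}, {"id": 2, "age": 42}],
    [{"id": 3, "age": 35}, {"id": 4, "age": 80}],
]
result, moved = process_database_conventionally(segments, "age", 30, 60)
# Every record was moved even though only two records were relevant -
# this is the bandwidth and time cost that near-memory filtering avoids.
```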
There is a growing need to provide an efficient manner to perform database processing. There may be provided a device that may include a database acceleration integrated circuit. There may be provided a device that may include one or more groups of database acceleration integrated circuits that may be configured to exchange information and/or accelerated results (outcome of processing done by a database acceleration integrated circuit) between database acceleration integrated circuits of the one or more groups of database acceleration integrated circuits. The database acceleration integrated circuits of a group may be connected to same printed circuit board. The database acceleration integrated circuits of a group may belong to a modular unit of a computerized system. The database acceleration integrated circuits of different groups may be connected to different printed circuit boards. The database acceleration integrated circuits of different groups may belong to different modular units of a computerized system. The device may be configured to execute distributed process by the database acceleration integrated circuits of the one or more groups. The device may be configured to use at least one switch for exchanging, at least one out (a) information and (b) database accelerated results between database acceleration integrated circuits of different groups of the one or more groups. The device may be configured to execute distributed process by some of the database acceleration integrated circuits of some of the one or more groups. The device may be configured to perform a distributed process of a first and second data structures, wherein an aggregate size of the first and second data structures exceeds a storage capability of the multiple memory processing integrated circuits. The device may be configured to perform the distributed process by performing multiple iterations of (a) performing a new allocation of different pairs a first data structure portion and a second data structure portion to different database acceleration integrated circuits, and (b) processing the different pairs. FIGS.94A and9Billustrate examples of a storage system11560, a computer system11150and one or more devices for database acceleration11520. The one or more devices for database acceleration11520may monitor the communication between storage system11560and the computer system11150in various manner—either by sniffing or by being positioned between the computer system11150and the storage system11560. The storage system11560may include many (for example—more than 20, 50, 100, 100 and the like) storage units (such as disks or raids of disks) and may, for example, store more than 100 Tbytes of information. The compute system11510may be a vast computer system and may include tens, hundreds and even thousands of processing units. The compute system11510may include multiple compute nodes11512that are controlled by a manager11511. The compute nodes may control or otherwise interact with the one or more devices for database acceleration11520. The one or more devices for database acceleration11520may include one or more database acceleration integrated circuits (see for example database acceleration integrated circuit11530ofFIGS.94A and94B), and memory resources11550. The memory resources may belong to one or more chips dedicated for memory but may belong to memory/processing units. FIGS.94C and94Dillustrate examples of a computer system11150and one or more devices for database acceleration11520. 
One or more database acceleration integrated circuits of the one or more devices for database acceleration11520may be controlled by a management unit11513that may be located within the computer system (seeFIG.94C) or within the one or more devices for database acceleration11520(FIG.94D). FIG.94Eillustrated a device for database acceleration11520that includes a database acceleration integrated circuit11530and multiple memory processing integrated circuits1151. Each memory processing integrated circuit may include a controller, multiple processor subunits, and multiple memory units. The database acceleration integrated circuit11530is illustrated as including network communication interface11531, first processing units11532, memory controllers11533, database acceleration unit11535, interconnect11536, and management unit11513. The network communication interface (11531) may be configured to receive (for example via first ports of network communication interface11531(1)) a vast amount of information from a large number of storage units. Each storage unit may output information at rate that exceeds tens and even hundreds of megabyte per second, while the data transfer rate is expected to increase over time (for example double each 2-3 years). The number of storage data units (large number) may exceed 10, 50, 100, 200 and even more. The vast amount of information may exceed tens, hundreds of Gigabytes per second, and even may be in the range of terabytes per second and petabytes per second. The first processing units11532may be configured to first process (pre-process) the vast amount of information to provide first processed information. The memory controllers11533may be configured to send over a vast throughput interface11534, the first processed information to the multiple memory processing integrated circuits. The multiple memory processing integrated circuits11551may be configured to second process (process) at least parts of the first processed information by the multiple memory processing integrated circuits to provide second processed information. The memory controllers11533may be configured to retrieve retrieved information from the multiple memory processing integrated circuits. The retrieved information may include at least one out of (a) at least one portion of the first processed information, and (b) at least one portion of the second processed information. The database acceleration unit11535may be configured to perform database process operations on the retrieved information, to provide database accelerated results. The database acceleration integrated circuit may be configured to output the database accelerated results—for example through one or more second ports11531(2) of the network communication interface. FIG.94Ealso illustrates a management unit11513that is configured to manage at least one of a retrieval of the retrieved information, a first process (pre-process), the second process (process) and the third process (database processing). The management unit11513may be located outside the database acceleration integrated circuit. The management unit may be configured to perform said management based on an execution plan. The execution plan may be generated by the management unit or may be generated by an entity located outside the database acceleration integrated circuit. 
The execution plan may include at least one out of (a) instructions to be executed by the various components of the database acceleration integrated circuit, (b) data and/or coefficients required for the implementation of the execution plan, (c) memory allocation of instructions and/or data. The management unit may be configured to perform the management by allocating at least some out of (a) network communication network interface resources, (b) decompression unit resources, (c) memory controllers resources, (d) multiple memory processing integrated circuits resources, and (c) database acceleration units resources. As illustrated inFIGS.94E and94G, the network communication network interface may include different types of network communication ports. The different types of network communication ports may include storage interface protocol ports (for example SATA ports, ATA ports, ISCSI ports, network file system, fiber channel ports) and storage interface over general network protocol ports (for example ATA over Ethernet, fiber channel over Ethernet, NVME, Roce, and more). The different types of network communication ports may include storage interface protocol ports and PCIe ports. FIG.94Fincludes dashed lines that illustrate the flow of the vast information, first processed information, retrieved information and database accelerated results.FIG.94Fillustrates the database acceleration integrated circuit11530as being coupled to multiple memory resources11550. The multiple memory resources11550may not belong to a memory processing integrated circuit. The device for database acceleration11520may be configured to execute multiple tasks concurrently by the database acceleration integrated circuit11530—as the network communication interface11531may receive multiple streams of information (concurrently), first processing units11532may perform first processing on multiple information units concurrently, the memory controllers11533may send multiple first processed information units concurrently to the multiple memory processing integrated circuits11551, the database acceleration unit11535may process multiple retrieved information units concurrently. The device for database acceleration11520may be configured to execute at least one out of the retrieve, first process, send and third process, based on an execution plan sent to the database acceleration integrated circuit by a compute node of a vast compute system. The device for database acceleration11520may be configured to manage at least one of the retrieve, first process, send and third process in a manner that substantially optimizes the utilization of the database acceleration integrated circuit. The optimization considers the latency, throughput and any other timing or storage or processing consideration and attempts to keep all components along the flow path busy and without bottlenecks. The database acceleration integrated circuit may be configured to output the database accelerated results—for example through one or more second ports11531(2) of the network communication interface. The device for database acceleration11520may be configured to substantially optimize a bandwidth of traffic exchanged by the network communication network interface. The device for database acceleration11520may be configured to substantially prevent a formation of bottlenecks in at least one of the retrieve, first process, send and third process in a manner that substantially optimizes the utilization of the database acceleration integrated circuit. 
The device for database acceleration11520may be configured to allocate resources of the database acceleration integrated circuit according to temporal I/O bandwidth. FIG.94Gillustrated a device for database acceleration11520that includes a database acceleration integrated circuit11530and multiple memory processing integrated circuits1151.FIG.94Galso illustrates various units that are coupled to the database acceleration integrated circuit11530-remote RAM11546, Ethernet memory DIMMs11547, storage system11560, local storage unit11561, and nonvolatile memory (NVM)11563(the nonvolatile memory may be an NVM express unit—NVME). The database acceleration integrated circuit11530is illustrated as including Ethernet ports11531(1), RDMA unit11545, serial scale up port11531(15), SATA controller11540, PCIe port11531(9), first processing units11532, memory controllers11533, database acceleration unit11535, interconnect11536, management unit11513, cryptographic engine11537for executing cryptographic operations, and level two static random access memory (L2 SRAM)11538. The database acceleration unit is illustrated as including a DMA engine11549, a level three L3 memory11548, and database acceleration subunits11547. The database acceleration subunits11547may be configurable units. The Ethernet ports11531(1), RDMA unit11545, serial scale up port11531(15), SAT controller11540, PCIe port11531(9) may be regarded as parts of network communication interface11531. The remote RAM11546, Ethernet memory DIMMs11547, storage system11560are coupled to Ethernet ports11531(1) that in turn is coupled to RDMA unit11545. The local storage unit11561is coupled to the SATA controller11540. The PCIe port11531(9) is coupled to NVM11563. The PCIe port may also be used for exchanging commands—for example for management purposes. FIG.94His an example of a database acceleration unit11535. The database acceleration unit11535may be configured to perform concurrently database process instructions by database processing subunits11573, wherein the data base acceleration unit may include a group of database accelerator subunits that share a shared memory unit11575. Different combinations of database acceleration subunit11535may be dynamically linked to each other (via configurable links or interconnects11576) to provide execution pipelines required for execute a database process operation that may include multiple instructions. Each database process subunit may be configured to execute a specific type of database process instructions (for example—filter, merge, accumulate, and the like). FIG.94Halso illustrates independent data base processing units11572coupled to caches11571. The data base processing units11572and the caches11571may be provided instead of the Reconfigurable array of DB accelerators11574or in addition to the Reconfigurable array of DB accelerators11574. The device may facilitate scale-in and/or scale-out—therefor enable multiple database acceleration integrated circuit11530(and their associated memory resources11550or their associated multiple memory processing integrated circuits11551) to co-operate with each other—for example by participating in distributed processing of database operations. FIG.94Iillustrates a modular unit such as blade11580that includes two database acceleration integrated circuits11530(and their associated memory resources11550). The blade may include one, two, or more than two memory processing integrated circuits11551and their associated memory resources11550. 
The blade may also include one or more nonvolatile memory unit, an Ethernet switch, a PCIe switch and an ethernet switch. Multiple blades may communicate with each other using any communication method, communication protocol and connectivity. FIG.94Iillustrates four database acceleration integrated circuit11530(and their associated memory resources11550) that are fully connected to each other—each database acceleration integrated circuit11530is connected to all three other database acceleration integrated circuits11530. The connectivity may be achieved using any communication protocol—for example by using RDMA over Ethernet protocol. FIG.94Ialso illustrates a database acceleration integrated circuit11530that is connected to its associated memory resources11550and to unit11531that includes RAM memory and an Ethernet port. FIGS.94J,94K,94L and94Millustrate four groups11580of database acceleration integrated circuits, each group including four database acceleration integrated circuits11530(that are fully connected to each other) and their associated memory resources11550. The different groups are connected to each other via switch11590. The number of groups may be two, three or more than four. The number of database acceleration integrated circuits per group may be two, three or more than four. The number of groups may be the same as (or may differ from) the number of database acceleration integrated circuits per group. FIG.94Killustrates two tables A and B that are too big (for example 1 Tbyte) for being efficiently joined at once. The tables are virtually segmented to patches and the join operation is applied on pairs that include a patches of table A and a patch of table B. The groups of database acceleration integrated circuits may process the patches in various manner. For example, the device may be configured to perform the distributed process by:g. Allocating different first data structure portions (patches of table A—for example first till sixteenth patches A0-A15) to different database acceleration integrated circuits of the one or more groups.h. Perform multiple iterations of: (i) newly allocating different second data structure portions (patches of table B—for example first till sixteenth patches B0-B15) to different database acceleration integrated circuits of the one or more groups, and (ii) processing by the database acceleration integrated circuits the first and second data structure portions. The device may be configured to execute the newly allocating of a next iteration in an at least partially time overlapping manner with a processing of a current iteration. The device may be configured to execute the newly allocating by exchanging second data structure portions between the different database acceleration integrated circuits. The exchanging may be executed in an at least partially time overlapping manner with the process. The device may be configured to execute the newly allocating by exchanging second data structure portions between the different database acceleration integrated circuits of a group; and once the exchanging has been exhausted—exchanging second data structure portions between different groups of database acceleration integrated circuits. InFIG.94Kfour cycles of some of joint operations are shown—for example—referring to the upper left database acceleration integrated circuit11530of the upper left group—the four cycles include calculating Join(A0, B0), Join(A0, B3), Join(A0, B2), and Join(A0, B1). 
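The allocation scheme of items g and h above—keeping a patch of table A pinned at each database acceleration integrated circuit while the patches of table B rotate between the members of a group—may be sketched as follows. The group size of four, the patch names and the join placeholder are assumptions made only for illustration.

```python
# Sketch of one group of four database acceleration integrated circuits:
# each circuit keeps its patch of table A, and the patches of table B are
# rotated between the members of the group, one join per cycle.

def join(a_patch, b_patch):
    # Placeholder for the actual database join executed by one circuit.
    return f"Join({a_patch}, {b_patch})"

def group_rotation(a_patches, b_patches):
    assert len(a_patches) == len(b_patches)
    schedule = []
    current_b = list(b_patches)
    for _cycle in range(len(b_patches)):
        # Every circuit joins its fixed A patch with the B patch it currently holds.
        schedule.append([join(a, b) for a, b in zip(a_patches, current_b)])
        # Rotate the B patches between members of the same group
        # (the exchange may overlap the processing of the current cycle).
        current_b = current_b[-1:] + current_b[:-1]
    return schedule

cycles = group_rotation(["A0", "A1", "A2", "A3"], ["B0", "B1", "B2", "B3"])
# The first circuit sees Join(A0, B0), Join(A0, B3), Join(A0, B2), Join(A0, B1),
# matching the four cycles of the upper-left circuit described above.
```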
During these four cycles A0 stays at the same database acceleration integrated circuit11530while patches of matrix B (B0, B1, B2 and B3) are rotated between members of the same group of database acceleration integrated circuits11530. InFIG.94Lthe patches of the second matrix are rotated between the different group—(a) patches B0, B1, B2 and B3 (previously processed by the upper left group) are sent from the upper left group to the lower left group, (b) patches B4, B5, B6 and B7 (previously processed by the lower left group) are send from the lower left group to the upper right group, (c) patches B8, B9, B10 and B11 (previously processed by the upper right group) are send from the upper right group to the lower right group, and (d) patches B12, B13, B14 and B15 (previously processed by the lower right group) are send from the lower right group to the upper left group. FIG.94Nis an example of a system that includes multiple blades11580, SATA controller11540, local storage unit11561, NVME11563, PCIe switch11601, Ethernet memory DIMMs11547, and Ethernet ports11531(4). The blades11580may be coupled to each one of PCIE switch11601, Ethernet ports11531and SATA controller11540. FIG.94Oillustrates two systems11621and11622. System11621may include one or more devices for data base acceleration11520, a switching system11611, storage system11612and compute system11613. The switching system11611provides connectivity between the one or more devices for data base acceleration11520, the storage system11612and the compute system11613. System11622may include storage system and one or more devices for data base acceleration11615, a switching system11611, and compute system11613. The switching system11611provides connectivity between the storage system and one or more devices for data base acceleration11615and the compute system11613. FIG.95Aillustrates a method11200for database acceleration. Method11200may start by step11210of retrieving, by a network communication network interface of a database acceleration integrated circuit, a vast amount of information from a large number of storage units. The connection to a large number of storage units (for example using multiple different buses) enables the network communication network interface to receive the vast amount of information—even when a single storage unit has a limited throughput. Step11210may be followed by first processing the vast amount of information to provide first processed information. The first processing may include buffering, extraction of information from payloads, removing headers, decompressing, compressing, decrypting, filtering database queries, or performing any other processing operations. The first processing may also be limited to buffering. Step11210may be followed by step11220of sending, by memory controllers of the database acceleration integrated circuit and over a vast throughput interface, the first processed information to multiple memory processing integrated circuits, wherein each memory processing integrated circuit may include a controller, multiple processor subunits, and multiple memory units. The memory processing integrated circuits may be memory/processing units or distributed processor or memory chips as illustrated in any other part of this patent application. Step11220may be followed by step11230of second processing at least pans of the first processed information by the multiple memory processing integrated circuits to provide second processed information. 
Step11230may include executing multiple tasks concurrently by the database acceleration integrated circuit. Step11230may include performing concurrently database processing instructions by database processing subunits, wherein the data base acceleration unit may include a group of database accelerator subunits that share a shared memory unit. Step11230may be followed by step11240of retrieving retrieved information from the multiple memory processing integrated circuits, by the memory controllers of the database acceleration integrated circuit, wherein the retrieved information may include at least one out of (a) at least one portion of the first processed information, and (b) at least one portion of the second processed information. Step11240may be followed by step11250of performing, by a database acceleration unit of the database acceleration integrated circuit, database processing operations on the retrieved information, to provide database accelerated results. Step11250may include allocating resources of the database acceleration integrated circuit according to temporal I/O bandwidth. Step11250may be followed by step11260of outputting the database accelerated results. Step11260may include dynamically linking database processing subunits to provide execution pipelines required for executing a database processing operation that may include multiple instructions. Step11260may include outputting the database accelerated results to a local storage and retrieving the database accelerated results from the local storage. It should be noted that steps11210,11220,11230,11240,11250and11260or any other step of method11100may be executed in a pipelined manner. These steps may be executed concurrently, or in an order that differs from the order mentioned above. For example—step1120may be followed by step11250—so that first processed information is further processed by the database acceleration unit. Yet for another example—first processed information may be sent to the multiple memory processing integrated circuits and then sent (without being processed by the multiple memory processing integrated circuits) to the database acceleration unit. Yet for a further example—first processed information and/or second processed information may be outputted from the database acceleration integrated circuit—without being database processed by the database acceleration unit. The method may include executing at least one out of the retrieving, first processing, sending and third processing based on an execution plan sent to the database acceleration integrated circuit by a compute node of a vast compute system. The method may include managing at least one of the retrieving, first processing sending and third processing in a manner that substantially optimizes the utilization of the database acceleration integrated circuit. The method may include substantially optimizing a bandwidth of traffic exchanged by the network communication network interface. The method may include substantially preventing a formation of bottlenecks in at least one of the retrieving, first processing sending and third processing in a manner that substantially optimizes the utilization of the database acceleration integrated circuit. Method11200may also include at least one of the following steps: Step11270may include managing, by a management unit of the database acceleration integrated circuit, at least one of the retrieving, first processing sending and third processing. 
The managing may be executed based on an execution plan generated by the management unit of the database acceleration integrated circuit. The managing may be executed based on an execution plan received but not generated by the management unit of the database acceleration integrated circuit. The managing may include allocating at least some out of (a) network communication network interface resources, (b) decompression unit resources, (c) memory controllers resources, (d) multiple memory processing integrated circuits resources, and (e) database acceleration units resources. Step11271may include controlling at least one of the retrieving, first processing, sending and third processing by a compute node of a vast compute system. Step11272may include managing, by a management unit located outside the database acceleration integrated circuit, at least one of the retrieving, first processing, sending and third processing. FIG.95Billustrates method11300for operating a group of database acceleration integrated circuits. Method11300may start by step11310of performing database acceleration operations by database acceleration integrated circuits. Step11310may include executing one or more steps of method11200. Method11300may also include step11320of exchanging at least one out of (a) information and (b) database accelerated results between database acceleration integrated circuits of one or more groups of database acceleration integrated circuits. The combination of steps11310and11320may amount to executing distributed processing by the database acceleration integrated circuits of the one or more groups. The exchanging may be executed using network communication network interfaces of the database acceleration integrated circuits of one or more groups. The exchanging may be executed over multiple groups that may be connected to each other by a star-connection. Step11320may include using at least one switch for exchanging at least one out of (a) information and (b) database accelerated results between database acceleration integrated circuits of different groups of the one or more groups. Step11310may include step11311of executing distributed processing by some of the database acceleration integrated circuits of some of the one or more groups. Step11311may include performing a distributed processing of first and second data structures, wherein an aggregate size of the first and second data structures exceeds a storage capability of the multiple memory processing integrated circuits. The performing of the distributed processing may include performing multiple iterations of (a) performing a new allocation of different pairs of a first data structure portion and a second data structure portion to different database acceleration integrated circuits, and (b) processing the different pairs. The performing of the distributed processing may include executing a database join operation. Step11310may include (a) step11312of allocating different first data structure portions to different database acceleration integrated circuits of the one or more groups; and (b) performing multiple iterations of: step11314of newly allocating different second data structure portions to different database acceleration integrated circuits of the one or more groups, and step11316of processing, by the database acceleration integrated circuits, the first and second data structure portions. Step11314may be executed in an at least partially time overlapping manner with the processing of a current iteration.
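A very simplified software model of the flow that methods11200and11300manage—retrieve, first process, send to the memory processing integrated circuits, second process, retrieve back, database process, output—is sketched below. The stage functions are placeholders chosen for illustration only; in the actual device the stages run concurrently in a pipelined manner rather than sequentially as in this sketch.

```python
# Toy model of the managed flow of method 11200; each stage is a placeholder.

def first_process(raw):
    # e.g. buffering, decompression or filtering of database entries
    return [r for r in raw if r is not None]

def second_process(rows):
    # processing inside the memory processing integrated circuits
    return [r * 2 for r in rows]

def database_process(rows):
    # the database acceleration unit, e.g. an aggregation
    return sum(rows)

def run_plan(raw_information):
    staged = first_process(raw_information)      # after step 11210 (retrieve)
    stored = list(staged)                        # step 11220: send to the memory processing ICs
    processed = second_process(stored)           # step 11230: second processing
    retrieved = processed                        # step 11240: retrieve back
    return database_process(retrieved)           # steps 11250/11260: process and output

assert run_plan([1, None, 2, 3]) == 12
```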
Step11314may include exchanging second data structure portions between the different database acceleration integrated circuits. Step11320may be executed in an at least partially time overlapping manner with step11310. Step11314may include exchanging second data structure portions between the different database acceleration integrated circuits of a group; and once the exchanging has been exhausted—exchanging second data structure portions between different groups of database acceleration integrated circuits. FIG.95Cillustrates method11350for database acceleration. Method11350may include step11352of retrieving, by a network communication network interface of a database acceleration integrated circuit, a vast amount of information from a large number of storage units. Step11352may be followed by step11354of first processing the vast amount of information to provide first processed information. Step11352may be followed by step11354of sending, by memory controllers of the database acceleration integrated circuit and over a vast throughput interface, the first processed information to multiple memory resources. Step11354may be followed by step11356of retrieving retrieved information from the multiple memory resources. Step11356may be followed by step11358of performing, by a database acceleration unit of the database acceleration integrated circuit, database processing operations on the retrieved information, to provide database accelerated results. Step11358may be followed by step11359of outputting the database accelerated results. The method may also include step11355of second processing the first processed information to provide second processed information. The second processing is executed by multiple processors located in one or more memory processing integrated circuits that further comprise the multiple memory resources. Step11355follows step11354and precedes step11356. The aggregate size of the second processed information may be smaller than an aggregate size of the first processed information. The aggregate size of the first processed information may be smaller than an aggregate size of the vast amount of information. The first processing may include filtering database entries. Thus—filtering out database entries that are not relevant to a query—thereby saving bandwidth, storage resources and further processing resources before performing any further processing and/or even before storing the irrelevant database entries in the multiple memory resources. The second processing may include filtering database entries. The filtering may be applied when a filtering condition may be complex (includes multiple conditions) and may require receiving multiple data base entry fields before the filtering can be done. For example—when searching for (a) persons that are above a certain age and like bananas and (b) persons that are above another age and like apples. Database The following examples may refer to a data base. The database may be a data center, may be a part of a data center, or may not belong to a data center. The database may we coupled over one or more networks to multiple users. The database may be a cloud database. There may be provided a database that include one or more management units and multiple database accelerator boards that include one or more memory/processing units. 
FIG.96Billustrates database12020that includes a management unit12021and multiple DB accelerator boards12022—each including a communication/management processor (processor12024) and multiple memory/processing units12026. The processor12024may support various communication protocols—such as but not limited PCIe. ROCE like protocols, and the like. The database commands may be executed by the memory/processing units12026and the processor may rout traffic between the memory/processing units12026, between different DB accelerator boards12022and with the management unit12021. Using the multiple memory/processing units12026especially when including large internal memory banks dramatically accelerates the execution of data base commands and avoids communication bottlenecks. FIG.96Cillustrates a DB accelerator board12022—that includes processor12024and multiple memory/processing units12026. The processor12024includes multiple communication dedicated components such as DDR controllers12033for communicating with the memory/processing units12026, RDMA engines12031, DB query database engines12034and the like. DDR controllers are examples of communication controllers and R DMA engines are examples of any communication engines. There may be provided a method for operating the system (or of operating any part of the system) of any one ofFIGS.96B,96C and96D. It should noted that the database acceleration integrated circuit11530may be associated with multiple memory resources that are not included in multiple memory processing integrated circuits—or otherwise not associated with processing units. In this case the processing is executed mainly and even solely by the database acceleration integrated circuit. FIG.94Pillustrates method11700for database acceleration. Method11700may include step11710of retrieving, by a network communication interface of a database acceleration integrated circuit, information from storage units. Step11710may be followed by step11720of first processing the amount of information to provide first processed information. Step11720may be followed by step11730of sending, by memory controllers of the database acceleration integrated circuit and over a throughput interface, the first processed information to multiple memory resources. Step11730may be followed by step11740of retrieving information from the multiple memory resources. Step11740may be followed by step11750of performing, by a database acceleration unit of the database acceleration integrated circuit, database processing operations on the retrieved information, to provide database accelerated results. Step11750may be followed by step11760of outputting the database accelerated results. The first processing and/or the second processing may include filtering database entries—determining which database entries should be further processed. The second processing comprises filtering database entries. Hybrid System A memory/processing unit may be highly effective when executing calculations that may be memory intensive and/or in which the bottleneck is related to retrieval operations. Processing oriented (and less memory oriented) processor units (such as but not limited graphic processing units, central processing units) may be more effective when the bottleneck is related to computing operations. A hybrid system may include both one or more processor units and one or more memory/processing units that may be fully or partially connected to each other. 
A memory/processing unit (MPU) may be manufactured by a first manufacturing process that better fits memory cells than logic cells. For example—the memory cells manufactured by the first manufacturing process may exhibit a critical dimension that is smaller, and even much smaller (for example by a factor that exceeds 2, 3, 4, 5, 6, 7, 8, 9, 10, and the like) than the critical dimension of a logic circuit manufactured by the first manufacturing process. For example—the first manufacturing process may be an analog manufacturing process, the first manufacturing process may be a DRAM manufacturing process, and the like. The processor may be manufactured by a second manufacturing process that better fits logic. For example—the critical dimension of a logic circuit manufactured by the second manufacturing process may be smaller and even much smaller than the critical dimension of a logic circuit manufactured by the first manufacturing process. Yet for another example—the critical dimension of a logic circuit manufactured by the second manufacturing process may be smaller and even much smaller than the critical dimension of a memory cells manufactured by the first manufacturing process. For example—the second manufacturing process may be a digital manufacturing process, the second manufacturing process may be a CMOS manufacturing process, and the like. Tasks may be allocated between the different units in a static or dynamic manner—by taking into account the benefits of each unit and any penalty related to transfer of data between the units. For example—a memory intensive process may be allocated to a memory/processing unit while a processing intensive memory light process may be allocated to the processing units. The processor may request or instruct the one or more memory/processing units to perform various processing tasks. The execution of the various processing tasks may offload the processor, reduce the latency, and in some cases reduce the overall bandwidth of information between the one or more memory/processing units and the processor, and the like. The processor may provide instructions and/or requests at different granularity—for example the processor may send instructions aimed to certain processing resources or may send higher level instructions aimed to the memory/processing unit without specifying any processing resources. FIG.96Dis an example of a hybrid system12040that includes one or more memory/processing unit (MPUs)12043and processor12042. Processor12042may send requests or instructions to the one or more MPUs12043that in turn fulfill (or selectively fulfill) the requests and/or the instructions and send results to the processor12042, as illustrated above. The processor12042may further process the results to provide one or more outputs. Each MPU includes memory resources, processing resources (such as compact microcontrollers12044), and cache memories12049. The microcontrollers may have limited computational capabilities (for example may include mainly a multiply accumulate unit). The microcontrollers12044may apply a process for in-memory acceleration purposes, can also be a CPU or a full DB processing engine or a subset of them. MPU12043may include microprocessors and Packet processing units that may be connected in a mesh/ring/or other topology for fast inter-bank communication. More than one DDR controller can be present for fast inter DIMM communication. Goal of in-memory packet processors is to reduce BW, data movement, power consumption and increase performance. 
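The static or dynamic allocation of tasks between the memory/processing units and the processor—weighing the benefit of near-memory execution against the penalty of moving data between the units—may be sketched, under purely illustrative cost assumptions, as follows. The task fields, the arithmetic-intensity threshold and the transfer-cost constant are hypothetical and are used only to make the allocation idea concrete.

```python
# Illustrative dispatcher: route memory-intensive tasks to a memory/processing
# unit (MPU) and compute-intensive tasks to the processor, while accounting
# for the penalty of transferring data between the units.

def allocate(task, transfer_cost_per_byte=1e-9):
    # task is a dict with assumed fields: bytes_touched, operations, data_on_mpu
    intensity = task["operations"] / max(task["bytes_touched"], 1)  # operations per byte
    transfer_penalty = 0.0 if task["data_on_mpu"] else task["bytes_touched"] * transfer_cost_per_byte
    # Low arithmetic intensity -> bandwidth bound -> prefer the MPU,
    # unless the data would first have to be shipped to the MPU anyway.
    if intensity < 1.0 and transfer_penalty == 0.0:
        return "memory/processing unit"
    return "processor"

print(allocate({"bytes_touched": 10**9, "operations": 10**8, "data_on_mpu": True}))   # MPU
print(allocate({"bytes_touched": 10**6, "operations": 10**9, "data_on_mpu": True}))   # processor
```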
Using them will yield a dramatic increase in Perf/TCO over standard solutions. It should be noted that the management unit is optional. Each MPU may operate as an Artificial intelligence (AI) memory/processing unit—as it may perform AI calculations and return only the results to the processor—thereby reducing the amount of traffic—especially when the MPU receives and stores neural network coefficients to be used in multiple calculations—and does not need to receive the coefficients from an outside chip each time a portion of a neural network is used to process new data. The MPU may determine when a coefficient is zero and inform the processor that there is no need to perform a multiplication that includes the zero value coefficient. It should be noted that the first processing and the second processing may include filtering database entries. The MPU may be any memory processing unit illustrated in this specification, or in any one of PCT patent application WO2019025862 and PCT patent application serial number PCT/IB2019/001005. There may be provided an AI computing system (and a method executable by the system) in which a network interface card has AI processing capabilities and is configured to perform some AI processing tasks in order to reduce the amount of traffic to be sent over a network that couples multiple AI acceleration servers. For example—in some inference systems, the input is received over the network (e.g., multiple streams of IP cameras connected to an AI server). In such cases, leveraging RDMA+AI on a processing and networking unit can reduce the load of the CPU and the PCIe bus and provide processing on the processing and networking units instead of by a GPU that is not included in a processing and networking unit. For example—instead of calculating initial results and sending the initial results to a target AI acceleration server (that applies one or more AI processing operations)—the processing and networking units may perform pre-processing that reduces the amount of values sent to the target AI acceleration server. The target AI computing server is an AI computing server allocated to perform calculations on values provided by other AI acceleration servers. This reduces the bandwidth of traffic exchanged between the AI acceleration servers and also reduces the load of the target AI acceleration server. The target AI acceleration server may be allocated in a dynamic or static manner, by using load balancing or another allocation algorithm. There may be more than a single target AI acceleration server. For example—if the target AI acceleration server adds multiple losses—the processing and networking units may add losses generated by their AI acceleration server and send sums of losses to the target AI acceleration server—thereby reducing bandwidth. The same benefit may be gained when performing other pre-processing operations such as derivative calculation and aggregation, and the like. FIG.97Billustrates a system12060that includes sub-systems; each sub-system includes a switch12061for connecting AI processing and networking units12063having server motherboards12064to each other. The server motherboards include one or more AI processing and networking units12063that have network capabilities and AI processing capabilities. The AI processing and networking unit12063may include one or more NICs, an ALU, or other calculation circuits for performing the pre-processing. An AI processing and networking unit12063may be a chip, or may include more than a single chip.
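The loss-aggregation example above can be made concrete with a small sketch: instead of every AI acceleration server sending all of its individual losses to the target server, each processing and networking unit sums its local losses and sends a single value. The server counts, loss values and function names below are illustrative assumptions only.

```python
# Sketch of pre-aggregation on the processing and networking units: each
# server sends one partial sum instead of all of its individual loss values.

def naive_traffic(losses_per_server):
    # Every loss value crosses the network to the target AI acceleration server.
    sent_values = sum(len(losses) for losses in losses_per_server)
    total_loss = sum(sum(losses) for losses in losses_per_server)
    return total_loss, sent_values

def preaggregated_traffic(losses_per_server):
    # Each processing and networking unit adds its local losses first.
    partial_sums = [sum(losses) for losses in losses_per_server]
    return sum(partial_sums), len(partial_sums)  # one value per server crosses the network

losses = [[0.3, 0.1, 0.2], [0.5, 0.4], [0.25, 0.05, 0.1, 0.6]]
assert naive_traffic(losses)[0] == preaggregated_traffic(losses)[0]
# naive_traffic sends 9 values, preaggregated_traffic sends 3 - same result, less bandwidth.
```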
It may be beneficial to have an AI processing and networking unit12063that is a single chip. The AI processing and networking unit12063may include (solely or mainly) processing resources. The AI processing and networking unit12063may include in-memory computing circuits, may not include in-memory computing circuits, or may not include significant in-memory computing circuits. The AI processing and networking unit12063may be an integrated circuit, may include more than a single integrated circuit, may be a part of an integrated circuit, and the like. The AI processing and networking unit12063may convey (see, for example,FIG.97C) traffic (for example by using communication ports such as DDR channels, network channels and/or PCIe channels) between the AI acceleration server that includes the AI processing and networking unit12063and other AI acceleration servers. The AI processing and networking unit12063may also be coupled to external memories such as DDR memories. The processing and networking unit may include memories and/or may include memory/processing units. InFIG.97Cthe AI processing and networking unit12063is illustrated as including local DDR connections, DDR channels, AI accelerators, RAM memory, encryption/decryption engines, PCIe switches, PCIe interfaces, a multiple-core processing array, fast networking connections, and the like. There may be provided a method for operating the system (or of operating any part of the system) of any one ofFIGS.97B and97C. Any combination of any steps of any method mentioned in this application may be provided. Any combination of any unit, integrated circuit, memory resources, logic, processing subunits, controller, or components mentioned in this application may be provided. Any reference to “including” and/or “comprising” may be applied, mutatis mutandis, to “consisting” and “substantially consisting”. The foregoing description has been presented for purposes of illustration. It is not exhaustive and is not limited to the precise forms or embodiments disclosed. Modifications and adaptations will be apparent to those skilled in the art from consideration of the specification and practice of the disclosed embodiments. Additionally, although aspects of the disclosed embodiments are described as being stored in memory, one skilled in the art will appreciate that these aspects can also be stored on other types of computer readable media, such as secondary storage devices, for example, hard disks or CD ROM, or other forms of RAM or ROM, USB media, DVD, Blu-ray, 4K Ultra HD Blu-ray, or other optical drive media. Computer programs based on the written description and disclosed methods are within the skill of an experienced developer. The various programs or program modules can be created using any of the techniques known to one skilled in the art or can be designed in connection with existing software. For example, program sections or program modules can be designed in or by means of .Net Framework, .Net Compact Framework (and related languages, such as Visual Basic, C, etc.), Java, C++, Objective-C, HTML, HTML/AJAX combinations, XML, or HTML with included Java applets. Moreover, while illustrative embodiments have been described herein, the scope includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations and/or alterations as would be appreciated by those skilled in the art based on the present disclosure. 
The limitations in the claims are to be interpreted broadly based on the language employed in the claims and not limited to examples described in the present specification or during the prosecution of the application. The examples are to be construed as non-exclusive. Furthermore, the steps of the disclosed methods may be modified in any manner, including by reordering steps and/or inserting or deleting steps. It is intended, therefore, that the specification and examples be considered as illustrative only, with a true scope and spirit being indicated by the following claims and their full scope of equivalents.
656,251
11860783
DETAILED DESCRIPTION Examples described in this disclosure relate to systems and methods for direct swap caching with noisy neighbor mitigation and dynamic address range assignment. Certain examples relate to leveraging direct swap caching for use with a host operating system (OS) in a computing system or a multi-tenant computing system. The multi-tenant computing system may be a public cloud, a private cloud, or a hybrid cloud. The public cloud includes a global network of servers that perform a variety of functions, including storing and managing data, running applications, and delivering content or services, such as streaming videos, electronic mail, office productivity software, or social media. The servers and other components may be located in data centers across the world. While the public cloud offers services to the public over the Internet, businesses may use private clouds or hybrid clouds. Both private and hybrid clouds also include a network of servers housed in data centers. Compute entities may be executed using compute and memory resources of the data center. As used herein, the term “compute entity” encompasses, but is not limited to, any executable code (in the form of hardware, firmware, software, or in any combination of the foregoing) that implements a functionality, a virtual machine, an application, a service, a micro-service, a container, or a unikernel for serverless computing. Alternatively, compute entities may be executing on hardware associated with an edge-compute device, on-premises servers, or other types of systems, including communications systems, such as base stations (e.g., 5G or 6G base stations). Consistent with the examples of the present disclosure, a host OS may have access to a combination of near memory (e.g., the local DRAM) and an allocated portion of a far memory (e.g., pooled memory or non-pooled memory that is at least one level removed from the near memory). The far memory may relate to memory that includes any physical memory that is shared by multiple compute nodes. As an example, the near memory may correspond to double data rate (DDR) dynamic random access memory (DRAM) that operates at a higher data rate (e.g., DDR2 DRAM, DDR3 DRAM, DDR4 DRAM, or DDR5 DRAM) and the far memory may correspond to DRAM that operates at a lower data rate (e.g., DRAM or DDR DRAM). Other cost differences may be a function of the reliability or other differences in quality associated with the near memory versus the far memory. As used herein, the terms “near memory” and “far memory” are to be viewed in relative terms. Thus, near memory includes any memory that is used for storing any data or instructions that are evicted from the system level cache(s) associated with a CPU, and the far memory includes any memory that is used for storing any data or instructions swapped out from the near memory. Another distinction between the near memory and the far memory relates to the relative number of physical links between the CPU and the memory. As an example, assuming the near memory is coupled via a near memory controller, thus being at least one physical link away from the CPU, the far memory is coupled to a far memory controller, which is at least one more physical link away from the CPU. FIG.1is a block diagram of a system100including compute nodes110,140, and170coupled with a far memory system180in accordance with one example. Each compute node may include compute and memory resources. 
As an example, compute node110may include a central processing unit (CPU)112; compute node140may include a CPU142; and compute node170may include a CPU172. Although each compute node inFIG.1is shown as having a single CPU, each compute node may include additional CPUs, and other devices, such as graphics processor units (GPUs), field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), or other devices. In addition, each compute node may include near memory, which may be organized as memory modules. As an example, compute node110may include memory modules122,124,126,128,130, and132. Compute node140may include memory modules152,154,156,158,160, and162. Compute node170may include memory modules182,184,186,188,190, and192. Examples of such memory modules include, but are not limited to, dual-in-line memory modules (DIMMs) or single-in-line memory modules (SIMMs). Memory included in these modules may be dynamic random access memory (DRAM), flash memory, static random access memory (SRAM), phase change memory, magnetic random access memory, or any other type of memory technology that can allow the memory to act as local memory. With continued reference toFIG.1, each compute node may include one or more memory controllers. As an example, compute node110may include memory controller118, compute node140may include memory controller148, and compute node170may include memory controller178. The memory controller included in such nodes may be a double data rate (DDR) DRAM controller in case the memory modules include DDR DRAM. Each compute node may be configured to execute several compute entities. In this example, compute node110may have host OS114installed on it; compute node140may have host OS144installed on it; and compute node170may have host OS174installed on it. Far memory system180may include pooled memory (or non-pooled memory), which may include several memory modules. Examples of such memory modules include, but are not limited to, dual-in-line memory modules (DIMMs) or single-in-line memory modules (SIMMs). Memory included in these modules may be dynamic random access memory (DRAM), flash memory, static random access memory (SRAM), phase change memory, magnetic random access memory, or any other type of memory technology that can allow the memory to act as far memory. Any host OS (e.g., host OS114,144, or174), being executed by any of the compute nodes (e.g., compute node110,140, or170), may access at least a portion of the physical memory included as part of far memory system180. A portion of memory from far memory system180may be allocated to the compute node when the compute node powers on or as part of allocation/deallocation operations. The assigned portion may include one or more “slices” of memory, where a slice refers to the smallest granularity of portions of memory managed by the far memory controller (e.g., a memory page or any other block of memory aligned to a slice size). A slice of memory is allocated at most to only one host at a time. Any suitable slice size may be used, including 1 GB slices, 2 GB slices, 8 GB slices, or any other suitable slice sizes. The far memory controller may assign or revoke assignment of slices to compute nodes based on an assignment/revocation policy associated with far memory system180. As explained earlier, the data/instructions associated with a host OS may be swapped in and out of the near memory from/to the far memory. In one example, compute nodes110,140, and170may be part of a data center. 
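The slice-assignment policy described above (a slice of far memory belongs to at most one host at a time, and its assignment may later be revoked) can be illustrated with a small sketch. The class and method names below are hypothetical and do not model any CXL or fabric-manager interface.

```python
# Illustrative sketch of the slice-allocation rule described above: the far
# memory controller hands out fixed-size slices of far memory, and a slice is
# allocated to at most one host (compute node) at a time.
class FarMemoryController:
    def __init__(self, total_gb: int, slice_gb: int = 1):
        self.slice_gb = slice_gb
        # Map slice index -> owning host, or None when unallocated.
        self.owner = {i: None for i in range(total_gb // slice_gb)}

    def assign(self, host: str, num_slices: int) -> list:
        free = [i for i, o in self.owner.items() if o is None]
        if len(free) < num_slices:
            raise MemoryError("not enough free far-memory slices")
        granted = free[:num_slices]
        for i in granted:
            self.owner[i] = host          # each slice belongs to exactly one host
        return granted

    def revoke(self, host: str) -> None:
        for i, o in self.owner.items():
            if o == host:
                self.owner[i] = None      # returned slices become available again

fmc = FarMemoryController(total_gb=8, slice_gb=1)
print(fmc.assign("compute-node-110", 2))   # e.g., [0, 1]
print(fmc.assign("compute-node-140", 3))   # e.g., [2, 3, 4]
fmc.revoke("compute-node-110")             # slices 0 and 1 may now be reassigned
```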
As used in this disclosure, the term data center may include, but is not limited to, some or all of the data centers owned by a cloud service provider, some or all of the data centers owned and operated by a cloud service provider, some or all of the data centers owned by a cloud service provider that are operated by a customer of the service provider, any other combination of the data centers, a single data center, or even some clusters in a particular data center. In one example, each cluster may include several identical compute nodes. Thus, a cluster may include compute nodes including a certain number of CPU cores and a certain amount of memory. Instead of compute nodes, other types of hardware such as edge-compute devices, on-premises servers, or other types of systems, including communications systems, such as base stations (e.g., 5G or 6G base stations), may also be used. AlthoughFIG.1shows system100as having a certain number of components, including compute nodes and memory components, arranged in a certain manner, system100may include additional or fewer components, arranged differently. FIG.2shows a block diagram of an example far memory system200corresponding to far memory system180shown inFIG.1. Far memory system200may include a switch202for coupling the far memory system controllers to compute nodes (e.g., compute nodes110,140, and170ofFIG.1). Far memory system200may further include several far memory controllers and associated far memory modules. As an example, far memory system200may include far memory controller (FMC)210, FMC220, FMC230, FMC240, FMC250, and FMC260coupled to switch202, as shown inFIG.2. Each of FMC210, FMC220, FMC230, FMC240, FMC250, and FMC260may further be coupled to fabric manager280. FMC210may further be coupled to memory modules212,214,216, and218. FMC220may further be coupled to memory modules222,224,226, and228. FMC230may further be coupled to memory modules232,234,236, and238. FMC240may further be coupled to memory modules242,244,246, and248. FMC250may further be coupled to memory modules252,254,256, and258. FMC260may further be coupled to memory modules262,264,266, and268. Each memory module may be a dual-in-line memory module (DIMM) or a single-in-line memory module (SIMM). With continued reference toFIG.2, in one example, each of the far memory controllers may be implemented as a Compute Express Link (CXL) specification compliant memory controller. In this example, each of the memory modules associated with far memory system200may be configured as Type 3 CXL devices. Fabric manager280may communicate via bus206with data center control plane290. In one example, fabric manager280may be implemented as a CXL specification compliant fabric manager. Control information received from data center control plane290may include control information specifying which slices of memory from the far memory are allocated to any particular compute node at a given time. In response to this control information, fabric manager280may allocate a slice of memory from within the far memory to a specific compute node in a time-division multiplexed fashion. In other words, at any given time, a particular slice of memory can be allocated only to a specific compute node and not to any other compute nodes. As part of this example, transactions associated with the CXL.io protocol, which is a PCIe-based non-coherent I/O protocol, may be used to configure the memory devices and the links between the CPUs and the memory modules included in far memory system200. 
The CXL.io protocol may also be used by the CPUs associated with the various compute nodes in device discovery, enumeration, error reporting, and management. Alternatively, any other I/O protocol that supports such configuration transactions may also be used. The memory access to the memory modules may be handled via the transactions associated with CXL.mem protocol, which is a memory access protocol that supports memory transactions. As an example, load instructions and store instructions associated with any of the CPUs may be handled via CXL.mem protocol. Alternatively, any other protocols that allow the translation of the CPU load/store instructions into read/write transactions associated with memory modules included in far memory system200may also be used. AlthoughFIG.2shows far memory system200as having a certain number of components, including far memory controllers and memory modules, arranged in a certain manner, far memory system200may include additional or fewer components, arranged differently. As an example, the far memory may be implemented as memory modules that are coupled in the same manner as the near memory (e.g., memory modules shown as part of system100inFIG.1). The far memory modules, however, may be implemented using cheaper or lower speed versions of the memory. FIG.3shows an example system address map300for use with the system100ofFIG.1. In this example, in order to use direct swap caching in the context of system100ofFIG.1, the near memory must have a fixed ratio with the far memory. In this example, it is assumed that near memory includes both a non-swappable range and a swappable range. This means that in this example any access to memory within the non-swappable range will be guaranteed to get a “hit” in the near memory (since this range is not being swapped). Any access to a location in memory within the swappable range will operate in the direct swap cache manner. Thus, these accesses will first perform a lookup within the memory designated as the near memory. A hit in the near memory will be serviced directly out of the local memory, whereas a miss in the near memory will cause a swap operation between the corresponding far memory and near memory locations. Swapping operations (e.g., swapping data from the locations in the far memory into the locations in the near memory or swapping data out from the locations in the near memory into the locations in the far memory) may be performed at a granularity level of a cache line. Each cache line may include a combination of a data portion (e.g., 512 bits) and a metadata portion (e.g., 128 bits). The data portion may contain data representing user data or instructions executed by a compute node. The metadata portion may include data representing various attributes of the data in the data portion. The metadata portion can also include error checking and correction bits or other suitable types of information. In addition, the metadata portion may include a tag having an appropriate number of bit(s) to distinguish between the location of a cache line. In this example, since the swappable memory region in the near memory has the same size as the swappable memory region in the far memory (a ratio of 1), a single bit may be used. Thus, a logical value of “1” may indicate that the cache line is in a location corresponding to the near memory whereas a logical value of “0” may indicate that the cache line is in a location corresponding to the far memory. 
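The tag-based lookup just described can be sketched as follows for the 1:1 case. The data structures are illustrative assumptions: a one-bit tag per near-memory location indicates whether the resident line belongs to the near-memory or far-memory half of the swappable range, and a miss triggers the cache line swap between the two conflicting locations.

```python
# Hedged sketch of the 1:1 direct swap cache lookup described above: each near
# memory location holds a cache line whose metadata carries a one-bit tag; "1"
# means the resident line belongs to the near-memory half of the swappable
# range, "0" means it belongs to the far-memory half. On a miss, the near and
# far lines are swapped. Structures and names are illustrative only.
class DirectSwapCache:
    def __init__(self, lines: int):
        # near[i] and far[i] are the two locations that conflict on index i.
        self.near = [{"tag": 1, "data": f"near-{i}"} for i in range(lines)]
        self.far = [{"tag": 0, "data": f"far-{i}"} for i in range(lines)]

    def read(self, index: int, want_near: bool) -> str:
        line = self.near[index]
        hit = (line["tag"] == 1) == want_near      # tag says which half is resident
        if not hit:
            # Swap the near and far lines so the requested line becomes resident.
            self.near[index], self.far[index] = self.far[index], self.near[index]
        return self.near[index]["data"]            # always serviced out of near memory

cache = DirectSwapCache(lines=4)
print(cache.read(2, want_near=True))    # hit: served directly from near memory
print(cache.read(2, want_near=False))   # miss: far line swapped in, then served
```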
The present disclosure, however, is not limited to the use of a fixed ratio of 1:1 between the near memory and the far memory. As an example, a ratio of 1:3 may be used. In such a case, additional tag bits may be required to encode the information concerning the location of the cache line in terms of the region of the memory having the cache line. With continued reference toFIG.3, one of the potential issues that can occur with respect to direct swap caching is that conflicting cache lines in near-memory may be allocated to separate tenants (e.g., VMs, containers, etc.) in a virtualized system. In such a scenario, one tenant's swapping of cache lines can impact the memory bandwidth and the memory capacity of another tenant. The present disclosure describes an example mechanism that allows one to build isolation between tenants such that one tenant cannot impact the memory bandwidth and the memory capacity of another tenant. To that end, the present disclosure describes an address mapping arrangement such that conflict sets map to the same tenant—that is, one tenant's addresses do not conflict with another tenant's. System address map300includes both a swappable range and a non-swappable range. In this example, an address bit is used to carve up the swappable range into smaller granular regions. As an example, assuming 2 terabytes (TB) of memory range is available for use with system address map300, 1 TB is configured as a non-swappable range and 1 TB is configured as a swappable range. A low order address bit is used to carve this memory range (swappable range) into smaller granular regions, each having a size of 512 MB. In this arrangement, as long as a tenant (e.g., any of VM1, VM2, . . . VM N) is allocated an address range equal to or higher than 1 GB (at least twice the size of the smaller granular regions), then the tenants' addresses do not conflict with each other. The address range allocated to each tenant can be viewed as having a conflict set size (e.g., 1 GB), which in this example is selected to be of the same size as the page size associated with the system. The host OS (e.g., a hypervisor) can allocate memory to the tenants in 1 GB increments. Each 1 GB increment need not be contiguous. Each conflict set (having two conflicting 512 MB swappable regions) corresponds to a single 512 MB region in the physical memory accessible to a tenant (e.g., the DRAM). Thus, a single 1 GB page corresponds to a single 512 MB region in the physical memory. In this example, a low order address bit (e.g., address bit29) can have a logical value of “0” or “1” to distinguish between the two 512 MB conflicting regions. When the logical value for the address bit29is “0,” then the cache line's address corresponds to one of the 512 MB conflicting regions and when the logical value for the address bit29is “1,” then the cache line's address corresponds to the other 512 MB conflicting region. Other types of encodings may also be used as part of the addressing to distinguish between the two conflicting regions. Although the granularity of address allocation can be arbitrary, an interesting property of using the size of 512 MB is the following: if the first-level page tables (the tables that map the Guest Physical Address to the System Physical Address) use a 1 GB page size, then this method of carving up the address space may ensure perfect noisy-neighbor isolation even if the 1 GB pages are allocated in a discontiguous fashion across the system physical address (SPA) space. 
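A short sketch may help make the conflict-set arithmetic above concrete. It assumes 512 MB conflicting regions (so address bit 29 selects the region within a set) and 1 GB conflict sets, and it treats the mapping from conflict set to tenant as given; the function names are illustrative only.

```python
# Hedged sketch of the conflict-set idea above: with 512 MB conflicting regions,
# address bit 29 (2**29 bytes = 512 MB) selects which of the two regions in a
# 1 GB conflict set an address falls into, and two addresses conflict only if
# they agree on all bits above bit 29. If every tenant is allocated whole,
# 1 GB-aligned pages, conflicting addresses always belong to the same tenant.
REGION_BITS = 29                      # 512 MB regions
CONFLICT_SET_BITS = 30                # 1 GB conflict sets (two regions each)

def conflict_set(addr: int) -> int:
    """All addresses in the same conflict set share this identifier."""
    return addr >> CONFLICT_SET_BITS

def region_within_set(addr: int) -> int:
    """0 or 1: which of the two conflicting 512 MB regions the address is in."""
    return (addr >> REGION_BITS) & 1

def tenant_of(addr: int) -> int:
    """With 1 GB-granular allocation, the tenant is a function of the conflict set only."""
    return conflict_set(addr)         # illustrative: tenant i owns conflict set i

a = 0x2000_0000          # 512 MB offset, inside conflict set 0
b = 0x0000_1000          # low address, inside conflict set 0
c = 0x4000_0000          # first address of conflict set 1
print(conflict_set(a), region_within_set(a))   # 0 1
print(conflict_set(b), region_within_set(b))   # 0 0
# a and b conflict (same set, different regions) but belong to the same tenant:
print(tenant_of(a) == tenant_of(b), tenant_of(a) == tenant_of(c))   # True False
```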
Still referring toFIG.3, system address map300also includes a non-swappable range. That range can be allocated to a set of high-priority tenants (e.g., VMs X, Y . . . Z) that use the non-swapped space that are also isolated from all the tenants using the swappable region prone to conflicts. This example further assumes that the compute node (e.g., the host server) is a two-socket server system that allows access to two non-uniform memory access (NUMA) sets: INTERLEAVED SET A (NUMA-0) and INTERLEAVED SET B (NUMA-1). These different sets can offer different NUMA characteristics to the tenants. As an example, the non-swappable range of system address map300can be mapped to the NUMA-0 set that allows for local access to memory that is faster relative to the NUMA-1 set. In one example, the swappable range and the non-swappable range can be advertised through the Advanced Configuration and Power Interface (ACPI) as two separate ranges. As noted earlier, each range can be mapped to memory with different NUMA characteristics. In addition, each of the swappable range and the non-swappable range can have different attributes as provided via the respective Heterogenous Memory Attributes Tables (HMATs). FIG.4is a diagram showing a transaction flow400related to a read operation and a write operation when the location of the data is in the near memory. The transactions associated with the read operation are shown in portion410of transaction flow400and the transactions associated with the write operation are shown in flow portion420of transaction flow400. During a read operation, a CPU (e.g., any of CPUs112,142, or172ofFIG.1) can issue a command to a memory controller (e.g., any of memory controllers118,148, and178ofFIG.1) to read data corresponding to address A1. Upon the read operation resulting in a miss with respect to the last level cache, address A1is first decoded to the near memory (e.g., any of the local memory associated with the CPU). The read from the local memory location results in a retrieval of a cache line including both the data portion and the metadata portion (including the tag). In this case, the tag indicates that the data portion corresponds to the address being looked up and hence it is a hit. As a result, the data in the cache line is returned to the requesting CPU. As shown in portion420of transaction flow400, when a cache line is being written to the memory, every write operation needs to be preceded by a read operation to ensure that the memory location contains the address being written. In this case, the data is being written to address A2, which is located within the near memory and thus the write operation is also a hit. FIG.5is a diagram showing a transaction flow500relating to the transactions that occur when the data associated with a read operation is located in the far memory (e.g., the pooled memory). If the tag indicates that the near memory location does not contain the address of the data being requested, then it is a miss. Upon a miss, to prevent conflict and race conditions, a blocking entry may be set in the memory controller for the four entries that map to the memory location in the local memory. Next, the tag may be used to decode which location in the far memory contains the data corresponding to the address being requested. As described earlier, the far memory may be implemented as CXL compliant type 3 devices. In such an implementation, the memory controller may spawn a CXL.mem read request to the appropriate address. 
Once the data is retrieved, the data is sent to the original requester and thus completes the read operation. The data is also written to the near memory and the original data read from the local memory is written to the same location in the far memory from which the read happened—thereby performing the cache line swap. FIG.6is a diagram showing a transaction flow600relating to the transactions that occur when the data associated with a write operation is located in the far memory. For a write (e.g., write (A3)) that misses the near memory (local memory), the data is written to the far memory. FIG.7shows a block diagram of an example system700for implementing at least some of the methods for integrated memory pooling and direct swap caching. System700may include processor(s)702, I/O component(s)704, memory706, presentation component(s)708, sensors710, database(s)712, networking interfaces714, and I/O port(s)716, which may be interconnected via bus720. Processor(s)702may execute instructions stored in memory706. I/O component(s)704may include components such as a keyboard, a mouse, a voice recognition processor, or touch screens. Memory706may be any combination of non-volatile storage or volatile storage (e.g., flash memory, DRAM, SRAM, or other types of memories). Presentation component(s)708may include displays, holographic devices, or other presentation devices. Displays may be any type of display, such as LCD, LED, or other types of display. Sensor(s)710may include telemetry or other types of sensors configured to detect, and/or receive, information (e.g., collected data). Sensor(s)710may include telemetry or other types of sensors configured to detect, and/or receive, information (e.g., memory usage by various compute entities being executed by various compute nodes in a data center). Sensor(s)710may include sensors configured to sense conditions associated with CPUs, memory or other storage components, FPGAs, motherboards, baseboard management controllers, or the like. Sensor(s)710may also include sensors configured to sense conditions associated with racks, chassis, fans, power supply units (PSUs), or the like. Sensor(s)710may also include sensors configured to sense conditions associated with Network Interface Controllers (NICs), Top-of-Rack (TOR) switches, Middle-of-Rack (MOR) switches, routers, power distribution units (PDUs), rack level uninterrupted power supply (UPS) systems, or the like. Still referring toFIG.7, database(s)712may be used to store any of the data collected or logged and as needed for the performance of methods described herein. Database(s)712may be implemented as a collection of distributed databases or as a single database. Network interface(s)714may include communication interfaces, such as Ethernet, cellular radio, Bluetooth radio, UWB radio, or other types of wireless or wired communication interfaces. I/O port(s)716may include Ethernet ports, Fiber-optic ports, wireless ports, or other communication or diagnostic ports. AlthoughFIG.7shows system700as including a certain number of components arranged and coupled in a certain way, it may include fewer or additional components arranged and coupled differently. In addition, the functionality associated with system700may be distributed, as needed. FIG.8shows a data center800for implementing a system for direct swap caching with noisy neighbor mitigation and dynamic address range assignment in accordance with one example. 
As an example, data center800may include several clusters of racks including platform hardware, such as compute resources, storage resources, networking resources, or other types of resources. Compute resources may be offered via compute nodes provisioned via servers that may be connected to switches to form a network. The network may enable connections between each possible combination of switches. Data center800may include server1810and serverN830. Data center800may further include data center related functionality860, including deployment/monitoring870, directory/identity services872, load balancing874, data center controllers876(e.g., software defined networking (SDN) controllers and other controllers), and routers/switches878. Server1810may include CPU(s)811, host hypervisor812, near memory813, storage interface controller(s) (SIC(s))814, far memory815, network interface controller(s) (NIC(s))816, and storage disks817and818. As explained earlier, memory815may be implemented as a combination of near memory and far memory. ServerN830may include CPU(s)831, host hypervisor832, near memory833, storage interface controller(s) (SIC(s))834, far memory835, network interface controller(s) (NIC(s))836, and storage disks837and838. As explained earlier, memory835may be implemented as a combination of near memory and far memory. Server1810may be configured to support virtual machines, including VM1819, VM2820, and VMN821. The virtual machines may further be configured to support applications, such as APP1822, APP2823, and APPN824. ServerN830may be configured to support virtual machines, including VM1839, VM2840, and VMN841. The virtual machines may further be configured to support applications, such as APP1842, APP2843, and APPN844. With continued reference toFIG.8, in one example, data center800may be enabled for multiple tenants using the Virtual eXtensible Local Area Network (VXLAN) framework. Each virtual machine (VM) may be allowed to communicate with VMs in the same VXLAN segment. Each VXLAN segment may be identified by a VXLAN Network Identifier (VNI). AlthoughFIG.8shows data center800as including a certain number of components arranged and coupled in a certain way, it may include fewer or additional components arranged and coupled differently. In addition, the functionality associated with data center800may be distributed or combined, as needed. FIG.9shows a flow chart900of an example method for direct swap caching with noisy neighbor mitigation. In one example, steps associated with this method may be executed by various components of the systems described earlier (e.g., system100ofFIG.1and system200ofFIG.2). Step910may include provisioning a compute node with both near memory and far memory. Step920may include granting to a host operating system (OS), configured to support a first set of tenants associated with the compute node, access to: (1) a first swappable range of memory addresses associated with the near memory and (2) a second swappable range of memory addresses associated with the far memory to allow for swapping of cache lines between the near memory and the far memory. As explained earlier, with respect toFIG.3, assuming 2 terabytes (TB) of memory range is available for use with system address map300, 1 TB is configured as a non-swappable range and 1 TB is configured as a swappable range. A low order address bit may be used to carve this swappable range into smaller granular regions, each having a size of 512 MB. 
Step930may include allocating memory in a granular fashion to any of the first set of tenants such that each allocation of memory to a tenant includes memory addresses corresponding to a conflict set having a conflict set size, and where the conflict set comprises: a first conflicting region associated with the first swappable range of memory addresses associated with the near memory and a second conflicting region associated with the second swappable range of memory addresses associated with the far memory, and where each of the first conflicting region and the second conflicting region having a same size that is selected to be equal to or less than half of the conflict set size. As explained earlier with respect to the arrangement shown inFIG.3, as long as a tenant (e.g., any of VM1, VM2, . . . VM N) is allocated an address range equal to or higher than 1 GB (at least twice the size of the conflicting regions), then the tenants' addresses do not conflict with each other. The address range allocated to each tenant can be viewed as having a conflict set size (e.g., 1 GB), which in this example is selected to be of the same size as the page size associated with the system. Advantageously, having the conflict set size be the same size as the page size associated with the system may result in the highest quality of service possible with respect to memory operations (e.g., read/write operations). The host OS (e.g., a hypervisor) can allocate memory to the tenants in 1 GB increments. Each 1 GB increment need not be contiguous. Each conflict set (having two conflicting 512 MB swappable regions) corresponds to a single 512 MB region in the physical memory accessible to a tenant (e.g., the DRAM). Thus, a single 1 GB page corresponds to a single 512 MB region in the physical memory. In this example, a low order address bit (e.g., address bit29) can have a logical value of “0” or “1” to distinguish between the two 512 MB conflicting regions. When the logical value for the address bit29is “0,” then the cache line is in one of the 512 MB conflicting regions, and when the logical value for the address bit29is “1,” then the cache line is in the other 512 MB conflicting region. As shown earlier with respect toFIG.3, the host OS can have initial access to a certain size of swappable range of memory addresses and a certain size of non-swappable range of memory addresses. Traditionally, any changes to this initial allocation have required modifications to hardware registers that may be programmed as part of the firmware associated with the boot sequence of the compute node. As an example, the basic input-output system (BIOS) associated with the system (e.g., a system including a compute node) may set up the hardware registers based on firmware settings. The host OS does not have access to the hardware registers. Accordingly, the host OS cannot change the system address map. Typically, any modifications to such hardware registers would require reprogramming of the firmware (e.g., the BIOS firmware). Reprogramming of the firmware, or other hardware, necessitates rebooting the compute node. This in turn deprives the tenants of access to the compute node during the time that the compute node is being reprogrammed and restarted. The present disclosure describes techniques to change the initial allocation of the size of the swappable region and the non-swappable region without requiring reprogramming of the hardware registers. 
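Before turning to how the address map itself can be reconfigured, the following sketch illustrates the 1 GB-granular allocation policy just described: tenants receive whole conflict sets, and the increments handed to a tenant need not be contiguous. The allocator is hypothetical and greatly simplified.

```python
# Illustrative sketch of the host-OS allocation policy described above: memory
# is handed to tenants only in whole conflict sets (1 GB increments in this
# example), and the increments given to one tenant need not be contiguous.
CONFLICT_SET_GB = 1

class SwappableRangeAllocator:
    def __init__(self, swappable_gb: int):
        self.free_sets = list(range(swappable_gb // CONFLICT_SET_GB))
        self.owned = {}                                   # tenant -> list of conflict set ids

    def allocate(self, tenant: str, gigabytes: int) -> list:
        if gigabytes % CONFLICT_SET_GB:
            raise ValueError("allocations must be a multiple of the conflict set size")
        count = gigabytes // CONFLICT_SET_GB
        if count > len(self.free_sets):
            raise MemoryError("not enough swappable conflict sets left")
        granted = [self.free_sets.pop(0) for _ in range(count)]   # may be discontiguous
        self.owned.setdefault(tenant, []).extend(granted)
        return granted

alloc = SwappableRangeAllocator(swappable_gb=8)
print(alloc.allocate("VM1", 2))   # e.g., conflict sets [0, 1]
print(alloc.allocate("VM2", 3))   # e.g., conflict sets [2, 3, 4]
# Because a whole conflict set belongs to one tenant, VM1's cache-line swaps can
# never evict or contend with VM2's lines in near memory.
```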
In sum, this is accomplished by provisioning any number of different configurations and then switching between the configurations, as required, without having to reprogram the hardware registers. Advantageously, the switching between the configurations provides run-time flexibility with respect to the type of workloads that can be run using the system. As an example, initially the host OS for a system may have an equal amount of swappable and non-swappable range of addresses. The non-swappable range of addresses may be allocated to a set of high-priority tenants (e.g., VMs X, Y . . . Z) that use the non-swapped space and thus are also isolated from all the tenants using the swappable region prone to conflicts. If, during runtime, the host OS discovers a higher demand for memory usage from the high-priority tenants, then the host OS may make a runtime switch to a different configuration of a system address map that includes a larger amount of non-swappable address space. If, however, the demand pattern is the reverse of this example, then the host OS may make a runtime switch to yet another configuration of a system address map that includes a larger amount of swappable address space. FIG.10shows a configuration A of a system address map1000for use with system100ofFIG.1. The configuration A described with respect to system address map1000assumes a non-swappable range of N gigabytes (GB) and a swappable range of M GB. A low order address bit is used to carve the swappable range into smaller granular regions (e.g., each having a size of 512 MB). These granular regions can be allocated to the tenants (e.g., any of VM1, VM2, . . . VM N). The non-swappable range can be allocated to tenants having a higher priority (e.g., any of VM X, Y, and Z). This example further assumes that the compute node (e.g., the host server) is a two-socket server system that allows access to two non-uniform memory access (NUMA) sets: INTERLEAVED SET A (NUMA-0) and INTERLEAVED SET B (NUMA-1). These different sets can offer different NUMA characteristics to the tenants. As an example, the non-swappable range of system address map1000can be mapped to the NUMA-0 set that allows for local access to memory that is faster relative to the NUMA-1 set. With continued reference toFIG.10, as part of this configuration, in addition to the non-swappable range of N GB and the swappable range of M GB, system address map1000is further used to reserve two M/2 GB non-swappable address ranges. One of the M/2 GB non-swappable address ranges is mapped to near memory (e.g., DDR INTERLEAVED SET 3) and the other M/2 GB non-swappable address range is mapped to the far memory (e.g., CXL NON-INTERLEAVED SET 4). Hardware registers (e.g., hardware address decoders) associated with the compute node are set up such that each of the M/2 GB address ranges is mapped to the same near memory (e.g., the DRAM) locations. As such, these address ranges are reserved initially and are indicated to the host OS as unavailable. Thus, in the beginning, these two address ranges are marked as offline. As such, the address ranges marked as reserved are not mapped to any physical memory. Accordingly, in the beginning the host OS can only access the N GB non-swappable range and the M GB swappable range. Assume that, at a later time, the ratio of the swappable range to the non-swappable range requires a change such that there is a need for an additional X GB of non-swappable range that is accessible to the host OS. 
To accomplish this, system address map1000is switched from the configuration A shown inFIG.10to the configuration B shown inFIG.11. With continued reference toFIG.11, the switch to configuration B is accomplished by the host OS without invoking the BIOS, including without any reprogramming of the hardware registers. The host OS takes X GB of the swappable range offline. Prior to taking this range offline, the host OS invalidates all page table mappings in the system physical address table. This effectively means that the host OS can no longer access the address range taken offline. At the same time, the host OS brings two X/2 GB memory address ranges online from the previously reserved non-swappable range (e.g., M GB non-swappable range shown as part of system address map1000ofFIG.10). One of the X/2 GB non-swappable address ranges maps to the far memory (e.g., CXL NON-INTERLEAVED SET 4) and the other X/2 GB non-swappable address range maps to the near memory (e.g., DDR INTERLEAVED SET 3). In this manner, the host OS has effectively converted an X GB swappable address range into a non-swappable address range. AlthoughFIGS.10and11describe specific configurations, using similar techniques as described with respect to these figures, other configurations can also be deployed. These configurations allow for dynamic address range assignments that can be modified on the fly without requiring reprogramming of the hardware registers used at boot time. FIG.12shows a flow chart1200of an example method for direct swap caching with noisy neighbor mitigation. In one example, steps associated with this method may be executed by various components of the systems described earlier (e.g., system100ofFIG.1and system200ofFIG.2). Step1210may include provisioning a compute node with both near memory and far memory, where a host operating system (OS) associated with the compute node is granted access to a first system address map configuration and a second system address map configuration different from the first system address map configuration. Step1220may include granting to the host OS, configured to support a first set of tenants, access to a first non-swappable address range associated with the near memory. As an example, as shown with respect to system address map1000ofFIG.10, certain tenants having a higher priority (e.g., any of VM X, Y, and Z) than the other tenants may be granted access to N GB of non-swappable address range. Step1230may include granting to the host OS, configured to support a second set of tenants, different from the first set of tenants, access to: (1) a first swappable address range associated with the near memory and (2) a second swappable address range associated with the far memory to allow for swapping of cache lines between the near memory and the far memory. As an example, as shown with respect to system address map1000ofFIG.10, a set of tenants (e.g., any of VM1, VM2, . . . VM N) may be granted access to a swappable range of M GB. A low order address bit is used to carve the swappable range into smaller granular regions (e.g., each having a size of 512 MB). Step1240may include increasing a size of the first non-swappable address range by switching from the first system address map configuration to the second system address map configuration. As explained earlier with respect toFIGS.10and11, the host OS may increase the size of the non-swappable address range for the higher priority tenants by switching from system address map1000ofFIG.10to system address map1100ofFIG.11. 
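The runtime switch from configuration A to configuration B can be sketched as bookkeeping over the address map, under the assumption that page-table invalidation and the online/offline transitions themselves are handled elsewhere. The dictionary keys and the sizes below are illustrative, not part of the disclosed system.

```python
# Hedged sketch of the configuration A -> configuration B switch described
# above: the host OS takes X GB of the swappable range offline (after
# invalidating its page-table mappings) and brings two previously reserved
# X/2 GB non-swappable ranges online, one backed by near memory and one by far
# memory. No BIOS call or hardware-register write appears anywhere in the flow.
def switch_to_config_b(address_map: dict, x_gb: int) -> dict:
    new_map = dict(address_map)
    # 1. Invalidate mappings and take X GB of the swappable range offline.
    new_map["swappable_online_gb"] -= x_gb
    new_map["swappable_offline_gb"] = new_map.get("swappable_offline_gb", 0) + x_gb
    # 2. Bring X/2 GB of the reserved near-memory range and X/2 GB of the
    #    reserved far-memory range online as non-swappable space.
    new_map["reserved_near_gb"] -= x_gb // 2
    new_map["reserved_far_gb"] -= x_gb // 2
    new_map["non_swappable_online_gb"] += x_gb
    return new_map

config_a = {
    "non_swappable_online_gb": 1024,   # N GB
    "swappable_online_gb": 1024,       # M GB
    "reserved_near_gb": 512,           # M/2 GB, initially marked offline
    "reserved_far_gb": 512,            # M/2 GB, initially marked offline
}
config_b = switch_to_config_b(config_a, x_gb=256)
print(config_b["non_swappable_online_gb"], config_b["swappable_online_gb"])  # 1280 768
```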
As explained earlier with respect toFIG.11, the switch is accomplished by the host OS without invoking the BIOS, including without any reprogramming of the hardware registers. The host OS may perform several actions in order to perform the switch. As an example, the host OS takes X GB of the swappable range offline. Prior to taking this range offline, the host OS invalidates all page table mappings in the system physical address table. This effectively means that the host OS can no longer access the address range taken offline. At the same time, the host OS brings two X/2 GB memory address ranges online from the previously reserved non-swappable range (e.g., M GB non-swappable range shown as part of system address map1000ofFIG.10). In conclusion, the present disclosure relates to a system including a compute node providing access to both near memory and far memory. The system may further include a host operating system (OS), configured to support a first set of tenants associated with the compute node, where the host OS having access to: (1) a first swappable range of memory addresses associated with the near memory and (2) a second swappable range of memory addresses associated with the far memory to allow for swapping of cache lines between the near memory and the far memory. The system may further include the host OS configured to allocate memory in a granular fashion to any of the first set of tenants such that each allocation of memory to a tenant includes memory addresses corresponding to a conflict set having a conflict set size. The conflict set may include a first conflicting region associated with the first swappable range of memory addresses associated with the near memory and a second conflicting region associated with the second swappable range of memory addresses associated with the far memory, and where each of the first conflicting region and the second conflicting region having a same size that is selected to be equal to or less than half of the conflict set size. The host OS may have access to a first non-swappable range of memory addresses associated with the near memory and the host OS may further be configured to allocate memory addresses to a second set of tenants, having a higher priority than the first set of tenants, from within only the first non-swappable range of memory addresses associated with the near memory. The conflict set size may be selected to be equal to a size of a page of memory used by the host OS for page-based memory management. A ratio of a size of the first swappable range of memory addresses associated with the near memory and a size of the second swappable range of memory addresses associated with the far memory may be fixed. The host OS may further be configured to increase a size of the first non-swappable range of memory addresses without requiring reprogramming of hardware registers associated with the compute node. The system may further comprise a near memory controller for managing the near memory and a far memory controller, configured to communicate with the near memory controller, for managing the far memory. The near memory controller may further be configured to analyze a metadata portion associated with a cache line to determine whether the near memory contains the cache line or whether the far memory contains the cache line. In addition, the present disclosure relates to a method including provisioning a compute node with both near memory and far memory. 
The method may further include granting to a host operating system (OS), configured to support a first set of tenants associated with the compute node, access to: (1) a first swappable range of memory addresses associated with the near memory and (2) a second swappable range of memory addresses associated with the far memory to allow for swapping of cache lines between the near memory and the far memory. The method may further include allocating memory in a granular fashion to any of the first set of tenants such that each allocation of memory to a tenant includes memory addresses corresponding to a conflict set having a conflict set size. The conflict set may include a first conflicting region associated with the first swappable range of memory addresses associated with the near memory and a second conflicting region associated with the second swappable range of memory addresses associated with the far memory, and where each of the first conflicting region and the second conflicting region having a same size that is selected to be equal to or less than half of the conflict set size. The host OS may have access to a first non-swappable range of memory addresses associated with the near memory and the host OS is further configured to allocate memory addresses to a second set of tenants, having a higher priority than the first set of tenants, from within only the first non-swappable range of memory addresses associated with the near memory. The conflict set size may be selected to be equal to a size of a page of memory used by the host OS for page-based memory management. A ratio of a size of the first swappable range of memory addresses associated with the near memory and a size of the second swappable range of memory addresses associated with the far memory may be fixed. The method may further include increasing a size of the first non-swappable range of memory addresses without requiring reprogramming of hardware registers associated with the compute node. The method may further include analyzing a metadata portion associated with a cache line to determine whether the near memory contains the cache line or whether the far memory contains the cache line. In addition, the present disclosure relates to a method including provisioning a compute node with both near memory and far memory, where a host operating system (OS) associated with the compute node is granted access to a first system address map configuration and a second system address map configuration different from the first system address map configuration. The method may further include granting to the host OS, configured to support a first set of tenants, access to a first non-swappable address range associated with the near memory. The method may further include granting to the host OS, configured to support a second set of tenants, different from the first set of tenants, access to: (1) a first swappable address range associated with the near memory and (2) a second swappable address range associated with the far memory to allow for swapping of cache lines between the near memory and the far memory. The method may further include increasing a size of the first non-swappable address range by switching from the first system address map configuration to the second system address map configuration. The increasing the size of the first non-swappable address range is accomplished without requiring a reprogramming of hardware registers associated with the compute node. 
The first system address map configuration may include a first reserved non-swappable address range mapped to the near memory and a second reserved non-swappable address range mapped to the far memory, where all addresses associated with both the first reserved non-swappable address range and the second reserved non-swappable address range are marked as offline. The second address map configuration may include a portion of the first reserved non-swappable address range marked as online and a portion of the second reserved non-swappable address range marked as online. The second address map configuration may further include a portion of the first swappable address range marked as offline, where the portion of the first swappable address range marked as offline has a same size as a combined size of the portion of the first reserved non-swappable address range marked as online and the portion of the second reserved non-swappable address range marked as online. The method may further include allocating memory in a granular fashion to any of the first set of tenants such that each allocation of memory includes memory addresses corresponding to a conflict set having a conflict set size. The conflict set may include a first conflicting region associated with the first swappable range of memory addresses associated with the near memory and a second conflicting region associated with the second swappable range of memory addresses associated with the far memory, and where each of the first conflicting region and the second conflicting region having a same size that is selected to be equal to or less than half of the conflict set size. The conflict set size may be selected to be equal to a size of a page of memory used by the host OS for page-based memory management. It is to be understood that the methods, modules, and components depicted herein are merely exemplary. Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), System-on-a-Chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc. In an abstract, but still definite sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or inter-medial components. Likewise, any two components so associated can also be viewed as being “operably connected,” or “coupled,” to each other to achieve the desired functionality. Merely because a component, which may be an apparatus, a structure, a system, or any other implementation of a functionality, is described herein as being coupled to another component does not mean that the components are necessarily separate components. As an example, a component A described as being coupled to another component B may be a sub-component of the component B, the component B may be a sub-component of the component A, or components A and B may be a combined sub-component of another component C. 
The functionality associated with some examples described in this disclosure can also include instructions stored in a non-transitory media. The term “non-transitory media” as used herein refers to any media storing data and/or instructions that cause a machine to operate in a specific manner. Exemplary non-transitory media include non-volatile media and/or volatile media. Non-volatile media include, for example, a hard disk, a solid-state drive, a magnetic disk or tape, an optical disk or tape, a flash memory, an EPROM, NVRAM, PRAM, or other such media, or networked versions of such media. Volatile media include, for example, dynamic memory such as DRAM, SRAM, a cache, or other such media. Non-transitory media is distinct from, but can be used in conjunction with transmission media. Transmission media is used for transferring data and/or instruction to or from a machine. Exemplary transmission media include coaxial cables, fiber-optic cables, copper wires, and wireless media, such as radio waves. Furthermore, those skilled in the art will recognize that boundaries between the functionality of the above described operations are merely illustrative. The functionality of multiple operations may be combined into a single operation, and/or the functionality of a single operation may be distributed in additional operations. Moreover, alternative embodiments may include multiple instances of a particular operation, and the order of operations may be altered in various other embodiments. Although the disclosure provides specific examples, various modifications and changes can be made without departing from the scope of the disclosure as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present disclosure. Any benefits, advantages, or solutions to problems that are described herein with regard to a specific example are not intended to be construed as a critical, required, or essential feature or element of any or all the claims. Furthermore, the terms “a” or “an,” as used herein, are defined as one or more than one. Also, the use of introductory phrases such as “at least one” and “one or more” in the claims should not be construed to imply that the introduction of another claim element by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim element to inventions containing only one such element, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an.” The same holds true for the use of definite articles. Unless stated otherwise, terms such as “first” and “second” are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate temporal or other prioritization of such elements.
51,928
11860784
DETAILED DESCRIPTION A technique for operating a cache is disclosed. The technique includes recording access data for a first set of memory accesses of a first frame; identifying parameters for a second set of memory accesses of a second frame subsequent to the first frame, based on the access data; and applying the parameters to the second set of memory accesses. FIG.1is a block diagram of an example computing device100in which one or more features of the disclosure can be implemented. In various examples, the computing device100is one of, but is not limited to, for example, a computer, a gaming device, a handheld device, a set-top box, a television, a mobile phone, a tablet computer, or other computing device. The device100includes, without limitation, one or more processors102, a memory104, one or more auxiliary devices106, a storage108, and a last level cache (“LLC”)110. An interconnect112, which can be a bus, a combination of buses, and/or any other communication component, communicatively links the one or more processors102, the memory104, the one or more auxiliary devices106, the storage108, and the last level cache110. In various alternatives, the one or more processors102include a central processing unit (CPU), a graphics processing unit (GPU), a CPU and GPU located on the same die, or one or more processor cores, wherein each processor core can be a CPU, a GPU, or a neural processor. In various alternatives, at least part of the memory104is located on the same die as one or more of the one or more processors102, such as on the same chip or in an interposer arrangement, and/or at least part of the memory104is located separately from the one or more processors102. The memory104includes a volatile or non-volatile memory, for example, random access memory (RAM), dynamic RAM, or a cache. The storage108includes a fixed or removable storage, for example, without limitation, a hard disk drive, a solid state drive, an optical disk, or a flash drive. The one or more auxiliary devices106include, without limitation, one or more auxiliary processors114, and/or one or more input/output (“IO”) devices. The auxiliary processors114include, without limitation, a processing unit capable of executing instructions, such as a central processing unit, graphics processing unit, parallel processing unit capable of performing compute shader operations in a single-instruction-multiple-data form, multimedia accelerators such as video encoding or decoding accelerators, or any other processor. Any auxiliary processor114is implementable as a programmable processor that executes instructions, a fixed function processor that processes data according to fixed hardware circuitry, a combination thereof, or any other type of processor. The auxiliary processor(s)114include an accelerated processing device (“APD”)116. The one or more IO devices118include one or more input devices, such as a keyboard, a keypad, a touch screen, a touch pad, a detector, a microphone, an accelerometer, a gyroscope, a biometric scanner, or a network connection (e.g., a wireless local area network card for transmission and/or reception of wireless IEEE 802 signals), and/or one or more output devices such as a display, a speaker, a printer, a haptic feedback device, one or more lights, an antenna, or a network connection (e.g., a wireless local area network card for transmission and/or reception of wireless IEEE 802 signals). 
The last level cache110acts as a shared cache for the various components of the device100, such as the processor102, the APD116, and the various auxiliary devices106. In some implementations, there are other caches within the device100. For example, in some examples, the processor102includes a cache hierarchy including different levels such as levels1and2. In some examples, each such cache level is specific to a particular logical division of the processor102, such as a processor core, or a processor chip, die, or package. In some examples, the hierarchy includes other types of caches as well. In various examples, one or more of the auxiliary devices106includes one or more caches. In some examples, the last level cache110is “last level” in the sense that such a cache is the last cache that the device100attempts to service a memory access request from before servicing that request from the memory104itself. For example, if a processor102accesses data that is not stored in any of the cache levels of the processor102, then the processor exports the memory access request to be satisfied by the last level cache110. The last level cache110determines whether the requested data is stored in the last level cache110. If the data is within the last level cache110, the last level cache110services the request by providing the requested data from the last level cache110. If the data is not within the last level cache110, the device100services the request from the memory104. As can be seen, in some implementations, the last level cache110acts as a final cache level before the memory104, which helps to reduce the overall amount of memory access latency for accesses to the memory104. Although techniques are described herein for operations involving the last level cache110, it should be understood that the techniques can alternatively be used in other types of caches or memories. FIG.2is a block diagram of the device100, illustrating additional details related to execution of processing tasks on the APD116, according to an example. The processor102maintains, in system memory104, one or more control logic modules for execution by the processor102. The control logic modules include an operating system120, a driver122, and applications126, and may optionally include other modules not shown. These control logic modules control various aspects of the operation of the processor102and the APD116. For example, the operating system120directly communicates with hardware and provides an interface to the hardware for other software executing on the processor102. The driver122controls operation of the APD116by, for example, providing an application programming interface (“API”) to software (e.g., applications126) executing on the processor102to access various functionality of the APD116. The driver122also includes a just-in-time compiler that compiles shader code into shader programs for execution by processing components (such as the SIMD units138discussed in further detail below) of the APD116. The APD116executes commands and programs for selected functions, such as graphics operations and non-graphics operations, which may be suited for parallel processing. The APD116can be used for executing graphics pipeline operations such as pixel operations, geometric computations, and rendering an image to display device118based on commands received from the processor102. 
The APD116also executes compute processing operations that are not directly related to graphics operations, such as operations related to video, physics simulations, computational fluid dynamics, or other tasks, based on commands received from the processor102or that are not part of the “normal” information flow of a graphics processing pipeline, or that are completely unrelated to graphics operations (sometimes referred to as “GPGPU” or “general purpose graphics processing unit”). The APD116includes compute units132(which may collectively be referred to herein as “programmable processing units”) that include one or more SIMD units138that are configured to perform operations in a parallel manner according to a SIMD paradigm. The SIMD paradigm is one in which multiple processing elements share a single program control flow unit and program counter and thus execute the same program but are able to execute that program with different data. In one example, each SIMD unit138includes sixteen lanes, where each lane executes the same instruction at the same time as the other lanes in the SIMD unit138but can execute that instruction with different data. Lanes can be switched off with predication if not all lanes need to execute a given instruction. Predication can also be used to execute programs with divergent control flow. More specifically, for programs with conditional branches or other instructions where control flow is based on calculations performed by individual lanes, predication of lanes corresponding to control flow paths not currently being executed, and serial execution of different control flow paths, allows for arbitrary control flow to be followed. The basic unit of execution in compute units132is a work-item. Each work-item represents a single instantiation of a shader program that is to be executed in parallel in a particular lane of a wavefront. Work-items can be executed simultaneously as a “wavefront” on a single SIMD unit138. Multiple wavefronts may be included in a “work group,” which includes a collection of work-items designated to execute the same program. A work group can be executed by executing each of the wavefronts that make up the work group. The wavefronts may be executed sequentially on a single SIMD unit138or partially or fully in parallel on different SIMD units138. Wavefronts can be thought of as instances of parallel execution of a shader program, where each wavefront includes multiple work-items that execute simultaneously on a single SIMD unit138in line with the SIMD paradigm (e.g., one instruction control unit executing the same stream of instructions with multiple data). A command processor137is present in the compute units132and launches wavefronts based on work (e.g., execution tasks) that is waiting to be completed. A scheduler136is configured to perform operations related to scheduling various wavefronts on different compute units132and SIMD units138. The parallelism afforded by the compute units132is suitable for graphics related operations such as pixel value calculations, vertex transformations, tessellation, geometry shading operations, and other graphics operations. A graphics processing pipeline134which accepts graphics processing commands from the processor102thus provides computation tasks to the compute units132for execution in parallel. 
The compute units132are also used to perform computation tasks not related to graphics or not performed as part of the “normal” operation of a graphics processing pipeline134(e.g., custom operations performed to supplement processing performed for operation of the graphics processing pipeline134). An application126or other software executing on the processor102transmits programs (often referred to as “compute shader programs,” which may be compiled by the driver122) that define such computation tasks to the APD116for execution. Although the APD116is illustrated with a graphics processing pipeline134, the teachings of the present disclosure are also applicable for an APD116without a graphics processing pipeline134. Various entities of the APD116, such as the compute units132, can make accesses to the last level cache110. FIG.3illustrates a dynamic cache policy system300for a cache302, according to an example. The system300includes the APD116, the cache302, and a dynamic cache policy controller304. In some examples, the cache302is the last level cache110of FIG.1. In other examples, the cache302is any other cache that services memory access requests by the APD116. In some examples, the dynamic cache policy controller304is a hardware circuit, software executing on a processor, or a combination thereof. In some examples, the dynamic cache policy controller304is included in the cache302or is separate from the cache. The APD116processes data for a series of frames306. Each frame306is a full image rendered by the APD116. To generate graphical data over time, the APD116renders a series of frames, each of which includes graphical objects for a particular instant in time. To generate each frame306, various components of the APD116access data stored in memory. These accesses are serviced by the cache302, which stores copies of accessed data. Accessed data includes a wide variety of data types for rendering graphics, such as geometry data (e.g., vertex coordinates and attributes), texture data, pixel data, or other types of data. Aspects of cache performance such as hit rate (ratio of hits in the cache to total number of accesses) depend in part on how the cache302retains data in the event of an eviction. Cache replacement policies indicate which cache lines to evict when a new cache line is to be brought into a cache. A wide variety of replacement policies and other techniques are known. However, many of these techniques suffer from inaccuracies due to overly general rules that may not apply to particular items of data. For example, a least recently used replacement policy is not appropriate for all data access patterns. For at least these reasons, the dynamic cache policy controller304dynamically applies cache management policies to access requests from the APD116based on observed history of accesses. The dynamic cache policy controller304maintains cache usage data that records memory access information over the course of a frame. The memory access information is related to aspects of memory accesses such as re-reference interval and re-reference intensity for memory pages. A re-reference interval is the “time” (for example, number of clock cycles, number of instructions, or other measure of “time”) between accesses to a particular memory page. A re-reference intensity is the frequency of accesses to a particular memory page in a given amount of “time” (for example, number of clock cycles, number of instructions, or other measure of time). 
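As a rough illustration of how these two quantities could be derived from a recorded trace of accesses, consider the following sketch. The structure names, the page granularity, and the use of a simple access counter as the measure of "time" are assumptions made for illustration only and are not taken from the figures.

#include <cstdint>
#include <unordered_map>
#include <vector>

// One recorded access: the page that was touched and the "time" (here an
// access counter or cycle count) at which the access occurred.
struct RecordedAccess { uint64_t page; uint64_t time; };

// Per-page summary used to derive re-reference interval and intensity.
struct PageStats {
    uint64_t accessCount = 0;   // raw material for re-reference intensity
    uint64_t lastTime = 0;
    uint64_t intervalSum = 0;   // raw material for the average re-reference interval
};

std::unordered_map<uint64_t, PageStats> summarize(const std::vector<RecordedAccess>& trace) {
    std::unordered_map<uint64_t, PageStats> stats;
    for (const RecordedAccess& a : trace) {
        PageStats& s = stats[a.page];
        if (s.accessCount > 0) s.intervalSum += a.time - s.lastTime;  // interval since the last touch
        s.lastTime = a.time;
        ++s.accessCount;  // intensity is this count divided by the length of the observation window
    }
    return stats;
}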
Based on the memory access information from previous frames, the dynamic cache policy controller304applies memory access policies to memory access requests from the APD116in a current frame. More specifically, patterns of memory accesses are generally very similar from frame to frame. Therefore, it is possible to make observations about memory accesses in one frame and to use those observations to control the memory access policy for those accesses in subsequent frames. In other words, based on observations made about particular memory accesses in a particular frame, the dynamic cache policy controller304controls the memory access policy for the same memory accesses in one or more subsequent frames. Memory accesses are considered “the same” from frame to frame if the memory accesses occur in approximately the same place in an order of memory accesses of a frame. More specifically, in general, time-adjacent frames render very similar graphical information and thus have a very similar pattern of memory accesses. The term “approximately” allows for the possibility that some memory accesses that occur in one frame may not occur in another frame, since the frames can be somewhat different, meaning that the order of memory accesses between frames will not be identical. A memory access policy is a policy that controls how a cache302manages information for cache lines targeted by a memory access request. In an example, the memory access policy controls three aspects: the age at which a new cache line is brought into the cache (e.g., as the result of a miss), the manner in which ages of the cache lines are updated when a miss occurs (e.g., how cache lines other than the accessed cache line are aged), and the age that a cache line is updated to when a hit occurs for the cache line (e.g., what age is the cache line set to when it is accessed). These features are discussed in greater detail elsewhere herein. It should be noted that the age referred to is an age in a least recently used cache replacement policy. More specifically, the cache302is a set-associative cache that includes a number of sets each having a plurality of ways. Any given cache line is mapped to at least one set and can be placed into any way in that set, but cannot be placed in any set that the cache line is not mapped to. A cache line is defined as a portion of the memory address space. In an example, each cache line has a cache line address which includes a certain number of the most significant bits of the address, such that the remaining least significant bits specify an offset in that cache line. Typically, though not necessarily, a cache line is the minimum unit of data read into or written back from the cache. When a cache line is to be brought into a set of the cache302and there are no free ways for the cache line, the cache302evicts one of the cache lines in the set. The replacement algorithm selects the oldest cache line as the cache line to evict based on the ages of the cache lines in that set. The cache302also places the new cache line into the cache302with a “starting age.” In the event of a hit, the cache302updates the age of the cache line for which the hit occurred based on the memory access policy. Additional details are provided elsewhere herein. FIG.4illustrates additional details of the dynamic cache policy controller304, according to an example. The dynamic cache policy controller304stores and maintains cache usage data402. 
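Before turning to the details of FIG.4, the three aspects of a memory access policy described above can be pictured as a small record. This is a sketch only; the field names and the assumption of a small LRU-style age counter (0 meaning newest) are illustrative and do not come from the disclosure.

#include <cstdint>

// Hypothetical encoding of one memory access policy.
struct MemoryAccessPolicy {
    uint8_t insertionAge;  // age given to a cache line brought in on a miss
    uint8_t ageTrigger;    // on a miss, only lines with ages above this are aged
    uint8_t ageOnHit;      // age a cache line is set to when a hit occurs
};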
The cache usage data402includes the memory access information for “sets” of memory accesses in a frame. Each access data item406includes information that characterizes a certain set of one or more memory accesses. In some examples, each set of memory accesses is one or more memory accesses that occur within a certain amount of “time” and within a certain memory address range (such as a page). In some examples, time is represented by real time, cycle count, instruction count, or any other measure of time. Thus, each data item406includes information about memory accesses that occur within a particular period of time within a frame and that occur to the same memory address range. Put differently, in different examples, accesses occurring within a threshold number of instructions, a threshold number of computer clock cycles, or a threshold amount of time are grouped together into the set that corresponds to an access data item406. In some examples, the dynamic policy controller304records the access data items406as the corresponding accesses occur. In some examples, the dynamic policy controller304calculates additional information for each access data item406in response to the frame ending or at some other point in time (e.g., when the “time” for an access data item406is over). In one example, the dynamic policy controller304determines the re-reference interval and the re-reference intensity for the accesses corresponding to the access data item406. Then, the dynamic policy controller304determines a policy corresponding to the access data item406according to the re-reference interval and re-reference intensity for the accesses corresponding to the access data item406. In some examples, the dynamic policy controller304determines the policy based on the re-reference interval and re-reference intensity from multiple frames. In some examples, the dynamic policy controller304records this policy into the access data item406to use for the subsequent frame. The access data item406thus indicates, for a particular page and a particular “time” in the frame, what policy to use. This policy information allows the dynamic cache policy controller304to “condition” accesses made in frames subsequent to the frame(s) in which the access data items406are recorded. “Conditioning” memory accesses means setting parameters for the memory accesses, where the parameters correspond to the determined re-reference interval values and the determined re-reference intensity values. For example, in a first frame, the dynamic cache policy controller304records a first access data item406for a set of memory accesses that access the same page. The first access data item406indicates a certain policy for that set of memory accesses. In a subsequent frame, the dynamic cache policy controller304identifies the same accesses (that is, the accesses of the first access data item406) and “conditions” those accesses according to the policy stored in the first access data item406. More specifically, the dynamic cache policy controller304configures those accesses to use the policy that corresponds to the first access data item406. Such a policy indicates that these accesses in the subsequent frame should be made according to a certain set of settings. In some examples, the policy indicates the following: how to age cache lines other than the accessed cache line and in the same set as the cache line, in the event of a miss; what age to insert a new cache line into the cache line; and what age to set a cache line to when a hit occurs. 
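One way to picture the policy selection step is a simple mapping from the observed re-reference interval and re-reference intensity to a set of aging parameters. The thresholds and the chosen ages below are invented for illustration; the disclosure does not specify particular values.

#include <cstdint>

struct Policy { uint8_t insertionAge; uint8_t ageTrigger; uint8_t ageOnHit; };

// Illustrative mapping only: hot, tightly re-referenced pages are kept
// young; pages touched only once or twice are inserted close to the
// eviction age so they leave the cache sooner.
Policy choosePolicy(double averageInterval, double intensity) {
    if (intensity > 8.0 && averageInterval < 256.0) return {0, 0, 0};
    if (intensity <= 1.0)                           return {2, 1, 1};
    return {1, 0, 0};
}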
Aging cache lines in the event of a miss is now described. When a miss occurs in the cache302, the cache302identifies the cache line to bring into the cache. The cache302also determines which set the identified cache line is to be brought into. If any cache line in that set has an age that is equal to or greater than a threshold (e.g., 3 if the age counter is two bits), then the cache302selects that cache line for eviction. If no cache line has an age that is equal to or greater than the threshold, then the cache302ages the cache lines of the set. The setting of how cache lines are to be aged in the event of a miss indicates which cache lines of a set are aged in the event that a miss occurs in that set and no cache line has an age that is equal to or greater than a threshold. In some examples, this setting indicates, by age, which cache lines will be aged in the event of a miss. In other words, the setting indicates the ages of cache lines that will be aged in the event of a miss. In an example of the setting, the setting indicates that cache lines of all ages lower than the above threshold (the “eviction threshold”) are aged. In another example, the setting indicates that cache lines having ages above an age trigger threshold, which can be lower than the eviction threshold, are aged. In this situation, cache lines with ages lower than or equal to the age trigger threshold are not aged in the event of a miss. In sum, in some examples, the re-reference interval and re-reference intensity of a set of memory accesses for one frame indicates how to age cache lines (specifically, the ages of cache lines that will be aged) in the event of a miss for the set of memory accesses in a subsequent frame. The setting of what age to insert a new cache line into the cache is now described. When a cache line is brought into the cache as the result of a miss, the cache line is initially given a particular age. In some examples, this “starting” age is the lowest possible age, some intermediate age that is above the lowest possible age, or the maximum age. Again, this setting is dependent on the memory accesses reflected in an access data item406. Thus, the access data item406, corresponding to a memory page, indicates the starting age for a cache line in the event that a miss occurs and the cache line is copied into the cache. What age to set a cache line to when a hit occurs is now described. When a hit occurs to a cache line, the cache302updates the age of that cache line (e.g., to indicate that the cache line is “newer”). In some examples, this setting indicates that the cache line is to have a particular age (such as 0) when a hit occurs for that cache line. In other examples, the setting indicates that the cache line is to have a different age such as 1, 2, or 3 (for a two bit age counter) in the event of a hit. In other examples, this setting indicates that the age of a cache line is to be modified in a particular manner, such as by decrementing the age by a number such as 1. In sum, the access data item406, corresponding to a memory page, indicates how the age of a cache line is modified when a hit occurs for that cache line. It should be understood that conditioning a particular cache access according to a policy means causing the cache access to occur with that policy. For the insertion age policy, this policy is applied for an access conditioned according to that policy if that access results in a miss. The cache line that is to be brought in is brought in according to the age specified by the policy. 
For the aging policy, this occurs for the access conditioned according to the policy in the event that that access results in a miss. In this situation, the aging policy causes the other cache lines for the same set to be aged as specified by the policy. For the policy that defines what age a cache line will be set to in the event of a hit, when an access occurs that is conditioned according to a policy, and that access results in a hit, the policy causes the age of the cache line targeted by the access to be set according to the policy. FIG.5illustrates operations for recording access patterns, according to an example. FIG.5illustrates a single frame—frame 1508(1) in which a number of accesses512are occurring. A first set of accesses—accesses512(1) to512(4) are to a first page P1 and occur within a certain time period. The dynamic cache policy controller304identifies the three sets of accesses502and generates access data items506corresponding to each of the sets of accesses502. These data items indicate the access times and the addresses (including or represented as pages) targeted by the accesses. For the first set502(1), the dynamic cache policy controller304notes that that set has a high re-reference intensity and a low re-reference interval. There is a high re-reference intensity because there are a lot of accesses to the same page, and there is a low re-reference interval because the accesses occur relatively close together in “time.” The dynamic cache policy controller304thus records the access data item506(1) that indicates a policy associated with this re-reference interval and re-reference intensity in the access data item506(1). For the second set of accesses502(2), which includes access512(5) and is made to page P2, there is a low re-reference intensity and low re-reference interval. Thus, the dynamic cache policy controller304records a policy associated with this combination into access data item506(2). For the third set of accesses502(3), which includes accesses having a low re-reference intensity and high re-reference interval, reflected in accesses512(6)-512(7), and made to page P3, the dynamic cache policy controller304records access data item506(3), which records a policy that reflects the re-reference intensity and re-reference interval of set502(3). FIG.6is a block diagram illustrating utilization of the access data items to condition the accesses in a second frame508(2), according to an example. Three sets602of data accesses612are illustrated. The dynamic cache policy controller304conditions these accesses according to the access data items506created in frame 1508(1) of FIG.5. For the data accesses612of the first set602(1), the dynamic cache policy controller304identifies the first access data item506(1). This access data item506indicates a certain manner in which to condition the accesses612of the set602(1). As described elsewhere herein, the access data item506indicates a policy, which indicates one or more of how to age cache lines when a miss occurs, what age to set new cache lines to, and what age to set cache lines to in the event of a hit. Additionally, this policy is dependent on the data recorded in frame 1508(1) about the same accesses from that frame. The dynamic cache policy controller304causes the policy applied to the accesses of a given set602to be applied based on the recorded access data item506for the same accesses in a previous frame. Similar activity is applied for set602(2) and set602(3). 
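To make the effect of conditioning concrete, the sketch below combines the aging, insertion-age, and eviction behavior described above for a single set on a miss. The set layout, the two-bit ages, the eviction threshold of 3, and the fallback used when no line reaches the threshold are assumptions for illustration, not a definitive implementation.

#include <cstdint>
#include <vector>

struct Way { bool valid = false; uint64_t tag = 0; uint8_t age = 3; };

constexpr uint8_t kEvictionThreshold = 3;  // assumed two-bit age counter

// On a miss: evict a line whose age has reached the threshold if one
// exists; otherwise age only the lines selected by the policy's trigger,
// then fall back to the oldest line. The new line is inserted at the
// policy's insertion age.
void handleMiss(std::vector<Way>& set, uint64_t newTag, uint8_t insertionAge, uint8_t ageTrigger) {
    auto findVictim = [&]() -> Way* {
        for (Way& w : set)
            if (!w.valid || w.age >= kEvictionThreshold) return &w;
        return nullptr;
    };
    Way* victim = findVictim();
    if (victim == nullptr) {
        for (Way& w : set)
            if (w.age > ageTrigger && w.age < kEvictionThreshold) ++w.age;
        victim = findVictim();
    }
    if (victim == nullptr) {
        victim = &set.front();
        for (Way& w : set) if (w.age > victim->age) victim = &w;
    }
    *victim = Way{true, newTag, insertionAge};
}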
The dynamic cache policy controller304determines which accesses in a particular frame are “the same as” accesses in a previous frame in the following manner. In each frame, memory accesses occur in a particular sequence. This sequence is mostly repeated from frame to frame. Thus, the dynamic cache policy controller304tracks the sequence of memory accesses to identify which accesses are “the same” from frame to frame. It is true that some accesses may differ from frame to frame, so the dynamic cache policy controller304accounts for such differences. For example, the dynamic cache policy controller304may notice omitted accesses, added accesses, or other modifications, and account for those. In some examples, memory accesses that occur to the same page and at the same “time” in different frames are considered to be “the same” memory accesses. In examples, the “time” is defined based on access order. For example, the first through one hundredth accesses are in a first “time,” the one hundred first through two hundredth are in a second “time,” and so on. Any other technically feasible means for determining the time is possible. It should be understood that the manner in which accesses are “conditioned” is based on the access data items506recorded in a previous frame. Thus, for one particular access data item506, which indicates a particular combination of re-reference interval and re-reference intensity, corresponding accesses are made with a first set of parameters including manner of aging cache lines, age to insert new cache lines, and age to update hit cache lines. For another particular access data item506, which indicates a different combination of re-reference interval and re-reference intensity, corresponding accesses are made with a second set of parameters including manner of aging cache lines, age to insert new cache lines, and age to update hit cache lines. At least one of the parameters of the second set is different from at least one of the parameters of the first set. FIG.7is a flow diagram of a method700for managing memory accesses, according to an example. Although described with respect to the system ofFIGS.1-6, those of skill in the art will understand that any system, configured to perform the steps of the method700in any technically feasible order, falls within the scope of the present disclosure. At step702, the dynamic cache policy controller304records access data items406for memory accesses of a first frame. Each access data item406corresponds to a set of one or more memory accesses. In some examples, the memory accesses of each set share a memory page or share a different subset of the memory. Each access data item406includes information characterizing the memory accesses of the set corresponding to that access data item406. In some examples, this information is associated with one or both of re-reference interval (e.g., “time” between references to the same address or page) or re-reference intensity (e.g., number of accesses again within a particular window of “time”—meaning number of instructions, clock time, clock cycles, or other measure of time). In some examples, the dynamic cache policy controller304records the access data items406for multiple sets of accesses within the frame. At step704, for a second frame subsequent to the first frame, the dynamic cache policy controller304identifies parameters for corresponding memory accesses to the accesses for which access data was recorded in the first frame. 
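A minimal sketch of the "same access" lookup follows: an access is keyed by the order bucket it falls into within the frame plus the page it touches, and the key is used to find the policy recorded in the previous frame. The bucket width of 100 mirrors the example above; the hash and the data structure are assumptions made for illustration.

#include <cstdint>
#include <functional>
#include <unordered_map>

struct AccessKey {
    uint64_t bucket;  // e.g., accesses 0-99 form bucket 0, accesses 100-199 form bucket 1, ...
    uint64_t page;
    bool operator==(const AccessKey& o) const { return bucket == o.bucket && page == o.page; }
};

struct AccessKeyHash {
    size_t operator()(const AccessKey& k) const {
        return std::hash<uint64_t>()((k.bucket << 1) ^ k.page);
    }
};

// Returns the policy recorded for the "same" access in a previous frame,
// or nullptr if that access did not occur in the previous frame.
template <typename Policy>
const Policy* lookupPolicy(const std::unordered_map<AccessKey, Policy, AccessKeyHash>& previousFrame,
                           uint64_t accessIndex, uint64_t page) {
    auto it = previousFrame.find(AccessKey{accessIndex / 100, page});
    return it == previousFrame.end() ? nullptr : &it->second;
}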
The parameters include information indicating how such memory accesses are to be conditioned when provided to the cache302to be satisfied. In some examples, the memory accesses for which parameters are identified are the same memory accesses for which the corresponding access data is stored in the first frame. In other words, in some examples, in step704, the dynamic cache policy controller304identifies how to condition accesses of the second frame based on the access data recorded for those accesses in the first frame. At step706, the dynamic cache policy controller304applies the identified parameters to the memory accesses in the second frame. In some examples, the parameters indicate how to age cache lines in the event of a miss as described elsewhere herein. In some examples, the parameters indicate what age to insert new cache lines into the cache302in the event of a miss. In some examples, the parameters indicate what age to set a cache line to in the event of a hit. Applying these parameters to memory accesses is described elsewhere herein. The elements in the figures are embodied as, where appropriate, software executing on a processor, a fixed-function processor, a programmable processor, or a combination thereof. The processor102, last level cache110, interconnect112, memory104, storage108, various auxiliary devices106, APD116and elements thereof, and the dynamic cache policy controller304, include at least some hardware circuitry and, in some implementations, include software executing on a processor within that component or within another component. It should be understood that many variations are possible based on the disclosure herein. Although features and elements are described above in particular combinations, each feature or element can be used alone without the other features and elements or in various combinations with or without other features and elements. The methods provided can be implemented in a general purpose computer, a processor, or a processor core. Suitable processors include, by way of example, a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) circuits, any other type of integrated circuit (IC), and/or a state machine. Such processors can be manufactured by configuring a manufacturing process using the results of processed hardware description language (HDL) instructions and other intermediary data including netlists (such instructions capable of being stored on a computer readable media). The results of such processing can be maskworks that are then used in a semiconductor manufacturing process to manufacture a processor which implements features of the disclosure. The methods or flow charts provided herein can be implemented in a computer program, software, or firmware incorporated in a non-transitory computer-readable storage medium for execution by a general purpose computer or a processor. Examples of non-transitory computer-readable storage mediums include a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs).
34,747
11860785
While the invention is susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. It should be understood, however, that the invention is not intended to be limited to the particular forms disclosed. Rather, the invention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims. DETAILED DESCRIPTION FIG.1Ashows a multi-core processing system100that includes a series of cores or processor units102,104, and106. The system100may include any device that includes multiple processing cores such as a multi-core CPU, GPU, and APU and have any number of processors that operate independently from each other. Each of the processor units102,104, and106may operate in parallel to perform program functions asynchronously. Each of the processor units102,104, and106is coupled to a common external memory110, which is RAM in this example. Each of the processor units102,104, and106may read data stored in an area of the common memory110when executing programming tasks and functions or rely on faster internal cache memory. The multi-core processing system100executes a program by distributing the tasks or functions in the program among worker threads associated with the processor units102,104, and106.FIG.1Ashows a detailed block diagram of the processor unit102. The other processor units104and106are not shown but may have similar components. The processor unit102includes processor circuitry120that accesses an internal cache memory122. As is understood, the cache memory122is extremely fast and allows efficient execution of instructions such that if data is required by the instructions it can be fetched from the cache122in comparison to fetching such data from the common external memory110. A command system130provides an interface with the common external memory110. As will be explained, the command system130allows the efficient use of internal cache memory by minimizing memory transfer based on calls to program functions made by the processor unit102. This system facilitates the efficient transfer and distribution of program state data and related commands within a processor unit such as the processor unit102of a multi-core system100inFIG.1Afor deferred operations from a program. In this example the command system130is a program that is run by the processing unit102as part of the operating system. The command system130may be executed by any appropriate processing unit or in a separate dedicated chip. As shown inFIG.1B, the command system130includes a direct state data and command module132, an indirect state data and command module134and program logic136, which binds the command modules132and136together. The direct state data and command module132is referenced as the “direct command module” and the indirect state data and command module134is referenced as the “indirect command module.” Commands and state data may be treated similarly as both reside in external memory110. Other direct modules, such as direct modules138, may be used to handle data for different delegates as will be explained below. The indirect command module134is related to one or more direct modules such as the direct command modules132and138. The direct and indirect modules of the command system130access a payload repository area140, which resides in an area of the external memory110. 
The payload repository area140serves as storage for data payloads that are used by the command modules132and134to be loaded into the cache memory122based on user functions. The payload repository area140includes multiple payload repositories for each direct and indirect command module. In this example, the payload repository area140includes an indirect payload repository142and direct payload repositories144,146, and148. The indirect payload repository142stores payloads of data required by the indirect command module134. Each direct module such as the direct command modules132and138have a corresponding direct payload repository such as payload repositories144and146to store payloads of data. The payloads of data are used by functions of programs when executed by the processor unit102. The external memory110also stores programs for execution by the system100. An example of such a program is a program150that includes various program functions that are executed by a processor unit or units in the system100. The program functions are referenced as delegates as their execution is deferred. The program functions typically require access to commands and state data to be executed on one of the processor units such as the processor unit102. As explained above, the time to access such commands and state data influences the speed of executing the function by the processing unit102. The command system130minimizes the access time for command and state data by minimizing accesses to the internal cache memory122. The direct state data and command module132or “direct command module” is responsible for associating a payload repository buffer with a set of user defined CPU program instructions or delegates from the program150, which operate on the state and command data stored within the payload repository buffer. This association may happen at run-time or compile time of the program150. The payload stored in the payload repository buffer includes a payload header and a payload field that includes the actual payload. The payload may include commands and state data that are communicated and transferred between program functions of the program150and are selected depending on the requirements of the program function. This association between a payload and a CPU instruction based on a program function (delegate) is facilitated by the payload header, which is used to map the delegates to payloads associated with the direct command module132and used to reference the delegates. FIG.2shows a block diagram200of the direct command module132allowing access to the direct payload repository144inFIG.1B. The direct payload repository144includes payload repository buffers202,204, and206. Each payload repository buffer is numbered from buffer 0 (202) to buffer N (206). There may be any number of payload repository buffers between buffer 0 and buffer N depending on the payloads required for the program functions in a program such as the program150.FIG.2shows that payloads are stored in the payload repository buffers202,204, and206. The multiple payload repository buffers such as the payload repository buffer202allow the processor unit102to work asynchronously with other delegates since each payload repository buffer contains the necessary data to perform the function (delegate) independent of other delegates. The payload header provides context with which the direct command module132can operate on and transfer payloads. 
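A sketch of how the pieces described above could be laid out in memory follows. The field names and sizes are assumptions made for illustration; the disclosure only requires that the header carry enough information to identify the delegate and interpret the payload.

#include <cstdint>
#include <vector>

// Hypothetical header stored in front of each payload in a repository buffer.
struct PayloadHeader {
    uint32_t payloadId;    // unique within one payload repository
    uint32_t payloadSize;  // lets a flush step from one entry to the next
};

// One repository buffer: headers and payload bytes stored back to back so
// that a flush can iterate header, payload, header, payload, ...
struct PayloadRepositoryBuffer {
    std::vector<uint8_t> bytes;
    size_t readOffset = 0;
    bool empty() const { return readOffset >= bytes.size(); }
};

// A direct payload repository with buffer 0 through buffer N, as in FIG.2;
// one buffer can receive new payloads while another is being flushed.
struct DirectPayloadRepository {
    std::vector<PayloadRepositoryBuffer> buffers;
};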
The deferred program operation (delegate) used in conjunction with the payload may be any typical reference to program code of the program150including, but not limited, to static/global program methods and object member functions. The direct command module132also accesses delegates to create a payload header map210that associates payloads with specific delegates. The direct command module132is associated with the functions of binding delegates, unbinding delegates, and flushes. The bind delegate to payload identifier function takes a user function or delegate of the program150and associates it with a payload ID header for a particular delegate arriving at the command system130. The bind delegate function is expressed as:BindDelegateToPayloadIdentifier(<DirectModule>,<uniquePayloadID>, <delegate>, <delegateData:Optional>); The opposite unbind delegate function unbinds a delegate from a payload ID and disassociates the user function or delegate with the payload header ID. The unbind delegate function is phrased as:UnBindDelegateToPayloadIdentifier(<DirectModule>,<uniquePayloadID>,<delegate>); The flush function iterates through all of the current active stored payloads in the direct payload repository144, reads the payload header IDs, and, using the map210, makes the necessary command or state data of the payloads from the direct payload repository144inFIG.1available to the delegates. The flush function may be expressed as:Flush(<DirectModule>){DirectModule.repository.LockBufferSwap( );//optionalDirectModule.repository.SwapReceivingStream( )while(!DirectModule.repository.empty( )){DirectModule::Repository::Header payloadHeader=DirectModule.repository.GetNextHeader( );DirectModule::Repository::Payload payload=repository.GetNextPayload( );DirectModule::Delegate delegate=DirectModule.LookUpBoundDelegate(payloadHeader);delegate.DoBoundMethod(payload);}DirectModule.repository.UnLockBufferSwap( );} As shown above, the flush function causes the direct command module132to iterate through all the currently active stored payloads in the payload repository buffers202,204, and206, and, using the associated headers, makes the payload data required by the appropriate delegate available from the repository buffers thereby bypassing the cache memory122. The flush function increases efficiency of delegate execution by minimizing memory accesses for both indirect and direct cache memory. The above described bind delegate function, BindDelegateToPayloadIdentifer( ) is responsible for associating a delegate referenced by the direct command module132with the payload ID header (uniquePayloadID field) of a payload having data needed to execute the delegate. The bind delegate function is shown in the flow diagram300inFIG.3. The delegate and data map210inFIG.2is accessed by the direct command module132as shown inFIG.3. The map210includes entries322,324, and326created by the bind delegate function (320). Each payload includes a payload ID header field such as the payload ID header field310for a first payload and a payload ID header field312for a second payload. The bind delegate function (320) binds a delegate330, which is identified by a user field332, a handle payload field334, and a payload data field336to the payload ID field310. As shown inFIG.3, the first payload identified by the payload ID header field310is bound to the first delegate (user1). 
The user field332and the user handle field334identify the delegate where the user field is an instance of a class while the user handle field is a function that is specific to the type of class to operate on an instance. The user data field336is an optional field that includes additional information such as whether the delegate should be included in a broadcast or flush function. The created entry322therefore binds the delegate330with the first payload identified by the payload ID field310. The same payload may be bound to other delegates. For example, inFIG.3, the first payload as identified by the payload ID field310is also bound to a second delegate340with an associated user field, handle field, and user data field. The binding of the first payload to the second delegate340results in the creation of the entry324in the map210. Multiple payloads may be associated with a single delegate. For example, another payload such as a second payload is associated with the first delegate330. This association is reflected in the creation of the entry326by the bind delegate function (320) inFIG.3. When payloads are stored within the direct payload repository144and associated with a delegate by the direct command module132, their associated payload headers are used to find the correct delegate via the map210inFIG.2when the flush function is performed. On the above described flush function, the delegate is executed with the associated payload as an argument for the delegate. FIG.4is a flow diagram400showing the execution of the flush function by the direct command module132. In the flush function (402), the direct command module132reads the delegate to the payload header map210and uses the map210to access a payload repository buffer404in the direct payload repository144. The flush function causes the direct command module132to read a payload ID header410from the payload repository buffer404and load a corresponding payload412. The payload412is handled by a handle field414associated with a delegate416and made accessible to the delegate416. The flush function continues to a next payload ID header420and loads a corresponding payload422and continues for each payload, such as the payload ID header430and corresponding payload432in the payload repository buffer404, until the buffer is empty. Additional state data, which may modify the behavior of the delegate or flush, may also be associated with each binding of a delegate and the payload header via a user data field such as the user data field336. The data binding is optional and may be used during communication to augment the behavior of the flush function. FIG.5shows a block diagram of the indirect command module134inFIG.1. The indirect state data and command module or “indirect command module”134is responsible for associating the payloads in an indirect payload repository buffer such as those of an indirect payload repository such as the indirect payload repository142inFIG.1Bwith one or more direct modules such as the direct command module132inFIG.2. As shown inFIG.5, the indirect payload repository142includes multiple payload repository buffers502,504, and506. Similar to the direct module, there may be any number of payload repository buffers for the indirect command module134. The indirect command module134creates and accesses an indirect to direct translation map510. 
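The bindings described above amount to a many-to-many map from payload identifiers to delegates. The sketch below is one possible shape for that map; the std::function wrapper and the field names are assumptions, since the disclosure describes each entry only in terms of user, handle, and optional user data fields.

#include <cstdint>
#include <functional>
#include <map>

// One bound delegate: the handler stands in for the user/handle pair and
// the optional user data field.
struct BoundDelegate {
    std::function<void(const void* payload, uint32_t size)> handler;
    const void* userData = nullptr;
};

// One payload ID can be bound to several delegates, and one delegate can
// be bound under several payload IDs.
using DelegateMap = std::multimap<uint32_t, BoundDelegate>;

void bindDelegateToPayloadIdentifier(DelegateMap& map, uint32_t payloadId, BoundDelegate delegate) {
    map.emplace(payloadId, std::move(delegate));
}

// A flush would then invoke every delegate bound to a stored header's ID:
//   auto range = map.equal_range(header.payloadId);
//   for (auto it = range.first; it != range.second; ++it)
//       it->second.handler(payloadBytes, header.payloadSize);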
The indirect command module134stores a mapping of direct modules such as the direct command module132inFIG.2to specific payloads via additional data in the indirect to direct translation map510, which is provided to the indirect command module134by the direct module132at the time of association. In this way the indirect command module134leverages some of the existing functionality from the payload repository of direct command modules, such as the direct command module132, but offers additional indirect access to multiple direct command modules. The payload data may be completely streamed from the payload repository of the indirect command module to the payload repositories of the direct command modules. Additionally since the mapping from direct command modules to an indirect command module is stored in the local memory of the indirect command module, translation will be faster and more cache friendly. It also allows for the translated payload header for the direct command module to be streamed as well since it may be composed entirely off of data already in the cache from the translation map510associated with the indirect command module134. The indirect command module134is associated with functions including a register direct module to payload function, an unregister direct module to payload function, and a flush function. These functions are operated as follows. The register direct module to payload function may be called as follows:RegisterDirectModuleToPayloadIdentifier(<IndirectModule>, <uniquePayloadID>, <DirectModule>,<Delegate>, <DelegateData:Optional>);UnRegisterDirectModuleToPayloadIdentifier(<IndirectModule>, <uniquePayloadID>, <DirectModule>,<Delegate>);Flush(<IndirectModule>){IndirectModule.repository.LockBufferSwap( );//optionalIndirectModule.repository.SwapReceivingStream( );while(!IndirectModule.repository.empty( )){IndirectModule::Repository::Header payloadHeader=IndirectModule.repository.GetNextHeader( )IndirectModule::Repository::Payload payload=IndirectModule.repository.GetNextPayload( )DirectModule:: directModule=indirectModule.GetRegisteredModules(payloadHeader); while(directModule){DirectModule::Repository::PayloadHeader dmPayloadHeader;IndirectModule.Translate(dmPayloadHeader, payloadHeader, directModule);Push(directModule.repository, dmPayloadHeader, payload, <ThreadID:Optional>);DirectModule.repository.UnLockBufferSwap( )}}IndirectModule.repository.UnLockBufferSwap( )} The register direct module to payload function, (RegisterDirectModuleToPayloadIdentifier), is responsible for associating a delegate and a direct module with a unique payload ID. Multiple direct module and payload pairs may be associated with a single unique payload identifier. This allows a single payload to be communicated to multiple direct modules. Additional program state data may also be stored with each binding of the delegates to a direct module using a direct module unique ID (DirectModuleuniquePayloadID). This can be used by the flush function to modify its behavior. The unregister direct module to payload function, (UnRegisterDirectModuleToPayloadIdentifier), will remove an associated delegate and direct module with a unique payload ID. The flush function is responsible for iterating through all the currently active stored payloads and using the associated payload header, communicating the translated payload header (specific to each direct command module) and payload data to the repositories associated with direct command modules. 
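The registration data kept by the indirect command module can be pictured as a small table from indirect payload identifiers to the direct modules registered for them, each paired with the translated header that module expects. The names below are assumptions made for illustration.

#include <cstdint>
#include <unordered_map>
#include <vector>

struct DirectModule;  // owns its own payload repository and delegate map

// One registration: which direct module receives the payload and which
// payload ID its own repository uses for it.
struct Registration {
    DirectModule* module = nullptr;
    uint32_t directPayloadId = 0;
};

// Indirect-to-direct translation map, keyed by the indirect payload ID.
using TranslationMap = std::unordered_map<uint32_t, std::vector<Registration>>;

// During an indirect flush, each stored payload is pushed once per
// registered direct module; only the header is translated, while the
// payload bytes are streamed unchanged:
//   for (const Registration& r : translationMap[header.payloadId])
//       push(*r.module, r.directPayloadId, payloadBytes, payloadSize);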
As explained above, the flush functions performed by the direct command modules communicate the payload data to the associated delegates for execution. Due to the associated data stored when the direct module and delegate were registered, this communication may be executed via streaming or direct memory access (DMA) commands (based on the model or type of CPU) since the header for the payload of the indirect command module may be translated into headers consumable by the direct command modules in a small fixed local cache such as the cache122. The resulting payload header is then streamed to each repository of the associated direct command module. The payloads themselves do not need translation, and as such may be streamed directly. This streaming is facilitated as the flush function only keeps track of the source and destination memory locations so the local cache122is not compromised. FIG.6is a flow diagram600of the flush function (602) performed by the indirect command module134. The indirect command module134accesses the payload repository buffers of the indirect payload repository142and manages the indirect to direct translation map510. The indirect command module134iterates through stored payloads in the indirect payload repository142and communicates a payload header ID610and the associated payload612. The communication is accomplished by a stream to the processing unit102(620). A first translation (630) is performed that associates payloads with a first direct module such as the direct command module132. A second translation (632) may be performed to associate payloads with a second direct module such as the direct module138inFIG.1B. The flush function continues until all of the payloads and payload headers in the indirect payload repository142, such as the payloads640and650and payload headers642and652, have been assigned to a direct module. Streaming is useful as the execution of a direct module delegate is typically deferred until the flush function is called by a direct module. In this way polluting the cache of the current CPU with the program memory of the destination program in addition to the program memory responsible for storing the delegate instructions may be avoided. This also avoids any additional memory access (and corresponding cache pollution) by deferring the execution of the delegate, which, in almost every case, will need to access additional program memory. The payload repository area140used by both the direct and indirect command modules132and134is interchangeable and may be configured in multiple ways for different payload repositories depending on the program context. The payload repository area140is responsible for maintaining a section of program memory for payload repositories where storage of payloads and the associated payload headers associated with delegates of the program150are stored. The payload repositories may be configured based on several operating parameters, such as the amount of buffering (single, double, triple, . . . etc.). It may also maintain thread safety via several possible methods. This functionality may be abstracted by any combination of functional overloads at program run time or compile time. Payloads stored within the repositories of the payload repository area140are associated with a logical identifier, which is only required to be unique to the payload repository. 
This logical identifier is used to map the attributes of the payload (such as size and layout), as well as to map methods and operations associated with the direct and indirect command modules132and134. The payload repositories, such as the direct payload repository144, are associated with the following program functions. The direct payload repository144may perform a lock buffer swap, (LockBufferSwap(<ThreadID:Optional>)), an unlock buffer swap (UnLockBufferSwap(<ThreadID:Optional>)), a swap receiving stream (SwapReceivingStream(<ThreadID:Optional>)), and a get receiving stream (GetReceivingStream(<ThreadID:Optional>)) function. The lock buffer swap and unlock buffer swap functions are optional and may be used to guard against acquiring a receiving stream or swapping a receiving stream. These functions may also optionally take an identifier for a thread (threadID) to be used for various synchronization methods. For example, the swap receiving stream function may optionally take a thread identifier for thread safety and may be used in the case of buffered repositories. The get receiving stream function may optionally take a thread identifier for thread safety and is used to acquire a region of program memory in the direct payload repository144for which the payload header and payload may be stored. The payload and the payload header are used in conjunction with each other to allow for storage, retrieval, iteration, and operation on the program state data and commands stored in the payload field. Payloads may be predefined by users or runtime defined as long as the payload header can associate sufficient data to allow the payload to be properly interpreted by the direct and indirect command modules132and134. Payload headers are generated and associated dynamically with one or more payloads. While payloads and their associated headers may be written directly to the associated payload repository, additional methods may be defined that optimize their transfer to the repository. By utilizing streaming CPU instructions or DMA it is possible to build a local copy of the payload header along with the payload itself in a reusable section of program memory that is only cached by the current thread. Once the copy is fully populated it may be streamed to the destination memory of a payload repository for later communication to either direct or indirect command modules132and134. The payload and payload headers are associated with a push function, which is expressed as Push (<repository>, <PayloadHeader>, <Payload>, <ThreadID:Optional>). Thus the push function stores a payload ID header and associated payload in a payload repository buffer. The following is an example push function given with a streaming instruction:{example given with Streaming instructionsrepository.LockBufferSwap( );//optionalRepository::Stream receiving_header=repository.GetReceivingHeaderStream(ThreadID, sizeof(PayloadHeader));Repository::Stream receiving_payload=repository.GetReceivingPayloadStream(ThreadID,sizeof(Payload));CPU STREAM(receiving_header, PayloadHeader);CPU STREAM(receiving_payload, Payload);repository.UnLockBufferSwap( );} The push function is responsible for taking the payload and payload header and storing them in the program memory storage associated with a payload repository such as the direct payload repository144inFIG.2. 
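The buffering and push path described above can be sketched as a double-buffered repository: producers append into a receiving buffer while a flush drains the other one. The mutex and the plain byte copy stand in for the lock/unlock buffer-swap functions and the streaming or DMA instructions; all names here are assumptions made for illustration.

#include <cstddef>
#include <cstdint>
#include <mutex>
#include <vector>

class PayloadRepositorySketch {
public:
    void lockBufferSwap()      { swapMutex_.lock(); }
    void unlockBufferSwap()    { swapMutex_.unlock(); }
    void swapReceivingStream() { receiving_ = 1 - receiving_; }

    // Push: append the header fields and the payload bytes to the
    // receiving buffer (a stand-in for CPU streaming instructions).
    void push(uint32_t payloadId, const void* payload, uint32_t size) {
        std::vector<uint8_t>& buf = buffers_[receiving_];
        append(buf, &payloadId, sizeof(payloadId));
        append(buf, &size, sizeof(size));
        append(buf, payload, size);
    }

    // The buffer a flush iterates over while the other keeps receiving.
    std::vector<uint8_t>& drainBuffer() { return buffers_[1 - receiving_]; }

private:
    static void append(std::vector<uint8_t>& buf, const void* src, size_t n) {
        const uint8_t* p = static_cast<const uint8_t*>(src);
        buf.insert(buf.end(), p, p + n);
    }
    std::vector<uint8_t> buffers_[2];
    int receiving_ = 0;
    std::mutex swapMutex_;
};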
As explained above, building the payload and header ID somewhere in fast access CPU memory such as stack memory is preferable as it can then be streamed into the payload repository buffers of the direct payload repository144. FIG.7shows a flow diagram700of the push function. The direct payload repository144includes the payload repository buffers202,204, and206as shown previously inFIG.2. The push function works with three different threads702,704, and706in this example. The push function assigns a payload header, such as the payload ID header710in the first thread702and a corresponding payload712. The payload712and the payload ID header710are combined and, using a program stream716, are streamed into the repository data buffer202. A second thread704is established and inserts another payload ID header730and payload732into the program stream720for storage in the repository buffer202. The third thread706is established and inserts another payload ID header740and a payload742into the program stream720for storage in the repository buffer. The program flow for an example direct module such as the direct command module132may be performed as follows. First a direct module is defined as DirectModule DirectModuleInstance(<repository>). Then the bind delegate command (BindDelegateToPayloadIdentifler( )) is used to associate a delegate with a unique payloadID for the direct module. Each delegate is bound to a payload identifier as follows:BindDelegateToPayloadIdentifier(DirectModuleInstance, payloadID0, OnPayloadID0( ));BindDelegateToPayloadIdentifier(DirectModuleInstance, payloadID1,UserInstance::OnPayloadID1( ));BindDelegateToPayloadIdentifier(DirectModuleInstance, payloadID2, OnPayloadID2( ));The number of bindings allowed and the binding itself need not be static, and may be changed during execution if needed. During execution, payloads may be stored with the direct payload repository144of the direct command module132using the push function. This is shown as follows:Push(DirectModuleInstance, Payload0, PayloadHeader0);Push(DirectModuleInstance, Payload1, PayloadHeader1);Push(DirectModuleInstance, Payload2, PayloadHeader2); The payloads are then propagated to the delegates during the flush function. The flush function of the direct command module is not required to order the payload propagation, however it may be chosen to by overloading the flush function. While the default would be serial execution of payloads A, B, C, in order, an overloaded Flush( ) function could re-order the execution of payloads if needed. For example, if payload B has higher priority than payload A, the flush function would first propagate payload B, then payload A, then payload C. Or, in another case, payloads A, B, and C could all be propagated on separate threads such that payloads A, B, and C are all executed at the same time. The program flow for an indirect module such as the indirect command module134is similar. First direct modules that will be bound to the indirect module are created. Direct modules may be created and added to the indirect module at any suitable time.DirectModule DirectModule0<repository0>;DirectModule DirectModule1<repository1>;DirectModule DirectModule2<repository2>; Then the indirect command module134is created by the function, IndirectModule IndirectModuleInstance <repository3>. The direct modules may be bound as follows in the below example. In this example, there are three direct modules registered for a first payload (PayloadID0). 
There is one direct module registered for a second payload (PayloadID1):
RegisterDirectModuleToPayloadIdentifier(IndirectModuleInstance, PayloadHeader0, DirectModule0, DirectModule0::OnPayload0( ), DirectModule0::UserData);
RegisterDirectModuleToPayloadIdentifier(IndirectModuleInstance, PayloadHeader0, DirectModule1, DirectModule1::OnPayload0( ), DirectModule1::UserData);
RegisterDirectModuleToPayloadIdentifier(IndirectModuleInstance, PayloadHeader0, DirectModule2, DirectModule2::OnPayload0( ), DirectModule2::UserData);
RegisterDirectModuleToPayloadIdentifier(IndirectModuleInstance, PayloadHeader1, DirectModule0, DirectModule0::OnPayload1( ), DirectModule0::UserData);
Then, during execution, payloads can be stored with the indirect payload repository 142 of the indirect command module 134 using the push function:
Push(IndirectModuleInstance, Payload0, PayloadHeader0);
Push(IndirectModuleInstance, Payload1, PayloadHeader1);
Push(IndirectModuleInstance, Payload2, PayloadHeader2);
The flush function allows the payloads to be distributed to the direct modules that have been registered for them. In the previous case, the DirectModules (0, 1, and 2) will all be presented with the payload data of PayloadID0. DirectModule0 will also be presented with the payload data of PayloadID1. Since there are no registered direct modules for PayloadID2, it will not be broadcast at all. An advantage of the system 100 is that it limits the reads by the processor unit 102 from the external memory 110 to the cache memory 122. Communication of needed program state data and commands to delegates is facilitated because only the requested payloads are flushed to registered modules and delegates, thereby eliminating the need to poll for information. Eliminating the need to poll to determine whether new data exists reduces the pressure on the processor cache 122 and the external memory 110. The translation of the payload header using the tables allows efficient direct writing without rewriting payload data from the external memory 110 for a delegate. The push function takes the payloads and associated payload headers and streams them into a memory location, thus avoiding the computationally expensive task of polling all of the external memory 110 for the required data. The access to the cache memory 122 is limited to the flush functions. Since the delegates are deferred in execution, the data used for each delegate is prevented from contending with other threads. The process by which the scheduling of worker threads for tasks may be controlled on the example system 100 will now be described with reference to FIGS. 1-7 in conjunction with the flow diagram shown in FIG. 8. The flow diagram in FIG. 8 is representative of example machine readable instructions for assembling payloads and associating them with delegates from a program such as the program 150. In this example, the machine readable instructions comprise an algorithm for execution by: (a) a processor, (b) a controller, and/or (c) one or more other suitable processing device(s) such as a GPU.
The algorithm may be embodied in software stored on tangible media such as, for example, a flash memory, a CD-ROM, a floppy disk, a hard drive, a digital video (versatile) disk (DVD), or other memory devices, but persons of ordinary skill in the art will readily appreciate that the entire algorithm and/or parts thereof could alternatively be executed by a device other than a processor and/or embodied in firmware or dedicated hardware in a well-known manner (e.g., it may be implemented by an application specific integrated circuit (ASIC), a programmable logic device (PLD), a field programmable logic device (FPLD), a field programmable gate array (FPGA), discrete logic, etc.). For example, any or all of the components of the interfaces could be implemented by software, hardware, and/or firmware. Also, some or all of the machine readable instructions represented by the flowchart of FIG. 8 may be implemented manually. Further, although the example algorithm is described with reference to the flowchart illustrated in FIG. 8, persons of ordinary skill in the art will readily appreciate that many other methods of implementing the example machine readable instructions may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined. FIG. 8 is a flow diagram of a process executed by the command system 130 to assemble payloads and associate them with delegates for execution on the processor unit 102 in FIG. 1. A direct module is created based on a group of delegates from the functions of a program such as the program 150 in FIG. 1A (800). The payloads are assembled based on the delegates in the program, each payload having a payload ID header and carrying either state data or command data (802). The bind delegate function is then executed by the direct command module 132 to create the map between payloads and delegates (804). The push function is then used by the direct command module to move the data in the payloads to the respective payload repositories (806). To execute the delegates, the flush function is run (808); it iterates through all of the payloads in the payload repository buffers and makes the payloads available to the delegates as they are executed. The flush function reads the payload ID of the first payload (810). The flush function uses the map 210 to determine the associated delegate for the payload (812). The delegates are then executed with the loaded payloads containing the commands and state data needed to execute the delegates (814). The flush function then determines whether the payload is the last payload in the payload repository buffer (816). If the payload repository includes more payloads, the flush function loops back and reads the next payload ID (810). If the payload repository does not include any more payloads, the flush function concludes. Each of these embodiments and obvious variations thereof is contemplated as falling within the spirit and scope of the claimed invention, which is set forth in the following claims.
DETAILED DESCRIPTION The present disclosure includes techniques to use multiple caches or cache sets of a cache interchangeably with different types of executions by a connected processor. The types of executions can include speculative and non-speculative execution threads. Non-speculative execution can be referred to as main execution or normal execution. For enhanced security, when a processor performs conditional speculative execution of instructions, the processor can be configured to use a shadow cache during the speculative execution of the instructions, where the shadow cache is separate from the main cache that is used during the main execution or normal execution of instructions. Some techniques of using a shadow cache to improve security can be found in U.S. patent application Ser. No. 16/028,930, filed Jul. 6, 2018 and entitled "Shadow Cache for Securing Conditional Speculative Instruction Execution," the entire disclosure of which is hereby incorporated herein by reference. The present disclosure includes techniques to allow a cache to be configured dynamically as a shadow cache or a main cache; a unified set of cache resources can be dynamically allocated for the shadow cache or for the main cache; and the allocation can be changed during the execution of instructions. In some embodiments, a system can include a memory system (e.g., including main memory), a processor, and a cache system coupled between the processor and memory system. The cache system can have a set of caches. And, a cache of the set of caches can be designed in multiple ways. For instance, a cache in the set of caches can include cache sets through cache set associativity (which can include physical or logical cache set associativity). In some embodiments, caches of the system can be changeable between being configured for use in a first type of execution of instructions by the processor and being configured for use in a second type of execution of instructions by the processor. The first type can be a non-speculative execution of instructions by the processor. The second type can be a speculative execution of instructions by the processor. In some embodiments, cache sets of a cache can be changeable between being configured for use in a first type of execution of instructions by the processor and being configured for use in a second type of execution of instructions by the processor. The first type can be a non-speculative execution of instructions by the processor. And, the second type can be a speculative execution of instructions by the processor. In some embodiments, speculative execution is where the processor executes one or more instructions based on a speculation that such instructions need to be executed under some conditions, before the determination result is available as to whether such instructions should be executed or not. Non-speculative execution (or main execution, or normal execution) is where instructions are executed in an order according to the program sequence of the instructions. In some embodiments, the set of caches of the system can include at least a first cache and a second cache. In such examples, the system can include a command bus, configured to receive a read command or a write command from the processor. The system can also include an address bus, configured to receive a memory address from the processor for accessing memory for a read command or a write command.
And, a data bus can be included that is configured to: communicate data to the processor for the processor to read; and receive data from the processor to be written in memory. The memory access requests from the processor can be defined by the command bus, the address bus, and the data bus. In some embodiments, a common command and address bus can replace the command and address buses described herein. Also, in such embodiments, a common connection to the common command and address bus can replace the respective connections to command and address buses described herein. The system can also include an execution-type signal line that is configured to receive an execution type from the processor. The execution type can be either an indication of a normal or non-speculative execution or an indication of a speculative execution. The system can also include a configurable data bit that is configured to be set to a first state (e.g., “0”) or a second state (e.g., “1”) to change the uses of the first cache and the second cache with respect to non-speculative execution and speculative execution. The system can also include a logic circuit that is configured to select the first cache for a memory access request from the processor, when the configurable data bit is set to the first state and the execution-type signal line receives an indication of non-speculative execution. The logic circuit can also be configured to select the second cache for a memory access request from the processor, when the configurable data bit is set to the first state and the execution-type signal line receives an indication of speculative execution. The logic circuit can also be configured to select the second cache for a memory access request from the processor, when the configurable data bit is set to the second state and the execution-type signal line receives an indication of a non-speculative execution. The logic circuit can also be configured to select the first cache for a memory access request from the processor, when the configurable data bit is set to the second state and the execution-type signal line receives an indication of a speculative execution. The system can also include a speculation-status signal line that is configured to receive speculation status from the processor. The speculation status can be either a confirmation or a rejection of a condition with nested instructions that are executed initially by a speculative execution and subsequently by a non-speculative execution when the speculation status is the confirmation of the condition. The logic circuit can also be configured to select the second cache as identified by the first state of the configurable data bit and restrict the first cache from use or change as identified by the first state of the configurable data bit, when the signal received by the execution-type signal line changes from an indication of a non-speculative execution to an indication of a speculative execution. Also, the logic circuit can be configured to change the configurable data bit from the first state to the second state and select the second cache for a memory access request when the execution-type signal line receives an indication of a non-speculative execution. This can occur when the signal received by the execution-type signal line changes from the indication of the speculative execution to the indication of the non-speculative execution and when the speculation status received by the speculation-status signal line is the confirmation of the condition. 
The logic circuit can also be configured to maintain the first state of the configurable data bit and select the first cache for a memory access request when the execution-type signal line receives an indication of a non-speculative execution. This can occur when the signal received by the execution-type signal line changes from the indication of the speculative execution to the indication of the non-speculative execution and when the speculation status received by the speculation-status signal line is the rejection of the condition. Also, the logic circuit can be configured to invalidate and discard the contents of the second cache, when the signal received by the execution-type signal line changes from the indication of the speculative execution to the indication of the non-speculative execution and when the speculation status received by the speculation-status signal line is the rejection of the condition. The system can also include a second command bus, configured to communicate a read command or a write command to a main memory connected to the cache system. The read command or the write command can be received from the processor by the cache system. The system can also include a second address bus, configured to communicate a memory address to the main memory. The memory address can be received from the processor by the cache system. The system can also include a second data bus, configured to communicate data to the main memory to be written in memory, and receive data from the main memory to be communicated to the processor to be read by the processor. Memory access requests to the main memory from the cache system can be defined by the second command bus, the second address bus, and the second data bus. As mentioned, a cache of the set of caches can be designed in multiple ways, and one of those ways includes a cache of the set being divided into cache sets through cache set associativity (which can include physical or logical cache set associativity). A benefit of cache design through set associativity is that a single cache with set associativity can have multiple cache sets within the single cache, and thus, different parts of the single cache can be allocated for use by the processor without allocating the entire cache. Therefore, the single cache can be used more efficiently. This is especially the case when the processor executes multiple types of threads or has multiple execution types. For instance, the cache sets within a single cache can be used interchangeably with different execution types instead of the use of interchangeable caches. Common examples of cache division include having two, four, or eight cache sets within a cache. Also, set associativity cache design is advantageous over other common cache designs when the processor executes main and speculative threads. Since a speculative execution may use less additional cache capacity than the normal or non-speculative execution, the selection mechanism can be implemented at a cache set level and thus reserve less space than an entire cache (i.e., a fraction of a cache) for speculative execution. A cache with set associativity can have multiple cache sets within the cache (e.g., a division of two, four, or eight cache sets within a cache). For instance, as shown in FIG. 7A, there are at least four cache sets in a cache of a cache system (e.g., see cache sets 702, 704, and 706). The normal or non-speculative execution, which usually demands most of the cache capacity, can have a larger number of cache sets delegated to it.
And, the speculative execution with modifications over the non-speculative execution can use one cache set or a smaller number of cache sets, since the speculative execution typically involves fewer instructions than the non-speculative execution. As shown in FIG. 6 or 10, a cache system can include multiple caches (such as caches 602a, 602b, and 602c depicted in FIG. 6) for a processor, and a cache of a cache system can include cache sets (such as cache sets 610a, 610b, and 610c depicted in FIG. 6) to further divide the organization of the cache system. Such an example includes a cache system with set associativity. On the cache set level of a cache, a first cache set (e.g., see cache set 702 depicted in FIGS. 7A, 8A, and 9A) can hold content for use with a first type of execution by the processor or a second type. For instance, the first cache set can hold content for use with a non-speculative type or a speculative type of execution by the processor. Also, a second cache set (e.g., see cache set 704 or 706 depicted in FIGS. 7A, 8A, and 9A) can hold content for use with the first type of execution by the processor or the second type. For example, in a first time instance, a first cache set is used for normal or non-speculative execution and a second cache set is used for speculative execution. In a second time instance, the second cache set is used for normal or non-speculative execution and the first cache set is used for speculative execution. A way of delegating/switching the cache sets for non-speculative and speculative executions can use set associativity via a cache set index within or external to a memory address tag, or via a cache set indicator within a memory address tag that is different from a cache set index (e.g., see FIGS. 7A, 7B, 8A, 8B, 9A, and 9B). As shown in at least FIGS. 1B, 1C, 1D, 1E, 7A, 7B, 8A, 8B, 9A, and 9B, a cache set index or a cache set indicator can be included in cache block addressing to implement cache set addressing and associativity. Cache block addressing can be stored in memory (e.g., SRAM, DRAM, etc., depending on the design of the computing device, such as the design of processor registers, the cache system, other intermediate memory, main memory, etc.). As shown in FIGS. 6, 7A, 7B, 8A, 8B, 9A, 9B, and 10, each cache set of a cache (e.g., a level 1, level 2, or level 3 cache) has a respective register (e.g., register 610a, 610b, or 610c shown in FIGS. 6 and 10, or register 712, 714, or 716 shown in FIGS. 7A, 7B, 8A, 8B, 9A, and 9B) and one of the set indexes (e.g., see set indexes 722, 724, 726, and 728 shown in FIGS. 7A, 7B, 8A, 8B, 9A, and 9B) that can be swapped between the respective registers to implement swapping of cache sets for non-speculative and speculative executions of the processor (or, in general, for first type and second type executions of the processor). For example, with respect to FIGS. 7A and 7B, at a first time period, a first type of execution can use cache sets 702 and 704 and a second type of execution can use cache set 706. Then, at a second time period, the first type of execution can use cache sets 704 and 706 and the second type of execution can use cache set 702. Note that this is just one example usage of cache sets, and it is to be understood that any of the cache sets, without a predetermined restriction, can be used by the first or second types of execution depending on time periods or on the set indexes or indicators stored in the registers. In some embodiments, a number of cache sets can be initially allocated for use in the first type of execution (e.g., non-speculative execution).
During the second type of execution (e.g., speculative execution), one of the cache sets initially used for the first type of execution, or a cache set not so used (such as a reserved cache set), can be used in the second type of execution. Essentially, a cache set allocated for the second type of execution can be initially a free cache set waiting to be used, or can be selected from the number of cache sets used for the first type of execution (e.g., a cache set that is less likely to be used in further first-type executions). In general, in some embodiments, the cache system includes a plurality of cache sets. The plurality of cache sets can include a first cache set and a second cache set, and the cache system can include a plurality of registers associated with the plurality of cache sets respectively. The plurality of registers can include a first register associated with the first cache set and a second register associated with the second cache set. The cache system can also include a connection to a command bus coupled between the cache system and a processor, a connection to an address bus coupled between the cache system and the processor, and a connection to a data bus coupled between the cache system and the processor. The cache system can also include a logic circuit coupled to the processor to control the plurality of cache sets according to the plurality of registers. In such embodiments, the cache system can be configured to be coupled between the processor and a memory system. And, when the connection to the address bus receives a memory address from the processor, the logic circuit can be configured to generate a set index from at least the memory address (e.g., see set index generation 730, 732, 830, 832, 930, and 932 shown in FIGS. 7A, 7B, 8A, 8B, 9A, and 9B respectively). Also, when the connection to the address bus receives a memory address from the processor, the logic circuit can be configured to determine whether the generated set index matches with content stored in the first register or with content stored in the second register. Also, the logic circuit can be configured to implement a command received in the connection to the command bus via the first cache set in response to the generated set index matching with the content stored in the first register and via the second cache set in response to the generated set index matching with the content stored in the second register. Also, in response to a determination that a data set of the memory system associated with the memory address is not currently cached in the cache system, the logic circuit can be configured to allocate the first cache set for caching the data set and store the generated set index in the first register. The generated set index can include a predetermined segment of bits in the memory address. The cache system can also include a connection to an execution-type signal line from the processor identifying an execution type (e.g., see connection 604d depicted in FIGS. 6 and 10). In such embodiments, the generated set index can be generated further based on a type identified by the execution-type signal line. Also, the generated set index can include a predetermined segment of bits in the memory address and a bit representing the type identified by the execution-type signal line (e.g., the generated set index can include or be derived from the predetermined segment of bits in the memory address 102e and one or more bits representing the type identified by the execution-type signal line, in execution type 110e, shown in FIG. 1E).
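As an illustration of the set index generation and register matching just described, the following C++ sketch models such a logic circuit in software; the segment position, index width, number of cache sets, and names are assumptions made for the sketch rather than values taken from this description.
    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <optional>

    constexpr unsigned kSetIndexBits = 2;   // assumed width of the index segment

    // Form a set index from a predetermined segment of address bits plus a bit
    // representing the execution type (speculative or non-speculative).
    uint32_t make_set_index(uint64_t address, bool speculative) {
        uint32_t bits = (address >> 6) & ((1u << kSetIndexBits) - 1);  // assumed segment position
        return (bits << 1) | (speculative ? 1u : 0u);
    }

    struct CacheSetRegisters {
        std::array<std::optional<uint32_t>, 8> reg{};  // one register per cache set

        // Return the cache set whose register matches the generated index, or
        // allocate a free cache set and store the index when nothing is cached yet.
        std::optional<size_t> lookup_or_allocate(uint32_t set_index) {
            for (size_t i = 0; i < reg.size(); ++i)
                if (reg[i] && *reg[i] == set_index) return i;   // match: use this cache set
            for (size_t i = 0; i < reg.size(); ++i)
                if (!reg[i]) { reg[i] = set_index; return i; }  // miss: allocate a set
            return std::nullopt;                                // no free cache set available
        }
    };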
Also, when the first and second registers are in a first state, the logic circuit can be configured to: implement commands received from the command bus for accessing the memory system via the first cache set, when the execution type is a first type; and implement commands received from the command bus for accessing the memory system via the second cache set, when the execution type is a second type. Also, when the first and second registers are in a second state, the logic circuit can be configured to: implement commands received from the command bus for accessing the memory system via another cache set of the plurality of cache sets besides the first cache set, when the execution type is the first type; and implement commands received from the command bus for accessing the memory system via another cache set of the plurality of cache sets besides the second cache set, when the execution type is the second type. In such an example, each one of the plurality of registers can be configured to store a set index, and when the execution type changes from the second type to the first type, the logic circuit can be configured to change the content stored in the first register and the content stored in the second register. In some embodiments, the first type is configured to indicate non-speculative execution of instructions by the processor; and the second type is configured to indicate speculative execution of instructions by the processor. In such embodiments, the cache system can further include a connection to a speculation-status signal line from the processor identifying a status of a speculative execution of instructions by the processor (e.g., see connection 1002 shown in FIG. 10). The connection to the speculation-status signal line can be configured to receive the status of a speculative execution, and the status of a speculative execution can indicate that a result of a speculative execution is to be accepted or rejected. Each one of the plurality of registers can be configured to store a set index, and when the execution type changes from the second type to the first type, the logic circuit can be configured to change the content stored in the first register and the content stored in the second register, if the status of speculative execution indicates that a result of speculative execution is to be accepted (e.g., see the changes of the content stored in the registers shown between FIG. 7A and FIG. 7B, between FIG. 8A and FIG. 8B, and between FIG. 9A and FIG. 9B). And, when the execution type changes from the second type to the first type, the logic circuit can be configured to maintain the content stored in the first register and the content stored in the second register without changes, if the status of speculative execution indicates that a result of speculative execution is to be rejected.
For example, the content of a cache or cache set that is initially delegated for a speculative execution (e.g., an extra cache or a spare cache set delegated for a speculative execution) can be synced with a corresponding cache or cache set used by a normal or non-speculative execution (to have the cache content of the normal execution), such that if the speculation is confirmed, the cache or cache set that is initially delegated for the speculative execution can immediately join the cache sets of a main or non-speculative execution. Also, the original cache set corresponding to the cache or cache set that is initially delegated for the speculative execution can be removed from the group of cache sets used for the main or non-speculative execution. In such embodiments, a circuit, such as a circuit including the background synching circuitry, can be configured to synchronize caches or cache sets in the background to reduce the impact of cache set syncing on cache usage by the processor. Also, the synchronization of the cache or cache sets can continue either until the speculation is abandoned, or until the speculation is confirmed and the syncing is complete. The synchronization may optionally include syncing (e.g., writing back) to the memory. In some embodiments, a cache system can include a first cache and a second cache as well as a connection to a command bus coupled between the cache system and a processor, a connection to an address bus coupled between the cache system and the processor, a connection to a data bus coupled between the cache system and the processor, and a connection to an execution-type signal line from the processor identifying an execution type (e.g., see cache systems200and400). Such a cache system can also include a logic circuit coupled to control the first cache and the second cache according to the execution type, and the cache system can be configured to be coupled between the processor and a memory system. Also, when the execution type is a first type indicating non-speculative execution of instructions by the processor and the first cache is configured to service commands from the command bus for accessing the memory system, the logic circuit can be configured to copy a portion of content cached in the first cache to the second cache (e.g., see operation1202). Further, the logic circuit can be configured to copy the portion of content cached in the first cache to the second cache independent of a current command received in the command bus. Additionally, when the execution type is the first type indicating non-speculative execution of instructions by the processor and the first cache is configured to service commands from the command bus for accessing the memory system, the logic circuit can be configured to service subsequent commands from the command bus using the second cache in response to the execution type being changed from the first type to a second type indicating speculative execution of instructions by the processor (e.g., see operation1208). In such an example, the logic circuit can be configured to complete synchronization of the portion of the content from the first cache to the second cache before servicing the subsequent commands after the execution type is changed from the first type to the second type (e.g., seeFIG.12). The logic circuit can also be configured to continue synchronization of the portion of the content from the first cache to the second cache while servicing the subsequent commands (e.g., see operation1210). 
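The background synchronization just described might be modeled, purely as an illustration, by an incremental copy that runs alongside the normal servicing of commands; the class, line granularity, and names below are assumptions for the sketch and do not represent the circuitry itself.
    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct CacheSet {
        std::vector<uint64_t> lines;  // cached lines (content only, for the sketch)
    };

    struct BackgroundSync {
        size_t next = 0;  // progress pointer so syncing can proceed incrementally

        // Copy a few lines per call from the set serving normal execution to the
        // set staged for speculative use; returns true once fully synchronized.
        bool step(const CacheSet& main_set, CacheSet& staged_set, size_t lines_per_step = 4) {
            staged_set.lines.resize(main_set.lines.size());
            size_t end = std::min(next + lines_per_step, main_set.lines.size());
            for (; next < end; ++next) staged_set.lines[next] = main_set.lines[next];
            return next == main_set.lines.size();
        }
    };
Because the copy proceeds in small steps, it can continue, or be completed, around the servicing of subsequent commands, and it can simply be abandoned if the speculation is rejected.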
In such embodiments, the cache system can also include a configurable data bit, wherein the logic circuit is further coupled to control the first cache and the second cache according to the configurable data bit. Also, in such embodiments, the cache system can further include a plurality of cache sets. For instance, the first cache and the second cache together can include the plurality of cache sets, and a plurality of cache sets can include a first cache set and a second cache set. The cache system can also include a plurality of registers associated with the plurality of cache sets respectively. The plurality of registers can include a first register associated with the first cache set and a second register associated with the second cache set. And, in such embodiments, the logic circuit can be further coupled to control the plurality of cache sets according to the plurality of registers. In some embodiments, a cache system can include a plurality of cache sets that includes a first cache set and a second cache set. The cache system can also include a plurality of registers associated with the plurality of cache sets respectively, which includes a first register associated with the first cache set and a second register associated with the second cache set. In such embodiments, the cache system can include a plurality of caches that include a first cache and a second cache, and the first cache and the second cache together can include at least part of the plurality of cache sets. Such a cache system can also include a connection to a command bus coupled between the cache system and a processor, a connection to an address bus coupled between the cache system and the processor, a connection to a data bus coupled between the cache system and the processor, and a connection to an execution-type signal line from the processor identifying an execution type, as well as a logic circuit coupled to control the plurality of cache sets according to the execution type. In such embodiments, the cache system can be configured to be coupled between the processor and a memory system. And, when the execution type is a first type indicating non-speculative execution of instructions by the processor and the first cache set is configured to service commands from the command bus for accessing the memory system, the logic circuit is configured to copy a portion of content cached in the first cache set to the second cache set. The logic circuit can also be configured to copy the portion of content cached in the first cache set to the second cache set independent of a current command received in the command bus. Also, when the execution type is the first type indicating non-speculative execution of instructions by the processor and the first cache set is configured to service commands from the command bus for accessing the memory system, the logic circuit can be configured to service subsequent commands from the command bus using the second cache set in response to the execution type being changed from the first type to a second type indicating speculative execution of instructions by the processor. The logic circuit can also be configured to complete synchronization of the portion of the content from the first cache set to the second cache set before servicing the subsequent commands after the execution type is changed from the first type to the second type. 
The logic circuit can also be configured to continue synchronization of the portion of the content from the first cache set to the second cache set while servicing the subsequent commands. And, the logic circuit can be further coupled to control the plurality of cache sets according to the plurality of registers. In addition to using a shadow cache for securing speculative executions, and synchronizing content between a main cache and the shadow cache to save the content cached in the main cache in preparation for acceptance of the content in the shadow cache, a spare cache set can be used to accelerate the speculative executions. Also, a spare cache set can be used to accelerate the speculative executions without use of a shadow cache. Use of a spare cache set is useful with shadow cache implementations because data held in cache sets used as a shadow cache can be validated and therefore used for normal execution, while some cache sets used as the main cache may not be ready to be used as the shadow cache. Thus, one or more cache sets can be used as spare cache sets to avoid delays from waiting for cache set availability. To put it another way, once a speculation is confirmed, the content of the cache sets used as a shadow cache is confirmed to be valid and up-to-date; and thus, the former cache sets used as the shadow cache for speculative execution are used for normal execution. However, some of the cache sets initially used as the normal cache may not be ready to be used for a subsequent speculative execution. Therefore, one or more cache sets can be used as spares to avoid delays from waiting for cache set availability and to accelerate the speculative executions. In some embodiments, if the syncing from a cache set in the normal cache to a corresponding cache set in the shadow cache has not yet been completed, the cache set in the normal cache cannot be freed immediately for use in the next speculative execution. In such a situation, if there is no spare cache set, the next speculative execution has to wait until the syncing is complete so that the corresponding cache set in the normal cache can be freed. This is just one example of when a spare cache set is beneficial and can be added to an embodiment. And, there are many other situations when cache sets in the normal cache cannot be freed immediately, so a spare cache set can be useful. Also, in some embodiments, the speculative execution may reference a memory region that has no overlap with the memory region cached in the cache sets used in the normal cache. As a result of accepting the result of the speculative execution, the cache sets in the shadow cache and the normal cache may all be in the normal cache. This can cause delays as well, because it takes time for the cache system to free a cache set to support the next speculative execution. To free one, the cache system can identify a cache set, such as a least used cache set, and synchronize the cache set with the memory system. If the cache has data that is more up to date than the memory system, the data can be written into the memory system. Additionally, a system using a spare cache set can also use background synchronizing circuitry such as the background synchronizing circuitry 1102 depicted in FIGS. 11A and 11B. The background synchronizing circuitry 1102 can be a part of the logic circuit 606 or 1006, in some embodiments. When an initial speculation is confirmed, the cache set used in the initial speculation can be switched to join the set of cache sets used for a main execution.
Instead of using a cache set from the prior main execution that was being used for the case of the speculation failing, a spare cache set can be made available immediately for a next speculative execution. Also, the spare cache set can be updated for the next speculative execution via the background synchronizing circuitry. And, because of background synchronizing, a spare cache set can be ready for use when the cache set currently used for the speculative execution is ready to be accepted for normal execution. This way there is no delay in waiting for use of the next cache set for the next speculative execution. To prepare for the next speculative execution, the spare cache set can be synchronized to a normal cache set that is likely to be used in the next speculative execution or to a least used cache set in the system. In addition to using a shadow cache, synchronizing content between a main cache and the shadow cache, and using a spare cache set, extended tags can be used to improve the use of interchangeable caches and cache sets for different types of executions by a processor (such as speculative and non-speculative executions). There are many different ways to address cache sets and cache blocks within a cache system using extended tagging. Two example ways are shown in FIGS. 16 and 17. In general, cache sets and cache blocks can be selected via a memory address. In some examples, selection is via set associativity. Both examples in FIGS. 16 and 17 use set associativity. In FIG. 16, set associativity is implicitly defined (e.g., defined through an algorithm that can be used to determine which tag should be in which cache set for a given execution type). In FIG. 17, set associativity is implemented via the bits of a cache set index in the memory address. Also, parts of the functionality illustrated in FIGS. 16 and 17 can be implemented without use of set associativity (although this is not depicted in FIGS. 16 and 17). In some embodiments, including embodiments shown in FIGS. 16 and 17, a block index can be used as an address within individual cache sets to identify particular cache blocks in a cache set. And, the extended tags can be used as addresses for the cache sets. A block index of a memory address can be used for each cache set to get a cache block and a tag associated with the cache block. Also, as shown in FIGS. 16 and 17, tag compare circuits can compare the extended tags generated from the cache sets with the extended cache tag generated from a memory address and a current execution type. The output of the comparison can be a cache hit or miss. The construction of the extended tags guarantees that there is at most one hit among the cache sets. If there is a hit, a cache block from the selected cache set provides the output. Otherwise, the data associated with the memory address is not cached in or outputted from any of the cache sets. In short, the extended tags depicted in FIGS. 16 and 17 are used to select a cache set, and the block indexes are used to select a cache block and its tag within a cache set. Also, as shown in FIG. 17, the combination of a tag and a cache set index in the system can provide somewhat similar functionality to merely using a tag (as shown in FIG. 16). However, in FIG. 17, by separating the tag and the cache set index, a cache set does not have to store redundant copies of the cache set index, since a cache set can be associated with a cache set register to hold cache set indexes, whereas, in FIG. 16, a cache set does need to store redundant copies of a cache set indicator in each of its blocks.
However, since tags have the same cache set indicator in embodiments depicted in FIG. 16, the indicator could be stored once in a register for the cache set (e.g., see the cache set registers shown in FIG. 17). A benefit of using cache set registers is that the lengths of the tags can be shorter in comparison with an implementation of the tags without cache set registers. Both of the embodiments shown in FIGS. 16 and 17 have cache set registers configured to hold an execution type so that the corresponding cache sets can be used in implementing different execution types (e.g., speculative and non-speculative execution types). But, the embodiment shown in FIG. 17 has registers that are further configured to hold an execution type and a cache set index. When the execution type is combined with the cache set index to form an extended cache set index, the extended cache set index can be used to select one of the cache sets without depending on the addressing through tags of cache blocks. Also, when a tag from a selected cache set is compared to the tag in the address to determine hit or miss, the two-stage selection can be similar to a conventional two-stage selection using a cache set index, or it can be combined with the extended tag to support interchanging of cache sets for different execution types. In addition to using extended tags as well as other techniques disclosed herein to improve the use of interchangeable caches and cache sets for different types of executions by a processor, a circuit included in or connected to the cache system can be used to map physical outputs from cache sets of a cache hardware system to a logical main cache and a logical shadow cache for normal and speculative executions by the processor respectively. The mapping can be according to at least one control register (e.g., a physical-to-logical-set-mapping (PLSM) register). Also disclosed herein are computing devices having cache systems with interchangeable cache sets that utilize a mapping circuit (such as the mapping circuit 1830 shown in FIG. 18) to map physical cache set outputs to logical cache set outputs. A processor coupled to the cache system can execute two types of threads, such as speculative and non-speculative execution threads. The speculative thread is executed speculatively with a condition that has not yet been evaluated. The data of the speculative thread can be in a logical shadow cache. The data of the non-speculative thread can be in the logical main or normal cache. Subsequently, when the result of evaluating the condition becomes available, the system can keep the results of executing the speculative thread when the condition requires the execution of the thread, or remove them. With the mapping circuit, the hardware circuit for the shadow cache can be repurposed as the hardware circuit for the main cache by changing the content of the control register. Thus, for example, there is no need to synchronize the main cache with the shadow cache if the execution of the speculative thread is required. In a conventional cache, each cache set is statically associated with a particular value of "Index S"/"Block Index L". In the cache systems disclosed herein, any cache set can be used for any purpose for any index value S/L and for a main cache or a shadow cache. Cache sets can be used and defined by data in cache set registers associated with the cache sets. A selection logic can then be used to select the appropriate result based on the index value of S/L and how the cache sets are used.
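The extended-tag comparison described for FIGS. 16 and 17 can be illustrated with the following C++ sketch; the bit packing and names are assumptions made for the illustration, and the sketch is not the tag compare circuitry itself.
    #include <cstddef>
    #include <cstdint>
    #include <optional>
    #include <vector>

    struct CacheSetTagState {
        uint64_t stored_tag = 0;   // tag held for the currently cached block
        bool speculative = false;  // execution type the cache set currently serves
        bool valid = false;
    };

    // Extend a tag with a bit representing the execution type.
    uint64_t extended_tag(uint64_t tag, bool speculative) {
        return (tag << 1) | (speculative ? 1u : 0u);
    }

    // Compare the extended tag generated from the memory address and the current
    // execution type against the extended tags generated from the cache sets.
    // The construction yields at most one hit.
    std::optional<size_t> find_hit(const std::vector<CacheSetTagState>& sets,
                                   uint64_t address_tag, bool exec_speculative) {
        uint64_t wanted = extended_tag(address_tag, exec_speculative);
        for (size_t i = 0; i < sets.size(); ++i) {
            if (sets[i].valid &&
                extended_tag(sets[i].stored_tag, sets[i].speculative) == wanted) {
                return i;  // hit in this cache set
            }
        }
        return std::nullopt;  // miss: not cached for this address and execution type
    }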
For example, four cache sets, cache set 0 to cache set 3, can be initially used for a main cache for S/L=00, 01, 10 and 11 respectively. A fifth cache set can be used as the speculative cache for S/L=00, assuming that speculative execution does not change the cache sets defined by 01, 10 and 11. If the result of the speculative execution is required, the mapping data can be changed to indicate that the main cache for S/L=00, 01, 10 and 11 is provided respectively by the fifth cache set, cache set 1, cache set 2, and cache set 3. Cache set 0 can then be freed or invalidated for subsequent use in a speculative execution. If the next speculative execution needs to change the cache set S/L to 01, cache set 0 can be used as the shadow cache (e.g., copied from cache set 1 and used to look up content for addresses with S/L equaling '01'). Also, the cache system and processor do not merely switch back and forth between a predetermined main thread and a predetermined speculative thread. Consider the speculative execution of the following pseudo-program.
Instructions A;
If condition=true, then Instructions B;
End conditional loop;
Instructions C; and
Instructions D.
For the pseudo-program, the processor can run two threads.
Thread A:
Instructions A;
Instructions C; and
Instructions D.
Thread B:
Instructions A;
Instructions B;
Instructions C; and
Instructions D.
The execution of Instructions B is speculative because it depends on the test result of "condition=true" instead of "condition=false". The execution of Instructions B is required only when condition=true. By the time the result of the test "condition=true" becomes available, the execution of Thread A may have reached Instructions D and the execution of Thread B may have reached Instructions C. If the test result requires the execution of Instructions B, the cache content for Thread B is correct and the cache content for Thread A is incorrect. Then, all changes made in the cache according to Thread B should be retained, and the processor can continue the execution of Instructions C using the cache that has the results of executing Instructions B; and Thread A is terminated. Since the changes made according to Thread B are in the shadow cache, the content of the shadow cache should be accepted as the main cache. If the test result requires no execution of Instructions B, the results of Thread B are discarded (e.g., the content of the shadow cache is discarded or invalidated). The cache sets used for the shadow and the normal cache can be swapped or changed according to a mapping circuit and a control register (e.g., a physical-to-logical-set-mapping (PLSM) register). In some embodiments, a cache system can include a plurality of cache sets, having a first cache set configured to provide a first physical output upon a cache hit and a second cache set configured to provide a second physical output upon a cache hit. The cache system can also include a connection to a command bus coupled between the cache system and a processor and a connection to an address bus coupled between the cache system and the processor. The cache system can also include the control register, and the mapping circuit coupled to the control register to map respective physical outputs of the plurality of cache sets to a first logical cache and a second logical cache according to a state of the control register. The cache system can be configured to be coupled between the processor and a memory system.
When the connection to the address bus receives a memory address from the processor and when the control register is in a first state, the mapping circuit can be configured to: map the first physical output to the first logical cache for a first type of execution by the processor to implement commands received from the command bus for accessing the memory system via the first cache set during the first type of execution; and map the second physical output to the second logical cache for a second type of execution by the processor to implement commands received from the command bus for accessing the memory system via the second cache set during the second type of execution. And, when the connection to the address bus receives a memory address from the processor and when the control register is in a second state, the mapping circuit is configured to: map the first physical output to the second logical cache to implement commands received from the command bus for accessing the memory system via the first cache set during the second type of execution; and map the second physical output to the first logical cache to implement commands received from the command bus for accessing the memory system via the second cache set for the first type of execution. In some embodiments, the first logical cache is a normal cache for non-speculative execution by the processor, and the second logical cache is a shadow cache for speculative execution by the processor. Also, in some embodiments, the cache system can further include a plurality of registers associated with the plurality of cache sets respectively, including a first register associated with the first cache set and a second register associated with the second cache set. The cache system can also include a logic circuit coupled to the processor to control the plurality of cache sets according to the plurality of registers. When the connection to the address bus receives a memory address from the processor, the logic circuit can be configured to generate a set index from at least the memory address, as well as determine whether the generated set index matches with a content stored in the first register or with a content stored in the second register. And, the logic circuit can be configured to implement a command received in the connection to the command bus via the first cache set in response to the generated set index matching with the content stored in the first register and via the second cache set in response to the generated set index matching with the content stored in the second register. In some embodiments, the mapping circuit can be a part of or connected to the logic circuit and the state of the control register can control a state of a cache set of the plurality of cache sets. In some embodiments, the state of the control register can control the state of a cache set of the plurality of cache sets by changing a valid bit for each block of the cache set. Also, in some examples, the cache system can further include a connection to a speculation-status signal line from the processor identifying a status of a speculative execution of instructions by the processor. The connection to the speculation-status signal line can be configured to receive the status of a speculative execution, and the status of a speculative execution can indicate that a result of a speculative execution is to be accepted or rejected. 
When the execution type changes from the speculative execution to a non-speculative execution, the logic circuit can be configured to change, via the control register, the state of the first and second cache sets, if the status of speculative execution indicates that a result of speculative execution is to be accepted (e.g., when the speculative execution is to become the main thread of execution). And, when the execution type changes from the speculative execution to a non-speculative execution, the logic circuit can be configured to maintain, via the control register, the state of the first and second cache sets without changes, if the status of speculative execution indicates that a result of speculative execution is to be rejected. In some embodiments, the mapping circuit is part of or connected to the logic circuit and the state of the control register can control a state of a cache register of the plurality of cache registers via the mapping circuit. In such examples, the cache system can further include a connection to a speculation-status signal line from the processor identifying a status of a speculative execution of instructions by the processor. The connection to the speculation-status signal line can be configured to receive the status of a speculative execution, and the status of a speculative execution indicates that a result of a speculative execution is to be accepted or rejected. When the execution type changes from the speculative execution to a non-speculative execution, the logic circuit can be configured to change, via the control register, the state of the first and second registers, if the status of speculative execution indicates that a result of speculative execution is to be accepted. And, when the execution type changes from the speculative execution to a non-speculative execution, the logic circuit can be configured to maintain, via the control register, the state of the first and second registers without changes, if the status of speculative execution indicates that a result of speculative execution is to be rejected. Additionally, the present disclosure includes techniques to secure speculative instruction execution using multiple interchangeable caches that are each interchangeable as a shadow cache or a main cache. The speculative instruction execution can occur in a processor of a computing device. The processor can execute two different types of threads of instructions. One of the threads can be executed speculatively (such as with a condition that has not yet been evaluated). The data of the speculative thread can be in a logical cache acting as a shadow cache. The data of a main thread can be in a logical cache acting as a main cache. Subsequently, when the result of evaluating the condition becomes available, the processor can keep the results of executing the speculative thread when the condition requires the execution of the thread, or remove the results. The hardware circuit for the cache acting as a shadow cache can be repurposed as the hardware circuit for the main cache by changing the content of the register. Thus, there is no need to synchronize the main cache with the shadow cache if the execution of the speculative thread is required. The techniques disclosed herein also relate to the use of a unified cache structure that can be used to implement, with improved performance, a main cache and a shadow cache. 
In the unified cache structure, results of cache sets can be dynamically remapped using a set of registers to switch between being in the main cache and being in the shadow cache. When a speculative execution is successful, the cache set used with the shadow cache has the correct data and can be remapped as the corresponding cache set for the main cache. This eliminates a need to copy the data from the shadow cache to the main cache as used by other techniques using shadow and main caches. In general, a cache can be configured as multiple sets of blocks. Each block set can have multiple blocks, and each block can hold a number of bytes. A memory address can be partitioned into three segments for accessing the cache: tag, block index (which can be for addressing a set within the multiple sets), and cache block (which can be for addressing a byte in a block of bytes). For each block in a set, the cache stores not only the data from the memory, but can also store a tag of the address from which the data is loaded and a field indicating whether the content in the block is valid. Data can be retrieved from the cache using the block index (e.g., set ID) and the cache block (e.g., byte ID). The tag in the retrieved data is compared with the tag portion of the address. A matched tag means the data is cached for the address. Otherwise, it means that the data can be cached for another address that is mapped to the same location in the cache. With the techniques using multiple interchangeable caches, the physical cache sets of the interchangeable caches are not hardwired as main cache or shadow cache. A physical cache set can be used either as a main cache set or a shadow cache set. And, a set of registers can be used to specify whether the physical cache set is currently being used as a main cache set or a shadow cache set. In general, a mapping can be constructed to translate the outputs of the physical cache sets into logical outputs of the corresponding cache sets represented by the block index (e.g., set ID) and the main status or shadow status. The remapping allows any available physical cache to be used as a shadow cache. In some embodiments, the unified cache architecture can remap a shadow cache (e.g., speculative cache) to a main cache, and can remap a main cache to a speculative cache. It is to be understood that designs can include any number of caches or cache sets that can interchange between being main or speculative caches or cache sets. It is to be understood that there are no physical distinctions in the hardwiring of the main and speculative caches or cache sets. And, in some embodiments, there are no physical distinctions in the hardwiring of the logic units described herein. It is to be understood that interchangeable caches or cache sets do not have different caching capacity and structure; otherwise, such caches or cache sets would not be interchangeable. Also, the physical cache sets can dynamically be configured to be main or speculative, with no a priori determination. Also, it is to be understood that interchangeability occurs at the cache level and not at the cache block level. Interchangeability at the cache block level may allow the main cache and the shadow cache to have different capacity and, thus, not be interchangeable.
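Referring back to the general block organization described above (a block index selecting a location, a stored tag, and a valid field), a minimal lookup might be sketched as follows; the block and set sizes are placeholders chosen for the sketch, not values from this description.
    #include <array>
    #include <cstdint>

    struct CacheBlock {
        uint64_t tag = 0;                 // tag of the address the data was loaded from
        bool valid = false;               // whether the content in the block is valid
        std::array<uint8_t, 64> bytes{};  // data held for the cached block
    };

    struct SimpleCache {
        std::array<CacheBlock, 128> blocks{};  // placeholder capacity for the sketch

        // Retrieve a byte using the block index and byte offset, then compare the
        // stored tag with the tag portion of the address to decide hit or miss.
        bool read_byte(uint64_t tag, uint32_t block_index, uint32_t byte_offset,
                       uint8_t& out) const {
            const CacheBlock& b = blocks[block_index % blocks.size()];
            if (b.valid && b.tag == tag) {  // matched tag: data is cached for this address
                out = b.bytes[byte_offset % b.bytes.size()];
                return true;
            }
            return false;  // another address mapping to this location may be cached here
        }
    };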
Also, in some embodiments, when a speculation by a processor is successful, and one cache is being used as a main cache while another cache is being used as a speculative or shadow cache, the valid bits associated with cache index blocks of the main cache are all set to indicate invalid (e.g., indicating invalid by a "0" bit value). In such embodiments, the initial states of all the valid bits of the speculative cache indicate invalid but are then changed to indicate valid since the speculation was successful. In other words, the previous state of the main cache is voided, and the previous state of the speculative cache is set from invalid to valid and accessible by a main thread. In some embodiments, a PLSM register for the main cache can be changed from indicating the main cache to indicating the speculative cache. The change in the indication, by the PLSM register, of the main cache to the speculative cache can occur by the PLSM register receiving a valid bit of the main cache which indicates invalid after a successful speculation. For example, after a successful speculation and where a first cache is initially a main cache and a second cache is initially a speculative cache, an invalid indication of bit "0" can replace a least significant bit in a 3-bit PLSM register for the first cache, which can change "011" to "010" (or "3" to "2"). And, for a 3-bit PLSM register for the second cache, a valid indication of bit "1" can replace a least significant bit in the PLSM register, which can change "010" to "011" (or "2" to "3"). Thus, as shown by the example, a PLSM register, which is initially for a first cache (e.g., main cache) and initially selecting the first cache, is changed to selecting the second cache (e.g., speculative cache) after a successful speculation. And, as shown by the example, a PLSM register, which is initially for a second cache (e.g., speculative cache) and initially selecting the second cache, is changed to selecting the first cache (e.g., main cache) after a successful speculation. With such a design, a main thread of the processor can first access a cache initially designated as a main cache and then access a cache initially designated as a speculative cache after a successful speculation by the processor. And, a speculative thread of the processor can first access a cache initially designated as a speculative cache and then access a cache initially designated as a main cache after a successful speculation by the processor. FIG.1Ashows a memory address102apartitioned into a tag part104a, a block index part106a, and a block offset part108a. The execution type110acan be combined with the parts of the memory addresses to control cache operations in accordance with some embodiments of the present disclosure. The total number of bits used to control the addressing in a cache system according to some embodiments disclosed herein is A bits. And, the sum of the bits for the parts104a,106aand108aand the execution type110aequals the A bits. Tag part104ais K bits, the block index part106ais L bits, the block offset part108ais M bits, and the execution type110ais T bits (one or more bits). For example, data of all memory addresses having the same block index part106aand block offset part108acan be stored in the same physical location in a cache for a given execution type. 
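A minimal sketch of the PLSM register update described above, where the least significant bit of a 3-bit PLSM value is replaced by the valid bit supplied for the corresponding cache, so "011" becomes "010" for the first cache and "010" becomes "011" for the second cache after a successful speculation. The 3-bit width follows the example in the text; the function and variable names are assumptions for illustration only.

```c
#include <stdint.h>
#include <stdio.h>

/* Replace the least significant bit of a 3-bit PLSM value with the valid
 * bit received from the associated cache; the upper two bits are kept. */
static uint8_t plsm_update(uint8_t plsm, uint8_t valid_bit)
{
    return (uint8_t)((plsm & 0x6u) | (valid_bit & 0x1u));
}

int main(void)
{
    uint8_t plsm_first  = 0x3; /* 0b011: initially selects the first (main) cache       */
    uint8_t plsm_second = 0x2; /* 0b010: initially selects the second (speculative) cache */

    /* After a successful speculation: the main cache's blocks become
     * invalid (0) and the speculative cache's blocks become valid (1). */
    plsm_first  = plsm_update(plsm_first, 0);  /* 0b011 -> 0b010 ("3" -> "2") */
    plsm_second = plsm_update(plsm_second, 1); /* 0b010 -> 0b011 ("2" -> "3") */

    printf("first: %u, second: %u\n",
           (unsigned)plsm_first, (unsigned)plsm_second); /* prints 2, 3 */
    return 0;
}
```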
When the data at the memory address102ais stored in the cache, tag part104ais also stored for the block containing the memory address to identify which of the addresses having the same block index part106aand block offset part108ais currently being cached at that location in the cache. The data at a memory address can be cached in different locations in a unified cache structure for different types of executions. For example, the data can be cached in a main cache during non-speculative execution; and subsequently cached in a shadow cache during speculative execution. Execution type110acan be combined with the tag part104ato select from caches that can be dynamically configured for use in main and speculative executions without restriction. There can be many different ways to implement the use of the combination of execution type110aand tag part104ato make the selection. For example, logic circuit206depicted inFIGS.2and4can use the execution type110aand/or the tag part104ato make the selection. In a relatively simple implementation, the execution type110acan be combined with the tag part104ato form an extended tag in determining whether a cache location contains the data for the memory address102aand for the current type of execution of instructions. For example, a cache system can use the tag part104ato select a cache location without distinction of execution types; and when the tag part104ais combined with the execution type110ato form an extended tag, the extended tag can be used in a similar way to select a cache location in executions that have different types (e.g., speculative execution and non-speculative execution), such that the techniques of shadow cache can be implemented to enhance security. Also, since the information about the execution type associated with cached data is shared among many cache locations (e.g., in a cache set, or in a cache having multiple cache sets), it is not necessary to store the execution type for individual locations; and a selection mechanism (e.g., a switch, a filter, or a multiplexor such as a data multiplexor) can be used to implement the selection according to the execution type. Alternatively, the physical caches or physical cache sets used for different types of executions can be remapped to logical caches pre-associated with the different types of executions respectively. Thus, the use of the logical caches can be selected according to the execution type110a. FIG.1Bshows another way to partition a memory address102binto parts to control cache operations. The memory address102bis partitioned into a tag part104b, a cache set index part112b, a block index part106b, and a block offset part108b. The total number of bits of the memory address102bis A bits. And, the sum of the bits for the four parts equals the A bits of the address102b. Tag part104bis K bits, the block index part106bis L bits, the block offset part108bis M bits, and the cache set index part112bis S bits. Thus, for address102b, its A bits=K bits+L bits+M bits+S bits. The partition of a memory address102baccording toFIG.1Ballows the implementation of set associativity in caching data. For example, a plurality of cache sets can be configured in a cache, where each cache set can be addressed using cache set index112b. A data set associated with the same cache set index can be cached in a same cache set. The tag part104bof a data block cached in the cache set can be stored in the cache in association with the data block. 
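As a software illustration of the preceding discussion, the sketch below splits an address according to the FIG.1B layout and forms an extended tag by combining a one-bit execution type with the tag, in the spirit of the extended tag described for FIG.1A. The 32-bit address size, the field widths, and the names are illustrative assumptions rather than values taken from the disclosure.

```c
#include <stdint.h>

/* Illustrative widths for a 32-bit address in the FIG.1B layout:
 * tag (K) | cache set index (S) | block index (L) | block offset (M). */
#define OFFSET_BITS    6   /* M */
#define BLOCK_IDX_BITS 4   /* L */
#define SET_IDX_BITS   3   /* S */
#define TAG_SHIFT      (OFFSET_BITS + BLOCK_IDX_BITS + SET_IDX_BITS)

typedef enum { EXEC_NON_SPECULATIVE = 0, EXEC_SPECULATIVE = 1 } exec_type_t;

typedef struct {
    uint32_t tag;          /* K bits                               */
    uint32_t set_index;    /* S bits: selects a cache set          */
    uint32_t block_index;  /* L bits: selects a block within a set */
    uint32_t block_offset; /* M bits: selects a byte within a block*/
} addr_parts_t;

/* Split an address into the four FIG.1B parts. */
addr_parts_t split_fig1b(uint32_t addr)
{
    addr_parts_t p;
    p.block_offset = addr & ((1u << OFFSET_BITS) - 1);
    p.block_index  = (addr >> OFFSET_BITS) & ((1u << BLOCK_IDX_BITS) - 1);
    p.set_index    = (addr >> (OFFSET_BITS + BLOCK_IDX_BITS)) &
                     ((1u << SET_IDX_BITS) - 1);
    p.tag          = addr >> TAG_SHIFT;
    return p;
}

/* Extended tag: the execution-type bit is prepended to the tag, so the
 * same address selects different cached contents for different types. */
uint32_t extended_tag(uint32_t tag, exec_type_t t)
{
    return ((uint32_t)t << (32 - TAG_SHIFT)) | tag;
}
```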
When the address102bis used to retrieve data from the cache set identified using the cache set index112b, the tag part of the data block stored in the cache set can be retrieved and compared with the tag part104bto determine whether there is a match between the tag104bof the address102bof the access request and the tag stored in the cache set identified by the cache set index112band stored for the cache block identified by the block index106b. If there is a match (such as a cache hit), the cache block stored in the cache set is for the memory address102b; otherwise, the cache block stored in the cache set is for another memory address that has the same cache set index112band the same block index106bas the memory address102b, which results in a cache miss. In response to a cache miss, the cache system accesses the main memory to retrieve the data block according to the address102b. To implement shadow cache techniques, the cache set index112bcan be combined with the execution type110ato form an extended cache set index. Thus, cache sets used for different types of executions for different cache set indices can be addressed using the extended cache set index that identifies both the cache set index and the execution type. InFIG.1B, a cache set index part112bis extracted from a predetermined portion of the address102b. Data stored at memory addresses having different set indices can be cached in different cache sets of a cache to implement set associativity in caching data. A cache set of a cache can be selected using the cache set index (e.g., part112bof the address102b). Alternatively, cache set associativity can be implemented via tag104cthat includes a cache set indicator using a partition scheme illustrated inFIG.1C. Optionally, the cache set indicator is computed from tag104cand used as a cache set index to address a cache set. Alternatively, set associativity can be implemented directly via tag104csuch that a cache set storing the tag104cis selected for a cache hit; and when no cache set stores the tag104c, a cache miss is determined. Alternatively, an address102dcan be partitioned in a way as illustrated inFIG.1Dfor cache operations, where tag part104dincludes a cache set index112d, and where the cache sets are not explicitly and separately addressed using a cache set index. For example, to implement shadow cache techniques, the combination of execution type110eand tag104e(depicted inFIG.1E) with an embedded cache set indicator can be used to select a cache set that is for the correct execution type and that stores the same tag104efor a cache hit. When no cache set has a matching execution type and stores the same tag104e, a cache miss is determined. Also, FIG.1Cdepicts another way to partition a memory address102cinto parts to control cache operations. The memory address102cis partitioned into a tag part104chaving a cache set indicator, a block index part106c, and a block offset part108c. The total number of bits of the memory address102cis A bits. And, the sum of the bits for the three parts equals the A bits of the address102c. Tag part104cis K bits, the block index part106cis L bits, and the block offset part108cis M bits. Thus, for address102c, its A bits=K bits+L bits+M bits. As mentioned, the partition of a memory address102caccording toFIG.1Callows the implementation of set associativity in caching data. Also, FIG.1Ddepicts another way to partition a memory address102dinto parts to control cache operations. 
The memory address102dis partitioned into a tag part104dhaving a cache set index112d, a block index part106d, and a block offset part108d. The total number of bits of the memory address102dis A bits. And, the sum of the bits for the three parts equals the A bits of the address102d. Tag part104dis K bits, the block index part106dis L bits, and the block offset part108dis M bits. Thus, for address102d, its A bits=K bits+L bits+M bits. As mentioned, the partition of a memory address102daccording toFIG.1Dallows the implementation of set associativity in caching data. Also, FIG.1Edepicts another way to partition a memory address102einto parts to control cache operations.FIG.1Eshows a memory address102epartitioned into a tag part104ehaving a cache set indicator, a block index part106e, and a block offset part108e. The execution type110ecan be combined with the parts of the memory addresses to control cache operations in accordance with some embodiments of the present disclosure. The total number of bits used to control the addressing in a cache system according to some embodiments disclosed herein is A bits. And, the sum of the bits for the parts104e,106eand108eand the execution type110eequals the A bits. Tag part104eis K bits, the block index part106eis L bits, the block offset part108eis M bits, and the execution type110eis T bits (one or more bits). FIGS.2,3A, and3Bshow example aspects of example computing devices, each computing device including a cache system having caches interchangeable for first type and second type executions (e.g., for implementation of shadow cache techniques in enhancing security), in accordance with some embodiments of the present disclosure. FIG.2specifically shows aspects of an example computing device that includes a cache system200having multiple caches (e.g., see caches202a,202b, and202c). The example computing device is also shown having a processor201and a memory system203. The cache system200is configured to be coupled between the processor201and the memory system203. The cache system200is shown including a connection204ato a command bus205acoupled between the cache system and the processor201. The cache system200is shown including a connection204bto an address bus205bcoupled between the cache system and the processor201. Addresses102a,102b,102c,102d, and102edepicted inFIGS.1A,1B,1C,1D, and1E, respectively, can each be communicated via the address bus205bdepending on the implementation of the cache system200. The cache system200is also shown including a connection204cto a data bus205ccoupled between the cache system and the processor201. The cache system200is also shown including a connection204dto an execution-type signal line205dfrom the processor201identifying an execution type. Although not shown inFIG.2, the cache system200can include a configurable data bit. The configurable data bit can be included in or be data312shown in a first state inFIG.3Aand can be included in or be data314shown in a second state inFIG.3B. Memory access requests from the processor and memory use by the processor can be controlled through the command bus205a, the address bus205b, and the data bus205c. In some embodiments, the cache system200can include a first cache (e.g., see cache202a) and a second cache (e.g., see cache202b). In such embodiments, as shown inFIG.2, the cache system200can include a logic circuit206coupled to the processor201. 
Also, in such embodiments, the logic circuit206can be configured to control the first cache (e.g., see cache202a) and the second cache (e.g., see cache202b) based on the configurable data bit. When the configurable data bit is in a first state (e.g., see data312depicted inFIG.3A), the logic circuit206can be configured to implement commands received from the command bus205afor accessing the memory system203via the first cache, when the execution type is a first type. Also, when the configurable data bit is in a first state (e.g., see data312depicted inFIG.3A), the logic circuit206can be configured to implement commands received from the command bus205afor accessing the memory system203via the second cache, when the execution type is a second type. When the configurable data bit is in a second state (e.g., see data314depicted inFIG.3B), the logic circuit206can be configured to implement commands received from the command bus205afor accessing the memory system203via the second cache, when the execution type is the first type. Also, when the configurable data bit is in a second state (e.g., see data314depicted inFIG.3B), the logic circuit206can be configured to implement commands received from the command bus205afor accessing the memory system203via the first cache, when the execution type is the second type. In some embodiments, when the execution type changes from the second type to the first type, the logic circuit206is configured to toggle the configurable data bit. Also, as shown inFIG.2, the cache system200further includes a connection208ato a second command bus209acoupled between the cache system and the memory system203. The cache system200also includes a connection208bto a second address bus209bcoupled between the cache system and the memory system203. The cache system200also includes a connection208cto a second data bus209ccoupled between the cache system and the memory system203. When the configurable data bit is in a first state, the logic circuit206is configured to provide commands to the second command bus209afor accessing the memory system203via the first cache, when the execution type is a first type (such as a non-speculative type). When the configurable data bit is in a first state, the logic circuit206is also configured to provide commands to the second command bus209afor accessing the memory system via the second cache, when the execution type is a second type (such as a speculative type). When the configurable data bit is in a second state, the logic circuit206is configured to provide commands to the second command bus209afor accessing the memory system203via the second cache, when the execution type is the first type. Also, when the configurable data bit is in a second state, the logic circuit206is configured to provide commands to the second command bus209afor accessing the memory system203via the first cache, when the execution type is the second type. In some embodiments, the connection204ato the command bus205ais configured to receive a read command or a write command from the processor201for accessing the memory system203. Also, the connection204bto the address bus205bcan be configured to receive a memory address from the processor201for accessing the memory system203for the read command or the write command. Also, the connection204cto the data bus205ccan be configured to communicate data to the processor201for the processor to read the data for the read command. 
And, the connection204cto the data bus205ccan also be configured to receive data from the processor201to be written in the memory system203for the write command. Also, the connection204dto the execution-type signal line205dcan be configured to receive an identification of the execution type from the processor201(such as an identification of a non-speculative or speculative type of execution performed by the processor). In some embodiments, the logic circuit206can be configured to select the first cache for a memory access request from the processor201(e.g., one of the commands received from the command bus for accessing the memory system), when the configurable data bit is in the first state and the connection204dto the execution-type signal line205dreceives an indication of the first type (e.g., the non-speculative type). Also, the logic circuit206can be configured to select the second cache for a memory access request from the processor201, when the configurable data bit is in the first state and the connection204dto the execution-type signal line205dreceives an indication of the second type (e.g., the speculative type). Also, the logic circuit206can be configured to select the second cache for a memory access request from the processor201, when the configurable data bit is in the second state and the connection204dto the execution-type signal line205dreceives an indication of the first type. And, the logic circuit206can be configured to select the first cache for a memory access request from the processor201, when the configurable data bit is in the second state and the connection204dto the execution-type signal line205dreceives an indication of the second type. FIG.3Aspecifically shows aspects of an example computing device that includes a cache system (e.g., cache system200) having multiple caches (e.g., see caches302and304). The example computing device is also shown having a register306storing data312that can include the configurable bit. The register306can be connected to or be a part of the logic circuit206. InFIG.3A, it is shown that during a first time instance ("Time Instance X"), the register306stores data312which can be the configurable bit in a first state. The content308areceived from the first cache (e.g., cache302) during the first time instance includes content for a first type of execution. And, the content310areceived from the second cache (e.g., cache304) during the first time instance includes content for a second type of execution. FIG.3Bspecifically shows aspects of an example computing device that includes a cache system (e.g., cache system200) having multiple caches (e.g., see caches302and304). The example computing device is also shown having a register306storing data314that can include the configurable bit. InFIG.3B, it is shown that during a second time instance ("Time Instance Y"), the register306stores data314which can be the configurable bit in a second state. The content308breceived from the first cache (e.g., cache302) during the second time instance includes content for the second type of execution. And, the content310breceived from the second cache (e.g., cache304) during the second time instance includes content for the first type of execution. The illustrated lines320connecting the register306to the caches302and304can be a part of the logic circuit206. In some embodiments, instead of using a configurable bit to control use of the caches of the cache system200, another form of data may be used to control use of the caches of the cache system. 
For instance, the logic circuit206can be configured to control the first cache (e.g., see cache202a) and the second cache (e.g., see cache202b) based on different data being stored in the register306that is not the configurable bit. In such an example, when the register306stores first data or is in a first state, the logic circuit can be configured to: implement commands received from the command bus for accessing the memory system via the first cache, when the execution type is a first type; and implement commands received from the command bus for accessing the memory system via the second cache, when the execution type is a second type. And, when the register306stores second data or is in a second state, the logic circuit can be configured to: implement commands received from the command bus for accessing the memory system via the second cache, when the execution type is the first type; and implement commands received from the command bus for accessing the memory system via the first cache, when the execution type is the second type. FIGS.4,5A, and5Bshow example aspects of example computing devices, each computing device including a cache system having interchangeable caches for main or normal type execution (e.g., non-speculative execution) and speculative execution, in accordance with some embodiments of the present disclosure. FIG.4specifically shows aspects of an example computing device that includes a cache system400having multiple caches (e.g., see caches202a,202b, and202cdepicted inFIG.4). InFIG.4, the example computing device is also shown having a processor401and memory system203. As shown byFIG.4, cache system400is similar to cache system200except that the cache system400also includes a connection402to a speculation-status signal line404from the processor401identifying a status of a speculative execution of instructions by the processor401. Similarly, the cache system400is shown including connection204ato command bus205acoupled between the cache system and the processor401. The system400also includes connection204bto an address bus205bcoupled between the cache system and the processor401. Addresses102a,102b,102c,102d, and102edepicted inFIGS.1A,1B,1C,1D, and1E, respectively, can each be communicated via the address bus205bdepending on the implementation of the cache system400. The system400also includes a connection204cto a data bus205ccoupled between the cache system and the processor401. It also includes a connection204dto an execution-type signal line205dfrom the processor401identifying a non-speculative execution type or a speculative execution type. Although not shown inFIG.4, the cache system400can also include the configurable data bit. The configurable data bit can be included in or be data312shown in a first state inFIG.5Aand can be included in or be data314shown in a second state inFIG.5B. In some embodiments, the cache system400can include a first cache (e.g., see cache202a) and a second cache (e.g., see cache202b). In such embodiments, as shown inFIG.4, the cache system400can include a logic circuit406coupled to the processor401. Also, in such embodiments, the logic circuit406can be configured to control the first cache (e.g., see cache202a) and the second cache (e.g., see cache202b) based on the configurable data bit. 
When the configurable data bit is in a first state (e.g., see data312depicted inFIG.5A), the logic circuit406can be configured to: implement commands received from the command bus205afor accessing the memory system203via the first cache, when the execution type is a non-speculative type; and implement commands received from the command bus205afor accessing the memory system203via the second cache, when the execution type is a speculative type. When the configurable data bit is in a second state (e.g., see data314depicted inFIG.5B), the logic circuit406can be configured to implement commands received from the command bus205afor accessing the memory system203via the second cache, when the execution type is the non-speculative type. Also, when the configurable data bit is in a second state (e.g., see data314depicted inFIG.5B), the logic circuit406can be configured to implement commands received from the command bus205afor accessing the memory system203via the first cache, when the execution type is the speculative type. In some embodiments, such as shown inFIG.4, the first type can be configured to indicate non-speculative execution of instructions by the processor. In such examples, the second type can be configured to indicate speculative execution of instructions by the processor. In such embodiments, the cache system400can further include connection402to speculation-status signal line404from the processor401identifying a status of a speculative execution of instructions by the processor. The connection402to the speculation-status signal line404can be configured to receive the status of a speculative execution, and the status of a speculative execution can indicate that a result of a speculative execution is to be accepted or rejected. Also, when the execution type changes from the second type or the speculative type to the first type or non-speculative type, the logic circuit406of system400can be configured to toggle the configurable data bit, if the status of speculative execution indicates that a result of speculative execution is to be accepted. Further, when the execution type changes from the second type or the speculative type to the first type or non-speculative type, the logic circuit406of system400can be configured to maintain the configurable data bit without changes, if the status of speculative execution indicates that a result of speculative execution is to be rejected. FIG.5Aspecifically shows aspects of an example computing device that includes a cache system (e.g., cache system400) having multiple caches (e.g., see caches302and304). The example computing device is also shown having a register306storing data312that can include the configurable bit. InFIG.5A, it is shown that during a first time instance ("Time Instance X"), the register306stores data312which can be the configurable bit in a first state. This is similar toFIG.3A, except that the content502areceived from a first cache (e.g., cache302) during the first time instance includes content for a non-speculative execution. And, the content504areceived from a second cache (e.g., cache304) during the first time instance includes content for a speculative execution. FIG.5Bspecifically shows aspects of an example computing device that includes a cache system (e.g., cache system400) having multiple caches (e.g., see caches302and304). The example computing device is also shown having a register306storing data314that can include the configurable bit. 
InFIG.5B, it is shown that during a second time instance ("Time Instance Y"), the register306stores data314which can be the configurable bit in a second state. This is similar toFIG.3B, except that the content502breceived from the first cache (e.g., cache302) during the second time instance includes content for the speculative execution. And, the content504breceived from the second cache (e.g., cache304) during the second time instance includes content for the non-speculative execution. Also, similarly, inFIGS.5A and5B, the illustrated lines320connecting the register306to the caches302and304can be a part of the logic circuit406of the cache system400. In some embodiments, instead of using a configurable bit to control use of the caches of the cache system400, another form of data may be used to control use of the caches of the cache system400. For instance, the logic circuit406in the system400can be configured to control the first cache (e.g., see cache202a) and the second cache (e.g., see cache202b) based on different data being stored in the register306that is not the configurable bit. In such an example, when the register306stores first data or is in a first state, the logic circuit can be configured to: implement commands received from the command bus for accessing the memory system via the first cache, when the execution type is a non-speculative type; and implement commands received from the command bus for accessing the memory system via the second cache, when the execution type is a speculative type. And, when the register306stores second data or is in a second state, the logic circuit can be configured to: implement commands received from the command bus for accessing the memory system via the second cache, when the execution type is the non-speculative type; and implement commands received from the command bus for accessing the memory system via the first cache, when the execution type is the speculative type. Some embodiments can include a cache system and the cache system can include a plurality of caches including a first cache and a second cache. The system can also include a connection to a command bus, configured to receive a read command or a write command from a processor connected to the cache system, for reading from or writing to a memory system. The system can also include a connection to an address bus, configured to receive a memory address from the processor for accessing the memory system for the read command or the write command. The system can also include a connection to a data bus, configured to: communicate data to the processor for the processor to read the data for the read command; and receive data from the processor to be written in the memory system for the write command. In such examples, the memory access requests from the processor and memory use by the processor can be defined by the command bus, the address bus, and the data bus. The system can also include an execution-type signal line, configured to receive an identification of execution type from the processor. The execution type is either a first execution type or a second execution type (e.g., a normal or non-speculative execution or a speculative execution). The system can also include a configurable data bit configured to be set to a first state (e.g., "0") or a second state (e.g., "1") to control selection of the first cache and the second cache for use by the processor. 
The system can also include a logic circuit, configured to select the first cache for use by the processor, when the configurable data bit is in a first state and the execution-type signal line receives an indication of the first type of execution. The logic circuit can also be configured to select the second cache for use by the processor, when the configurable data bit is in the first state and the execution-type signal line receives an indication of the second type of execution. The logic circuit can also be configured to select the second cache for use by the processor, when the configurable data bit is in the second state and the execution-type signal line receives an indication of the first type of execution. The logic circuit can also be configured to select the first cache for use by the processor, when the configurable data bit is in the second state and the execution-type signal line receives an indication of the second type of execution. In some embodiments, the first type of execution is a speculative execution of instructions by the processor, and the second type of execution is a non-speculative execution of instructions by the processor (e.g., a normal or main execution). In such examples, the system can further include a connection to a speculation-status signal line that is configured to receive speculation status from the processor. The speculation status can be either an acceptance or a rejection of a condition with nested instructions that are executed initially by a speculative execution of the processor and subsequently by a normal execution of the processor when the speculation status is the acceptance of the condition. In some embodiments, the logic circuit is configured to switch the configurable data bit from the first state to the second state, when the speculation status received by the speculation-status signal line is the acceptance of the condition. The logic circuit can also be configured to maintain the state of the configurable data bit, when the speculation status received by the speculation-status signal line is the rejection of the condition. In some embodiments, the logic circuit is configured to select the second cache for use as identified by the first state of the configurable data bit and restrict the first cache from use as identified by the first state of the configurable data bit, when the signal received by the execution-type signal line changes from an indication of a normal execution to an indication of a speculative execution. At this change, a speculation status can be ignored/bypassed by the logic circuit because the processor, being in speculative execution, does not know whether the instructions performed under the speculative execution should be executed or not by the main execution. The logic circuit can also be configured to maintain the first state of the configurable data bit and select the first cache for a memory access request when the execution-type signal line receives an indication of a normal execution, when the signal received by the execution-type signal line changes from the indication of the speculative execution to the indication of the normal execution and when the speculation status received by the speculation-status signal line is the rejection of the condition. 
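The selection rule described above, first state plus first type selects the first cache and first state plus second type selects the second cache, with the mapping reversed in the second state, reduces to an exclusive-OR of the configurable bit and a one-bit execution type; and the toggle-on-acceptance, maintain-on-rejection behavior is a conditional flip of that bit. The following C sketch models both as software; it is an illustration of the described behavior, not the claimed circuit, and the names are assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

typedef enum { EXEC_TYPE_FIRST = 0, EXEC_TYPE_SECOND = 1 } exec_type_t;
typedef enum { CACHE_FIRST = 0, CACHE_SECOND = 1 } cache_id_t;

/* Select a cache: with the bit in the first state (0) the first type maps
 * to the first cache and the second type to the second cache; with the bit
 * in the second state (1) the mapping is reversed. */
cache_id_t select_cache(uint8_t configurable_bit, exec_type_t type)
{
    return (cache_id_t)((configurable_bit ^ (uint8_t)type) & 0x1u);
}

/* Resolve a speculation when execution returns to the non-speculative
 * type: toggle the bit if the speculative result is accepted (the former
 * shadow cache is repurposed as the main cache); otherwise maintain the
 * bit without changes. */
uint8_t resolve_speculation(uint8_t configurable_bit, bool result_accepted)
{
    return result_accepted ? (uint8_t)(configurable_bit ^ 0x1u)
                           : configurable_bit;
}
```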
In some embodiments, the logic circuit is configured to invalidate and discard the contents of the second cache, when the signal received by the execution-type signal line changes from the indication of the speculative execution to the indication of the normal execution and when the speculation status received by the speculation-status signal line is the rejection of the condition. In some embodiments, the system further includes a connection to a second command bus, configured to communicate a read command or a write command to the memory system (e.g., including main memory). The read command or the write command can be received from the processor by the cache system. The system can also include a connection to a second address bus, configured to communicate a memory address to the memory system. The memory address can be received from the processor by the cache system. The system can also include a connection to a second data bus, configured to: communicate data to the memory system to be written in the memory system; and receive data from the memory system to be communicated to the processor to be read by the processor. For instance, memory access requests to the memory system from the cache system can be defined by the second command bus, the second address bus, and the second data bus. In some embodiments, when the configurable data bit is in a first state, the logic circuit is configured to: provide commands to the second command bus for accessing the memory system via the first cache, when the execution type is a first type; and provide commands to the second command bus for accessing the memory system via the second cache, when the execution type is a second type. And, when the configurable data bit is in a second state, the logic circuit can be configured to: provide commands to the second command bus for accessing the memory system via the second cache, when the execution type is the first type; and provide commands to the second command bus for accessing the memory system via the first cache, when the execution type is the second type. Some embodiments can include a system including a processor, a memory system, and a cache system coupled between the processor and the memory system. The cache system of the system can include a plurality of caches including a first cache and a second cache. The cache system of the system can also include a connection to a command bus coupled between the cache system and the processor, a connection to an address bus coupled between the cache system and the processor, a connection to a data bus coupled between the cache system and the processor, and a connection to an execution-type signal line from the processor identifying an execution type. The cache system of the system can also include a configurable data bit and a logic circuit coupled to the processor to control the first cache and the second cache based on the configurable data bit. When the configurable data bit is in a first state, the logic circuit can be configured to: implement commands received from the command bus for accessing the memory system via the first cache, when the execution type is a first type; and implement commands received from the command bus for accessing the memory system via the second cache, when the execution type is a second type. 
And, when the configurable data bit is in a second state, the logic circuit can be configured to: implement commands received from the command bus for accessing the memory system via the second cache, when the execution type is the first type; and implement commands received from the command bus for accessing the memory system via the first cache, when the execution type is the second type. In such a system, the first type can be configured to indicate non-speculative execution of instructions by the processor, and the second type can be configured to indicate speculative execution of instructions by the processor. Also, the cache system of the system can further include a connection to a speculation-status signal line from the processor identifying a status of a speculative execution of instructions by the processor. The connection to the speculation-status signal line can be configured to receive the status of a speculative execution, and the status of a speculative execution can indicate that a result of a speculative execution is to be accepted or rejected. When the execution type changes from the second type (speculative type) to the first type (non-speculative type), the logic circuit can be configured to toggle the configurable data bit, if the status of speculative execution indicates that a result of speculative execution is to be accepted. And, when the execution type changes from the second type (speculative type) to the first type (non-speculative type), the logic circuit can also be configured to maintain the configurable data bit without changes, if the status of speculative execution indicates that a result of speculative execution is to be rejected. FIGS.6,7A,7B,8A,8B,9A, and9Bshow example aspects of example computing devices, each computing device including a cache system having interchangeable cache sets for first type and second type executions (e.g., for implementation of shadow cache techniques in enhancing security and/or for main type and speculative type executions), in accordance with some embodiments of the present disclosure. FIG.6specifically shows aspects of an example computing device that includes a cache system600having multiple caches (e.g., see caches602a,602b, and602c), where at least one of the caches is implemented with cache set associativity. The example computing device is also shown having a processor601and a memory system603. The cache system600is configured to be coupled between the processor601and a memory system603. The cache system600is shown including a connection604ato a command bus605acoupled between the cache system and the processor601. The cache system600is shown including a connection604bto an address bus605bcoupled between the cache system and the processor601. Addresses102a,102b,102c,102d, and102edepicted inFIGS.1A,1B,1C,1D, and1E, respectively, can each be communicated via the address bus605bdepending on the implementation of the cache system600. The cache system600is also shown including a connection604cto a data bus605ccoupled between the cache system and the processor601. The cache system600is also shown including a connection604dto an execution-type signal line605dfrom the processor601identifying an execution type. The connections604a,604b,604c, and604dcan provide communicative couplings between the buses605a,605b,605c, and605dand a logic circuit606of the cache system600. Also, as shown inFIG.6, the cache system600further includes a connection608ato a second command bus609acoupled between the cache system and the memory system603. 
The cache system600also includes a connection608bto a second address bus609bcoupled between the cache system and the memory system603. The cache system600also includes a connection608cto a second data bus609ccoupled between the cache system and the memory system603. The cache system600also includes a plurality of cache sets (e.g., see cache sets610a,610b, and610c). The cache sets can include a first cache set (e.g., see cache set610a) and a second cache set (e.g., see cache set610b). Also, as shown inFIG.6, the cache system600further includes a plurality of registers (e.g., see registers612a,612b, and612c) associated with the plurality of cache sets respectively. The registers (or cache set registers) can include a first register (e.g., see register612a) associated with the first cache set (e.g., see cache set610a) and a second register (e.g., see register612b) associated with the second cache set (e.g., see cache set610b). Each one of the plurality of registers (e.g., see registers612a,612b, and612c) can be configured to store a set index. As shown inFIG.6as well asFIG.10, cache602aand cache602bto cache602c(caches 1 to N) are not fixed structures. However, it is to be understood that in some embodiments the caches can be fixed structures. Each of the depicted caches can be considered a logical grouping of cache sets and such logical grouping is shown by broken lines representing each logical cache. The cache sets610ato610c(cache sets 1 to N) can be based on the content of the registers612ato612c(registers 1 to N). Cache sets 1 to N can be a collection of cache sets within the cache system shared among cache 1, and cache 2 to cache N. Cache 1 can be a subset of the collection; cache 2 can be another non-overlapping subset. The member cache sets in each of the caches can change based on the contents in the registers 1 to N. Cache set 1 (in a conventional sense) may or may not communicate with its register 1 depending on the embodiment. Broken lines are also shown inFIGS.7A,7B,8A,8B,9A, and9Bto indicate the logical relation between the cache sets and corresponding registers inFIGS.7A,7B,8A,8B,9A, and9B. The content of register 1 determines how cache set 1 is addressed (e.g., what cache set index will cause the cache set 1 to be selected to output data). In some embodiments, there is no direct interaction between a cache set 1 and its corresponding register 1. The logic circuit606or1006interacts with both the cache set and the corresponding register depending on the embodiment. In some embodiments, the logic circuit606can be coupled to the processor601to control the plurality of cache sets (e.g., cache sets610a,610b, and610c) according to the plurality of registers (e.g., registers612a,612b, and612c). In such embodiments, the cache system600can be configured to be coupled between the processor601and a memory system603. And, when the connection604bto the address bus605breceives a memory address from the processor601, the logic circuit606can be configured to generate a set index from at least the memory address and determine whether the generated set index matches with content stored in the first register (e.g., register612a) or with content stored in the second register (e.g., register612b). 
The logic circuit606can also be configured to implement a command received in the connection604ato the command bus605avia the first cache set (e.g., cache set610a) in response to the generated set index matching with the content stored in the first register (e.g., register612a) and via the second cache set (e.g., cache set610b) in response to the generated set index matching with the content stored in the second register (e.g., register612b). In some embodiments, the cache system600can include a first cache (e.g., see cache602a) and a second cache (e.g., see cache602b). In such embodiments, as shown inFIG.6, the cache system600can include a logic circuit606coupled to the processor601. Also, in such embodiments, the logic circuit606can be configured to control the first cache (e.g., see cache602a) and the second cache (e.g., see cache602b) based on a configurable data bit and/or respective registers (e.g., see registers612a,612b, and612c). In some embodiments, in response to a determination that a data set of the memory system603associated with the memory address is not currently cached in the cache system600(such as not cached in cache602aof the system), the logic circuit606is configured to allocate the first cache set (e.g., cache set610a) for caching the data set and store the generated set index in the first register (e.g., register612a). In such embodiments and others, the cache system can include a connection to an execution-type signal line (e.g., connection604dto execution-type signal line605d) from the processor (e.g., processor601) identifying an execution type. And, in such embodiments and others, the generated set index is generated further based on a type identified by the execution-type signal line. Also, the generated set index can include a predetermined segment of bits in the memory address and a bit representing the type identified by the execution-type signal line605d. Also, when the first and second registers (e.g., registers612aand612b) are in a first state, the logic circuit606can be configured to implement commands received from the command bus605afor accessing the memory system603via the first cache set (e.g., cache set610a), when the execution type is a first type. Also, when the first and second registers (e.g., registers612aand612b) are in a first state, the logic circuit606can be configured to implement commands received from the command bus605afor accessing the memory system603via the second cache set (e.g., cache set610b), when the execution type is a second type. Furthermore, when the first and second registers (e.g., registers612aand612b) are in a second state, the logic circuit606can be configured to implement commands received from the command bus605afor accessing the memory system603via another cache set of the plurality of cache sets besides the first cache set (e.g., cache set610bor610c), when the execution type is the first type. Also, when the first and second registers (e.g., registers612aand612b) are in a second state, the logic circuit606can be configured to implement commands received from the command bus605afor accessing the memory system603via another cache set of the plurality of cache sets besides the second cache set (e.g., cache set610aor610cor another cache set not depicted inFIG.6), when the execution type is the second type. 
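A software model of the register-per-cache-set arrangement just described: a set index is generated from a slice of the address together with the execution-type bit, compared against the contents of each register, and on a miss a free physical cache set is allocated and the generated index is stored in its register. The register count, bit positions, and the simple allocation policy are illustrative assumptions, not details taken from the figures.

```c
#include <stdint.h>

#define NUM_CACHE_SETS 4
#define UNASSIGNED     0xFFFFFFFFu
#define SET_IDX_SHIFT  14          /* illustrative position of the set index */
#define SET_IDX_BITS   3           /* illustrative width of the set index    */

/* One register per physical cache set, each holding the set index the set
 * is currently serving (cf. registers 612a, 612b, 612c). */
static uint32_t set_index_reg[NUM_CACHE_SETS] = {
    UNASSIGNED, UNASSIGNED, UNASSIGNED, UNASSIGNED
};

/* Generate a set index from the address plus a one-bit execution type. */
uint32_t generate_set_index(uint32_t addr, uint32_t exec_type)
{
    uint32_t raw = (addr >> SET_IDX_SHIFT) & ((1u << SET_IDX_BITS) - 1);
    return (exec_type << SET_IDX_BITS) | raw;
}

/* Return the physical cache set whose register matches the generated
 * index; on a miss, allocate a free set for caching and record the index. */
int match_or_allocate(uint32_t generated_set_index)
{
    for (int i = 0; i < NUM_CACHE_SETS; i++) {
        if (set_index_reg[i] == generated_set_index) {
            return i;                      /* command implemented via set i */
        }
    }
    for (int i = 0; i < NUM_CACHE_SETS; i++) {
        if (set_index_reg[i] == UNASSIGNED) {
            set_index_reg[i] = generated_set_index;
            return i;                      /* newly allocated cache set     */
        }
    }
    return -1;   /* no free set; a real design would evict or replace one */
}
```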
In some embodiments, each one of the plurality of registers (e.g., see registers612a,612b, and612c) can be configured to store a set index, and when the execution type changes from the second type to the first type (e.g., from the speculative type to the non-speculative type of execution), the logic circuit606can be configured to change the content stored in the first register (e.g., register612a) and the content stored in the second register (e.g., register612b). Examples of the change of the content stored in the first register (e.g., register612a) and the content stored in the second register (e.g., register612b) are illustrated inFIGS.7A and7B,FIGS.8A and8B, andFIGS.9A and9B. Each ofFIGS.7A,7B,8A,8B,9A, and9B specifically shows aspects of an example computing device that includes a cache system having multiple cache sets (e.g., see cache sets702,704, and706), where the cache sets are implemented via cache set associativity. The respective cache system for each of these figures is also shown having a plurality of registers associated with the cache sets respectively. The plurality of registers includes at least register712, register714, and register716. The plurality of registers includes at least one additional register which is not shown in the figures. Register712is shown being associated with or connected to cache set702, register714is shown being associated with or connected to cache set704, and register716is shown being associated with or connected to cache set706. Although not shown inFIGS.7A,7B,8A,8B,9A, and9B, each of the respective cache systems can also include a connection to a command bus coupled between the cache system and a processor, a connection to an address bus coupled between the cache system and the processor, and a connection to a data bus coupled between the cache system and the processor. Each of the cache systems can also include a logic circuit coupled to the processor to control the plurality of cache sets (e.g., cache sets702,704, and706) according to the plurality of registers (e.g., registers712,714, and716). As illustrated byFIGS.7A,7B,8A,8B,9A, and9B, when a connection to an address bus of a cache system receives a memory address (e.g., see memory address102b,102c, or102d) from a processor, a logic circuit of the cache system can be configured to generate a set index (e.g., see set index722,724,726, or728) from the memory address (e.g., see set index generation730,732,830,832,930, or932). Specifically, as shown inFIG.7A, at least the registers712,714, and716are configured in a first state. When a connection to an address bus of the cache system receives the memory address102bfrom a processor, a logic circuit of the cache system generates set index722,724or726according to at least set index generation730a,730b, or730crespectively and an instance of cache set index112bof address102b. The set index generation730a,730b, or730ccan be for storing the set index722,724or726in register712,714, or716respectively. The set index generation730a,730b, or730ccan also be for usage of the recently generated set index in a comparison of the recently generated set index to content already stored in register712,714, or716respectively. The set index generations730a,730b, and730coccur when the registers are configured in the first state. The configuration of the first state can be through set index generation and storage. Specifically, as shown inFIG.7B, at least the registers712,714, and716are configured in a second state. 
When the connection to the address bus of the cache system receives the memory address102bfrom the processor, the logic circuit of the cache system generates set index726,722or728according to at least set index generation732a,732b, or732crespectively and an instance of cache set index112bof address102b. The set index generation732a,732b, or732ccan be for storing the set index726,722or728in register712,714, or716respectively. The set index generation732a,732b, or732ccan also be for usage of the recently generated set index in a comparison of the recently generated set index to content already stored in register712,714, or716respectively. The set index generations732a,732b, and732coccur when the registers are configured in the second state. The configuration of the second state can be through set index generation and storage. Specifically, as shown inFIG.8A, at least the registers712,714, and716are configured in a first state. When a connection to an address bus of the cache system receives the memory address102cfrom a processor, a logic circuit of the cache system generates set index722,724or726according to at least set index generation830a,830b, or830crespectively and an instance of tag104cof address102chaving a cache set indicator. The set index generation830a,830b, or830ccan be for storing the set index722,724or726in register712,714, or716respectively. The set index generation830a,830b, or830ccan also be for usage of the recently generated set index in a comparison of the recently generated set index to content already stored in register712,714, or716respectively. The set index generations830a,830b, and830coccur when the registers are configured in the first state. Specifically, as shown inFIG.8B, at least the registers712,714, and716are configured in a second state. When the connection to the address bus of the cache system receives the memory address102cfrom the processor, the logic circuit of the cache system generates set index726,722or728according to at least set index generation832a,832b, or832crespectively and an instance of tag104cof address102chaving a cache set indicator. The set index generation832a,832b, or832ccan be for storing the set index726,722or728in register712,714, or716respectively. The set index generation832a,832b, or832ccan also be for usage of the recently generated set index in a comparison of the recently generated set index to content already stored in register712,714, or716respectively. The set index generations832a,832b, and832coccur when the registers are configured in the second state. Specifically, as shown inFIG.9A, at least the registers712,714, and716are configured in a first state. When a connection to an address bus of the cache system receives the memory address102dfrom a processor, a logic circuit of the cache system generates set index722,724or726according to at least set index generation930a,930b, or930crespectively and an instance of cache set index112din tag104dof address102d. The set index generation930a,930b, or930ccan be for storing the set index722,724or726in register712,714, or716respectively. The set index generation930a,930b, or930ccan also be for usage of the recently generated set index in a comparison of the recently generated set index to content already stored in register712,714, or716respectively. The set index generations930a,930b, and930coccur when the registers are configured in the first state. Specifically, as shown inFIG.9B, at least the registers712,714, and716are configured in a second state. 
When the connection to the address bus of the cache system receives the memory address102dfrom the processor, the logic circuit of the cache system generates set index726,722or728according to at least set index generation932a,932b, or932crespectively and an instance of cache set index112din tag104dof address102d. The set index generation932a,932b, or932ccan be for storing the set index726,722or728in register712,714, or716respectively. The set index generation932a,932b, or932ccan also be for usage of the recently generated set index in a comparison of the recently generated set index to content already stored in register712,714, or716respectively. The set index generations932a,932b, and932coccur when the registers are configured in the second state. In some embodiments implemented through the cache system illustrated inFIGS.7A and7B,8A and8B, or9A and9B, when the connection to the address bus receives a memory address from the processor, the logic circuit can be configured to determine whether the generated set index matches with content stored in one of the registers (e.g., registers712,714, and716). The content stored in the register can be from a prior generation of a set index and storage of the set index in the register. Also, in some embodiments implemented through the cache system illustrated inFIGS.7A and7B,8A and8B, or9A and9B, the logic circuit can be configured to implement a command received in the connection to the command bus via a first cache set in response to the generated set index matching with the content stored in an associated first register and via a second cache set in response to the generated set index matching with the content stored in an associated second register. Also, in response to a determination that a data set of the memory system associated with the memory address is not currently cached in the cache system, the logic circuit can be configured to allocate the first cache set for caching the data set and store the generated set index in the first register. The generated set index can include a predetermined segment of bits in the memory address. Also, in such embodiments, when the first and second registers are in a first state, the logic circuit can be configured to: implement commands received from the command bus for accessing the memory system via the first cache set, when an execution type of a processor is a first type; and implement commands received from the command bus for accessing the memory system via the second cache set, when the execution type is a second type. Also, when the first and second registers are in a second state, the logic circuit can be configured to: implement commands received from the command bus for accessing the memory system via another cache set of the plurality of cache sets besides the first cache set, when the execution type is the first type; and implement commands received from the command bus for accessing the memory system via another cache set of the plurality of cache sets besides the second cache set, when the execution type is the second type. In such an example, each one of the plurality of registers can be configured to store a set index, and when the execution type changes from the second type to the first type, the logic circuit can be configured to change the content stored in the first register and the content stored in the second register. 
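The three address formats used inFIGS.7A-7B,8A-8B, and9A-9Bdiffer mainly in where the set-selecting bits come from: a separate cache set index part (FIG.1B), a cache set indicator carried in the tag (FIG.1C), or a cache set index embedded in the tag (FIG.1D). The sketch below shows one plausible extraction for each case; the bit positions are assumptions, and the tag fold in the FIG.1C case merely stands in for an unspecified indicator computation.

```c
#include <stdint.h>

/* Illustrative bit positions and widths; actual placements depend on the
 * address layout chosen for the cache system. */
#define OFFSET_BITS    6
#define BLOCK_IDX_BITS 4
#define SET_IDX_BITS   3
#define TAG_SHIFT      (OFFSET_BITS + BLOCK_IDX_BITS)

/* FIG.1B style: the set index is its own field above the block index. */
uint32_t set_index_from_field(uint32_t addr)
{
    return (addr >> TAG_SHIFT) & ((1u << SET_IDX_BITS) - 1);
}

/* FIG.1C style: a cache set indicator is computed from the tag; a simple
 * XOR fold of the tag is used here purely as a placeholder computation. */
uint32_t set_index_from_indicator(uint32_t addr)
{
    uint32_t tag = addr >> TAG_SHIFT;
    return (tag ^ (tag >> SET_IDX_BITS)) & ((1u << SET_IDX_BITS) - 1);
}

/* FIG.1D style: the set index is a fixed slice embedded within the tag. */
uint32_t set_index_embedded_in_tag(uint32_t addr)
{
    uint32_t tag = addr >> TAG_SHIFT;
    return tag & ((1u << SET_IDX_BITS) - 1);
}
```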
FIG.10specifically shows aspects of an example computing device that includes a cache system1000having multiple caches (e.g., see caches602a,602b, and602cdepicted inFIG.10), where at least one of the caches is implemented with cache set associativity (e.g., see cache sets610a,610b, and610c). InFIG.10, the example computing device is also shown having a processor1001and memory system603. As shown byFIG.10, cache system1000is similar to cache system600except that the cache system1000also includes a connection1002to a speculation-status signal line1004from the processor1001identifying a status of a speculative execution of instructions by the processor1001. Similarly, the cache system1000is shown including connection604ato command bus605acoupled between the cache system and the processor1001. The system1000also includes connection604bto an address bus605bcoupled between the cache system and the processor1001. Addresses102a,102b,102c,102d, and102edepicted inFIGS.1A,1B,1C,1D, and1E, respectively, can each be communicated via the address bus605bdepending on the implementation of the cache system1000. The system1000also includes a connection604cto a data bus605ccoupled between the cache system and the processor1001. It also includes a connection604dto an execution-type signal line605dfrom the processor1001identifying a non-speculative execution type or a speculative execution type. Similarly, the cache system1000is also shown including logic circuit1006which can be similar to logic circuit606except that its circuitry is also coupled to the connection1002to the speculation-status signal line1004. In some embodiments, the logic circuit1006can be coupled to the processor1001to control the plurality of cache sets (e.g., cache sets610a,610b, and610c) according to the plurality of registers (e.g., registers612a,612b, and612c). Each one of the plurality of registers (e.g., see registers612a,612b, and612c) can be configured to store a set index. In such embodiments, the cache system1000can be configured to be coupled between the processor1001and a memory system603. And, when the connection604bto the address bus605breceives a memory address from the processor1001, the logic circuit1006can be configured to generate a set index from at least the memory address and determine whether the generated set index matches with content stored in the first register (e.g., register612a) or with content stored in the second register (e.g., register612b). The logic circuit1006can also be configured to implement a command received in the connection604ato the command bus605avia the first cache set (e.g., cache set610a) in response to the generated set index matching with the content stored in the first register (e.g., register612a) and via the second cache set (e.g., cache set610b) in response to the generated set index matching with the content stored in the second register (e.g., register612b). Also, the cache system1000is shown including connections608a,608b, and608c, which are similar to the corresponding connections shown inFIG.6. With respect to the connections608a,608b, and608cdepicted inFIGS.6and10, when the first and second registers (e.g., registers612aand612b) are in a first state, the logic circuit606or1006can be configured to provide commands to the second command bus609afor accessing the memory system603via the first cache set (e.g., cache set610a), when the execution type is a first type (such as a non-speculative type). 
Also, when the first and second registers (e.g., registers612aand612b) are in the first state, the logic circuit606or1006can be configured to provide commands to the second command bus609afor accessing the memory system via the second cache set (e.g., cache set610b), when the execution type is a second type (such as a speculative type). Further, when the first and second registers (e.g., registers612aand612b) are in a second state, the logic circuit606or1006can be configured to provide commands to the second command bus609afor accessing the memory system603via a cache set other than the first cache set (e.g., cache set610bor610cor another cache set not depicted inFIG.6or10), when the execution type is the first type. Also, when the first and second registers (e.g., registers612aand612b) are in a second state, the logic circuit606or1006can be configured to provide commands to the second command bus609afor accessing the memory system603via a cache set other than the second cache set (e.g., cache set610aor610cor another cache set not depicted inFIG.6or10), when the execution type is the second type. In some embodiments, such as shown inFIG.10, the first type can be configured to indicate non-speculative execution of instructions by the processor1001; and the second type can be configured to indicate speculative execution of instructions by the processor. Shown inFIG.10, the cache system1000further includes connection1002to speculation-status signal line1004from the processor1001identifying a status of a speculative execution of instructions by the processor. The connection1002to the speculation-status signal line1004can be configured to receive the status of a speculative execution, and the status of a speculative execution can indicate that a result of a speculative execution is to be accepted or rejected. In such embodiments, each one of the plurality of registers (e.g., registers612a,612b, and612c) can be configured to store a set index, and when the execution type changes from the speculative execution type to the non-speculative type, the logic circuit1006can be configured to change the content stored in the first register (e.g., register612a) and the content stored in the second register (e.g., register612b), if the status of speculative type of execution indicates that a result of the speculative execution is to be accepted. And, when the execution type changes from the speculative type to the non-speculative type, the logic circuit1006can be configured to maintain the content stored in the first register and the content stored in the second register without changes, if the status of speculative type of execution indicates that a result of the speculative type of execution is to be rejected. Some embodiments can include a cache system that includes a plurality of cache sets having at least a first cache set and a second cache set. The cache system can also include a plurality of registers associated with the plurality of cache sets respectively. The plurality of registers can include at least a first register associated with the first cache set, configured to store a set index, and a second register associated with the second cache set, configured to store a set index. 
The cache system can also include a connection to a command bus coupled between the cache system and a processor, a connection to an address bus coupled between the cache system and the processor, a connection to a data bus coupled between the cache system and the processor, and a connection to an execution-type signal line from the processor identifying an execution type. The cache system can also include a logic circuit coupled to the processor to control the plurality of cache sets according to the plurality of registers. And, the cache system can be configured to be coupled between the processor and a memory system. When the first and second registers are in a first state, the logic circuit can be configured to: implement commands received from the command bus for accessing the memory system via the first cache set, when the execution type is a first type; and implement commands received from the command bus for accessing the memory system via the second cache set, when the execution type is a second type. Also, when the first and second registers are in a second state, the logic circuit can be configured to: implement commands received from the command bus for accessing the memory system via another cache set of the plurality of cache sets besides the first cache set, when the execution type is the first type; and implement commands received from the command bus for accessing the memory system via another other cache set of the plurality of cache sets besides the second cache set, when the execution type is the second type. The connection to the address bus can be configured to receive a memory address from the processor, and the memory address can include a set index. In some embodiments, when the first and second registers are in a first state, a first set index associated with the first cache set is stored in the first register, and a second set index associated with the second cache set is stored in the second register. When the first and second registers are in a second state, the first set index can be stored in another register of the plurality of registers besides the first register, and the second set index can be stored in another register of the plurality of registers besides the second register. In such examples, when the connection to the address bus receives a memory address from the processor, the logic circuit can be configured to: generate a set index from at least the memory address; and determine whether the generated set index matches with content stored in the first register or with content stored in the second register. And, the logic circuit can be further configured to implement a command received in the connection to the command bus via the first cache set in response to the generated set index matching with the content stored in the first register and via the second cache set in response to the generated set index matching with the content stored in the second register. In response to a determination that a data set of the memory system associated with the memory address is not currently cached in the cache system, the logic circuit can be configured to allocate the first cache set for caching the data set and store the generated set index in the first register. In some embodiments, the generated set index is generated further based on an execution type identified by the execution-type signal line. 
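A minimal sketch of one way such an execution-type-dependent set index could be formed, namely a predetermined segment of address bits combined with one bit for the execution type identified by the execution-type signal line, as also described in the following paragraph, is given below; the bit positions and widths are assumptions chosen for illustration only.

```python
def generate_set_index(memory_address: int, speculative: bool,
                       segment_shift: int = 6, segment_width: int = 2) -> int:
    """Combine a predetermined address-bit segment with one execution-type bit."""
    segment = (memory_address >> segment_shift) & ((1 << segment_width) - 1)
    execution_type_bit = 1 if speculative else 0
    # The execution-type bit sits above the address segment, so the same address maps to
    # different set indexes for speculative and non-speculative execution.
    return (execution_type_bit << segment_width) | segment

# The same address yields distinct set indexes for the two execution types.
assert generate_set_index(0x1A40, speculative=False) != generate_set_index(0x1A40, speculative=True)
```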
In such examples, the generated set index can include a predetermined segment of bits in the memory address and a bit representing the execution type identified by the execution-type signal line. Some embodiments can include a system, including a processor, a memory system, and a cache system. The cache system can include a plurality of cache sets, including a first cache set and a second cache set, and a plurality of registers associated with the plurality of cache sets respectively, including a first register associated with the first cache set and a second register associated with the second cache set. The cache system can also include a connection to a command bus coupled between the cache system and the processor, a connection to an address bus coupled between the cache system and the processor, and a connection to a data bus coupled between the cache system and the processor. The cache system can also include a logic circuit coupled to the processor to control the plurality of cache sets according to the plurality of registers. When the connection to the address bus receives a memory address from the processor, the logic circuit can be configured to: generate a set index from at least the memory address; and determine whether the generated set index matches with content stored in the first register or with content stored in the second register. And, the logic circuit can be configured to implement a command received in the connection to the command bus via the first cache set in response to the generated set index matching with the content stored in the first register and via the second cache set in response to the generated set index matching with the content stored in the second register. The cache system can further include a connection to an execution-type signal line from the processor identifying an execution type. The generated set index can be generated further based on a type identified by the execution-type signal line. The generated set index can include a predetermined segment of bits in the memory address and a bit representing the type identified by the execution-type signal line. FIGS.11A and11Billustrate background synching circuitry for synchronizing content between a main cache and a shadow cache to save the content cached in the main cache in preparation of acceptance of the content in the shadow cache, in accordance with some embodiments of the present disclosure. The cache system inFIGS.11A and11Bincludes background syncing circuitry1102. For example, cache1124and cache1126can be caches202aand202binFIG.2or4, or caches602aand602binFIG.6or10. The background syncing circuitry1102can be a part of the logic circuit206,406,606or1006. FIG.11Aillustrates a scenario where cache1124is used as the main cache in non-speculative execution and cache1126is used as a shadow cache in speculative execution. The background syncing circuitry1102is configured to synchronize1130the cached content from cache1124to cache1126such that if the conditional speculative execution is confirmed to be required, cache1126can be used as the main cache in subsequent non-speculative execution; and, cache1124can be used as the shadow cache in a further instance of speculative execution. The syncing1130of the cached content from cache1124to cache1126copies the previous execution results into cache1126such that the execution results are not lost in repurposing the cache1124as the shadow cache subsequently. 
The cached content from cache1124can be cached in cache1124but not yet flushed to memory (e.g., memory203or603). Further, some of the memory content that has a same copy cached in cache1124can also be copied from cache1124to cache1126, such that when cache1126is subsequently used as a main cache, the content previously cached in cache1124is also available in cache1126. This can speed up the access to the previously cached content. Copying the content between the cache1124and cache1126is faster than retrieving the data from the memory to the cache system. In some embodiments, if a program references a variable during normal execution, the variable can be cached. In such examples, if during speculation the variable is referenced in a write-through cache, the value in main memory is valid and correct. If during speculation the variable is referenced in a write-back cache, then the aforesaid example features described forFIG.11Acan be used; and the valid value of the variable can be in the cache1124. In the scenario illustrated inFIG.11A, a processor (e.g., processor201,401,601, or1001) can execute a first set of instructions in the mode of non-speculative execution. During the execution of the first set of instructions, the processor can access memory addresses to load data (e.g., instructions and operands) from the memory, and store computation results. Since cache1124is used as the main cache, the content of the data and/or computation results can be cached in cache1124. For example, cache1124can store the computation results that have not yet been written back into the memory; and cache1124can store the loaded data (e.g., instructions and operands) that may be used in subsequent executions of instructions. In preparation of the cache1126for use as a shadow cache in the speculative execution of a second set of instructions, the background syncing circuitry1102copies the cached content from cache1124to cache1126in syncing1130. At least part of the copying operations can be performed in the background in a way independent from the processor accessing the memory via the cache system. For example, when the processor is accessing a first memory address in the non-speculative execution of the first set of instructions, the background syncing circuitry1102can copy the content cached in the cache1124for a second memory address into the cache1126. In some instances, the copying operations can be performed in the background in parallel with accessing the memory via the cache system. For example, when the processor is accessing a first memory address in the non-speculative execution of the first set of instructions to store a computation result, the background syncing circuitry can copy the computation result into the cache1126as cache content for the first memory address. In one implementation, the background syncing circuitry1102is configured to complete the syncing operation before the cache1126is allowed to be used in the speculative execution of the second set of instructions. Thus, when the cache1126is enabled to be used for the speculative execution of the second set of instructions, the valid content in the cache1124can also be found in cache1126. However, the syncing operation can delay the use of the cache1126as the shadow cache. Alternatively, the background syncing circuitry1102is configured to prioritize the syncing of dirty content from the cache1124to the cache1126. 
Dirty content can be where the data in the cache has been modified and the data in main memory has not been modified. Dirty content cached in the cache1124can be more up to date than the content stored in the corresponding one or more addresses in the memory. For example, when the processor stores a computation result at an address, the cache1124can cache the computation result for the address without immediately writing the computation result into the memory at the address. When the computation result is written back to the memory at the address, the cached content is no longer considered dirty. The cache1124stores data to track the dirty content cached in cache1124. The background syncing circuit1102can automatically copy the dirty content from cache1124to cache1126in preparation of cache1126to serve as a shadow cache. Optionally, before the completion of the syncing operations, the background syncing circuitry1102can allow the cache1126to function as a shadow cache in conditional speculative execution of the second set of instructions. During the time period in which the cache1126is used in the speculative execution as a shadow cache, the background syncing circuit1102can continue the syncing operation1130of copying cached content from cache1124to cache1126. The background syncing circuitry1102is configured to complete at least the syncing of the dirty content from the cache1124to cache1126before allowing the cache1126to be accepted as the main cache. For example, upon the indication that the execution of the second set of instructions is required, the background syncing circuitry1102determines whether the dirty content in the cache1124has been synced to the cache1126; and if not, the use of the cache1126as main cache is postponed until the syncing is complete. In some implementations, the background syncing circuitry1102can continue its syncing operation even after the cache1126is accepted as the main cache, but before the cache1124is used as a shadow cache in conditional speculative execution of a third set of instructions. Before the completion of the syncing operation1130, the cache system can configure the cache1124as a secondary cache between the cache1126and the memory during the speculative execution, such that when the content of a memory address is not found in cache1126, the cache system checks cache1124to determine whether the content is in cache1124; and if so, the content is copied from cache1124to cache1126(instead of being loaded from the memory directly). When the processor stores data at a memory address and the data is cached in cache1126, the cache system invalidates the content that is cached in the cache1124as a secondary cache. After the cache1126is reconfigured as the main cache following the acceptance of the result of the speculative execution of the second set of instructions, the background syncing circuitry1102can start to synchronize1132the cached content from the cache1126to the cache1124, as illustrated inFIG.11B. Following the speculative execution of the second set of instructions, if the speculative status from the processor indicates that the results of the execution of the second set of instructions should be rejected, the cache1124continues to function as the main cache; and the content in the cache1126can be invalidated. The invalidation can include marking all entries of the cache1126as empty; thus, any subsequent speculations begin with an empty speculative cache. 
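The dirty-content handling described above can be illustrated with a short software model; the classes and method names below are invented for explanation and do not correspond to the circuitry of the figures.

```python
class CacheLine:
    def __init__(self, data, dirty=False):
        self.data = data
        self.dirty = dirty   # dirty: modified in the cache but not yet written back to memory

class BackgroundSync:
    """Sketch of background syncing from a main cache to a shadow cache, dirty lines first."""

    def __init__(self, main_cache: dict, shadow_cache: dict):
        self.main = main_cache       # maps address -> CacheLine
        self.shadow = shadow_cache

    def sync_one_line(self) -> bool:
        """Copy one out-of-date line from the main cache to the shadow cache; dirty lines first."""
        pending = sorted(self.main.items(), key=lambda item: not item[1].dirty)
        for address, line in pending:
            shadow_line = self.shadow.get(address)
            if shadow_line is None or shadow_line.data != line.data:
                self.shadow[address] = CacheLine(line.data, line.dirty)
                return True
        return False

    def dirty_content_synced(self) -> bool:
        """Acceptance of the shadow cache as the main cache can be postponed until this is True."""
        return all(address in self.shadow and self.shadow[address].data == line.data
                   for address, line in self.main.items() if line.dirty)

main = {0x100: CacheLine("a", dirty=True), 0x140: CacheLine("b")}
shadow = {}
sync = BackgroundSync(main, shadow)
while sync.sync_one_line():      # background copies performed cycle by cycle
    pass
assert sync.dirty_content_synced()
```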
The background syncing circuitry1102can again synchronize1130the cached content from the cache1124to the cache1126in preparation of the speculative execution of the third set of instructions. In some embodiments, each of the cache1124and cache1126has a dedicated and fixed collection of cache sets; and a configurable bit is used to control use of the caches1124and1126as main cache and shadow cache respectively, as illustrated inFIGS.3A,3B,5A, and5B. In other embodiments, cache1124and cache1126can share a pool of cache sets, and some of the cache sets can be dynamically allocated to cache1124and cache1126, as illustrated inFIGS.6to10. When the cache1124is used as the main cache and the cache1126is used as the shadow cache, the cache1126can have a smaller number of cache sets than the cache1124. Some of the cache sets in cache1126can be the shadows of a portion of the cache sets in the cache1124such that when the result of the speculative execution is determined to be accepted, the portion of the cache sets in the cache1124can be reconfigured for use as shadow cache in the next speculative execution; and the remaining portion of the cache sets that is not affected by the speculative execution can be re-allocated from the cache1124to the cache1126, such that the cached content in the unaffected portion can be further used in the subsequent non-speculative execution. FIG.12shows example operations of the background syncing circuitry1102ofFIGS.11A and11B, in accordance with some embodiments of the present disclosure. As shown inFIG.12, at operation1202, a cache system configures a first cache as main cache and a second cache as shadow cache. For example, when dedicated caches with fixed hardware structures are used as the first cache and the second cache, a configurable bit can be used to configure the first cache as main cache and the second cache as shadow cache, as illustrated inFIGS.2to5B. Alternatively, cache sets can be allocated from a pool of cache sets, using registers, to and from the first cache and the second cache, in a way as illustrated inFIGS.6to10. At operation1204, the cache system determines whether the current execution type is changed from non-speculative to speculative. For example, when the processor accesses the memory via the cache system200, the processor further provides the indication of whether the current memory access is associated with conditional speculative execution. For example, the indication can be provided in a signal line205dconfigured to specify execution type. If the current execution type is not changed from non-speculative to speculative, the cache system services memory access requests from the processor using the first cache as the main cache at operation1206. When the memory access changes the cached content in the first cache, the background syncing circuitry1102can copy the content cached in the first cache to the second cache in operation1208. For example, the background syncing circuitry1102can be part of the logic circuit206inFIG.2,406inFIG.4,606inFIG.6, and/or1006inFIG.10. The background syncing circuitry1102can prioritize the copying of dirty content cached in the first cache. InFIG.12, the operations1204to1208are repeated until the cache system200determines that the current execution type is changed to speculative. 
Optionally, the background syncing circuitry1102is configured to continue copying content cached in the first cache to the second cache to finish syncing at least the dirty content from the first cache to the second cache in operation1210before allowing the cache system to service memory requests from the processor during the speculative execution using the second cache in operation1212. Optionally, the background syncing circuitry1102can continue the syncing operation while the cache system uses the second cache to service memory requests from the processor during the speculative execution in operation1212. In operation1214, the cache system determines whether the current execution type is changed to non-speculative. If the current execution type remains as speculative, the operations1210and1212can be repeated. In response to the determination that the current execution type is changed to non-speculative at operation1214, the cache system determines whether the result of the speculative execution is to be accepted. The result of the speculative execution corresponds to the changes in the cached content in the second cache. For example, the processor401can provide an indication of whether the result of the speculative execution should be accepted via speculation-status signal line404illustrated inFIG.4or speculation-status signal line1004inFIG.10. If, in operation1216, the cache system determines that the result of the speculative execution is to be rejected, the cache system can discard the cached content currently cached in the second cache in operation1222(e.g., discard via setting the invalid bits of cache blocks in the second cache). Subsequently, in operation1224, the cache system can keep the first cache as main cache and the second cache as shadow cache; and in operation1208, the background syncing circuitry1102can copy the cached content from the first cache to the second cache. When the execution remains non-speculative, operations1204to1208can be repeated. If, in operation1216, the cache system determines that the result of the speculative execution is to be accepted, the background syncing circuitry1102is configured to further copy content cached in the first cache to the second cache to complete syncing at least the dirty content from the first cache to the second cache in operation1218before allowing the cache system to re-configure the first cache as shadow cache. In operation1220, the cache system configures the first cache as shadow cache and the second cache as main cache, in a way somewhat similar to the operation1202. In configuring the first cache as shadow cache, the cache system can invalidate its content and then synchronize the cached content in the second cache to the first cache, in a way somewhat similar to the operations1222,1224,1208, and1204. For example, when dedicated caches with fixed hardware structures are used as the first cache and the second cache, a configurable bit can be changed to configure the first cache as shadow cache and the second cache as main cache in operation1220. Alternatively, when cache sets can be allocated from a pool of cache sets using registers to and from the first cache and the second cache, in a way as illustrated inFIGS.6to10, the cache sets that are initially in the first cache but are not impacted by the speculative execution can be reconfigured via their associated registers (e.g., registers612aand612billustrated inFIGS.6and10) to join the second cache. 
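Purely as an illustrative model of the control flow walked through above (service requests with the main cache while background-syncing, switch to the shadow cache for speculation, then accept or reject the speculative result), the loop below processes a sequence of events; the event encoding, strings, and helper names are assumptions for explanation and are not the operations of FIG. 12 themselves.

```python
def run_controller(events):
    """events: sequence of ('exec', 'speculative'|'non-speculative') or ('status', 'accept'|'reject')."""
    main, shadow = "first cache", "second cache"
    speculating = False
    log = []
    for kind, value in events:
        if kind == "exec" and value == "non-speculative" and not speculating:
            # service requests via the main cache; background-sync dirty content to the shadow cache
            log.append(f"service via {main}; sync dirty content to {shadow}")
        elif kind == "exec" and value == "speculative":
            # finish syncing dirty content, then service requests via the shadow cache
            log.append(f"finish syncing to {shadow}; service via {shadow}")
            speculating = True
        elif kind == "status" and speculating:
            if value == "accept":
                main, shadow = shadow, main          # the shadow cache becomes the new main cache
            log.append(f"invalidate {shadow}; resume non-speculative execution via {main}")
            speculating = False
    return main, log

main_cache, trace = run_controller(
    [("exec", "non-speculative"), ("exec", "speculative"), ("status", "accept")]
)
assert main_cache == "second cache"   # accepted speculation swaps the roles of the two caches
```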
The cache sets that are initially in the first cache (but now have out-of-date content in view of the content in the second cache) can be reconfigured as part of the new first cache. Optionally, further cache sets can be allocated from the available pool of cache sets and added to the new first cache. Optionally, some of the cache sets that have invalidated cache content can be put back into the available pool of cache sets for future allocation (e.g., for adding to the second cache as the main cache or the first cache as the shadow cache). In this specification, the disclosure has been described with reference to specific exemplary embodiments thereof. However, it will be evident that various modifications can be made thereto without departing from the broader spirit and scope as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. For example, embodiments can include a cache system, including: a first cache; a second cache; a connection to a command bus coupled between the cache system and a processor; a connection to an address bus coupled between the cache system and the processor; a connection to a data bus coupled between the cache system and the processor; a connection to an execution-type signal line from the processor identifying an execution type; and a logic circuit coupled to control the first cache and the second cache according to the execution type. In such embodiments, the cache system is configured to be coupled between the processor and a memory system. Also, when the execution type is a first type indicating non-speculative execution of instructions by the processor and the first cache is configured to service commands from the command bus for accessing the memory system, the logic circuit is configured to copy a portion of content cached in the first cache to the second cache. In such embodiments, the logic circuit can be configured to copy the portion of content cached in the first cache to the second cache independent of a current command received in the command bus. Also, when the execution type is the first type indicating non-speculative execution of instructions by the processor and the first cache is configured to service commands from the command bus for accessing the memory system, the logic circuit can be configured to service subsequent commands from the command bus using the second cache in response to the execution type being changed from the first type to a second type indicating speculative execution of instructions by the processor. The logic circuit can also be configured to complete synchronization of the portion of the content from the first cache to the second cache before servicing the subsequent commands after the execution type is changed from the first type to the second type. The logic circuit can also be configured to continue synchronization of the portion of the content from the first cache to the second cache while servicing the subsequent commands. In such embodiments, the cache system can further include: a configurable data bit, and the logic circuit is further coupled to control the first cache and the second cache according to the configurable data bit. 
When the configurable data bit is in a first state, the logic circuit can be configured to: implement commands received from the command bus for accessing the memory system via the first cache, when the execution type is the first type; and implement commands received from the command bus for accessing the memory system via the second cache, when the execution type is a second type. And, when the configurable data bit is in a second state, the logic circuit can be configured to: implement commands received from the command bus for accessing the memory system via the second cache, when the execution type is the first type; and implement commands received from the command bus for accessing the memory system via the first cache, when the execution type is the second type. When the execution type changes from the second type to the first type, the logic circuit can also be configured to toggle the configurable data bit. In such embodiments, the cache system can further include: a connection to a speculation-status signal line from the processor identifying a status of a speculative execution of instructions by the processor. The connection to the speculation-status signal line is configured to receive the status of a speculative execution. The status of a speculative execution indicates that a result of a speculative execution is to be accepted or rejected. When the execution type changes from the second type to the first type, the logic circuit can be configured to: toggle the configurable data bit, if the status of speculative execution indicates that a result of speculative execution is to be accepted; and maintain the configurable data bit without changes, if the status of speculative execution indicates that a result of speculative execution is to be rejected. Also, in such embodiments, the first cache and the second cache together include: a plurality of cache sets, including a first cache set and a second cache set; and a plurality of registers associated with the plurality of cache sets respectively, including a first register associated with the first cache set and a second register associated with the second cache set. In such examples, the logic circuit can be further coupled to control the plurality of cache sets according to the plurality of registers. Also, when the connection to the address bus receives a memory address from the processor, the logic circuit can be configured to: generate a set index from at least the memory address; and determine whether the generated set index matches with content stored in the first register or with content stored in the second register. The logic circuit can also be configured to implement a command received in the connection to the command bus via the first cache set in response to the generated set index matching with the content stored in the first register and via the second cache set in response to the generated set index matching with the content stored in the second register. Furthermore, in response to a determination that a data set of the memory system associated with the memory address is not currently cached in the cache system, the logic circuit can be configured to allocate the first cache set for caching the data set and store the generated set index in the first register. 
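As a short, hypothetical sketch of the configurable-data-bit behavior summarized above, where the bit decides which cache serves which execution type and is toggled only when a speculative result is to be accepted, the following could serve; the class name and cache labels are illustrative and not part of the figures.

```python
class ConfigurableBitSelector:
    """Models the configurable data bit that swaps the roles of the first and second caches."""

    def __init__(self):
        self.bit = 0  # 0 = first state, 1 = second state

    def cache_for(self, execution_type: str) -> str:
        """Pick the cache that implements commands for the given execution type."""
        main, shadow = ("first cache", "second cache") if self.bit == 0 else ("second cache", "first cache")
        return main if execution_type == "non-speculative" else shadow

    def on_return_to_non_speculative(self, speculation_status: str):
        """Called when the execution type changes from the second type back to the first type."""
        if speculation_status == "accept":
            self.bit ^= 1   # toggle: the former shadow cache now serves non-speculative execution
        # on "reject" the configurable data bit is maintained without changes

selector = ConfigurableBitSelector()
assert selector.cache_for("non-speculative") == "first cache"
selector.on_return_to_non_speculative("accept")
assert selector.cache_for("non-speculative") == "second cache"
```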
Additionally, in such embodiments having cache sets, the cache system can also include a connection to an execution-type signal line from the processor identifying an execution type, and the generated set index is generated further based on a type identified by the execution-type signal line. The generated set index can include a predetermined segment of bits in the memory address and a bit representing the type identified by the execution-type signal line. Also, when the first and second registers are in a first state, the logic circuit can be configured to: implement commands received from the command bus for accessing the memory system via the first cache set, when the execution type is a first type; and implement commands received from the command bus for accessing the memory system via the second cache set, when the execution type is a second type. And, when the first and second registers are in a second state, the logic circuit is configured to: implement commands received from the command bus for accessing the memory system via another cache set of the plurality of cache sets besides the first cache set, when the execution type is the first type; and implement commands received from the command bus for accessing the memory system via another other cache set of the plurality of cache sets besides the second cache set, when the execution type is the second type. In such embodiments having cache sets, each one of the plurality of registers can be configured to store a set index. And, when the execution type changes from the second type to the first type, the logic circuit can be configured to change the content stored in the first register and the content stored in the second register. Also, the first type can be configured to indicate non-speculative execution of instructions by the processor and the second type can be configured to indicate speculative execution of instructions by the processor. In such examples, the cache system can further include a connection to a speculation-status signal line from the processor identifying a status of a speculative execution of instructions by the processor. The connection to the speculation-status signal line is configured to receive the status of a speculative execution, and the status of a speculative execution indicates that a result of a speculative execution is to be accepted or rejected. When the execution type changes from the second type to the first type, the logic circuit can be configured to: change the content stored in the first register and the content stored in the second register, if the status of speculative execution indicates that a result of speculative execution is to be accepted; and maintain the content stored in the first register and the content stored in the second register without changes, if the status of speculative execution indicates that a result of speculative execution is to be rejected. Also, for example, embodiments can include a cache system, including: in general, a plurality of cache sets and a plurality of registers associated with the plurality of cache sets respectively. The plurality of cache sets includes a first cache set and a second cache set, and the plurality of registers includes a first register associated with the first cache set and a second register associated with the second cache set. 
Similarly, in such embodiments, the cache system can include a connection to a command bus coupled between the cache system and a processor, a connection to an address bus coupled between the cache system and the processor, a connection to a data bus coupled between the cache system and the processor, a connection to an execution-type signal line from the processor identifying an execution type, and a logic circuit coupled to control the plurality of cache sets according to the execution type. The cache system can also be configured to be coupled between the processor and a memory system. And, when the execution type is a first type indicating non-speculative execution of instructions by the processor and the first cache set is configured to service commands from the command bus for accessing the memory system, the logic circuit can be configured to copy a portion of content cached in the first cache set to the second cache set. In such embodiments with cache sets, the logic circuit can be configured to copy the portion of content cached in the first cache set to the second cache set independent of a current command received in the command bus. When the execution type is the first type indicating non-speculative execution of instructions by the processor and the first cache set is configured to service commands from the command bus for accessing the memory system, the logic circuit can be configured to service subsequent commands from the command bus using the second cache set in response to the execution type being changed from the first type to a second type indicating speculative execution of instructions by the processor. The logic circuit can also be configured to complete synchronization of the portion of the content from the first cache set to the second cache set before servicing the subsequent commands after the execution type is changed from the first type to the second type. The logic circuit can also be configured to continue synchronization of the portion of the content from the first cache set to the second cache set while servicing the subsequent commands. Also, in such embodiments with cache sets, the logic circuit can be further coupled to control the plurality of cache sets according to the plurality of registers. When the connection to the address bus receives a memory address from the processor, the logic circuit can be configured to: generate a set index from at least the memory address; and determine whether the generated set index matches with content stored in the first register or with content stored in the second register. The logic circuit can also be configured to implement a command received in the connection to the command bus via the first cache set in response to the generated set index matching with the content stored in the first register and via the second cache set in response to the generated set index matching with the content stored in the second register. Also, in response to a determination that a data set of the memory system associated with the memory address is not currently cached in the cache system, the logic circuit can be configured to allocate the first cache set for caching the data set and store the generated set index in the first register. Additionally, in such embodiments with cache sets, the cache system can further include a connection to an execution-type signal line from the processor identifying an execution type, and the generated set index can be generated further based on a type identified by the execution-type signal line. 
The generated set index can include a predetermined segment of bits in the memory address and a bit representing the type identified by the execution-type signal line. When the first and second registers are in a first state, the logic circuit can be configured to: implement commands received from the command bus for accessing the memory system via the first cache set, when the execution type is a first type; and implement commands received from the command bus for accessing the memory system via the second cache set, when the execution type is a second type. And, when the first and second registers are in a second state, the logic circuit can be configured to: implement commands received from the command bus for accessing the memory system via another cache set of the plurality of cache sets besides the first cache set, when the execution type is the first type; and implement commands received from the command bus for accessing the memory system via another other cache set of the plurality of cache sets besides the second cache set, when the execution type is the second type. In such embodiments with cache sets, each one of the plurality of registers is configured to store a set index, and when the execution type changes from the second type to the first type, the logic circuit can be configured to change the content stored in the first register and the content stored in the second register. Also, the first type can be configured to indicate non-speculative execution of instructions by the processor and the second type is configured to indicate speculative execution of instructions by the processor. In such embodiments with cache sets, the cache system can also include a connection to a speculation-status signal line from the processor identifying a status of a speculative execution of instructions by the processor. The connection to the speculation-status signal line is configured to receive the status of a speculative execution, and the status of a speculative execution indicates that a result of a speculative execution is to be accepted or rejected. When the execution type changes from the second type to the first type, the logic circuit can be configured to: change the content stored in the first register and the content stored in the second register, if the status of speculative execution indicates that a result of speculative execution is to be accepted; and maintain the content stored in the first register and the content stored in the second register without changes, if the status of speculative execution indicates that a result of speculative execution is to be rejected. Also, in such embodiments with cache sets, the cache sets can be divided amongst a plurality of caches within the cache system. For instance, the cache sets can be divided up amongst first and second caches of the plurality of caches. FIGS.13,14A,14B,14C,15A,15B,15C, and15Dshow example aspects of an example computing device having a cache system (e.g., see cache system1000shown inFIG.13) having interchangeable cache sets (e.g., see cache sets1310a,1310b,1310c, and1310d) including a spare cache set (e.g., see spare cache set1310dshown inFIGS.14A and15A) to accelerate speculative execution, in accordance with some embodiments of the present disclosure. 
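To make the first-state and second-state behavior recited above more concrete, a hypothetical software model of the cache set selection, and of the accept-or-reject handling of the register contents when the execution type returns to the first type, is given below; the class, constants, and cache set labels are illustrative assumptions rather than the circuitry of the figures, and the state flip is a simplification of changing the content stored in the first and second registers.

```python
NON_SPECULATIVE, SPECULATIVE = "non-speculative", "speculative"

class RegisterStateSelector:
    """Sketch of selecting a cache set from the register state and the execution type."""

    def __init__(self):
        self.state = 1  # 1 = first state, 2 = second state

    def cache_set_for(self, execution_type: str) -> str:
        if self.state == 1:
            # first state: the first cache set serves the first type, the second serves the second type
            return "first cache set" if execution_type == NON_SPECULATIVE else "second cache set"
        # second state: commands go to cache sets other than the ones used in the first state
        return "second cache set" if execution_type == NON_SPECULATIVE else "first cache set"

    def on_return_to_first_type(self, speculation_status: str):
        # Only an accepted speculation changes the content stored in the registers (modeled here
        # as flipping the state); a rejected speculation leaves the registers unchanged.
        if speculation_status == "accept":
            self.state = 2 if self.state == 1 else 1

selector = RegisterStateSelector()
assert selector.cache_set_for(SPECULATIVE) == "second cache set"
selector.on_return_to_first_type("accept")
assert selector.cache_set_for(NON_SPECULATIVE) == "second cache set"
```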
In addition to using a shadow cache for securing speculative executions, as well as synchronizing content between a main cache and the shadow cache to save the content cached in the main cache in preparation of acceptance of the content in the shadow cache, a spare cache set can be used to accelerate the speculative executions (e.g., see the spare cache set1310das depicted inFIGS.14A and15Aas well as cache set1310bas depicted inFIGS.15B and15Cand cache set1310cas depicted inFIG.15D). A spare cache set can also be used to accelerate the speculative executions without use of a shadow cache. Data held in cache sets used as a shadow cache can be validated and therefore used for normal execution (e.g., see the cache set1310cas depicted inFIGS.14A and15Aas well as cache set1310das depicted inFIGS.15B and15Cand cache set1310bas depicted inFIG.15Deach of which can be used for a speculative execution and be a cache set of a shadow cache, and then after content validation can be used for normal execution). And, some cache sets used as the main cache for normal or non-speculative execution (e.g., see the cache set1310bas depicted inFIGS.14A and15Aas well as cache set1310cas depicted inFIGS.15B and15Cand cache set1310das depicted inFIG.15D) may not be ready to be used as the shadow cache for speculative execution. Thus, one or more cache sets can be used as spare cache sets to avoid delays from waiting for cache set availability (e.g., see the spare cache set1310das depicted inFIGS.14A and15Aas well as cache set1310bas depicted inFIGS.15B and15Cand cache set1310cas depicted inFIG.15D). Once a speculation is confirmed, the content of the cache sets used as a shadow cache is confirmed to be valid and up-to-date; and thus, the former cache sets used as the shadow cache for speculative execution are used for normal execution. For example, see the cache set1310cas depicted inFIGS.14A and15Aas well as cache set1310das depicted inFIGS.15B and15Cand cache set1310bas depicted inFIG.15D, each of which can be used for a speculative execution and be a cache set of a shadow cache, and then after content validation can be used for normal execution. However, some of the cache sets initially used as the normal cache may not be ready to be used for a subsequent speculative execution. For instance, see the cache set1310bas depicted inFIGS.14A and15Aas well as cache set1310cas depicted inFIGS.15B and15Cand cache set1310das depicted inFIG.15D, each of which is used as part of a normal cache but may not be ready to be used for a subsequent speculative execution. Therefore, one or more cache sets can be used as spare cache sets to avoid delays from waiting for cache set availability and accelerate the speculative executions. For example, see the spare cache set1310das depicted inFIGS.14A and15Aas well as cache set1310bas depicted inFIGS.15B and15Cand cache set1310cas depicted inFIG.15D, each of which are being used as a spare cache set. In some embodiments, where the cache system has background syncing circuitry (e.g., see background synching circuitry1102), if the syncing from a cache set in the normal cache to a corresponding cache set in the shadow cache has not yet been completed (e.g., see syncing1130shown inFIG.11A), the cache set in the normal cache cannot be freed immediately for use in the next speculative execution. In such a situation, if there is no spare cache set, the next speculative execution has to wait until the syncing is complete so that the corresponding cache set in the normal cache can be freed. 
This is just one example of when a spare cache set is beneficial. There are many other situations when cache sets in the normal cache cannot be freed immediately. Also, for example, the speculative execution may reference a memory region in the memory system (e.g., see memory system603inFIGS.6,10, and13) that has no overlap with the memory region cached in the cache sets used in the normal cache. As a result of accepting the result of the speculative execution, the cache sets in the shadow cache and the normal cache are now all in the normal cache. This can cause delays as well, because it takes time for the cache system to free a cache set to support the next speculative execution. To free one, the cache system needs to identify a cache set, such as a least used cache set, and synchronize the cache set with the memory system. If the cache has data that is more up to date than the memory system, the data needs to be written into the memory system. Additionally, a system using a spare cache set (e.g., see the spare cache set1310das depicted inFIGS.14A and15Aas well as cache set1310bas depicted inFIGS.15B and15Cand cache set1310cinFIG.15D) can also use background synchronizing circuitry (such as the background synchronizing circuitry1102). When an initial speculation is confirmed, the cache set used in the initial speculation (e.g., see the cache set1310cas depicted inFIGS.14A and15A) can be switched to join the set of cache sets used for a main execution (e.g., see the cache set1310aas shown inFIGS.14A, B, and C and as depicted inFIGS.15A, B, C, and D, which is a cache set of a set of cache sets used for main or non-speculative execution). Instead of using a cache set from the prior main execution that was being used for the case of the speculation failing (e.g., see the cache set1310bas depicted inFIGS.14A and15Aas well as cache set1310cas depicted inFIGS.15B and15Cand cache set1310dinFIG.15D), a spare cache set can be made available immediately for a next speculative execution (e.g., see the spare cache set1310das depicted inFIGS.14A and15Aas well as cache set1310bas depicted inFIGS.15B and15Cand cache set1310cinFIG.15D). The spare cache set can be updated for the next speculative execution via the background synchronizing circuitry1102for example. And, because of background synchronizing, a spare cache set, such as the spare cache set1310das shown inFIGS.14A and15A, is ready for use when the cache set currently used for the speculative execution, such as the cache set1310cas shown inFIGS.14A and15A, is ready to be accepted for normal execution. This way there is no delay in waiting for use of the next cache set for the next speculative execution. To prepare for the next speculative execution, the spare cache set, such as the cache set1310das shown inFIGS.14A and15A, can be synchronized to a normal cache set, such as the cache set1310bas shown inFIGS.14A and15A, that is likely to be used in the next speculative execution or a least used cache set in the system. FIG.13shows example aspects of an example computing device having a cache system1000having interchangeable cache sets (e.g., see cache sets1310a,1310b,1310c, and1310d) including a spare cache set to accelerate speculative execution, in accordance with some embodiments of the present disclosure. The computing device, inFIG.13, is similar to the computing device depicted inFIG.10. 
For example, the device shown inFIG.13includes processor1001, memory system603, cache system1000, and connections604ato604dand609ato609cas well as connection1002. InFIG.13, the cache system1000is shown having cache sets (e.g., cache sets1310a,1310b,1310c, and1310d). The cache system1000is also shown having connection604dto execution-type signal line605dfrom processor1001identifying an execution type and connection1002to a signal line1004from the processor1001identifying a status of speculative execution. The cache system1000is also shown including logic circuit1006that can be configured to allocate a first subset of the cache sets (e.g., see cache602aas shown inFIG.13) for caching in caching operations when the execution type is a first type indicating non-speculative execution of instructions by the processor1001. The logic circuit1006can also be configured to allocate a second subset of the cache sets (e.g., see cache602bas shown inFIG.13) for caching in caching operations when the execution type changes from the first type to a second type indicating speculative execution of instructions by the processor. The logic circuit1006can also be configured to reserve at least one cache set or a third subset of cache sets (e.g., see cache602cas shown inFIG.13) when the execution type is the second type. The logic circuit1006can also be configured to reconfigure the second subset for caching in caching operations (e.g., see cache602bas shown inFIG.13), when the execution type is the first type and when the execution type changes from the second type to the first type and the status of speculative execution indicates that a result of speculative execution is to be accepted. And, the logic circuit1006can also be configured to allocate the at least one cache set or third subset for caching in caching operations (e.g., see cache602cas shown inFIG.13), when the execution type changes from the first type to the second type and when the execution type changes from the second type to the first type and the status of speculative execution indicates that a result of speculative execution is to be accepted. The logic circuit1006can also be configured to reserve the at least one cache set or the third subset (e.g., see cache602cas shown inFIG.13), when the execution type is the second type and when the at least one cache set is a least used cache set in the plurality of cache sets. In some embodiments, a cache system can include one or more mapping tables that can map the cache sets mentioned herein. And, in such embodiments, a logic circuit, such as the logic circuits mentioned herein, can be configured to allocate and reconfigure subsets of cache sets, such as caches in a cache system, according to the one or more mapping tables. The mapping tables can be an alternative to the cache set registers described herein or used in addition to such registers. In some embodiments, as shown in at leastFIGS.13,14A to14C, and15A to15D, the cache system1000can include cache set registers (e.g., see cache set registers1312a,1312b,1312c, and1312d) associated with the cache sets (e.g., see cache sets1310a,1310b,1310c, and1310d), respectively. In such embodiments, the logic circuit1006can be configured to allocate and reconfigure subsets of the cache sets (e.g., see caches602a,602b, and602cas shown inFIG.13) according to the cache set registers. 
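As an illustrative data-layout sketch of the mapping-table alternative mentioned above, a table can record which subset each cache set currently belongs to (for example, the main cache 602a, the shadow cache 602b, or the reserved spare 602c), and reallocation on acceptance of a speculation rewrites that table; the function name, role labels, and selection of the retiring main set are assumptions made only for explanation.

```python
allocation = {
    "1310a": "main",     # first subset, e.g., cache 602a
    "1310b": "main",
    "1310c": "shadow",   # second subset, e.g., cache 602b
    "1310d": "spare",    # reserved cache set, e.g., cache 602c
}

def reallocate_on_acceptance(table: dict) -> dict:
    """Shadow sets join the main subset, spares become shadows, and one main set is freed as a spare."""
    new_table = dict(table)
    shadows = [s for s, role in table.items() if role == "shadow"]
    spares = [s for s, role in table.items() if role == "spare"]
    retiring = [s for s, role in table.items() if role == "main"][-len(shadows):]
    for s in shadows:
        new_table[s] = "main"              # validated speculative content now serves normal execution
    for s in spares[:len(shadows)]:
        new_table[s] = "shadow"            # a spare set is immediately available for the next speculation
    for s in retiring:
        new_table[s] = "spare"             # a former main set re-synchronizes as the new spare
    return new_table

allocation = reallocate_on_acceptance(allocation)
# now: 1310a main, 1310c main, 1310d shadow, 1310b spare
```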
Also, in some embodiments, as shown inFIGS.15A to15D, a first subset of the cache sets can include a first cache set, a second subset of the cache sets can include a second cache set, and a third subset can include a third cache set. In such embodiments, the cache set registers can include a first cache set register associated with the first cache set which is configured to store a first cache set index initially so that the first cache set is used for non-speculative execution (e.g., see cache set index1504bheld in cache set register1312bas shown inFIG.15A). The cache set registers can also include a second cache set register associated with the second cache set which is configured to store a second cache set index initially so that the second cache set is used for speculative execution (e.g., see cache set index1504cheld in cache set register1312cas shown inFIG.15A). The cache set registers can also include a third cache set register associated with the third cache set which is configured to store a third cache set index initially so that the third cache set is used as a spare cache set (e.g., see cache set index1504dheld in cache set register1312das shown inFIG.15A). Also, in such embodiments, the logic circuit1006can be configured to generate a set index (e.g., see set indexes1504a,1504b,1504c, and1504d) based on a memory address received from address bus605b, from processor1001and an identification of speculative execution or non-speculative execution received from execution-type signal line605dfrom the processor identifying execution type. And, the logic circuit1006can be configured to determine whether the set index matches with content stored in the first cache set register, the second cache set register, or the third cache set register. Also, in such embodiments, the logic circuit1006can be configured to store the first cache set index in the second cache set register or another cache set register associated with another cache set in the second subset of the plurality of cache sets, so that the second cache set or the other cache set in the second subset is used for non-speculative execution, when the execution type changes from the second type to the first type and the status of speculative execution indicates that a result of speculative execution is to be accepted. For example, seeFIG.15Bdepicting cache set index1504bheld in the second cache set register1312c, so that the second cache set1310ccan be used for non-speculative execution. Further, the logic circuit1006can be configured to store the second cache set index in the third cache set register or another cache set register associated with another cache set in the at least one cache set, so that the third cache set or the other cache set in the at least one cache set is used for speculative execution, when the execution type changes from the second type to the first type and the status of speculative execution indicates that a result of speculative execution is to be accepted. For example, seeFIG.15Bdepicting cache set index1504cheld in the third cache set register1312d, so that the third cache set1310dis available and can be used for speculative execution. 
The logic circuit1006can also be configured to store the third cache set index in the first cache set register or another cache set register associated with another cache set in the first subset of the plurality of cache sets, so that the first cache set or the other cache set in the first subset is used as a spare cache set, when the execution type changes from the second type to the first type and the status of speculative execution indicates that a result of speculative execution is to be accepted. For example, seeFIG.15Bdepicting cache set index1504dheld in the first cache set register1312b, so that the first cache set1310bis used as a spare cache set. FIGS.14A,14B, and14Cshow example aspects of the example computing device having the cache system1000having interchangeable cache sets (e.g., see cache sets1310a,1310b,1310c, and1310d) including a spare cache set (e.g., see spare cache set1310das shown inFIGS.14A and14Band spare cache set1310bas shown inFIG.14C) to accelerate speculative execution, in accordance with some embodiments of the present disclosure. Specifically,FIG.14Ashows the cache sets in a first state where cache sets1310aand1310bcan be used for non-speculative executions, cache set1310ccan be used for a speculative execution, and cache set1310dis used as a spare cache set.FIG.14Bshows the cache sets in a second state where cache sets1310a,1310b, and1310ccan be used for non-speculative executions and cache set1310cis available for and can be used for a speculative execution.FIG.14C, shows the cache sets in a third state where cache sets1310a, and1310ccan be used for non-speculative executions, cache set1310dcan be used for speculative executions, and cache set1310bis used as a spare cache set. FIGS.15A,15B,15C and15Deach show example aspects of the example computing device having the cache system1000having interchangeable cache sets (e.g., see cache sets1310a,1310b,1310c, and1310d) including a spare cache set to accelerate speculative execution, in accordance with some embodiments of the present disclosure. Specifically,FIG.15Ashows the cache sets in a first state where cache sets1310aand1310bcan be used for non-speculative executions (or first type of executions), cache set1310ccan be used for a speculative execution (or a second type execution), and cache set1310dis used as a spare cache set. As shown inFIG.15A, in this first state, the logic circuit1006can be configured to store the cache set index1504bin the cache set register1312bso that content1502bin the cache set1310bis used for non-speculative execution. Further, in this first state, the logic circuit1006can be configured to store the cache set index1504cin the cache set register1312cso that the cache set1310cis available and can be used for speculative execution. The logic circuit1006can also be configured to store the cache set index1504din the cache set register1312dso that the cache set1310dis used as a spare cache set in this first state. FIG.15Bshows the cache sets in a second state where cache sets1310aand1310ccan be used for non-speculative executions, cache set1310dis available for a speculative execution, and cache set1310bis used as a spare cache set. The second state depicted inFIG.15Boccurs when the execution type changes from the second type to the first type and the status of speculative execution indicates that a result of speculative execution is to be accepted. 
As shown inFIG.15B, in this second state, the logic circuit1006can be configured to store the cache set index1504bin the cache set register1312cso that content1502bin the cache set1310cis used for non-speculative execution. Further, in this second state, the logic circuit1006can be configured to store the cache set index1504cin the cache set register1312dso that the cache set1310dis available for speculative execution. The logic circuit1006can also be configured to store the cache set index1504din the cache set register1312bso that the cache set1310bis used as a spare cache set in this second state. FIG.15Cshows the cache sets in the second state for the most part, where cache sets1310aand1310ccan be used for non-speculative executions and cache set1310bis used as a spare cache set. But, inFIG.15C, it is shown that cache set1310dis being used for a speculative execution instead of being merely available. As shown inFIG.15C, in this second state, the logic circuit1006can be configured to store the cache set index1504cin the cache set register1312dso that the content1502cheld in the cache set1310dcan also be used for speculative execution. FIG.15Dshows the cache sets in a third state where cache sets1310aand1310dcan be used for non-speculative executions, cache set1310bis available for a speculative execution, and cache set1310cis used as a spare cache set. The third state depicted inFIG.15Doccurs, in a subsequent cycle after the second state, when the execution type changes again from the second type to the first type and the status of speculative execution indicates that a result of speculative execution is to be accepted. As shown inFIG.15D, in this third state, the logic circuit1006can be configured to store the cache set index1504bin the cache set register1312dso that content1502bin the cache set1310dis used for non-speculative execution. Further, in this third state, the logic circuit1006can be configured to store the cache set index1504cin the cache set register1312bso that the cache set1310bis available for speculative execution. The logic circuit1006can also be configured to store the cache set index1504din the cache set register1312cso that the cache set1310cis used as a spare cache set in this third state. As shown byFIGS.15A to15D, the cache sets are interchangeable and the cache set used as the spare cache set is interchangeable as well. In such embodiments, when the connection604bto the address bus605breceives a memory address from the processor1001, the logic circuit1006can be configured to generate a set index from at least the memory address102baccording to this cache set index112bof the address (e.g., see set index generations1506a,1506b,1506c, and1506d, which generate set indexes1504a,1504b,1504c, and1504drespectively). Also, when the connection604bto the address bus605breceives a memory address from the processor1001, the logic circuit1006can be configured to determine whether the generated set index matches with content stored in one of the registers (which can be stored set index1504a,1504b,1504c, or1504d). Also, the logic circuit1006can be configured to implement a command received in the connection604ato the command bus605avia a cache set in response to the generated set index matching with the content stored in the corresponding register. 
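The register rotation described forFIGS.15A to15Dcan be summarized by the following illustrative sketch. It assumes three registers modeled as a small array (standing in for cache set registers such as1312b,1312c, and1312d) and a hypothetical function name accept_speculative_result; the fixed array layout and names are assumptions for illustration and are not part of the disclosure.

```c
/* Illustrative sketch of the register rotation described for FIGS. 15A to 15D.
 * reg[0], reg[1] and reg[2] stand in for cache set registers such as 1312b,
 * 1312c and 1312d; the stored values stand in for cache set indexes such as
 * 1504b, 1504c and 1504d. */
static void accept_speculative_result(unsigned int reg[3])
{
    /* Starting from the state of FIG. 15A: reg[0] holds the index of the
     * non-speculative set, reg[1] the index of the speculative set, and
     * reg[2] the index of the spare set. */
    unsigned int idx_from_reg0 = reg[0];
    unsigned int idx_from_reg1 = reg[1];
    unsigned int idx_from_reg2 = reg[2];

    reg[1] = idx_from_reg0; /* e.g., index 1504b moves into register 1312c, so
                               cache set 1310c now serves non-speculative execution */
    reg[2] = idx_from_reg1; /* e.g., index 1504c moves into register 1312d, so
                               cache set 1310d becomes available for speculation */
    reg[0] = idx_from_reg2; /* e.g., index 1504d moves into register 1312b, so
                               cache set 1310b becomes the spare cache set */
}
```

Applying the same rotation again from the state ofFIG.15Byields the state ofFIG.15D, which is consistent with the interchangeability of the cache sets, and of the spare cache set, noted above.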
Also, in response to a determination that a data set of the memory system associated with the memory address is not currently cached in the cache system, the logic circuit1006can be configured to allocate the cache set for caching the data set and store the generated set index in the corresponding register. The generated set index can include a predetermined segment of bits in the memory address as shown inFIGS.15A to15B. Also, in such embodiments, the logic circuit1006can be configured to generate a set index (e.g., see set indexes1504a,1504b,1504c, and1504d) based on a memory address (e.g., memory address102b) received from address bus605b, from processor1001and an identification of speculative execution or non-speculative execution received from execution-type signal line605dfrom the processor identifying execution type. And, the logic circuit1006can be configured to determine whether the set index matches with content stored in the cache set register1312b, the cache set register1312c, or the cache set register1312d. In some embodiments, a cache system can include a plurality of cache sets, a connection to an execution-type signal line from a processor identifying an execution type, a connection to a signal line from the processor identifying a status of speculative execution, and a logic circuit. The logic circuit can be configured to: allocate a first subset of the plurality of cache sets for caching in caching operations when the execution type is a first type indicating non-speculative execution of instructions by the processor, and allocate a second subset of the plurality of cache sets for caching in caching operations when the execution type changes from the first type to a second type indicating speculative execution of instructions by the processor. The logic circuit can also be configured to reserve at least one cache set (or a third subset of the plurality of cache sets) when the execution type is the second type. The logic circuit can also be configured to reconfigure the second subset for caching in caching operations when the execution type is the first type, when the execution type changes from the second type to the first type and the status of speculative execution indicates that a result of speculative execution is to be accepted. And, the logic circuit can also be configured to allocate the at least one cache set (or the third subset of the plurality of cache sets) for caching in caching operations when the execution type changes from the first type to the second type, when the execution type changes from the second type to the first type and the status of speculative execution indicates that a result of speculative execution is to be accepted. In such embodiments, the logic circuit can be configured to reserve the at least one cache set (or the third subset of the plurality of cache sets) when the execution type is the second type and the at least one cache set (or the third subset of the plurality of cache sets) includes a least used cache set in the plurality of cache sets. Also, in such embodiments, the cache system can include one or more mapping tables mapping the plurality of cache sets. In such an example, the logic circuit is configured to allocate and reconfigure subsets of the plurality of cache sets according to the one or more mapping tables. Also, in such embodiments, the cache system can include a plurality of cache set registers associated with the plurality of cache sets, respectively. 
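As a further illustration of the set index generation, matching, and allocation-on-miss behavior just described, the following sketch extracts a hypothetical segment of address bits as the set index, searches the cache set registers for a match, and allocates a register on a miss. The field widths, register count, and function names are assumptions; eviction of a least used cache set is not modeled.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical field position and width; the real predetermined segment of
 * address bits used as the set index is implementation specific. */
#define SET_INDEX_SHIFT 6u
#define SET_INDEX_MASK  0x3u
#define NUM_SETS        4

/* Model of cache set registers (e.g., 1312a-1312d) holding set indexes. */
struct cache_set_regs {
    uint32_t set_index[NUM_SETS];
    bool     valid[NUM_SETS];
};

/* Generate a set index from a predetermined segment of bits in the memory
 * address, as the text describes for FIGS. 15A to 15D. */
static uint32_t generate_set_index(uint64_t mem_addr)
{
    return (uint32_t)((mem_addr >> SET_INDEX_SHIFT) & SET_INDEX_MASK);
}

/* Return the matching cache set, or allocate one on a miss by storing the
 * generated index in that set's register. Returns the selected set number. */
static int select_or_allocate_set(struct cache_set_regs *r, uint64_t mem_addr)
{
    uint32_t idx = generate_set_index(mem_addr);

    for (int s = 0; s < NUM_SETS; s++)
        if (r->valid[s] && r->set_index[s] == idx)
            return s;                   /* command implemented via this set */

    for (int s = 0; s < NUM_SETS; s++)  /* data set not cached yet: allocate */
        if (!r->valid[s]) {
            r->set_index[s] = idx;
            r->valid[s] = true;
            return s;
        }
    return -1;                          /* no free set; eviction not modeled */
}
```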
In such an example, the logic circuit is configured to allocate and reconfigure subsets of the plurality of cache sets according to the plurality of cache set registers. In such an example, the first subset of the plurality of cache sets can include a first cache set, the second subset of the plurality of cache sets can include a second cache set, and the at least one cache set (or the third subset of the plurality of cache sets) can include a third cache set. Also, the plurality of cache set registers can include a first cache set register associated with the first cache set, configured to store a first cache set index initially so that the first cache set is used for non-speculative execution. The plurality of cache set registers can also include a second cache set register associated with the second cache set, configured to store a second cache set index initially so that the second cache set is used for speculative execution. The plurality of cache set registers can also include a third cache set register associated with the third cache set, configured to store a third cache set index initially so that the third cache set is used as a spare cache set. In such embodiments, the logic circuit can be configured to generate a set index based on a memory address received from an address bus from a processor and identification of speculative execution or non-speculative execution received from an execution-type signal line from the processor identifying execution type. And, the logic circuit can be configured to determine whether the set index matches with content stored in the first cache set register, the second cache set register, or the third cache set register. When the execution type changes from the second type to the first type and the status of speculative execution indicates that a result of speculative execution is to be accepted, the logic circuit can also be configured to store the first cache set index in the second cache set register or another cache set register associated with another cache set in the second subset of the plurality of cache sets, so that the second cache set or the other cache set in the second subset is used for non-speculative execution. When the execution type changes from the second type to the first type and the status of speculative execution indicates that a result of speculative execution is to be accepted, the logic circuit can also be configured to store the second cache set index in the third cache set register or another cache set register associated with another cache set in the at least one cache set (or the third subset of the plurality of cache sets), so that the third cache set or the other cache set in the at least one cache set (or the third subset of the plurality of cache sets) is used for speculative execution. When the execution type changes from the second type to the first type and the status of speculative execution indicates that a result of speculative execution is to be accepted, the logic circuit can also be configured to store the third cache set index in the first cache set register or another cache set register associated with another cache set in the first subset of the plurality of cache sets, so that the first cache set or the other cache set in the first subset is used as a spare cache set. In some embodiments, a cache system can include a plurality of cache sets having a first subset of cache sets, a second subset of cache sets, and a third subset of cache sets. 
The cache system can also include a connection to an execution-type signal line from a processor identifying an execution type, a connection to a signal line from the processor identifying a status of speculative execution, and a logic circuit. The logic circuit can be configured to allocate the first subset of the plurality of cache sets for caching in caching operations when the execution type is a first type indicating non-speculative execution of instructions by the processor and allocate the second subset of the plurality of cache sets for caching in caching operations when the execution type changes from the first type to a second type indicating speculative execution of instructions by the processor. The logic circuit can also be configured to reserve the third subset of the plurality of cache sets when the execution type is the second type. The logic circuit can also be configured to reconfigure the second subset for caching in caching operations when the execution type is the first type, when the execution type changes from the second type to the first type and the status of speculative execution indicates that a result of speculative execution is to be accepted. The logic circuit can also be configured to allocate the third subset for caching in caching operations when the execution type changes from the first type to the second type, when the execution type changes from the second type to the first type and the status of speculative execution indicates that a result of speculative execution is to be accepted. In some embodiments, a cache system can include a plurality of caches including a first cache, a second cache, and a third cache. The cache system can also include a connection to an execution-type signal line from a processor identifying an execution type, a connection to a signal line from the processor identifying a status of speculative execution, and a logic circuit. The logic circuit can be configured to allocate the first cache for caching in caching operations when the execution type is a first type indicating non-speculative execution of instructions by the processor and allocate the second cache for caching in caching operations when the execution type changes from the first type to a second type indicating speculative execution of instructions by the processor. The logic circuit can also be configured to reserve the third cache when the execution type is the second type. The logic circuit can also be configured to reconfigure the second cache for caching in caching operations when the execution type is the first type, when the execution type changes from the second type to the first type and the status of speculative execution indicates that a result of speculative execution is to be accepted. And, the logic circuit can also be configured to allocate the third cache for caching in caching operations when the execution type changes from the first type to the second type. FIGS.16and17show example aspects of example computing devices having cache systems having interchangeable cache sets (e.g., see cache sets1610a,1610b,1710a, and1710b) utilizing extended tags (e.g., see extended tags1640a,1640b,1740a, and1740b) for different types of executions by a processor (such as speculative and non-speculative executions), in accordance with some embodiments of the present disclosure. Also,FIGS.16and17illustrate different ways to address cache sets and cache blocks within a cache system—such as cache systems600and1000depicted inFIGS.6,10, and13respectively. 
Also, shown are ways cache sets and cache blocks can be selected via a memory address, such as memory address102eor102bas well as memory address102a,102c, or102d(shown inFIG.1). Both examples inFIGS.16and17use set associativity, and can implement cache systems using set associativity—such as cache systems600and1000. InFIG.16, set associativity is implicitly defined (e.g., defined through an algorithm that can be used to determine which tag should be in which cache set for a given execution type). InFIG.17, set associativity is implemented via the bits of cache set index in the memory address. Also, the functionality illustrated inFIGS.16and17can be implemented without use of set associativity (although this is not depicted), such as implement through cache systems200and400shown inFIGS.2and4respectively. InFIGS.16and17, a block index (e.g., see block indexes106eand106b) can be used as an address within individual cache sets (e.g., see cache sets1610a,1610b,1710a, and1710b) to identify particular cache blocks (e.g., see cache blocks1624a,1624b,1628a,1628b,1724a,1724b,1728a, and1728b) in a cache set. And, the extended tags (e.g., extended tags1640a,1640b,1740a,1740b,1650, and1750) can be used as addresses for the cache sets. A block index (e.g., see block indexes106eand106b) of a memory address (e.g., see memory address102eand102b) can be used for each cache set (e.g., see cache sets1610a,1610b,1710a, and1710b) to get a cache block (e.g., see cache blocks1624a,1624b,1628a,1628b,1724a,1724b,1728a, and1728b) and a tag associated with the cache block (e.g., see corresponding tags1622a,1622b,1626a,1626b,1722a,1722b,1726a, and1726b). Also, as shown inFIGS.16and17, tag compare circuits (e.g., tag compare circuits1660a,1660b,1760a, and1760b) can compare the extended tags generated from the cache sets (e.g., extended tags1640a,1640b,1740a, and1740b) with the extended cache tag (e.g., extended tag1650) from a memory address (e.g., see memory address102eand102b) and a current execution type (e.g., see execution types110eand110b) to determine a cache hit or miss. The construction of the extended tags guarantee that there is at most one hit among the cache sets (e.g., see cache sets1610a,1610b,1710a, and1710b). If there is a hit, a cache block (e.g., see cache blocks1624a,1624b,1628a,1628b,1724a,1724b,1728a, and1728b) from the selected cache set provides the output. Otherwise, the data associated with the memory address (e.g., memory address102eor102b) is not cached in or outputted from any of the cache sets. In short, the extended tags depicted inFIGS.16and17are used to select a cache set, and the block indexes are used to select a cache block and its tag within a cache set. Also, as shown inFIGS.16and17, the memory addresses (e.g., see addresses102eand102b) are partitioned in different ways; and thus, control of the cache operations according to the addresses are different as well. However, there are some similarities. For example, the systems shown inFIGS.16and17control cache set use via set associativity. The control of the cache operations can include controlling whether a cache set is used for a first or second type of execution by the processor (e.g., non-speculative and speculative executions) and such control can be controlled via set associativity to some extent or completely. InFIG.16, extended tag1650for the memory address102ehas an execution type110eand tag104ehaving a cache set indicator that implements the set associativity. 
InFIG.17, extended tag1750for the memory address102bhas an execution type110e, cache set index112b, and tag104b. In such an example, the cache set index112bimplements the set associativity instead of the cache set indicator in the tag. The different partitioning of the memory address slightly changes how an extended tag (e.g., extended tags1640a,1640b,1650,1740a,1740b, and1750) controls the cache operations via set associativity. With the memory address partitioning, in the examples, the extended tag from the memory address and the execution type (e.g., see extended tags1650and1750) are compared with an extended tag for a cache set (e.g., see extended tags1640a,1640b,1740a, and1740b) for controlling cache operations implemented via the cache set. The tag compare circuits (e.g., tag compare circuits1660a,1660b,1760a, and1760b) can output a hit or miss depending on whether the extended tags inputted into the compare circuits match. The extended tags for the cache sets (e.g., see extended tags1640a,1640b,1740a, and1740b) can be derived from content (e.g., see the execution types1632aand1632band the cache set indexes1732aand1732b) held in a register (e.g., see registers1612a,1612b,1712a, and1712b) and a block tag (e.g., see tags1622a,1622b,1626a,1626b,1722a,1722b,1726a, and1726b) from the corresponding cache set (e.g., see cache sets1610a,1610b,1710a, and1710b). And, as shown inFIGS.16and17, the execution types are different in each register of the cache sets. For the examples shown, the first cache set (e.g., cache set1610aor1710a) can be used for the first type of execution (e.g., non-speculative execution) and the second cache set (e.g., cache set1610bor1710b) can be used for the second type of execution (e.g., speculative execution). InFIG.17, the combination of tag104band cache set index112bprovides similar functionality to tag104eshown inFIG.16. However, inFIG.17, by separating tag104band cache set index112b, a cache set does not have to store redundant copies of the cache set index112bsince a cache set (e.g., see cache sets1710aand1710b) can be associated with a cache set register (e.g., see registers1712aand1712b) to hold cache set indexes (e.g., see cache set indexes1732aand1732b). Whereas, inFIG.16, a cache set (e.g., see cache sets1610aand1610b) does need to store redundant copies of a cache set indicator in each of its blocks (e.g., see blocks1624a,1624b,1628a, and1628b) because the cache set's associated register is not configured to hold a cache set index. In other words, since tags1622a,1622b, etc., have the same cache set indicator, the indicator could be stored once in a register for the cache set (e.g., see cache set registers1712aand1712b). This is one of the benefits of the arrangement depicted inFIG.17over the arrangement depicted inFIG.16. Also, the lengths of the tags1722a,1722b,1726a, and1726binFIG.17are shorter in comparison with the implementation of the tags shown inFIG.16(e.g., see1622a,1622b,1626a, and1626b), since the cache set registers depicted inFIG.17(e.g., registers1712aand1712b) store both the cache set index and the execution type. When the execution type is combined with the cache set index to form an extended cache set index, the extended cache set index can be used to select one of the cache sets. Then, the tag from the selected cache set is compared to the tag in the address to determine hit or miss. 
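The extended tags ofFIGS.16and17can be illustrated with a short sketch that packs an execution type, a cache set index, and a block tag into one comparable value, roughly following theFIG.17partitioning. The struct layout, field widths, and function names below are illustrative assumptions only.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative packing of an extended tag in the style of FIG. 17: execution
 * type + cache set index + block tag.  Field widths are assumptions. */
struct extended_tag {
    uint8_t  exec_type;    /* e.g., 0 = speculative, 1 = non-speculative */
    uint8_t  set_index;    /* e.g., a cache set index such as 112b */
    uint32_t tag;          /* e.g., a tag such as 104b */
};

/* Extended tag generated from the memory address and the execution-type
 * signal line (cf. extended tag 1750). */
static struct extended_tag make_addr_extended_tag(uint32_t tag,
                                                  uint8_t set_index,
                                                  uint8_t exec_type)
{
    return (struct extended_tag){ .exec_type = exec_type,
                                  .set_index = set_index,
                                  .tag = tag };
}

/* Extended tag derived for one cache set from its register content (execution
 * type and cache set index) and the block tag looked up by the block index
 * (cf. extended tags 1740a and 1740b). */
static struct extended_tag make_set_extended_tag(uint8_t reg_exec_type,
                                                 uint8_t reg_set_index,
                                                 uint32_t block_tag)
{
    return (struct extended_tag){ .exec_type = reg_exec_type,
                                  .set_index = reg_set_index,
                                  .tag = block_tag };
}

/* Tag compare (cf. tag compare circuits 1760a and 1760b): a hit requires every
 * field to match, so at most one cache set can hit for a given address and
 * execution type. */
static bool extended_tag_hit(struct extended_tag a, struct extended_tag b)
{
    return a.exec_type == b.exec_type &&
           a.set_index == b.set_index &&
           a.tag == b.tag;
}
```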
The two-stage selection can be similar to a conventional two-stage selection using a cache set index or can be used to be combined with the extended tag to support more efficient interchanging of cache sets for different execution types (such as speculative and non-speculative execution types). In some embodiments, a cache system (such as the cache system600or1000) can include a plurality of cache sets (such as cache sets610ato610c,1010ato1010c,1310ato1310d,1610ato1610b, or1710ato1710b). The plurality of cache sets can include a first cache set and a second cache set (e.g., see cache sets1610ato1610band sets1710ato1710b). The cache system can also include a plurality of registers associated with the plurality of cache sets respectively (such as registers612ato612c,1012ato1012c,1312ato1312d,1612ato1612b, or1712ato1712b). The plurality of registers can include a first register associated with the first cache set and a second register associated with the second cache set (e.g., see registers1612ato1612band registers1712ato1712b). The cache system can also include a connection (e.g., see connection604a) to a command bus (e.g., see command bus605a) coupled between the cache system and a processor (e.g., see processors601and1001). The cache system can also include a connection (e.g., see connection604b) to an address bus (e.g., see address bus605b) coupled between the cache system and the processor. The cache system can also include a logic circuit (e.g., see logic circuits606and1006) coupled to the processor to control the plurality of cache sets according to the plurality of registers. When the connection to the address bus receives a memory address (e.g., see memory addresses102ato102eshown inFIG.1and the addresses102eand102bshow inFIGS.16and17respectively) from the processor, the logic circuit can be configured to generate an extended tag from at least the memory address (e.g., see extended tags1650and1750). Also, when the connection to the address bus receives the memory address from the processor, the logic circuit can be configured to determine whether the generated extended tag (e.g., see extended tags1650and1750) matches with a first extended tag (e.g., see extended tags1640aand1740a) for the first cache set (e.g., see cache sets1610aand1710a) or a second extended tag (e.g., see extended tags1640band1740b) for the second cache set (e.g., see cache sets1610band1710b). The logic circuit (e.g., see logic circuits606and1006) can also be configured to implement a command received in the connection (e.g., see connection604a) to the command bus (e.g., see command bus605a) via the first cache set (e.g., see cache sets1610aand1710a) in response to the generated extended tag (e.g., see extended tags1650and1750) matching with the first extended tag (e.g., see extended tags1640aand1740a) and via the second cache set (e.g., see cache sets1610band1710b) in response to the generated extended tag matching with the second extended tag (e.g., see extended tags1640band1740b). The logic circuit (e.g., see logic circuits606and1006) can also be configured to generate the first extended tag (e.g., see extended tags1640aand1740a) from a cache address (e.g., see the blocks labeled ‘Tag’ in extended tags1640aand1740a, as well as the tags1622a,1622b,1722a,1722b, etc.) 
of the first cache set (e.g., see cache sets1610aand1710a) and content (e.g., see the blocks labeled ‘Execution Type’ in extended tags1640aand1740aand the block labeled ‘Cache Set Index’ in extended tag1740a, as well as execution type1632aand cache set index1732a) stored in the first register (e.g., see registers1612aand1712a). The logic circuit can also be configured to generate the second extended tag (e.g., see extended tags1640band1740b) from a cache address (e.g., see the blocks labeled ‘Tag’ in extended tags1640band1740b, as well as the tags1626a,1626b,1726a,1726b, etc.) of the second cache set (e.g., see cache sets1610band1710b) and content (e.g., see the blocks labeled ‘Execution Type’ in extended tags1640band1740band the block labeled ‘Cache Set Index’ in extended tag1740b, as well as execution type1632band cache set index1732b) stored in the second register (e.g., see registers1612band1712b). In some embodiments, the cache system (such as the cache system600or1000) can further include a connection (e.g., see connection604d) to an execution-type signal line (e.g., see execution-type signal line605d) from the processor (e.g., see processors601and1001) identifying an execution type. In such embodiments, the logic circuit (e.g., see logic circuits606and1006) can be configured to generate the extended tag (e.g., see extended tags1650and1750) from the memory address (e.g., see memory addresses102eand102bshown inFIGS.16and17respectively) and an execution type (e.g., see execution type110eshown inFIGS.16and17) identified by the execution-type signal line. Also, in such embodiments, the content stored in each of the first register and the second register (e.g., see registers1612a,1612b,1712a, and1712b) can include an execution type (e.g., see first execution type1632aand second execution type1632b). In some embodiments, for the determination of whether the generated extended tag (e.g., see extended tags1650and1750) matches with the first extended tag for the first cache set (e.g., see extended tags1640aand1740a) or the second extended tag for the second cache set (e.g., see extended tags1640band1740b), the logic circuit (e.g., see logic circuits606and1006) can be configured to compare the first extended tag (e.g., see extended tags1640aand1740a) with the generated extended tag (e.g., see extended tags1650and1750) to determine a cache hit or miss for the first cache set (e.g., see cache sets1610aand1710a). Specifically, as shown inFIGS.16and17, a first tag compare circuit (e.g., see tag compare circuits1660aand1760a) is configured to receive as input the first extended tag (e.g., see extended tags1640aand1740a) and the generated extended tag (e.g., see extended tags1650and1750). The first tag compare circuit (e.g., see tag compare circuits1660aand1760a) is also configured to compare the first extended tag with the generated extended tag to determine a cache hit or miss for the first cache set. The first tag compare circuit (e.g., see tag compare circuits1660aand1760a) is also configured to output the determined cache hit or miss for the first cache set (e.g., see outputs1662aand1762a). 
Also, for the determination of whether the generated extended tag matches with the first extended tag for the first cache set or the second extended tag for the second cache set, the logic circuit can be configured to compare the second extended tag (e.g., see extended tags1640band1740b) with the generated extended tag (e.g., see extended tags1650and1750) to determine a cache hit or miss for the second cache set (e.g., see cache sets1610band1710b). Specifically, as shown inFIGS.16and17, a second tag compare circuit (e.g., see tag compare circuits1660band1760b) is configured to receive as input the second extended tag (e.g., see extended tags1640band1740b) and the generated extended tag (e.g., see extended tags1650and1750). The second tag compare circuit (e.g., see tag compare circuits1660band1760b) is also configured to compare the second extended tag with the generated extended tag to determine a cache hit or miss for the second cache set. The second tag compare circuit (e.g., see tag compare circuits1660band1760b) is also configured to output the determined cache hit or miss for the second cache set (e.g., see outputs1662band1762b). In some embodiments, the logic circuit (e.g., see logic circuits606and1006) can be further configured to receive output from the first cache set (e.g., see cache sets1610aand1710a) when the logic circuit determines the generated extended tag (e.g., see extended tags1650and1750) matches with the first extended tag for the first cache set (e.g., see extended tags1640aand1740a). The logic circuit can also be further configured to receive output from the second cache set (e.g., see cache sets1610band1710b) when the logic circuit determines the generated extended tag (e.g., see extended tags1650and1750) matches with the second extended tag for the second cache set (e.g., see extended tags1640band1740b). In some embodiments, the cache address of the first cache set includes a first tag (e.g., see tags1622a,1622b,1722a, and1722b) of a cache block (e.g., see cache blocks1624a,1624b,1724a, and1724b) in the first cache set (e.g., see cache sets1610aand1710a). In such embodiments, the cache address of the second cache set includes a second tag (e.g., see tags1626a,1626b,1726a, and1726b) of a cache block (e.g., see cache blocks1628a,1628b,1728a, and1728b) in the second cache set (e.g., see cache sets1610band1710b). Also, in such embodiments, in general, the block index is used as an address within individual cache sets. For instance, in such embodiments, the logic circuit (e.g., see logic circuits606and1006) can be configured to use a first block index from the memory address (e.g., see block indexes106eand106bfrom memory addresses102eand102bshown inFIGS.16and17respectively) to get a first cache block in the first cache set and a tag associated with the first cache block (e.g., see cache blocks1624a,1624b,1724a, and1724band respective associated tags1622a,1622b,1722a, and1722b). Also, the logic circuit (e.g., see logic circuits606and1006) can be configured to use a second block index from the memory address (e.g., see block indexes106eand106bfrom memory addresses102eand102bshown inFIGS.16and17respectively) to get a second cache block in the second cache set and a tag associated with the second cache block (e.g., see cache blocks1628a,1628b,1728a, and1728band respective associated tags1626a,1626b,1726a, and1726b). 
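A self-contained sketch of the resulting lookup is shown below: for each cache set, the block index selects a cache block and its stored tag, the stored tag is extended with the register content, and the extended comparison decides which set, if any, provides the output. The array sizes, struct layout, and names are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

#define BLOCKS_PER_SET 64   /* illustrative size, not from the disclosure */

/* One cache set together with the register content used to extend its tags. */
struct cache_set_model {
    uint8_t  reg_exec_type;              /* execution type held in the register */
    uint8_t  reg_set_index;              /* cache set index held in the register */
    uint32_t block_tag[BLOCKS_PER_SET];  /* tag stored with each cache block */
    uint64_t block_data[BLOCKS_PER_SET];
};

/* Look up the block index in each set, extend each stored tag with the
 * register content, and compare against the tag, set index, and execution
 * type taken from the memory address; at most one set can hit. */
static bool lookup(const struct cache_set_model *sets, int num_sets,
                   uint32_t addr_tag, uint8_t addr_set_index,
                   uint8_t exec_type, uint32_t block_index, uint64_t *data_out)
{
    for (int i = 0; i < num_sets; i++) {
        bool hit = sets[i].block_tag[block_index] == addr_tag &&
                   sets[i].reg_set_index == addr_set_index &&
                   sets[i].reg_exec_type == exec_type;
        if (hit) {                       /* output is received from this set */
            *data_out = sets[i].block_data[block_index];
            return true;
        }
    }
    return false;                        /* miss in every cache set */
}
```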
In some embodiments, such as the embodiments illustrated inFIG.16, when the first and second cache sets (e.g., see cache sets1610aand1610b) are in a first state, the cache address of the first cache set (e.g., see tags1622a,1622b, etc.) includes a first cache set indicator associated with the first cache set. The first cache set indicator can be a first cache set index. In such embodiments, when the first and second cache sets are in a first state, the cache address of the second cache set (e.g., see tags1626a,1626b, etc.) includes a second cache set indicator associated with the second cache set. The second cache set indicator can be a second cache set index. Also, in the embodiments shown inFIG.16, when the first and second cache sets (e.g., see cache sets1610aand1610b) are in a second state (which is not depicted inFIG.16), the cache address of the first cache set includes the second cache set indicator associated with the second cache set. Further, when the first and second cache sets are in the second state, the cache address of the second cache set includes the first cache set indicator associated with the first cache set. This changing of the content within the cache addresses can implement the interchangeability between the cache sets. With the embodiments shown inFIG.16, cache set indicators are repeated in the tags of each cache block in the cache sets and thus, the tags are longer than the tags of each cache block in the cache sets depicted inFIG.17. InFIG.17, instead of repeating the cache set indexes in the tags of each cache block, the set indexes are stored in the cache set registers associated with cache sets (e.g., see registers1712aand1712b). In some embodiments, such as the embodiments illustrated inFIG.17, when the first and second cache sets (e.g., see cache sets1710aand1710b) are in a first state, the cache address of the first cache set (e.g., see tags1722a,1722b, etc.) may not include a first cache set indicator associated with the first cache set. Instead, the first cache set indicator is shown being stored in the first cache set register1712a(e.g., see the first cache set index1732aheld in cache set register1712a). This can reduce the size of the tags for the cache blocks in the first cache set since the cache set indicator is stored in a register associate with the first cache set. Also, when the first and second cache sets are in the first state, the cache address of the second cache set (e.g., see tags1726a,1726b, etc.) may not include a second cache set indicator associated with the second cache set. Instead, the second cache set indicator is shown being stored in the second cache set register1712b(e.g., see the second cache set index1732bheld in cache set register1712b). This can reduce the size of the tags for the cache blocks in the second cache set since the cache set indicator is stored in a register associate with the second cache set. Also, in the embodiments shown inFIG.17, when the first and second cache sets (e.g., see cache sets1710aand1710b) are in a second state (which is not depicted inFIG.17), the cache address of the first cache set (e.g., see tags1722a,1722b, etc.) may not include a second cache set indicator associated with the second cache set. Instead, the second cache set indicator would be stored in the first cache set register1712a. Also, when the first and second cache sets are in the second state, the cache address of the second cache set (e.g., see tags1726a,1726b, etc.) 
may not include a first cache set indicator associated with the first cache set. Instead, the first cache set indicator would be stored in the second cache set register1712b. This changing of the content of the cache set registers can implement the interchangeability between the cache sets. In some embodiments, as shown inFIG.17, when the first and second registers (e.g., see registers1712aand1712b) are in a first state, the content stored in the first register (e.g., see register1712a) can include a first cache set index (e.g., see cache set index1732a) associated with the first cache set (e.g., see cache set1710a). And, the content stored in the second register (e.g., see register1712b) can include a second cache set index (e.g., see cache set index1732b) associated with the second cache set (e.g., see cache set1710b). In such embodiments, although not depicted inFIG.17, when the first and second registers are in a second state, the content stored in the first register can include the second cache set index associated with the second cache set, and the content stored in the second register can include the first cache set index associated with the first cache set. In some embodiments, such as the embodiments shown inFIG.16and embodiments having the connection to the execution-type signal line identifying an execution type, the cache system (e.g., see cache system1000) can further include a connection (e.g., see connection1002) to a speculation-status signal line (e.g., see speculation-status signal line1004) from the processor (e.g., see processor1001) identifying a status of a speculative execution of instructions by the processor. In such embodiments, the connection to the speculation-status signal line can be configured to receive the status of a speculative execution. The status of a speculative execution can indicate that a result of a speculative execution is to be accepted or rejected. When the execution type changes from the speculative execution to a non-speculative execution, the logic circuit can be configured to change the state of the first and second cache sets (e.g., see cache sets1610aand1610b), if the status of speculative execution indicates that a result of speculative execution is to be accepted. And, when the execution type changes from the speculative execution to a non-speculative execution, the logic circuit can be configured to maintain the state of the first and second cache sets (e.g., see cache sets1610aand1610b) without changes, if the status of speculative execution indicates that a result of speculative execution is to be rejected. Somewhat similarly, in some embodiments, such as the embodiments shown inFIG.17and embodiments having the connection to the execution-type signal line identifying an execution type, the cache system can further include a connection to a speculation-status signal line from the processor identifying a status of a speculative execution of instructions by the processor. In such embodiments, the connection to the speculation-status signal line can be configured to receive the status of a speculative execution. The status of a speculative execution can indicate that a result of a speculative execution is to be accepted or rejected. 
When the execution type changes from the speculative execution to a non-speculative execution, the logic circuit can be configured to change the state of the first and second cache sets (e.g., see cache sets1710aand1710b), if the status of speculative execution indicates that a result of speculative execution is to be accepted. And, when the execution type changes from the speculative execution to a non-speculative execution, the logic circuit can be configured to change the state of the first and second registers (e.g., see registers1712aand1712b), if the status of speculative execution indicates that a result of speculative execution is to be accepted. And, when the execution type changes from the speculative execution to a non-speculative execution, the logic circuit can be configured to maintain the state of the first and second registers (e.g., see registers1712aand1712b) without changes, if the status of speculative execution indicates that a result of speculative execution is to be rejected. In some embodiments, a cache system can include a plurality of cache sets, including a first cache set and a second cache set. The cache system can also include a plurality of registers associated with the plurality of cache sets respectively, including a first register associated with the first cache set and a second register associated with the second cache set. The cache system can further include a connection to a command bus coupled between the cache system and a processor, a connection to an address bus coupled between the cache system and the processor, and a logic circuit coupled to the processor to control the plurality of cache sets according to the plurality of registers. The logic circuit can be configured to generate the first extended tag from a cache address of the first cache set and content stored in the first register, and to generate the second extended tag from a cache address of the second cache set and content stored in the second register. The logic circuit can also be configured to determine whether the first extended tag for the first cache set or the second extended tag for the second cache set matches with a generated extended tag generated from a memory address received from the processor. And, the logic circuit can be configured to implement a command received in the connection to the command bus via the first cache set in response to the generated extended tag matching with the first extended tag and via the second cache set in response to the generated extended tag matching with the second extended tag. In such embodiments, the cache system can also include a connection to an address bus coupled between the cache system and the processor. When the connection to the address bus receives the memory address from the processor, the logic circuit can be configured to generate the extended tag from at least the memory address. Also, the cache system can include a connection to an execution-type signal line from the processor identifying an execution type. In such examples, the logic circuit can be configured to generate the extended tag from the memory address and an execution type identified by the execution-type signal line. Also, the content stored in each of the first register and the second register can include an execution type. 
Further, for the determination of whether the generated extended tag matches with the first extended tag for the first cache set or the second extended tag for the second cache set, the logic circuit can be configured to: compare the first extended tag with the generated extended tag to determine a cache hit or miss for the first cache set; and compare the second extended tag with the generated extended tag to determine a cache hit or miss for the second cache set. Also, the logic circuit can be configured to: receive output from the first cache set when the logic circuit determines the generated extended tag matches with the first extended tag for the first cache set; and receive output from the second cache set when the logic circuit determines the generated extended tag matches with the second extended tag for the second cache set. In such embodiments and others, the cache address of the first cache set can include a first tag of a cache block in the first cache set, and the cache address of the second cache set can include a second tag of a cache block in the second cache set. In some embodiments, a cache system can include a plurality of cache sets, including a first cache set and a second cache set. The cache system can also include a plurality of registers associated with the plurality of cache sets respectively, including a first register associated with the first cache set and a second register associated with the second cache set. And, the cache system can include a connection to a command bus coupled between the cache system and a processor, a connection to an execution-type signal line from a processor identifying an execution type, a connection to an address bus coupled between the cache system and the processor, and a logic circuit coupled to the processor to control the plurality of cache sets according to the plurality of registers. When the connection to the address bus receives a memory address from the processor, the logic circuit can be configured to: generate an extended tag from the memory address and an execution type identified by the execution-type signal line; and determine whether the generated extended tag matches with a first extended tag for the first cache set or a second extended tag for the second cache set. Also, the logic circuit can be configured to implement a command received in the connection to the command bus via the first cache set in response to the generated extended tag matching with the first extended tag and via the second cache set in response to the generated extended tag matching with the second extended tag. FIG.18shows example aspects of an example computing device having a cache system (e.g., see cache systems600and1000shown inFIGS.6and10respectively) having interchangeable cache sets (e.g., see cache sets1810a,1810b, and1810c) utilizing a mapping circuit1830to map physical cache set outputs (e.g., see physical outputs1820a,1820b, and1820c) to logical cache set outputs (e.g., see logical outputs1840a,1840b, and1840c), in accordance with some embodiments of the present disclosure. As shown, the cache system can include a plurality of cache sets (e.g., see cache sets1810a,1810b, and1810c). The plurality of cache sets includes a first cache set (e.g., see cache set1810a) configured to provide a first physical output (e.g., see physical output1820a) upon a cache hit and a second cache set (e.g., see cache set1810b) configured to provide a second physical output (e.g., see physical output1820b) upon a cache hit. 
The cache system can also include a connection (e.g., see connection604adepicted inFIGS.6and10) to a command bus (e.g., see command bus605a) coupled between the cache system and a processor (e.g., see processors601and1001). The cache system can also include a connection (e.g., see connection604b) to an address bus (e.g., see address bus605b) coupled between the cache system and the processor. As shown inFIG.18, the cache system includes a control register1832(e.g., a physical-to-logical-set-mapping (PLSM) register1832) and a mapping circuit1830coupled to the control register to map respective physical outputs (e.g., see physical outputs1820a,1820b, and1820c) of the plurality of cache sets (e.g., see cache sets1810a,1810b, and1810c) to a first logical cache (e.g., a normal cache) and a second logical cache (e.g., a shadow cache) as corresponding logical cache set outputs (e.g., see logical outputs1840a,1840b, and1840c). The mapping, by the mapping circuit1830, of the physical outputs (e.g., see physical outputs1820a,1820b, and1820c) to logical cache set outputs (e.g., see logical outputs1840a,1840b, and1840c) is according to a state of the control register1832. As shown inFIG.18, at least the logical outputs1840aand1840bare mapped to the first logical cache for the first type of execution, and at least the logical output1840cis mapped to the second logical cache for the second type of execution. Although not shown, the cache system can be configured to be coupled between the processor and a memory system (e.g., see memory system603). When the connection (e.g., see connection604b) to the address bus (e.g., see address bus605b) receives a memory address (e.g., see memory address102b) from the processor (e.g., see processors601and1001) and when the control register1832is in a first state (shown inFIG.18), the mapping circuit1830can be configured to map the first physical output (e.g., see physical output1820a) to the first logical cache for a first type of execution by the processor (e.g., see logical output1840a) to implement commands received from the command bus (e.g., see command bus605a) for accessing the memory system (e.g., see memory system603) via the first cache set (e.g., cache set1810a) during the first type of execution (e.g., non-speculative execution). Also, when the connection (e.g., see connection604b) to the address bus (e.g., see address bus605b) receives a memory address (e.g., see memory address102b) from the processor (e.g., see processors601and1001) and when the control register1832is in a first state (shown inFIG.18), the mapping circuit1830can be configured to map the second physical output (e.g., see physical output1820b) to the second logical cache for a second type of execution by the processor (e.g., see logical output1840b) to implement commands received from the command bus (e.g., see command bus605a) for accessing the memory system (e.g., see memory system603) via the second cache set (e.g., cache set1810b) during the second type of execution (e.g., speculative execution). 
When the connection (e.g., see connection604b) to the address bus (e.g., see address bus605b) receives a memory address (e.g., see memory address102b) from the processor (e.g., see processors601and1001) and when the control register1832is in a second state (not shown inFIG.18), the mapping circuit1830is configured to map the first physical output (e.g., see physical output1820a) to the second logical cache (e.g., see logical output1840b) to implement commands received from the command bus (e.g., see command bus605a) for accessing the memory system (e.g., see memory system603) via the first cache set (e.g., cache set1810a) during the second type of execution (e.g., speculative execution). Also, when the connection (e.g., see connection604b) to the address bus (e.g., see address bus605b) receives a memory address (e.g., see memory address102b) from the processor (e.g., see processors601and1001) and when the control register1832is in the second state (not shown inFIG.18), the mapping circuit1830is configured to map the second physical output (e.g., see physical output1820b) to the first logical cache (e.g., see logical output1840a) to implement commands received from the command bus (e.g., see command bus605a) for accessing the memory system (e.g., see memory system603) via the second cache set (e.g., cache set1810b) for the first type of execution (e.g., non-speculative execution). In some embodiments, the first logical cache is a normal cache for non-speculative execution by the processor, and the second logical cache is a shadow cache for speculative execution by the processor. The mapping circuit1830solves the problem related to the execution type; that is, the mapping circuit1830provides a solution to how the execution type relates to mapping physical cache sets to logical cache sets. If the mapping circuit1830is used, a memory address (e.g., see address102b) can be applied in each cache set (e.g., see cache sets1810a,1810b, and1810c) to generate a physical output (e.g., see physical outputs1820a,1820b, and1820c). The physical output (e.g., see physical outputs1820a,1820b, and1820c) includes the tag and the cache block that are looked up using a block index of the memory address (e.g., see block index106b). The mapping circuit1830can reroute the physical output (e.g., see physical outputs1820a,1820b, and1820c) to one of the logical outputs (e.g., see logical outputs1840a,1840b, and1840c). The cache system can do a tag compare at the physical output or at the logical output. If the tag compare is done at the physical output, the tag hit or miss of the physical output is routed through the mapping circuit1830to generate a hit or miss of the logical output. Otherwise, the tag itself is routed through the mapping circuit1830; and a tag compare is performed at the logical output to generate the corresponding tag hit or miss result. As illustrated inFIG.18, the logical outputs are predefined for speculative execution and non-speculative execution. Therefore, the current execution type (e.g., see execution type110e) can be used to select which part of the logical outputs is to be used. For example, since it is pre-defined that the logical output1840cis for speculative execution inFIG.18, its results can be discarded if the current execution type is normal execution. Otherwise, if the current execution type is speculative, the results from the first part of the logical outputs inFIG.18(e.g., outputs1840aand1840b) can be blocked. 
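For illustration, the routing performed by the mapping circuit1830can be sketched as a small table lookup driven by the control register, with an operation that swaps which physical cache set backs the normal (logical) cache and which backs the shadow (logical) cache. The encoding of the control register, the struct layout, and the function names are assumptions for illustration only.

```c
#include <stdint.h>

#define NUM_SETS 3   /* e.g., cache sets such as 1810a, 1810b, and 1810c */

/* plsm[i] names the physical cache set whose output is routed to logical
 * output i; in the first state, lower-numbered logical outputs serve the
 * normal cache and the last serves the shadow cache.  The encoding is an
 * assumption for illustration and stands in for control register 1832. */
struct mapping_circuit {
    uint8_t plsm[NUM_SETS];
};

/* Route physical outputs to logical outputs according to the control
 * register, as the mapping circuit 1830 is described as doing. */
static void map_outputs(const struct mapping_circuit *m,
                        const uint64_t physical_out[NUM_SETS],
                        uint64_t logical_out[NUM_SETS])
{
    for (int logical = 0; logical < NUM_SETS; logical++)
        logical_out[logical] = physical_out[m->plsm[logical]];
}

/* Swapping two entries of the control register interchanges which physical
 * cache set backs the normal cache and which backs the shadow cache, without
 * moving any cached data. */
static void swap_normal_and_shadow(struct mapping_circuit *m,
                                   int normal_slot, int shadow_slot)
{
    uint8_t tmp = m->plsm[normal_slot];
    m->plsm[normal_slot] = m->plsm[shadow_slot];
    m->plsm[shadow_slot] = tmp;
}
```

A design benefit suggested by this arrangement is that interchanging the caches only rewrites the control register state; no cached data needs to be copied between cache sets.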
In the embodiment shown inFIG.18, if the current execution type is speculative, the hit or miss results from the logical outputs for the non-speculative execution can be AND'ed with ‘0’ to force a cache “miss”; and the hit or miss results from the logical outputs for the non-speculative execution can be AND′ed with ‘1’ to keep the results unaltered. Execution type110ecan be configured such that speculative execution=0 and non-speculative execution=1, and the tag hit or miss results from non-speculative outputs1840ato1840bcan be AND′ed with execution type (e.g., execution type110e) to generate the hit or miss that includes the consideration of matching both the tag and the execution type. And, the tag hit or miss results from1840ccan be AND′ed with the inverse of the execution type110eto generate the hit or miss. FIGS.19and20show example aspects of example computing devices having cache systems (e.g., see cache systems600and1000shown inFIGS.6and10respectively) having interchangeable cache sets (e.g., see cache sets1810a,1810b, and1810cdepicted inFIGS.18to21) utilizing the circuit shown inFIG.18, the mapping circuit1830, to map physical cache set outputs (e.g., see physical outputs1820a,1820b, and1820cdepicted inFIG.18as well as physical output1820ashown inFIG.19) to logical cache set outputs (e.g., see logical outputs1840a,1840b, and1840c), in accordance with some embodiments of the present disclosure. In particular,FIG.19shows the first cache set1810a, the first cache set register1812a, the tag1815afor the first cache set (which includes a current tag and cache set index), the tag and set index1850from the address102b(which includes a current tag104band a current cache set index112bfrom memory address102b), and the tag compare circuit1860afor the first cache set1810a. Also,FIG.19shows the first cache set1810ahaving cache blocks and associated tags (e.g., see cache blocks1818aand1818band tags1816aand1816b) as well as the first cache set register1812aholding a cache set index1813afor the first cache set. Further,FIG.19shows the tag compare circuit1860bfor the second cache set1810b. The figure shows the physical output1820afrom the first cache set1810abeing outputted to the mapping circuit1830. The second cache set1810band other cache sets of the system can provide their respective physical outputs to the mapping circuit1830as well (although this is not depicted inFIG.19). FIG.20shows an example of multiple cache sets of the system providing physical outputs to the mapping circuit1830(e.g., see physical outputs1820a,1820b, and1820cprovided by cache sets1810a,1810b, and1810c, respectively, as shown inFIG.20).FIG.20also depicts parts of the mapping circuit1830(e.g., see multiplexors2004a,2004b, and2004cas well as PLSM registers2006a,2006b, and2006c).FIG.20also shows the first cache1810ahaving at least cache blocks1818aand1818band associated tags1816aand1816b. And, the second cache1810bis also shown having at least cache blocks1818cand1818dand associated tags1816cand1816d. FIG.19also shows multiplexors1904aand1904bas well as PLSM registers1906aand1906b, which can be parts of a logic circuit (e.g., see logic circuits606and1006) and/or a mapping circuit (e.g., see mapping circuit1830). Each of the multiplexors1904aand1904breceive at least hit or miss results1862aand1862bfrom tag compare circuits1860aand1860bwhich each compare respective tags for cache sets (e.g., see tag for the first cache set1815a) against the tag and set index from the memory address (e.g., see tag and set index1850). 
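The gating just described can be illustrated directly, assuming the stated encoding of execution type110e(speculative execution=0, non-speculative execution=1); the function names below are hypothetical.

```c
#include <stdbool.h>

/* Encoding assumed in the text: speculative execution = 0,
 * non-speculative execution = 1 (execution type 110e). */
typedef bool exec_type_t;

/* Gate the tag hit/miss of a logical output reserved for non-speculative
 * execution: AND with the execution type, so a speculative access (0) is
 * forced to a miss and a non-speculative access (1) passes through. */
static bool gated_hit_non_speculative(bool tag_hit, exec_type_t exec_type)
{
    return tag_hit && exec_type;
}

/* Gate the tag hit/miss of a logical output reserved for speculative
 * execution: AND with the inverse of the execution type. */
static bool gated_hit_speculative(bool tag_hit, exec_type_t exec_type)
{
    return tag_hit && !exec_type;
}
```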
In some examples, there can be equivalent multiplexors for each tag compare for each cache set of the system. Each of the multiplexors (e.g., see multiplexors1904aand1904b) can output a selected hit or miss result based on the state of the multiplexor's respective PLSM register (e.g., see PLSM registers1906aand1906b). The PLSM registers controlling the selection of the multiplexors for outputting the cache hits or misses from the cache set comparisons can be controlled by a master PLSM register such as control register1832when such registers are a part of the mapping circuit1830. In some embodiments, each of the PLSM registers (e.g., see PLSM registers1906aand1906bas well as PLSM registers2110a,2110b, and2110cdepicted inFIG.21) can be a one-, two-, or three-bit register or any bit length register depending on the specific implementation. Such PLSM registers can be used (such as used by a multiplexor) to select the appropriate physical tag compare result or the correct result of one of logic units outputting hits or misses. For the case of the PLSM registers2006a,2006b, and2006cdepicted inFIG.20, such registers can be used (such as used by a multiplexor) to select the appropriate physical outputs (e.g., see physical outputs1820a,1820b, and1820cshown inFIG.20) of cache sets (e.g., see cache sets1810a,1810b, and1810cas shown inFIG.20). Such PLSM registers can also each be a one-, two-, or three-bit register or any bit length register depending on the specific implementation. Also, the control register1832can be a one-, two-, or three-bit register or any bit length register depending on the specific implementation. In some embodiments, selections of physical outputs from cache sets or selections of cache hits or misses are by multiplexors that can be arranged in the system to have at least one multiplexor per type of output and per logic unit or per cache set (e.g., see multiplexors1904aand1904bshown inFIG.19, multiplexors2004a,2004b, and2004cshown inFIG.20, and multiplexors2110a,2110b, and2110cshown inFIG.21). As shown in the figures, in some embodiments, where there is an n number of cache sets or logic compare units, there are an n number of n-to-1 multiplexors. As shown inFIG.19, the computing device can include a first multiplexor (e.g., multiplexor1904a) configured to output, to the processor, the first hit-or-miss result or the second hit-or-miss result (e.g., see hit or miss outputs1862aand1862bas shown inFIG.19) according to the content received by the first PLSM register (e.g., see PLSM register1906a). The computing device can also include a second multiplexor (e.g., multiplexor1904b) configured to output, to the processor, the second hit-or-miss result or the first hit-or-miss result (e.g., see hit or miss outputs1862band1862aas shown inFIG.19) according to the content received by the second PLSM register (e.g., see PLSM register1906b). In some embodiments, the contents of the PLSM registers can be received from a control register such as control register1832shown inFIG.18. For example, in some embodiments, when the content received by the first PLSM register indicates a first state, the first multiplexor outputs the first hit-or-miss result, and when the content received by the first PLSM register indicates a second state, the first multiplexor outputs the second hit-or-miss result. Also, when the content received by the second PLSM register indicates the first state, the second multiplexor can output the second hit-or-miss result. 
And, when the content received by the second PLSM register indicates the second state, the second multiplexor can output the first hit-or-miss result. As shown inFIG.20, the computing device can include a first multiplexor (e.g., multiplexor2004a) configured to output, to the processor, the first physical output of the first cache set1820aor the second physical output of the second cache set1820baccording to the content received by the first PLSM register (e.g., PLSM register2006a). The computing device can include a second multiplexor (e.g., multiplexor2004b) configured to output, to the processor, the first physical output1820aof the first cache set or the second physical output1820bof the second cache set according to the content received by the second PLSM register (e.g., PLSM register2006b). In some embodiments, the contents of the PLSM registers can be received from a control register such as control register1832shown inFIG.18. For example, in some embodiments, when the content received by the first PLSM register indicates a first state, the first multiplexor outputs the first physical output1820a, and when the content received by the first PLSM register indicates a second state, the first multiplexor outputs the second physical output1820b. Also, when the content received by the second PLSM register indicates the first state, the second multiplexor can output the second physical output1820b. And, when the content received by the second PLSM register indicates the second state, the second multiplexor can output the first physical output1820a. In some embodiments, block selection can be based on a combination of a block index and a main or shadow setting. Such parameters can control the PLSM registers. In some embodiments, such as the example shown inFIGS.19and20, only one address (e.g., tag and index) are fed into the interchangeable cache sets (e.g., cache sets1810a,1810band1810c). In such embodiments, there is a signal controlling which cache set is updated according to memory if that cache set produces a miss. Multiplexor1904ais controlled by the PLSM register1906ato provide hit or miss output of cache set1810aand thus the hit or miss status of the cache set for the main or normal execution, when the cache sets are in a first state. Multiplexor1904bis controlled by the PLSM register1906bto provide hit or miss output of cache set1810band thus the hit or miss status of the cache set for the speculative execution, when the cache sets are in the first state. On the other hand, multiplexor1904ais controlled by the PLSM register1906ato provide hit or miss output of cache set1810band thus the hit or miss status of the cache set for the main or normal execution, when the cache sets are in a second state. Multiplexor1904bis controlled by the PLSM register1906bto provide hit or miss output of cache set1810aand thus the hit or miss status of the cache set for the speculative execution, when the cache sets are in the second state. Similar to the selection of hit or miss signals, the data looked up from the interchangeable caches can be selected to produce one result for the processor (such as if there is a hit), for example see physical outputs1820a,1820b, and1820cshown inFIG.20. 
For example, in a first state of the cache sets, when cache set1810ais used as main cache set and cache set1810bis used as shadow cache set, the multiplexor2004ais controlled by the PLSM register2006ato select the physical output1820aof cache set1810afor the main or normal logical cache used for non-speculative executions. Also, for example, in a second state of the cache sets, when cache set1810bis used as main cache set and cache set1810ais used as shadow cache set, then the multiplexor2004ais controlled by the PLSM register2006ato select the physical output1820bof cache set1810bfor the main or normal logical cache used for non-speculative executions. In such examples, in the first state of the cache sets, when cache set1810ais used as main cache set and cache set1810bis used as shadow cache set, then the multiplexor2004bis controlled by the PLSM register2006bto select the physical output1820bof cache set1810bfor the shadow logical cache used for speculative executions. Also, for example, in the second state of the cache sets, when cache set1810bis used as main cache set and cache set1810ais used as shadow cache set, then the multiplexor2004bis controlled by the PLSM register2006bto select the physical output1820aof cache set1810afor the shadow logical cache used for speculative executions. In some embodiments, the cache system can further include a plurality of registers (e.g., see register1812aas shown inFIG.19) associated with the plurality of cache sets respectively (e.g., see cache sets1810a,1810b, and1810cas shown inFIGS.18to21). The registers can include a first register (e.g., see register1812a) associated with the first cache set (e.g., see cache set1810a) and a second register (not depicted inFIGS.18to21but depicted inFIGS.6and10) associated with the second cache set (e.g., see cache set1810b). The cache system can also include a logic circuit (e.g., see logic circuits606and1006) coupled to the processor (e.g., see processors601and1001) to control the plurality of cache sets according to the plurality of registers. When the connection (e.g., see connection604b) to the address bus (e.g., see address bus605b) receives a memory address from the processor, the logic circuit can be configured to generate a set index from at least the memory address and determine whether the generated set index matches with a content stored in the first register or with a content stored in the second register. And, the logic circuit can be configured to implement a command received in the connection (e.g., see connection604a) to the command bus (e.g., see command bus605a) via the first cache set in response to the generated set index matching with the content stored in the first register and via the second cache set in response to the generated set index matching with the content stored in the second register. In some embodiments, the mapping circuit (e.g., see mapping circuit1830) can be a part of or connected to the logic circuit and the state of the control register (e.g., see control register1832) can control a state of a cache set of the plurality of cache sets. In some embodiments, the state of the control register can control the state of a cache set of the plurality of cache sets by changing a valid bit for each block of the cache set (e.g., seeFIGS.21to23). 
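The selection performed by the multiplexors and their PLSM registers in FIGS. 19 and 20 can be modeled as n n-to-1 multiplexors whose select inputs are held in the PLSM registers. The sketch below assumes each PLSM register simply holds the index of the physical result its multiplexor forwards and that a master control register writes those indices; the names are illustrative.

# Sketch of n n-to-1 multiplexors driven by PLSM registers (illustrative names only).
def select_outputs(physical_results, plsm_registers):
    """Logical output i forwards the physical result selected by plsm_registers[i]."""
    return [physical_results[sel] for sel in plsm_registers]

# Three cache sets producing hit (1) / miss (0) results, three logical outputs.
physical_results = [1, 0, 1]   # results from cache sets 0, 1, 2
plsm_registers = [0, 1, 2]     # first state: identity mapping
print(select_outputs(physical_results, plsm_registers))  # [1, 0, 1]
plsm_registers = [1, 0, 2]     # second state: sets 0 and 1 swap logical roles
print(select_outputs(physical_results, plsm_registers))  # [0, 1, 1]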
Also, in some examples, the cache system can further include a connection (e.g., see connection1002) to a speculation-status signal line (e.g., see speculation-status signal line1004) from the processor identifying a status of a speculative execution of instructions by the processor. The connection to the speculation-status signal line can be configured to receive the status of a speculative execution, and the status of a speculative execution can indicate that a result of a speculative execution is to be accepted or rejected. When the execution type changes from the speculative execution to a non-speculative execution, the logic circuit (e.g., see logic circuits606and1006) can be configured to change, via the control register (e.g., see control register1832), the state of the first and second cache sets, if the status of speculative execution indicates that a result of speculative execution is to be accepted. And, when the execution type changes from the speculative execution to a non-speculative execution, the logic circuit can be configured to maintain, via the control register, the state of the first and second cache sets without changes, if the status of speculative execution indicates that a result of speculative execution is to be rejected. In some embodiments, the mapping circuit (e.g., see mapping circuit1830) is part of or connected to the logic circuit (e.g., see logic circuits606and1006) and the state of the control register (e.g., see control register1832) can control a state of a cache register of the plurality of cache registers (e.g., see register1812aas shown inFIG.19) via the mapping circuit. In such examples, the cache system can further include a connection (e.g., see connection1002) to a speculation-status signal line (e.g., see speculation-status signal line1004) from the processor identifying a status of a speculative execution of instructions by the processor. The connection to the speculation-status signal line can be configured to receive the status of a speculative execution, and the status of a speculative execution indicates that a result of a speculative execution is to be accepted or rejected. When the execution type changes from the speculative execution to a non-speculative execution, the logic circuit can be configured to change, via the control register, the state of the first and second registers, if the status of speculative execution indicates that a result of speculative execution is to be accepted. And, when the execution type changes from the speculative execution to a non-speculative execution, the logic circuit can be configured to maintain, via the control register, the state of the first and second registers without changes, if the status of speculative execution indicates that a result of speculative execution is to be rejected. FIG.21shows example aspects of example computing device having a cache system having interchangeable cache sets (such as the cache sets shown inFIG.18, including cache sets1810a,1810b, and1810c), in accordance with some embodiments of the present disclosure. The cache sets (e.g., cache sets1810a,1810b, and1810c) are shown utilizing the circuit shown inFIG.18, mapping circuit1830, to map physical cache set outputs to logical cache set outputs. The parts depicted inFIG.21are part of a computing device that includes memory, such as main memory, a processor, e.g., see processor1001, and at least three interchangeable cache sets (e.g., see interchangeable cache sets1810a,1810b, and1810c). 
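Before turning to the details of FIG. 21, the accept-or-reject handling just described can be summarized in a short behavioral sketch. It assumes a single-bit control register whose value selects which cache set plays the normal role; the function name and arguments are illustrative.

# Sketch of how the speculation status can drive the control register when execution
# returns from speculative to non-speculative mode (illustrative names only).
def on_speculation_resolved(control_register_state, speculation_accepted):
    """Return the new control register state when execution returns to non-speculative mode."""
    if speculation_accepted:
        # Accepted: swap the cache set roles so speculative results become the normal cache.
        return control_register_state ^ 1
    # Rejected: keep the current state; the speculative cache set's contents are simply unused.
    return control_register_state

state = 0
state = on_speculation_resolved(state, speculation_accepted=True)   # state becomes 1 (swapped)
state = on_speculation_resolved(state, speculation_accepted=False)  # state stays 1 (unchanged)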
The processor is configured to execute a main thread and a speculative thread. As shown inFIG.21, a first cache set (e.g., cache set1810a) can be coupled in between the memory and the processor, and can include a first plurality of blocks (e.g., see blocks2101a,2101b, and2101cshown inFIG.21) for the main thread, in a first state of the cache set. Each block of the first plurality of blocks can include cached data, a first valid bit, and a block address including an index and a tag. And, the processor, solely or in combination with a cache controller, can be configured to change each first valid bit from indicating valid to invalid when a speculation of the speculative thread is successful so that the first plurality of blocks becomes accessible for the speculative thread and blocked for the main thread, in a second state of the cache set. As shown inFIG.21, a second cache set (e.g., cache set1810b) can be coupled in between the main memory and the processor, and can include a second plurality of blocks (e.g., see blocks2101d,2101e, and2101fshown inFIG.21) for the speculative thread, in a first state of the cache set. Each block of the second plurality of blocks can include cached data, a second valid bit, and a block address including an index and a tag. And, the processor, solely or in combination with the cache controller, can be configured to change each second valid bit from indicating invalid to valid when a speculation of the speculative thread is successful so that the second plurality of blocks becomes accessible for the main thread and blocked for the speculative thread, in a second state of the cache set. In some embodiments, as shown inFIG.21, a block of the first plurality of blocks can correspond to a respective block of the second plurality of blocks. And, the block of the first plurality of blocks can correspond to the respective block of the second plurality of blocks by having a same block address as the respective block of the second plurality of blocks. Also, as shown inFIG.21, the computing device can include a first physical-to-logical-mapping-set-mapping (PLSM) register (e.g., PLSM register2108a) configured to receive a first valid bit of a block of the first plurality of blocks. The first valid bit can be indicative of the validity of the cached data of the block of the first plurality of blocks. It can also be indicative of whether to use, in the main thread, the block of the first plurality of blocks or the corresponding block of the second plurality of blocks. Also, as shown inFIG.21, the computing device can include a second PLSM register (e.g., PLSM register2108b) configured to receive a second valid bit of a block of the second plurality of blocks. The second valid bit can be indicative of the validity of the cached data of the block of the second plurality of blocks. It can also be indicative of whether to use, in the main thread, the block of the second plurality of blocks or the corresponding block of the first plurality of blocks. Also, as shown inFIG.21, the computing device can include a logic unit2104afor the first cache set, which is configured to determine whether a block of the first plurality of blocks hits or misses. The logic unit2104ais shown including a comparator2106aand an AND gate2107a. The comparator2106acan determine whether there is a match between the tag of the block and a corresponding tag of the address in memory. 
And, if the tags match and the valid bit for the block is valid, then the AND gate2107aoutputs an indication that the block hits. Otherwise, the AND gate2107aoutputs an indication that the block misses. To put it another way, the logic unit2104afor the first cache set is configured to output a first hit-or-miss result according to the determination at the logic unit. Also, as shown inFIG.21, the computing device can include a logic unit2104bfor the second cache set, which is configured to determine whether a block of the second plurality of blocks hits or misses. The logic unit2104bis shown including a comparator2106band an AND gate2107b. The comparator2106bcan determine whether there is a match between the tag of the block and a corresponding tag of the address in memory. And, if the tags match and the valid bit for the block is valid, then the AND gate2107boutputs an indication that the block hits. Otherwise, the AND gate2107boutputs an indication that the block misses. To put it another way, the logic unit2104bfor the second cache set is configured to output a second hit-or-miss result according to the determination at the logic unit. Also, as shown inFIG.21, the computing device can include a first multiplexor (e.g., multiplexor2110a) configured to output, to the processor, the first hit-or-miss result or the second hit-or-miss result according to the first valid bit received by the first PLSM register. The computing device can also include a second multiplexor (e.g., multiplexor2110b) configured to output, to the processor, the second hit-or-miss result or the first hit-or-miss result according to the second valid bit received by the second PLSM register. In some embodiments, when the first valid bit received by the first PLSM register indicates valid, the first multiplexor outputs the first hit-or-miss result, and when the first valid bit received by the first PLSM register indicates invalid, the first multiplexor outputs the second hit-or-miss result. Also, when the second valid bit received by the second PLSM register indicates valid, the second multiplexor outputs the second hit-or-miss result. And, when the second valid bit received by the second PLSM register indicates invalid, the second multiplexor outputs the first hit-or-miss result. In some embodiments, block selection can be based on a combination of a block index and a main or shadow setting. In some embodiments, only one address (e.g., tag and index) is fed into the interchangeable cache sets (e.g., cache sets1810a,1810band1810c). In such embodiments, there is a signal controlling which cache set is updated according to memory if that cache set produces a miss. Similar to the selection of hit or miss signals, the data looked up from the interchangeable caches can be selected to produce one result for the processor (such as if there is a hit). For example, in a first state of the cache sets, if cache set1810ais used as main cache set and cache set1810bis used as shadow cache set, then the multiplexor2110ais controlled by the PLSM register2108ato select the hit or miss output of cache set1810aand thus the hit or miss status of the main cache set. And, multiplexor2110bis controlled by the PLSM register2108bto provide hit or miss output of cache set1810band thus the hit or miss status of the shadow cache set. 
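The comparator-and-AND-gate hit logic and the valid-bit-driven selection described above for the first state can be modeled with a short sketch (the second state, discussed next, simply has the opposite valid bits). The function names and the tag encoding are illustrative assumptions.

# Behavioral sketch of the hit/miss logic units and the valid-bit-driven multiplexing.
def block_hit(block_tag, request_tag, valid_bit):
    """Comparator plus AND gate: a block hits only if its tag matches and it is valid."""
    return int(block_tag == request_tag and valid_bit == 1)

def select_for_main_thread(hit_set_a, hit_set_b, plsm_valid_bit_a):
    """Multiplexor for the main thread: use set A's result while its valid bit says valid."""
    return hit_set_a if plsm_valid_bit_a == 1 else hit_set_b

# Set A holds the main thread's block (valid); set B holds the speculative copy (invalid).
hit_a = block_hit("0xAB", "0xAB", valid_bit=1)   # 1 (tag match and valid)
hit_b = block_hit("0xAB", "0xAB", valid_bit=0)   # 0 (cleared valid bit forces a miss)
print(select_for_main_thread(hit_a, hit_b, plsm_valid_bit_a=1))  # 1: main thread sees set A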
In such embodiments, when the cache sets are in a second state, when cache set1810ais used as shadow cache and cache set1810bis used as main cache, the multiplexor2110acan be controlled by the PLSM register2108bto select the hit or miss output of cache set1810band hit or miss status of the main cache. And, multiplexor2110bcan be controlled by the PLSM register2108bto provide hit or miss output of cache set1810aand thus the hit or miss status of the shadow cache. Thus, multiplexor2110acan output whether the main cache has hit or miss in the cache for the address; and the multiplexor2110bcan output whether a shadow cache has hit or miss in the cache for the same address. Then, depending on whether or not the address is speculative, the one of the output can be selected. When there is a cache miss, the address is used in the memory to load data to a corresponding cache. The PLSM registers can similarly enable the update of the corresponding cache set1810aor set1810b. In some embodiments, in the first state of the cache sets, during speculative execution of a first instruction by the speculative thread, effects of the speculative execution are stored within the second cache set (e.g., cache set1810b). During the speculative execution of the first instruction, the processor can be configured to assert a signal indicative of the speculative execution which is configured to block changes to the first cache set (e.g., cache set1810a). When the signal is asserted by the processor, the processor can be further configured to block the second cache set (e.g., cache set1810b) from updating the memory. When the state of the cache sets changes to the second state, in response to a determination that execution of the first instruction is to be performed with the main thread, the second cache set (instead of the first cache set) is used with the first instruction. In response to a determination that execution of the first instruction is not to be performed with the main thread, the first cache set is used with the first instruction. In some embodiments, in the first state, during the speculative execution of first instruction, the processor accesses the memory via the second cache set (e.g., cache set1810b). And, during the speculative execution of one or more instructions, access to content in the second cache is limited to the speculative execution of the first instruction by the processor. During the speculative execution of the first instruction, the processor can be prohibited from changing the first cache set (e.g., cache set1810a). In some embodiments, the content of the first cache set (e.g., cache set1810a) and/or the second cache set (e.g., cache set1810b) can be accessible via a cache coherency protocol. FIGS.22and23show methods2200and2300, respectively, for using interchangeable cache sets for speculative and non-speculative executions by a processor, in accordance with some embodiments of the present disclosure. In particular, the methods2200and2300can be performed by a computing device illustrated inFIG.21. Also, somewhat similar methods could be performed by the computing device illustrated inFIGS.18-20as well as any of the computing devices disclosed herein; however, such computing devices would control cache state, cache set state, or cache set register state via another parameter besides the valid bit of a block address. For example, inFIG.16a state of the cache set is controlled via a cache set indicator within the tag of a block of the cache set. 
And, for example, inFIG.17, a state of the cache set is controlled via the state of the cache set register associated with the cache set. In such an example, the state is controlled via the cache set index stored in the cache set register. On the other hand, for the embodiments disclosed throughFIGS.21to23, the state of a cache set is controlled via the valid bit of a block address within the cache set. Method2200includes, at block2202, executing, by a processor (e.g., processor1001), a main thread and a speculative thread. The method2200, at block2204, includes providing, in a first cache set of a cache system coupled in between a memory system and the processor (e.g., cache set1810aas shown inFIG.21), a first plurality of blocks for the main thread (e.g., blocks2101a,2101b, and2101cdepicted inFIG.21). Each block of the first plurality of blocks can include cached data, a first valid bit, and a block address having an index and a tag. The method2200, at block2206, includes providing, in a second cache set of the cache system coupled in between the memory system and the processor (e.g., cache set1810b), a second plurality of blocks for the speculative thread (e.g., blocks2101d,2101e, and2101f). Each block of the second plurality of blocks can include cached data, a second valid bit, and a block address having an index and a tag. At block2207, the method2200continues with identifying, such as by the processor, whether a speculation of the speculative thread is successful so that the first plurality of blocks becomes accessible for the speculative thread and blocked for the main thread and so that the second plurality of blocks becomes accessible for the main thread and blocked for the speculative thread. As shown inFIG.22, if the speculation of the speculative thread fails, then validity bits of the first and second plurality of blocks are not changed by the processor and remain with the same validity values as prior to the determination of whether the speculative thread was successful at block2207. Thus, the state of the cache sets does not change from a first state to a second state. At block2208, the method2200continues with changing, by the processor solely or in combination with a cache controller, each first valid bit from indicating valid to invalid when a speculation of the speculative thread is successful so that the first plurality of blocks becomes accessible for the speculative thread and blocked for the main thread. Also, at block2210, the method2200continues with changing, by the processor solely or in combination with the cache controller, each second valid bit from indicating invalid to valid when a speculation of the speculative thread is successful so that the second plurality of blocks becomes accessible for the main thread and blocked for the speculative thread. Thus, the state of the cache sets does change from the first state to the second state. In some embodiments, during speculative execution of a first instruction by the speculative thread, effects of the speculative execution are stored within the second cache set. In such embodiments, during the speculative execution of the first instruction, the processor can assert a signal indicative of the speculative execution which can block changes to the first cache. Also, when the signal is asserted by the processor, the processor can block the second cache from updating the memory. This occurs while the cache sets are in the first state. 
Also, in such embodiments, in response to a determination that execution of the first instruction is to be performed with the main thread, the second cache set (instead of the first cache set) is used with the first instruction. In response to a determination that execution of the first instruction is not to be performed with the main thread, the first cache is used with the first instruction. This occurs while the cache sets are in the second state. In some embodiments, during the speculative execution of first instruction, the processor accesses the memory via the second cache. And, during the speculative execution of one or more instructions, access to content in the second cache is limited to the speculative execution of the first instruction by the processor. In such embodiments, during the speculative execution of the first instruction, the processor is prohibited from changing the first cache. In some embodiments, content of the first cache is accessible via a cache coherency protocol. InFIG.23, method2300includes the operations at blocks2202,2204,2206,2207,2208, and2210of method2200. Method2300, at block2302, includes receiving, by a first physical-to-logical-mapping-set-mapping (PLSM) register (e.g., PLSM register2108ashown inFIG.21), a first valid bit of a block of the first plurality of blocks. The first valid bit can be indicative of the validity of the cached data of the block of the first plurality of blocks. Also, the method2300, at block2304, includes receiving, by a second PLSM register (e.g., PLSM register2108b), a second valid bit of a block of the second plurality of blocks. The second valid bit can be indicative of the validity of the cached data of the block of the second plurality of blocks. At block2306, the method2300includes determining, by a first logic unit (e.g., logic unit2104adepicted inFIG.21) for the first cache set, whether a block of the first plurality of blocks hits or misses. At block2307, the method2300continues with outputting, by the first logic unit, a first hit-or-miss result according to the determination. Also, at block2308, the method2300includes determining, by a second logic unit for the second cache set (e.g., logic unit2104b), whether a block of the second plurality of blocks hits or misses. At block2309, the method2300continues with outputting, by the second logic unit, a second hit-or-miss result according to the determination. At block2310, the method2300continues with outputting to the processor, by a first multiplexor (e.g., multiplexor2110adepicted inFIG.21), the first hit-or-miss result or the second hit-or-miss result according to the first valid bit received by the first PLSM register. In some embodiments, when the first valid bit received by the first PLSM register indicates valid, the first multiplexor outputs the first hit-or-miss result, and when the first valid bit received by the first PLSM register indicates invalid, the first multiplexor outputs the second hit-or-miss result. And, at block2312, outputting to the processor, by a second multiplexor (e.g., multiplexor2110b), the second hit-or-miss result or the first hit-or-miss result according to the second valid bit received by the second PLSM register. In some embodiments, when the second valid bit received by the second PLSM register indicates valid, the second multiplexor outputs the second hit-or-miss result. And, when the second valid bit received by the second PLSM register indicates invalid, the second multiplexor outputs the first hit-or-miss result. 
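A compact way to see blocks 2207, 2208, and 2210 of these methods is as a commit step that flips valid bits instead of copying data. The sketch below is a behavioral model only; the block dictionaries and the function name are illustrative.

# Sketch of the valid-bit flip performed when a speculation succeeds (illustrative names).
def commit_speculation(main_set_blocks, spec_set_blocks, speculation_successful):
    """Each block is a dict with a 'valid' field; flip the valid bits only on success."""
    if not speculation_successful:
        return  # Block 2207 "no" path: leave both sets in the first state.
    for block in main_set_blocks:
        block["valid"] = 0   # Block 2208: main-thread blocks become invalid (blocked for main).
    for block in spec_set_blocks:
        block["valid"] = 1   # Block 2210: speculative blocks become valid (visible to main).

main_set = [{"tag": "0x1", "valid": 1}, {"tag": "0x2", "valid": 1}]
spec_set = [{"tag": "0x1", "valid": 0}, {"tag": "0x2", "valid": 0}]
commit_speculation(main_set, spec_set, speculation_successful=True)
print(main_set[0]["valid"], spec_set[0]["valid"])  # 0 1 -> the two sets have exchanged roles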
Some embodiments can include a central processing unit having processing circuitry configured to execute a main thread and a speculative thread. The central processing unit can also include or be connected to a first cache set of a cache system configured to couple in between a main memory and the processing circuitry, having a first plurality of blocks for the main thread. Each block of the first plurality of blocks can include cached data, a first valid bit, and a block address including an index and a tag. The processing circuitry, solely or in combination with a cache controller, can be configured to change each first valid bit from indicating valid to invalid when a speculation of the speculative thread is successful, so that the first plurality of blocks becomes accessible for the speculative thread and blocked for the main thread. The central processing unit can also include or be connected to a second cache set of the cache system coupled in between the main memory and the processing circuitry, including a second plurality of blocks for the speculative thread. Each block of the second plurality of blocks can include cached data, a second valid bit, and a block address having an index and a tag. The processing circuitry, solely or in combination with the cache controller, can be configured to change each second valid bit from indicating invalid to valid when a speculation of the speculative thread is successful, so that the second plurality of blocks becomes accessible for the main thread and blocked for the speculative thread. And, a block of the first plurality of blocks corresponds to a respective block of the second plurality blocks by having a same block address as the respective block of the second plurality of blocks. The techniques disclosed herein can be applied to at least to computer systems where processors are separated from memory and processors communicate with memory and storage devices via communication buses and/or computer networks. Further, the techniques disclosed herein can be applied to computer systems in which processing capabilities are integrated within memory/storage. For example, the processing circuits, including executing units and/or registers of a typical processor, can be implemented within the integrated circuits and/or the integrated circuit packages of memory media to perform processing within a memory device. Thus, a processor (e.g., see processor201,401,601, and1001) as discussed above and illustrated in the drawings is not necessarily a central processing unit in the von Neumann architecture. The processor can be a unit integrated within memory to overcome the von Neumann bottleneck that limits computing performance as a result of a limit in throughput caused by latency in data moves between a central processing unit and memory configured separately according to the von Neumann architecture. The description and drawings of the present disclosure are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding. However, in certain instances, well known or conventional details are not described in order to avoid obscuring the description. References to one or an embodiment in the present disclosure are not necessarily references to the same embodiment; and, such references mean at least one. In the foregoing specification, the disclosure has been described with reference to specific exemplary embodiments thereof. 
It will be evident that various modifications can be made thereto without departing from the broader spirit and scope as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
DETAILED DESCRIPTION Some implementations provide a method for retrieving information based on cache miss prediction. A prediction that a cache lookup for the information will miss a cache is made based on a history table. The cache lookup for the information is performed based on the request. A main memory fetch for the information is begun before the cache lookup completes, based on the prediction that the cache lookup for the information will miss the cache. In some implementations, the prediction, based on the history table, that the cache lookup for the information will miss the cache includes comparing a first set of bits stored in the history table with a second set of bits stored in the history table. In some implementations, a resolution of the history table corresponds to a number of bits tracking a history of cache misses. In some implementations, the resolution of the history table is adjusted based on an amount of available memory bandwidth. In some implementations, the resolution of the history table is adjusted responsive to the cache lookup for the information hitting the cache. In some implementations, the history table tracks a history of cache misses per core, per source type, or per thread. In some implementations, the prediction, based on the history table, that the cache lookup for the information will miss the cache includes comparing at least a portion of an address of the request for the information with a set of bits in the history table. In some implementations, a resolution of the history table corresponds to a number of bits including the set of bits in the history table. In some implementations, the resolution of the history table is adjusted based on an amount of available memory bandwidth. In some implementations, the resolution of the history table is adjusted responsive to predicting that the request for the information will miss the cache and to detecting a corresponding cache hit that does not match the prediction. Some implementations provide a processor configured to retrieve information based on cache miss prediction. The processor includes circuitry configured to receive a request for information. The processor also includes circuitry configured to predict, based on a history table, that a cache lookup for the information will miss a cache. The processor also includes circuitry configured to perform the cache lookup for the information based on the request. The processor also includes circuitry configured to begin a main memory fetch for the information before the cache lookup completes, based on the prediction that the cache lookup for the information will miss the cache. In some implementations the circuitry configured to predict, based on a history table, that a cache lookup for the information will miss a cache includes circuitry configured to compare a first set of bits stored in the history table with a second set of bits stored in the history table. In some implementations a resolution of the history table corresponds to a number of bits tracking a history of cache misses. In some implementations the processor includes circuitry configured to adjust the resolution of the history table based on an amount of available memory bandwidth. In some implementations the processor includes circuitry configured to adjust the resolution of the history table responsive to the cache lookup for the information hitting the cache. In some implementations the history table tracks a history of cache misses per core, per source type, or per thread. 
In some implementations the circuitry configured to predict, based on the history table, that the cache lookup for the information will miss the cache includes circuitry configured to compare at least a portion of an address of the request for the information with a set of bits in the history table. In some implementations a resolution of the history table corresponds to a number of bits including the set of bits in the history table. In some implementations, the processor includes circuitry configured to adjust the resolution of the history table based on an available memory bandwidth. In some implementations, the processor includes circuitry configured to adjust the resolution of the history table responsive to predicting that the request for the information will miss the cache and to detection of a corresponding cache hit that does not match the prediction. FIG.1is a block diagram of an example device100in which one or more features of the disclosure can be implemented. The device100can include, for example, a computer, a gaming device, a handheld device, a set-top box, a television, a mobile phone, server, a tablet computer or other types of computing devices. The device100includes a processor102, a memory104, a storage106, one or more input devices108, and one or more output devices110. The device100can also optionally include an input driver112and an output driver114. It is understood that the device100can include additional components not shown inFIG.1. In various alternatives, the processor102includes a central processing unit (CPU), a graphics processing unit (GPU), a CPU and GPU located on the same die, or one or more processor cores, wherein each processor core can be a CPU or a GPU. In various alternatives, the processor102includes registers and one or more levels of cache memory. In various alternatives, the processor102includes a memory controller and/or other circuitry configured to manage a memory hierarchy, which includes the registers, cache memory, and memory104. In various alternatives, the memory104is located on the same die as the processor102, or is located separately from the processor102. The memory104includes a volatile or non-volatile memory, for example, random access memory (RAM), dynamic RAM, or a cache. The storage106includes a fixed or removable storage, for example, a hard disk drive, a solid-state drive, an optical disk, or a flash drive. In various alternatives, storage106is also part of the memory hierarchy. The input devices108include, without limitation, a keyboard, a keypad, a touch screen, a touch pad, a detector, a microphone, an accelerometer, a gyroscope, a biometric scanner, or a network connection (e.g., a wireless local area network card for transmission and/or reception of wireless IEEE 802 signals). The output devices110include, without limitation, a display, a speaker, a printer, a haptic feedback device, one or more lights, an antenna, or a network connection (e.g., a wireless local area network card for transmission and/or reception of wireless IEEE 802 signals). The input driver112communicates with the processor102and the input devices108, and permits the processor102to receive input from the input devices108. The output driver114communicates with the processor102and the output devices110, and permits the processor102to send output to the output devices110. 
It is noted that the input driver112and the output driver114are optional components, and that the device100will operate in the same manner if the input driver112and the output driver114are not present. The output driver116includes an accelerated processing device (“APD”)116which is coupled to a display device118. The APD accepts compute commands and graphics rendering commands from processor102, processes those compute and graphics rendering commands, and provides pixel output to display device118for display. As described in further detail below, the APD116includes one or more parallel processing units to perform computations in accordance with a single-instruction-multiple-data (“SIMD”) paradigm. Thus, although various functionality is described herein as being performed by or in conjunction with the APD116, in various alternatives, the functionality described as being performed by the APD116is additionally or alternatively performed by other computing devices having similar capabilities that are not driven by a host processor (e.g., processor102) and provides graphical output to a display device118. For example, it is contemplated that any processing system that performs processing tasks in accordance with a SIMD paradigm may perform the functionality described herein. Alternatively, it is contemplated that computing systems that do not perform processing tasks in accordance with a SIMD paradigm can also perform the functionality described herein. If a processor, memory controller, or other hardware requests information from a level of the memory hierarchy and the information is available at that level, the request can be referred to as a hit. If the information is not available at that level, the request can be referred to as a miss. In an example computing system, if a processor executes an instruction to load a certain piece of information into a processor register, the memory system determines whether the information is available at the next level of the memory hierarchy, such as a top level or L0 cache. In some implementations, the determination is made by a memory controller or other suitable hardware. If the information is not available in the top-level cache, the instruction can be said to miss the top level cache. In this circumstance, the memory system will typically perform a lookup in the next lower level of the memory hierarchy (e.g., an L1 cache) to determine whether the information is available there. This lookup may also hit or miss, and the process may continue down the memory hierarchy until the information is found and ultimately loaded into the processor register. As the memory system proceeds to search down the memory hierarchy for the requested information, the lookup at each level typically becomes slower and slower due to the increasing access latency in lower levels of the memory hierarchy. For example, in a memory hierarchy which includes four levels of cache (L0, L1, L2, and L3 caches) above a main memory (e.g., DRAM) level, there may be significant differences in access latency between the different levels. In some implementations, the difference in access latency may be due to the significantly larger size of the next lower level cache, the longer wireline distance to the main memory, and/or increased complexity in accessing the main memory, etc. 
Because the access latency of the next lower level in the memory hierarchy may be significant, in some cases it is advantageous to begin the lookup of the next lower level before the current level returns a hit or miss. In an example, upon an L2 cache miss, it is typically necessary to perform a relatively slow L3 cache lookup. Because the L3 cache lookup is slow, and because a main memory (DRAM in this example) lookup (e.g., main memory fetch) is slower still, in some implementations, a DRAM lookup is performed in parallel with the L3 cache lookup. In some implementations, the parallel DRAM lookup is begun at the same time as the L3 cache lookup, or begun before the end of the L3 cache lookup. If the L3 cache lookup misses, the parallel DRAM lookup returns its result sooner than if the memory system had waited for the L3 miss before beginning the DRAM lookup. The parallel DRAM lookup is ignored, terminated, and/or discarded, etc., if the L3 cache hits. This parallel lookup is conceptually possible between other levels of a memory hierarchy, such as performing both L2 and L3 lookups in parallel in response to an L1 cache miss, or performing both DRAM and hard disk lookups in parallel in response to an L3 cache miss. Further parallelism is also conceptually possible, such as performing parallel L3, DRAM, and hard disk lookups in response to an L2 cache miss. In some implementations, launching a parallel lookup of two or more lower levels of the memory hierarchy comes at a cost to communications bandwidth. For example, main memory typically communicates with a processor (and on-chip caches) over a memory bus. The memory bus is a shared electrical connection. In some implementations, the memory bus transfers information between different modules of the main memory and different processors over the same memory bus. Thus, a potentially unnecessary main memory fetch (e.g., DRAM lookup) upon an L2 cache miss can have the effect of reducing the amount of memory bus bandwidth available for communications by other processors, DRAM modules, etc. Analogous bandwidth costs would also apply to other memory communications, such as on-chip interconnect between cache memories and processor registers, or between a backing store and multiple computer systems, etc. Thus, in some implementations, whether and which parallel lookups are used at a particular level of the memory hierarchy depend on factors such as a balance of improved access latency, design cost, complexity, and/or memory bandwidth. In some implementations, miss prediction is used to reduce, minimize, or otherwise manage the memory bandwidth penalty in performing parallel lookups. For example, in some implementations, if it is predicted to be likely that the L3 will miss on a given L2 miss, a parallel DRAM lookup is performed, whereas the parallel DRAM lookup is not performed if it is predicted that the L3 will not miss. Some implementations provide cache miss prediction based on tracking a history of hit and miss, or by tracking address space proximity. In some implementations, the hit and miss history or address space proximity history is tracked using a variable number of bits. In some implementations, the number of bits is adjusted based on feedback. In some implementations, the feedback relates to the accuracy of past predictions and/or available memory bandwidth. In some implementations, the number of bits is adjusted based on the accuracy of past predictions and/or available memory bandwidth. 
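The decision described above, launching a main memory fetch alongside the L3 lookup only when a miss is predicted and bandwidth allows, can be sketched as follows. The bandwidth threshold and helper names are assumptions made for the example, not values from the disclosure.

# Illustrative sketch of the parallel-lookup decision on an L2 miss.
def handle_l2_miss(predict_l3_miss, available_bandwidth, bandwidth_threshold=0.25):
    """Return the lookups to launch now (the L3 lookup always proceeds)."""
    lookups = ["L3"]
    if predict_l3_miss and available_bandwidth >= bandwidth_threshold:
        # Start the slower main memory fetch before the L3 lookup completes; if the
        # L3 actually hits, the speculative DRAM result is ignored or discarded.
        lookups.append("DRAM")
    return lookups

print(handle_l2_miss(predict_l3_miss=True, available_bandwidth=0.6))   # ['L3', 'DRAM']
print(handle_l2_miss(predict_l3_miss=False, available_bandwidth=0.6))  # ['L3']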
In some implementations, miss prediction is based on tracking hit and miss history of a cache, and can be referred to as temporal prediction. For example, in some implementations, temporal prediction is based on a past history of cache lookups. In some implementations, miss prediction is based on tracking hit and miss history of a memory region, and can be referred to as spatial prediction. For example, in some implementations, spatial prediction is based on tracking whether a hit or miss occurred for data stored in a same region (e.g., within a given address range) of memory or cache memory. In some implementations, prediction is based on a combination of temporal and spatial prediction. The prediction is carried out by circuitry configured to predict whether a request misses (or hits) the cache (or other memory). In some implementations, temporal miss prediction circuitry (miss predictors herein) track hit-or-miss history and determine recurring patterns in the history. If a recurring pattern is detected, a prediction is made as to whether the next cache lookup will hit or miss based on the pattern. In some implementations, temporal miss predictors track hit-or-miss history on a per-thread basis. In such implementations, a history of cache hits and misses is recorded for each particular thread. If a recurring pattern is detected, a prediction is made as to whether the next cache lookup by that thread will hit or miss based on the pattern. Tracking cache hits/misses per-thread can improve prediction. In some implementations, temporal miss predictors track hit-or-miss history on a per-source basis. In this context, a “source” is the source of a request or “lookup” for information, stored in a location. Cache memory, and/or other memory is queried for the information based on the request from the source. For example, cache lookup requests may be made from different sources or types/classes of sources (e.g., a code request, data request, translation lookaside buffer (TLB lookup), prefetch, etc.) In some implementations, a history of cache hits and misses is recorded for each particular source. Tracking cache hits/misses based on source, type, and/or class can improve prediction. In some implementations, temporal miss predictors track hit-or-miss history on both a per-source and per-thread basis. For example, each thread may receive cache lookup requests from different sources or types/classes of sources (e.g., a code request, data request, TLB, prefetch, etc.) Tracking cache hits/misses for each thread based on source class can improve prediction. In some implementations, temporal miss predictors track hit-or-miss history on other bases, such as on a per-core basis. In the following example, cache hits/misses are tracked using an 8-bit history. For each request, the hit or miss result is stored as the most significant bit, after the bits of the history are shifted down the remaining bits. The most significant 4 bits are compared to the least significant 4 bits to determine whether they match, indicating a repeated pattern. A prediction about whether the next request will hit or miss the cache is made based on whether or not a match (i.e., a repeated pattern) is detected. In some implementations, more than one group of the history is comparable to detect patterns. 
For example, in a case where history is tracked with 9 bits (e.g., bits 8-0), the most significant three bits (8-6) are compared with the least significant three bits (2-0) and also the middle significant bits (5-3) and a pattern is indicated where all three groups match. In some implementations, this can have the advantage of yielding a higher confidence prediction. Table 1 shows an 8-bit history for a temporal miss predictor which tracks hit-or-miss (indicated by H or M respectively) history on a per-core and per-thread basis, for a single source type.

TABLE 1
7 6 5 4 3 2 1 0   (Bit Number - 8 bit history)
H M H M H M H M   Core 0 Thread 0 / Source Type
H H M H H H M H   Core 0 Thread 1 / Source Type
H M H H M M M M   Core 1 Thread 0 / Source Type
. . .
                  Core X Thread Y / Source Type

In this example, the most recent cache lookup by the source type for thread 0 on core 0 was a hit, as indicated by an H in the most significant bit 7, in the leftmost column. Comparing the most significant 4 bits to the least significant bits indicates a repeating pattern. In this example, the H M H M pattern of bits 7-4 matches the H M H M pattern of bits 3-0. In some implementations, the next cache lookup by the source type for thread 0 on core 0 is predicted to be a miss, based on the 4-bit repeating pattern that is detected. Thus, in some implementations, a parallel lookup to a lower level of the cache or to DRAM is performed based on the prediction that it will miss. In some implementations the decision involves other factors, such as available memory bandwidth, and so forth, as further discussed herein for example. As shown in the second line of Table 1, the most recent cache lookup by the source type for thread 1 on core 0 was a hit, as indicated by an H in the most significant bit 7, in the leftmost column. Comparing the most significant 4 bits to the least significant bits indicates a repeated pattern of hits and misses. In this example, the HHMH pattern of bits 7-4 matches the HHMH pattern of bits 3-0. In some implementations, the next cache lookup by the source type for thread 1 on core 0 is predicted to be a hit, based on the 4-bit repeating pattern that is detected. Thus, in some implementations, a parallel lookup to a lower level of the cache or to DRAM is not performed based on the prediction that it will hit. In some implementations the decision involves other factors, such as available memory bandwidth, and so forth, as further discussed herein for example. As shown in the third line of Table 1, the most recent cache lookup by the source type for thread 0 on core 1 was a hit, as indicated by an H in the most significant bit 7, in the leftmost column. Comparing the most significant 4 bits to the least significant bits indicates that there is no detected repeating pattern. In this example, the H M H H pattern of bits 7-4 does not match the M M M M pattern of bits 3-0. In some implementations, the next cache lookup by the source type for thread 0 on core 1 is not predicted, based on the absence of a detected repeating pattern. Thus, in some implementations, whether or not a parallel lookup to a lower level of the cache or to DRAM is performed is based on other factors, such as default setting, available memory bandwidth, and/or other factors, and so forth, as further discussed herein for example. In some implementations, a default to launch a parallel lookup or not where no repeating pattern is detected is selectable (e.g., as an implementation choice, or a tuning parameter, etc.) 
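The 8-bit temporal predictor walked through above can be modeled compactly: results shift in at the most significant bit, and a prediction is made when the upper four bits equal the lower four bits. The class below is a behavioral sketch; the encoding 1 = hit, 0 = miss and the fallback behavior when no pattern is found are assumptions.

# Sketch of a temporal miss predictor using an 8-bit hit/miss history.
class TemporalMissPredictorSketch:
    def __init__(self, bits=8):
        self.bits = bits
        self.history = 0   # bit (bits-1) holds the most recent result
        self.count = 0

    def record(self, was_hit):
        # Shift the history down one bit and store the new result as the most significant bit.
        self.history = (self.history >> 1) | (int(was_hit) << (self.bits - 1))
        self.count += 1

    def predict(self):
        """Return 'hit', 'miss', or None when no repeating pattern is detected."""
        half = self.bits // 2
        if self.count < self.bits:
            return None  # history not yet filled
        upper = self.history >> half
        lower = self.history & ((1 << half) - 1)
        if upper != lower:
            return None  # no repeated pattern: fall back to a default policy
        # The pattern repeats with period `half`; the next result is predicted to equal the
        # result observed `half` requests ago, which sits at bit position (bits - half).
        next_bit = (self.history >> (self.bits - half)) & 1
        return "hit" if next_bit else "miss"

predictor = TemporalMissPredictorSketch()
for outcome in [0, 1, 0, 1, 0, 1, 0, 1]:    # oldest to newest: M H M H M H M H
    predictor.record(outcome)
print(predictor.predict())  # 'miss' -- matches the first row of Table 1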
The number of bits used to track the history, or resolution of the history, can be considered an indication of the accuracy of the prediction. For example, in some implementations the number of bits used to track the history relates to a level of confidence in the prediction. Here, more history tracking bits will yield a higher resolution match, and a stronger confidence that a miss prediction based on a repeated pattern (or absence of a repeated pattern) will be accurate. Fewer history tracking bits will yield a lower resolution match, and a weaker confidence that a miss prediction based on a repeated pattern (or absence of a repeated pattern) will be accurate. Accordingly, the accuracy of the prediction, and thus the reliability of the miss prediction, can be increased or relaxed based on the number of bits used to store the history. For example, using 8 bits to track the hit and miss history will produce a higher resolution and potentially more accurate prediction than using 2 bits to track the hit and miss history. In some implementations, the resolution of the prediction is adjustable based on available bandwidth (e.g., memory bandwidth). Prediction resolution is adjusted based on memory bandwidth for the examples herein; however, it is noted that in some implementations the resolution of the prediction is adjustable based on other bandwidth (e.g., interconnect bandwidth) in addition to or instead of memory bandwidth. For example, if available memory bandwidth is relatively plentiful (e.g., is high enough to allow for a parallel lookup without impacting other memory traffic, or without delaying other memory traffic more than a threshold amount), a repetition within 2 bits provides a sufficiently accurate miss prediction in some circumstances, since the memory bandwidth cost of an incorrect prediction is relatively lower. In other words, launching an unnecessary parallel lookup (e.g., of DRAM) by incorrectly predicting a miss would cause a relatively lower performance penalty if there is enough bandwidth to handle the parallel lookup without delaying (or overly delaying) other memory traffic. On the other hand, if the available memory bandwidth is relatively scarce, in some implementations it is preferable to base the miss prediction on a repeated pattern over a greater number of bits in order to provide a more accurate miss prediction. In some implementations, basing the miss prediction on a greater number of history tracking bits can have the advantage of reducing or avoiding unnecessary DRAM lookups. In other words, increasing the number of traffic history bits can have the advantage of reducing delays to other memory traffic by reducing the chance of inaccurately predicting a lookup miss. In some implementations, increasing the number of history bits used for prediction can have the advantage of increasing prediction accuracy. In some implementations, increasing the number of history bits used for prediction can have the disadvantage of missing detection of some patterns which are mixed with other patterns. In some implementations, increasing the number of history bits used for prediction can have the disadvantage of taking longer to detect patterns, e.g., where the history buffer takes longer to fill due to its increased size. FIG.2is a flow chart illustrating example adjustment of history tracking resolution based on available memory bandwidth. In step202, the memory system receives a request for a cache lookup. 
In some implementations, the memory system receives the request from a source, such as a code request, data request, TLB, prefetch, etc. In some implementations, the memory system includes a memory controller which receives the request. In step204, the memory system compares the currently available memory bandwidth with the memory bandwidth that was available following a previous request for a cache lookup. In some implementations, the comparison is done by a component other than the memory system. If the currently available memory bandwidth is the same as (or within a threshold amount as) the available memory bandwidth following a previous request for a cache lookup (i.e., whether there has been no change in the available memory bandwidth, or any change is within a threshold amount), the history tracking resolution is not changed at step206, and the flow returns to step202for the next cache lookup. If the available memory bandwidth following a previous request for a cache lookup was greater (or greater by more than a threshold amount) than the currently available memory bandwidth (i.e., whether there has been a decrease in the available memory bandwidth, or a decrease by more than a threshold amount), the history tracking resolution is increased (e.g., changed from 2 bits to 8 bits) at step208, and the flow returns to step202for the next cache lookup. If the available memory bandwidth following a previous request for a cache lookup was less (or less by more than a threshold amount) than the currently available memory bandwidth (i.e., whether there has been an increase in the available memory bandwidth, or an increase by more than a threshold amount), the history tracking resolution is decreased (e.g., changed from 8 bits to 2 bits) at step210. In some implementations, the flow returns to step202for the next cache lookup, however in this example, on a condition212that the current available bandwidth exceeds a maximum threshold, the predictor is turned off in step214before the flow returns to step202, and on a condition212that the current available bandwidth does not exceed the maximum threshold, the flow returns directly to step202. In such implementations, miss prediction is considered to be unnecessary if the memory bandwidth is above the threshold and a parallel lookup is launched on every cache lookup, since there is enough bandwidth to handle the parallel lookup without delaying (or overly delaying) other memory traffic. In some implementations, the resolution of the prediction is also or alternatively adjustable based on feedback. For example, if a request for a cache (or other memory) lookup is received by the memory system (e.g., a memory controller), a prediction of whether the cache lookup will miss is made based on the current history. After the lookup is eventually complete, the actual result is compared with the prediction. This comparison is used as feedback to the prediction history. In other words, the resolution of the history (e.g., number of bits) is adjustable based on the feedback in some implementations. For example, if the miss prediction was incorrect, more bits can be used for future predictions, and vice versa. FIG.3is a flow chart illustrating example adjustment of history tracking resolution based on feedback. In step302, the memory system receives a request for a cache or other memory lookup. In some implementations, the memory system receives the lookup request from a source, such as a code request, data request, TLB, prefetch, etc. 
In some implementations, the memory system includes a memory controller which receives the request. In step304, the prediction circuitry generates a prediction of whether the request hits or misses the cache. On condition308that the predictor incorrectly predicts whether the request hits or misses the cache (i.e., the prediction does not match the actual result), the tracking history resolution is updated (e.g., the number of bits is increased) in step310. On condition308that the predictor correctly predicts whether the request hits or misses the cache (i.e., the prediction matches the actual result), the tracking history resolution is not updated (e.g., the number of bits remains the same) as indicated by box312. In either case, the flow returns to step302for the next cache lookup. In some implementations, tracking history resolution is updated in step310on a condition that the prediction of a miss is incorrect a threshold number of times. Some implementations provide cache miss prediction based on whether a hit or miss was made in a same region of memory. For example, for an L2 cache lookup, the requested memory address is compared to a history of L3 misses within a range of addresses of the requested memory address. Similar to the temporal predictor above, a history is maintained for each L3 lookup; however, the tracking stores past miss addresses, rather than past hit or miss results. For example, if a lookup request is for an address on the same memory page as an address which missed the L3 cache, and that address is stored in the history, the request may be predicted to miss the L3 cache. Table 2 shows a history for an example spatial miss predictor which tracks memory addresses that missed the L3 cache.

TABLE 2
Example 8-bit history for spatial prediction
Address History       Thread ID    Source Type
0x FAB12345 . . .     TID 0        DC
0x CAFEBB8 . . .      TID 0        DC
0x CODEB123 . . .     TID 0        IC
. . .                 . . .        . . .
0x DEADBOB . . .      TID X        TLB

In this example, the history is also tracked on a per-thread and per-source basis, illustrating the combinability of these concepts with spatial prediction. Here, the history reflects that at least one lookup request for a memory address beginning with the most significant 32 bits 0x FAB12345 missed the L3 cache. It is also indicated that this memory request was for thread 0 (indicated by thread identifier TID) and came from a data cache (DC). Likewise, historical cache misses for memory addresses beginning with the most significant bits 0x CAFEBB8, 0x CODEB123 (from an instruction cache (IC)), and 0x DEADBOB (from a translation lookaside buffer (TLB)) are also recorded in the history. In some implementations, a cache lookup for a memory address within a given range of a recorded miss address is predicted to miss. In some implementations, the miss prediction depends on the cache lookup coming from the same source type, or the same thread, core, etc. In some implementations, the prediction resolution is defined by a number of bits. For example, if the resolution is defined as 16 bits, a lookup request to an address having the same 16 most significant bits as an address in the history that missed the same cache is predicted to also miss the cache. The spatial predictor also stores a certain number of addresses. This number can be referred to as the depth of the predictor. The resolution of the spatial predictor is adjustable by changing the number of most significant bits used for comparison.
For example, if the resolution is changed from 16 to 256 bits, the prediction is for addresses within a smaller range of addresses from the missed address stored in the spatial prediction history, and is considered to be stricter or more accurate in some implementations. The resolution and/or depth of the spatial predictor are also adjustable dynamically similarly to the temporal predictor discussed above. For example, the prediction may be set to be stricter if available memory bandwidth is below a threshold or decreases, and vice versa. In some implementations, spatial predictors track hit-or-miss history on a per-thread basis. In such implementations, a history of cache hits and misses is recorded for each particular thread. If a recurring pattern is detected, a prediction is made as to whether the next cache lookup by that thread will hit or miss based on the pattern. Tracking cache hits/misses per-thread can improve prediction. In some implementations, spatial miss predictors track hit-or-miss history on a per-source basis. For example, cache lookup requests may be made from different sources or types/classes of sources (e.g., a code request, data request, TLB, prefetch, etc.) In some implementations, a history of cache hits and misses is recorded for each particular source. Tracking cache hits/misses based on source, type, and/or class can improve prediction. In some implementations, spatial miss predictors track hit-or-miss history on both a per-source and per-thread basis. For example, each thread may receive cache lookup requests from different sources or types/classes of sources (e.g., a code request, data request, TLB, prefetch, etc.) Tracking cache hits/misses for each thread based on source class can improve prediction. The spatial and temporal predictors are combinable. For example, some implementations track both a history of hits and misses, and a history of missed addresses, and the hit or miss prediction is made based on both the spatial and temporal prediction. FIG.4is a block diagram illustrating an example portion of a memory system400. The memory hierarchy of memory system400includes L2 cache402, L3 cache404, and DRAM406. The memory system400also includes a miss predictor408and miss history buffers410and412. In this example, if an information request to L2 cache402misses, the L2 cache miss is recorded in miss history buffer410, and miss predictor408predicts (or does not predict) whether a request for the information will also miss the L3 cache. If predictor408predicts an L3 cache miss, it launches a parallel lookup414for the information in DRAM406. If predictor408predicts that the request will hit the L3 cache, or does not make a prediction, parallel lookup414of DRAM406is not launched. In either case, if the request misses L3 cache404, the L3 cache miss is recorded in miss history buffer412, and a DRAM lookup416for the information is sent to DRAM406. If the L3 cache miss was predicted by predictor408, the information (or indication of a DRAM miss) will be available sooner than if the miss was not predicted by predictor408, due to the earlier parallel DRAM lookup414. In either case, DRAM406returns the information, or a DRAM miss indication, to L3 Cache404. It should be understood that many variations are possible based on the disclosure herein. 
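As a non-authoritative sketch of the spatial predictor and of the parallel-lookup decision of FIG.4, the Python below keeps a bounded list of missed addresses, treats the number of most significant bits compared as the adjustable resolution, and launches an early DRAM probe when either predictor expects an L3 miss. The 48-bit address width, the depth of 64 entries, the lookup interfaces of the l3 and dram stand-ins, and the bandwidth_plentiful flag are assumptions for illustration; TemporalMissPredictor refers to the earlier sketch, and the "parallel" probe is simply issued first here rather than truly concurrently.

class SpatialMissPredictor:
    """Sketch of a spatial miss predictor: remembers addresses that missed a cache level
    and predicts a miss when a new request shares its most significant bits with one."""

    def __init__(self, prefix_bits=16, depth=64, address_bits=48):
        self.prefix_bits = prefix_bits        # "resolution": number of top bits compared
        self.depth = depth                    # how many miss addresses are remembered
        self.address_bits = address_bits
        self.miss_addresses = []

    def _prefix(self, address):
        return address >> (self.address_bits - self.prefix_bits)

    def record_miss(self, address):
        """Record an address that missed, evicting the oldest entry when full."""
        self.miss_addresses.append(address)
        if len(self.miss_addresses) > self.depth:
            self.miss_addresses.pop(0)

    def predict_miss(self, address):
        """Predict a miss if the address falls near a recorded miss (same prefix)."""
        prefix = self._prefix(address)
        return any(prefix == self._prefix(m) for m in self.miss_addresses)


def handle_l2_miss(address, key, temporal, spatial, l3, dram, bandwidth_plentiful=False):
    """Sketch of the FIG.4 flow: on an L2 miss, issue a DRAM lookup alongside the L3
    lookup when an L3 miss is predicted (or when bandwidth is plentiful)."""
    launch_parallel = (temporal.predict(key) is False or
                       spatial.predict_miss(address) or
                       bandwidth_plentiful)
    early_dram = dram.lookup(address) if launch_parallel else None   # parallel lookup 414
    l3_data = l3.lookup(address)                                      # normal L3 lookup
    if l3_data is not None:                                           # L3 hit
        temporal.record(key, hit=True)
        return l3_data
    temporal.record(key, hit=False)                                   # L3 miss: update histories
    spatial.record_miss(address)
    return early_dram if early_dram is not None else dram.lookup(address)  # DRAM lookup 416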
Although features and elements are described above in particular combinations, each feature or element can be used alone without the other features and elements or in various combinations with or without other features and elements. The various functional units illustrated in the figures and/or described herein (including, but not limited to, the processor102, the input driver112, the input devices108, the output driver114, the output devices110, the accelerated processing device116, the scheduler136, the graphics processing pipeline134, the compute units132, the SIMD units138) may be implemented as a general purpose computer, a processor, or a processor core, or as a program, software, or firmware, stored in a non-transitory computer readable medium or in another medium, executable by a general purpose computer, a processor, or a processor core. The methods provided can be implemented in a general purpose computer, a processor, or a processor core. Suitable processors include, by way of example, a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) circuits, any other type of integrated circuit (IC), and/or a state machine. Such processors can be manufactured by configuring a manufacturing process using the results of processed hardware description language (HDL) instructions and other intermediary data including netlists (such instructions capable of being stored on a computer readable media). The results of such processing can be maskworks that are then used in a semiconductor manufacturing process to manufacture a processor which implements features of the disclosure. The methods or flow charts provided herein can be implemented in a computer program, software, or firmware incorporated in a non-transitory computer-readable storage medium for execution by a general purpose computer or a processor. Examples of non-transitory computer-readable storage mediums include a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs).
36,850
11860788
DETAILED DESCRIPTION Storage nodes in a distributed storage system can include a cache of data and one or more storage devices storing additional data. Data in the cache may be more quickly and easily accessible, such that a request from a client for data in the cache may be serviced more quickly than a request for data in a storage device. Thus, it may be beneficial for the distributed storage system to predict data that will be requested by a client so that the storage node can store data in the cache prior to the data being requested by the client. Implementing such a prediction system involves gathering information about input/output (IO) patterns from multiple clients simultaneously, which is complex in a distributed storage system that lacks a central component logging the IO patterns for the system. Rather, each storage node operates independently of the other components and does not have information about the load on other nodes. As a result, predictions in a distributed storage system may not be possible, or may be inaccurate if determined. Some examples of the present disclosure can overcome one or more of the abovementioned problems by prefetching data before requests are received for the data in a distributed storage system. For example, the distributed storage system can include a client, storage nodes, and a computing device. The distributed storage system can also include an asynchronous message passing mechanism, such as a message queue. Each of the storage nodes can lack load information for other storage nodes. The client can send at least one request for an IO operation. The message queue can receive metadata associated with the at least one request as a message. The computing device can receive the message(s) from the message queue and determine an additional IO operation predicted to be requested by the client subsequent to the at least one request for the IO operation. The computing device can then send a notification to a storage node that is associated with the additional IO operation for prefetching data of the additional IO operation prior to the client requesting the additional IO operation. The storage node can retrieve the data from a storage device and store the data in a cache of the storage node prior to receiving the request for the additional IO operation from the client. This may significantly reduce read IO latency in the distributed storage system. As a more specific example, a computing device can receive a message indicating a read request from a client for reading data A from a hard disk of storage node A. The computing device can determine data B, stored on storage node B, has historically been requested by the client after a read request for the data A. The computing device can send a notification to storage node B indicating data B is likely to be read. Storage node B can obtain the data B from the hard disk and store the data B in the cache prior to receiving the request for data B. This may aid in reducing latency when storage node B receives a subsequent request for data B. These illustrative examples are given to introduce the reader to the general subject matter discussed here and are not intended to limit the scope of the disclosed concepts. The following sections describe various additional features and examples with reference to the drawings in which like numerals indicate like elements but, like the illustrative examples, should not be used to limit the present disclosure.
FIG.1is a block diagram of an example of a distributed storage system100for implementing data prefetching according to some aspects of the present disclosure. The distributed storage system100can include a client compute node110. Examples of the client compute node110can include a laptop computer, desktop computer, server, mobile phone, etc. The distributed storage system100can additionally include a message queue120, computing devices130a-b, storage nodes140a-b, a management node150, and storage devices170a-d. Examples of the storage devices170a-dcan include hard disk drives, solid state drives, magnetoresistive random access memory (MRAM) drives, external just a bunch of disks (JBOD) systems, or any other internal or external device capable of storing data. Although not shown inFIG.1, the message queue120may be in a memory device, such as a dynamic random access memory (DRAM). As shown inFIG.1, some or all of these components can communicate with one another via a network160, such as a local area network (LAN) or the Internet. Each of the storage nodes140a-bcan include one or more of the storage devices170a-d, which can be accessible by each of the clients112a-b. Each of the storage nodes140a-bcan lack load information for the other storage nodes in the distributed storage system100. Clients112a-bcan run on the client compute node110. The clients112a-bmay be software applications using a particular type of storage, such as a block, file, or object. The clients112a-bcan communicate with other components of the distributed storage system100via a dedicated driver. For example, if the client112ais a software application that uses a block volume, there can be a block driver that converts IO operations for the client112ainto messages that can be routed to a correct storage node for performing the IO operation. The drivers of the clients112a-bcan also send messages about the IO operations to one or more message queues, for example the message queue120. The messages can include metadata106associated with the IO operations. The message queue120can store the metadata106as a message122. Alternatively, the message queue120may immediately forward the message122to a computing device. Sending the metadata106to the message queue120can occur in parallel and asynchronously to sending requests for operations to storage nodes. The message queue120may include multiple topics124a-b, and each topic can receive messages from a particular client. For example, the topic124acan receive messages from the client112aand the topic124bcan receive messages from the client112b. Alternatively, the distributed storage system100may include multiple message queues. Each of the multiple message queues can include one or more topics that are each associated with a particular client. In some examples, a computing device can receive the message(s) from a particular message queue. The computing device can include a prediction engine for determining data that is to be prefetched based on the message(s) in the message queue. If the distributed storage system100includes multiple message queues that each store messages, the messages in a message queue can be processed by a single computing device so that the computing device can make accurate determinations about data to prefetch. For example, the computing device130acan receive the message122of the metadata106associated with requests for IO operations from the client112a.
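The driver behavior described above, sending the IO request to a storage node while publishing metadata about it to a per-client topic, can be sketched in Python as below. This is an illustrative toy, not the disclosed system: the in-memory MessageQueue stands in for a real message broker, and the storage_node object with a handle method is an assumed placeholder.

import queue
import threading

class MessageQueue:
    """Toy stand-in for message queue120: one topic per client, fed asynchronously."""

    def __init__(self):
        self.topics = {}

    def publish(self, client_id, metadata):
        self.topics.setdefault(client_id, queue.Queue()).put(metadata)

    def consume(self, client_id, timeout=1.0):
        return self.topics[client_id].get(timeout=timeout)

def send_io_request(client_id, storage_node, message_queue, request):
    """Client driver sketch: publish metadata about the request to the client's topic
    while routing the request itself to the storage node as usual."""
    metadata = {"client": client_id, "op": request["op"], "object": request["object"]}
    threading.Thread(target=message_queue.publish,
                     args=(client_id, metadata), daemon=True).start()
    return storage_node.handle(request)   # assumed storage-node interface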
The computing device130acan process the message122to determine a storage node that is likely to receive a subsequent IO operation from the client112abased on the message122. For example, at a first point in time, the client112amay send an IO operation request102afor data stored on the storage device170a, associated with the storage node140a. Based on the metadata106for the IO operation request102athat is processed by the prediction engine132aas part of the message122, the computing device130acan determine that the storage node140bis likely to receive an IO operation request102bfrom the client112aat a subsequent point in time for data that is stored in the storage device170c. The IO operation request102bcan be a read request that is sent by the client112aimmediately subsequent to the IO operation request102a. In some examples, the prediction engine132acan use a historical pattern of requests for IO operations to determine that data stored in a disk storage device associated with the storage node140bis to be prefetched. For example, the prediction engine132acan analyze stored information indicating a historical pattern of requests for IO operations from the client112a. Based on this analysis, the prediction engine132acan determine that the client112ahas historically followed an IO operation request102afor the data in the storage device170awith another IO operation request for the data stored in the storage device170cassociated with the storage node140bmultiple times in the past. The prediction engine132amay additionally or alternatively determine a series of predictions of data to be read. For example, the prediction engine132amay predict that first data is to be requested next and second data is to be requested subsequent to the first data. In some examples, the prediction engine132acan determine multiple possible data objects that may be requested. For example, the prediction engine132amay determine that the client112ahas historically followed an IO operation request102afor the data in the storage device170awith an IO operation request for data stored in the storage device170cor an IO operation for data stored in the storage device170d. The prediction engine132amay analyze the stored information using a machine-learning model (e.g., a deep neural network) or using any other suitable technique to identify this historical pattern. The machine-learning model can receive the metadata106of the message122as input. Based on this historical pattern, the prediction engine132acan determine that data is to be prefetched based on the message122for the IO operation request102abeing received from the client112a. The prefetching may involve reading the data. In some examples the prediction engine132amay also determine a probability indicating a likelihood of the client112asending the IO operation request102bsubsequent to the IO operation request102a. For example, the prediction engine132acan determine a probability of 80%, indicating that the probability of the client112asending the IO operation request102bis 80%. In some examples, the prediction engine132acan send a notification104to the storage node140bindicating that data in the storage device170cis to be read. In response, the storage node140bcan obtain the data from the storage device170c, store the data in a cache142b, and provide the data back to the client112ain reply to subsequently receiving the IO operation request102bfor the data.
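One way to picture the prediction engine's use of historical request patterns is the following hedged Python sketch. It simply counts, per client, which data object has followed which, and reports the most frequent follower together with an empirical probability; the class and method names are illustrative, and the patent does not prescribe this particular technique (it mentions, for example, machine-learning models as one option).

from collections import defaultdict

class PredictionEngine:
    """Sketch of a prediction engine that learns, per client, which data object tends to be
    requested immediately after another and with what empirical probability."""

    def __init__(self):
        self.follow_counts = defaultdict(lambda: defaultdict(int))  # (client, prev) -> {next: count}
        self.last_request = {}                                       # client -> previous object

    def observe(self, client_id, data_object):
        """Feed one IO metadata message, in arrival order, into the history."""
        previous = self.last_request.get(client_id)
        if previous is not None:
            self.follow_counts[(client_id, previous)][data_object] += 1
        self.last_request[client_id] = data_object

    def predict_next(self, client_id, data_object):
        """Return (predicted_next_object, probability), or None if nothing has been learned."""
        followers = self.follow_counts.get((client_id, data_object))
        if not followers:
            return None
        total = sum(followers.values())
        best, count = max(followers.items(), key=lambda item: item[1])
        return best, count / total

For instance, after repeatedly observing a client read data A and then data B, predict_next(client, "A") would return ("B", 0.8) when that pattern held in 80% of the observations, mirroring the 80% probability example above.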
Prior to sending the notification104, the prediction engine132amay determine whether the probability of the client112asending the IO operation request102bexceeds a probability threshold134a. If the probability exceeds the probability threshold134a, the prediction engine132acan send the notification104to the storage node140b. If the prediction engine132adetermines that the probability is less than the probability threshold134a, the prediction engine132amay not send the notification104to the storage node140b. This can ensure that only data for predictions with a likelihood above a predefined probability are considered for prefetching, resulting in a smaller load on the distributed storage system100than if data is considered for prefetching for each prediction. In some examples, the probability threshold134amay be adjusted based on a load of the distributed storage system100. For example, as the distributed storage system is more loaded, the probability threshold134acan be higher. Additionally, when the distributed storage system100is less loaded, the probability threshold134acan be lower such that data for lower certainty predictions can be prefetched without impacting the performance of the distributed storage system100for non-predicted IO operations. Additionally or alternatively, each of the storage nodes140a-bcan decide whether to prefetch data based on the load on the storage node. For example, as the storage node140ais more loaded, a probability threshold for prefetching data of the notification104can be higher. The probability may additionally inform a priority of the prefetching for different clients. For example, the storage node140amay receive a first notification for prefetching data from the storage device170afor the client112awith a probability of 80% and a second notification for prefetching data from the storage device170afor the client112bwith a probability of 40%. The storage node140acan determine that the data of the first notification is to be served first based on the higher probability. The management node150may also monitor the loads136a-bassociated with each of the computing devices130a-bto determine whether additional computing devices should be generated. For example, the distributed storage system100may only include the computing device130afor predicting data to prefetch for the clients112a-bin the distributed storage system100. The management node150can determine that the load136aassociated with the computing device130aexceeds a load threshold152and generate the computing device130bwith a prediction engine132bfor servicing a portion of the load. For example, the computing device130bcan receive messages from a message queue associated with the client112b, while the computing device130areceives messages from a message queue associated with the client112a. Alternatively, if the load associated with a computing device is below a minimum limit154, the computing device can be disabled. For example, the distributed storage system100can include the computing devices130a-b. The management node150can determine that the load136bassociated with the computing device130bis below the minimum limit154. The management node150can determine that if the load136bis added to the load136a, the sum is less than the load threshold152. The management node150can then disable the computing device130band the computing device130acan take over receiving the messages from the message queue(s) that the computing device130bwas receiving. 
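The load-dependent policies described above, raising the prefetch probability threshold as the system becomes busier and scaling prediction workers when their load crosses thresholds, might be sketched as follows. The specific numbers, the 0-to-1 load scale, the worker objects with a load attribute, and the storage node's notify_prefetch method are all assumptions for illustration.

def probability_threshold(base=0.5, system_load=0.0):
    """Raise the prefetch probability threshold as the system becomes more loaded (load in 0..1)."""
    return min(0.95, base + 0.4 * system_load)

def maybe_notify(storage_node, prediction, system_load):
    """Send a prefetch notification only for predictions that clear the current threshold."""
    data_object, probability = prediction
    if probability >= probability_threshold(system_load=system_load):
        storage_node.notify_prefetch(data_object)   # assumed storage-node interface

def rebalance_workers(workers, load_threshold=0.8, minimum_limit=0.1):
    """Management-node style policy: spawn a worker when one is overloaded, retire one
    when it is nearly idle and its load would fit on the remaining workers."""
    for worker in workers:
        if worker.load > load_threshold:
            return "spawn"
        others = [w for w in workers if w is not worker]
        if worker.load < minimum_limit and others and all(
                w.load + worker.load < load_threshold for w in others):
            return "retire"
    return "keep"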
In addition to a number of computing devices being adjustable, a number of message queues or topics included in a message queue may also be adjustable based on a load of each message queue. For example, if a number of clients associated with a message queue exceeds a threshold, an additional message queue can be generated. The threshold can be based on a number of topics the message queue includes, since each topic can serve one client. As the number of computing devices, topics, or message queues changes, the message queue associated with a client or a computing device associated with a message queue may also change. If the metadata106is sent to a message queue that is not serving the client112aanymore, a response can be sent suggesting to the client112ato reload the information from the management node150about which message queue120is serving the client112a. While the example shown inFIG.1depicts a specific number and arrangement of components for simplicity, other examples may include more components, fewer components, different components, or a different arrangement of the components shown inFIG.1. For instance, althoughFIG.1shows one compute node, one message queue, two computing devices, two storage nodes, and four storage devices, other examples may include a smaller or larger number of each of these components. Additionally, while the example ofFIG.1describes message queues as receiving and sending messages, other examples may involve any suitable asynchronous message passing mechanism. FIG.2is a block diagram of another example of a distributed storage system200for implementing data prefetching according to some aspects of the present disclosure. The distributed storage system200includes a computing device210with a processor202communicatively coupled with a memory204. In some examples, the processor202and the memory204can be part of the same computing device, such as the computing device130aofFIG.1. In other examples, the processor202and the memory204can be distributed from (e.g., remote to) one another. The computing device210is communicatively coupled to a message queue220and a plurality of storage nodes240. The message queue220is also communicatively coupled to a client212. The client212can send at least one request for an IO operation262. The client212can also send metadata224associated with the at least one request for the IO operation262as a message222, which can be received by the message queue220. The metadata224can be sent in parallel and asynchronous to the at least one request for an IO operation262. Each storage node of the plurality of storage nodes240can lack load information for other storage nodes of the plurality of storage nodes240. The processor202can include one processor or multiple processors. Non-limiting examples of the processor202include a Field-Programmable Gate Array (FPGA), an application-specific integrated circuit (ASIC), a microprocessor, etc. The processor202can store a prediction engine232as instructions206in the memory204that are executable to perform operations. In some examples, the instructions206can include processor-specific instructions generated by a compiler or an interpreter from code written in any suitable computer-programming language, such as C, C++, C#, etc. The memory204can include one memory or multiple memories. The memory204can be non-volatile and may include any type of memory that retains stored information when powered off. 
Non-limiting examples of the memory204include electrically erasable and programmable read-only memory (EEPROM), flash memory, or any other type of non-volatile memory. At least some of the memory204can include a non-transitory computer-readable medium from which the processor202can read instructions206. A computer-readable medium can include electronic, optical, magnetic, or other storage devices capable of providing the processor202with computer-readable instructions or other program code. Non-limiting examples of a computer-readable medium include magnetic disk(s), memory chip(s), ROM, random-access memory (RAM), an ASIC, a configured processor, optical storage, or any other medium from which a computer processor can read the instructions206. In some examples, the processor202can execute instructions206to perform various operations. For example, the processor202can receive the message222from the message queue220. The processor202can determine, based on the message222from the message queue220, an additional IO operation predicted to be requested by the client212subsequent to the at least one request for the IO operation262, as indicated by the additional IO operation indicator264. The processor202may use a machine-learning model or another analysis technique to determine the additional IO operation. The processor202can send a notification208to a storage node244of the plurality of storage nodes240associated with the additional IO operation for prefetching data226of the additional IO operation prior to the client212requesting the additional IO operation. The storage node244can then retrieve the data226from a storage device and store the data226in a cache of the storage node244prior to the storage node244receiving a request for the additional IO operation. This can allow the data226to be rapidly retrieved from the cache for responding to a subsequent IO request for the data226. In some examples, the processor202can implement some or all of the steps shown inFIG.3. Other examples can include more steps, fewer steps, different steps, or a different order of the steps than is shown inFIG.3. The steps ofFIG.3are discussed below with reference to the components discussed above in relation toFIG.2. In block302, a processor202receives a message222from a message queue220. The message222includes metadata224associated with at least one request for an IO operation262sent from a client212associated with the message queue220. The metadata224can be sent to the message queue220by the client212in parallel and asynchronous to the client212sending the at least one request for the IO operation262to a storage node of a plurality of storage nodes240. Thus, prediction of subsequent requests for IO operations can be performed while the at least one request for the IO operation262is serviced. In block304, the processor202determines, based on the message222from the message queue220, an additional IO operation264predicted to be requested by the client212subsequent to the at least one request for the IO operation262. The metadata224may be input to a machine-learning model, or a different historical analysis may be performed, to determine the additional IO operation264. Additionally, the processor202can determine a probability of the client212requesting the additional IO operation264. In block306, the processor202sends a notification208to a storage node244of the plurality of storage nodes240associated with the additional IO operation264. 
The notification208can indicate data226associated with additional IO operation264that is to be prefetched prior to the client212requesting the additional IO operation264. Prefetching the data226can involve the storage node244retrieving the data from a storage device and storing the data226in a cache of the storage node244prior to the storage node244receiving a request for the additional IO operation264. In some examples, the notification208may only be sent if the probability of the client212requesting the additional IO operation264is above a probability threshold. Prefetching data can allow for clients to receive data more quickly and reduce latency of the distributed storage system200. The foregoing description of certain examples, including illustrated examples, has been presented only for the purpose of illustration and description and is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Numerous modifications, adaptations, and uses thereof will be apparent to those skilled in the art without departing from the scope of the disclosure. For instance, examples described herein can be combined together to yield still further examples.
22,016
11860789
In accordance with common practice, the various features illustrated in the drawings may not be drawn to scale. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some of the drawings may not depict all of the components of a given system, process, or device. Finally, like reference numerals may be used to denote like features throughout the specification and figures. DETAILED DESCRIPTION Zero downtime (e.g., resilient computing, etc.) is a differentiating feature of some high performance information handling systems. A foundational building block of many resilient information handling system technologies is the purging of their cache(s). For example, memory in a running system can be reallocated between multiple processors within the same information handling system or between multiple processors in different but computationally related information handling systems. As part of these memory reallocation techniques, the applicable processors' cache(s) are typically purged. A traditional cache purge sequence will clear all valid entries inside a cache and cast out data stored therein that is dirty (i.e., data that has been modified within the cache, but memory stores old data). At the end of a traditional cache purge sequence, the entire cache contains non-valid entries and memory contains the newest data. To test or verify the resiliency of an information handling system, prior to the system's intended or ultimate field of use, a cache purge simulation may ensure that its cache purge operations or features are working. Such simulations take an increasing amount of time, due to ever-growing cache sizes (e.g., millions of simulation cycles are currently needed to complete a single cache purge simulation). Numerous details are described herein in order to provide a thorough understanding of the example embodiments illustrated in the accompanying drawings. However, some embodiments may be practiced without many of the specific details, and the scope of the claims is only limited by those features and aspects specifically recited in the claims. Furthermore, well-known methods, components, circuits, or the like, have not been described in exhaustive detail so as not to unnecessarily obscure more pertinent aspects of the embodiments described herein. FIG.1illustrates components and an interconnection topology for an information handling system, for example a computer system100, that may utilize one or more embodiments of the present invention. Computer system100may comprise a host102having multiple processors104. Each processor104may be connected to a memory120by an internal bus105and/or a host system bus115. Each processor104has at least one general-purpose programmable processor unit (CPU)106that may execute program instructions stored in memory120. Although multiple processors104are shown inFIG.1, it should be understood that a computer system may include a single processor104. Cache108may be physically included within each processor104. Together the cache108and memory120may make up a known hierarchy of one or more levels of on board cache and off board memory. Memory120may be, for example, a random access memory for storing data and/or program instructions, such as an operating system or application. Though cache108and memory120are shown conceptually as a single monolithic entity, cache108and/or memory120may be arranged as a hierarchy of caches and other memory devices, respectively.
Memory120may include operating system (OS)122and applications (Apps)124. Operating system122may provide functions such as device drivers or interfaces, management of memory pages, management of multiple tasks, etc., as is known in the art. Applications124may for example include a server software application whereby a network interface170may interact with the server software application to enable computer system100to be a network server. Host system bus115may support the transfer of data, commands, and other information between the host processor system102and peripheral or external devices attached to it, and communication of data which may occur between the external devices independent of the host processor complex102. While shown in simplified form as a single bus, the host system bus115may be structured as multiple buses which may be for example hierarchically arranged. Host system bus115may be connected to other components integral to the computer system100(such as a touch screen131, touch pad, etc.) and/or to a myriad of external or peripheral devices through a connection hub130, through an adapter140, a multifunction adapter150, or through network interface170. These peripheral devices may include a monitor or display132, a keyboard134, a mouse or other handheld device136, and/or a printer138. Display132may be a cathode-ray tube display, a flat panel display, or a touch screen, or other display technology. One or more adapters140may support keyboard134and mouse136; it being understood that other forms of input devices could be used. The number and types of devices shown inFIG.1are illustrative only and ordinary users of computer systems now know that a great variety of connected devices exist; e.g., microphones, speakers, infrared remote controls, wireless connected devices, etc. and therefore computer system100is not limited to those devices illustrated inFIG.1. The host system bus115may also be connected to an adapter140(e.g., an I/O adapter connected to an external memory device144). External memory device144may be a disk storage, rotating or static optical storage, magnetic tape storage, FLASH storage, etc. based storage or memory device. Adapter140may include adapter microcode or firmware and decision logic which may be embodied as a message processor142. The adapter140may also be provided with at least one fast nonvolatile write cache, queues, interrupt registers connected to the message processor142and/or decision logic. The message processor142may process incoming messages from the host processor complex102and generate and transmit response messages back to the host processor complex102. The adapter140may contain electronic components and logic to adapt or convert data of one protocol on one bus to another protocol on another bus. Therefore, adapter140may connect a wide variety of devices to the host computer system102and to each other such as, but not limited to, tape drives, optical drives, printers, disk controllers, other bus adapters, PCI adapters, workstations using one or more protocols including, but not limited to, Token Ring, Gigabyte Ethernet, Ethernet, Fibre Channel, SSA, Fiber Channel Arbitrated Loop (FCAL), Serial SCSI, Ultra3 SCSI, Infiniband, FDDI, ATM, 1394, ESCON, wireless relays, Twinax, LAN connections, WAN connections, high performance graphics, etc. 
The host system bus115may also be connected to a multifunction adapter150to which more I/O devices may be connected either directly, or through one or more bridge devices160, or through another multifunction adapter150on either a primary bus155or a secondary bus165. Various components may be connected to the primary bus155including, for example, an adapter140, a bridge device160, or another multifunction I/O processor or a multifunction adapter150. The bridge device160bridges the primary bus155and a secondary bus165to which various adapters140may be connected. The adapters140, the primary bus155, and the secondary bus165may conform to the PCI/PCI-X or other industry bus specification. One skilled in the art realizes, however, that the implementation is not limited to a PCI/PCI-X or a SCSI or USB bus implementation but is applicable to any electrical, optical, or wireless bus where data must be efficiently transferred. Network interface170provides an operative connection for transmission of data to and from a network. The network may be an internet but could also be any smaller self-contained network such as an intranet, a WAN, a LAN, or other internal or external network using, e.g., telephone transmission lines, cable services, satellites, fiber optics, T1 lines, wireless, Bluetooth, etc., and any other various technologies. Finally, computer system100need not be a computer at all, but may be a simpler appliance-like client device with less memory such as a network terminal, a thin client, a terminal-like device, a voice response unit, etc. The convergence of computing, telecommunications and consumer electronics is causing a tremendous growth in the number and variety of pervasive mobile devices as clients. This mobile architecture enables the multitude of clients including laptops, sub-notebooks, handheld computers such as personal digital assistants and companion devices, and mobile appliances such as smartphones, pagers, simple messaging devices and wearable devices. Thus, when the computer system100is a mobile device, the adapters140and network interfaces170may support a variety of multi-modal interfaces including traditional keyboard and mouse interfaces, small text screens, pen, touch screens, speech recognition, text-to-speech, and/or wearable devices. The computer system100is intended to be a simplified representation, it being understood that many variations in system configuration are possible in addition to those specifically mentioned here. While computer system100could conceivably be a personal computer system, the computer system100may also be a larger computer system such as a general purpose server. Computer system100and its components are shown and described inFIG.1above as a more or less single, self-contained computer system. Various embodiments of the present invention pertain to processes that may be implemented upon or by a single computer system100or may be implemented upon multiple computer systems100. When computer system100performs particular operations as directed by the program instructions stored in memory120, such computer system100in effect becomes a special purpose computer particular to the various processes as described further herein. FIG.2is a block diagram illustrating verification environment200, in accordance with some embodiments. Verification environment200may include a verification system202and a device under test210communicatively connected by a communication interconnect220.
In one embodiment, verification system202and device under test210may be separate instances of computer system100. In another embodiment, verification system202may be a computer system100and device under test210may be a processor104. In another embodiment, verification system202may be a hardware device such as a field programmable gate array (FPGA) and device under test210may be processor104. In another embodiment, verification environment200may include software instances of verification system202and device under test210. For example, verification environment200may be a simulated hardware environment where one or more components of the device under test210are modeled in a hardware description language and the verification system202takes the form of a verification and/or cache purge simulation application124. In this embodiment, the modeled one or more components of the device under test210and verification application124may be called or invoked by the processor104of the same or single computer100. Alternatively in this embodiment, the modeled one or more components of the device under test210may be called or invoked by a processor104of a first computer100and the verification application124may be called or invoked by a processor104of a different second computer100. Communication interconnect220is one or more communication pathways that connect the verification system202and the device under test210. For example, communication interconnect220may include data passing or handling between software entities, may include communication hardware such as network interface170, adapter140, multifunction adapter150, connection hub130, and/or host system bus115of the first computer system100, or the like. To test or verify the resiliency of the device under test210, a cache purge simulation application124may be called by the processor104of the verification system202to cause the verification system202to perform a series of operational steps, further described herein, on the verification system202and/or the device under test210. The cache purge simulation application124may utilize a cache skip switch300,301shown for example inFIG.4orFIG.5, respectively, within the processor104of the device under test210to purge the entire cache108if the cache skip switch300,301is not enabled or alternatively purge only a subset or physical partition of the cache108of the device under test210if the cache skip switch300,301is enabled. FIG.3is a block diagram illustrating cache108of device under test210, in accordance with some embodiments. Cache108may include multiple storage locations230which may be physically and/or logically arranged in rows, columns, congruency classes, and/or sets. Each storage location230may include multiple memory cells that are each configured to store one bit of data (e.g., store a “1”, or a “0”, etc.). Each memory cell may be identified by a unique identifier, such as an address, etc. Likewise, a subset or series of memory cells within the storage location230may be identified by a unique identifier. Similarly, all the memory cells within the storage location230may be identified by a unique identifier. For example, as depicted, the entire set of memory cells within storage location230may be identified by a congruency class identifier and a set identifier. The storage locations230that share the same congruency class identifier may be in the same physical or logical row of cache108and the storage locations230that share the same set identifier may be in the same physical or logical column of cache108.
For clarity, neighboring or sequential logical storage locations230may be physically far apart and not physically neighboring one another. In an embodiment, when cache skip switch300,301is enabled, cache skip switch300,301defines or otherwise identifies a physical subset or physical partition232of the cache108. Similarly, when cache skip switch300,301is not enabled, cache skip switch300,301defines or otherwise identifies the entire cache108. Generally, the partition232includes fewer than the total number of storage locations230within cache108. As depicted, the partition232may be defined as a series of consecutive congruency classes. The physical partition232may define the storage locations230within the cache108on which cache purge simulation application124performs cache purge simulation, verification, and/or test operations. When placed in the enabled state by the skip switch300,301, the unique identifiers of the storage locations230within the partition232may define the cache entries234. In this manner, because partition232includes fewer storage locations230relative to the entire cache108, cache purge simulation application124performs cache purge simulation, verification, and/or test operations on an effectively smaller cache, thereby reducing the time to completion. FIG.4illustrates an implementation of a cache purge circuit that includes cache skip switch300, in accordance with some embodiments. As depicted, the cache purge circuit may include an address count register302, multiplexer304, comparator306, incrementor308, a constant cache maximum address signal310, a purge done signal312, a purge hold signal314, a constant cache minimum address signal316, a purge address signal318, skip configuration register320, multiplexer322, and a purge simulation max address signal324. The cache skip switch300may include configuration register320, purge simulation max address signal324, multiplexer322, etc. Purge hold signal314may be an internal signal utilized by device under test210and may identify whether a purge simulation and/or purge signal is being called by e.g., cache purge simulation application124. If cache purge simulation application124is inactive, purge hold signal314may be held in a reset state (e.g., purge hold signal314=1). In this state, the address counter register302may be preloaded with the constant cache min address identified by the constant cache minimum address signal316. If cache purge simulation application124is active, purge hold signal314may be held in an active state (e.g., purge hold signal314=0). When a purge command is received by the host complex102within the device under test210, the purge hold signal314may be changed to the active state and the storage location230that is identified by the address within the address counter register302is purged. The address within the address counter register302may be sequentially advanced by incrementor308and the storage location230identified thereby (i.e., storage location230identified by purge address signal318) may be purged. The address within the address counter register302may be sequentially compared by comparator306with either the constant cache maximum address signal310or the simulation max address signal324, as is outputted from multiplexer322as indicated by the state of skip configuration register320, and advanced by incrementor308(and the storage location230associated therewith purged) until the currently purged storage location230is identified by the constant cache maximum address signal310or the simulation max address signal324.
Skip configuration register320may receive an input from verification system202which sets its internal state to indicate either an active or inactive skip state. If the skip state is active, only partition232may be subjected to cache purges. If the skip state is inactive, the entire cache108may be subjected to cache purges. When the storage location230associated with the cache maximum address signal310or with the simulation max address signal324is purged, respectively as is dictated by the skip configuration register320, the purge done signal312may be sent to the verification system202and the purge hold signal314may be again held in the reset state. For clarity, it is to be understood that cache skip switch300may be configured to identify and purge all storage locations230within cache108or only a partition232of storage locations230within cache108, as is dictated by the state of the configuration register320. In these embodiments, the partition232may begin at the storage location230associated with the cache minimum address signal316and end at a storage location230associated with the simulation max address signal324. FIG.5illustrates an implementation of a cache purge circuit that includes cache skip switch301, in accordance with some embodiments. As depicted, the cache purge circuit may include an address count register302, multiplexer304, comparator306, incrementor308, a start address configuration register330, an end address configuration register332, a purge done signal312, a purge hold signal314, and a purge address signal318. Cache skip switch301may include start address configuration register330, end address configuration register332, etc. Purge hold signal314may be an internal signal utilized by device under test210and may identify whether a purge simulation and/or purge signal is being called by e.g., cache purge simulation application124. If cache purge simulation application124is inactive, purge hold signal314may be held in a reset state (e.g., purge hold signal314=1). In this state, the address counter register302may be preloaded with the starting storage location230address identified by the start address configuration register330. If cache purge simulation application124is active, purge hold signal314may be held in an active state (e.g., purge hold signal314=0). When a purge command is received by the host complex102within the device under test210, the purge hold signal314may be changed to the active state and the storage location230that is identified by the address within the address counter register302is purged. The address within the address counter register302may be sequentially advanced by incrementor308and the storage location230identified thereby (i.e., storage location230identified by purge address signal318) may be purged. The address within the address counter register302may be sequentially compared by comparator306with the address identified by the stop address configuration register332and advanced by incrementor308(and the storage location230associated therewith purged) until the currently purged storage location230is identified by the stop address configuration register332. When the storage location230associated with the stop address configuration register332is purged, the purge done signal312may be sent to the verification system202and the purge hold signal314may be again held in the reset state.
For clarity, it is to be understood that the depicted cache skip switch301may be configured to identify and purge all storage locations230within cache108(i.e., if the address indicated by start address configuration register330equals the cache minimum address signal316and the address indicated by stop address configuration register332equals the cache max address signal310). Further, it is to be understood that the depicted cache skip switch301may be configured to identify and purge only a partition232of storage locations230within cache108, as is dictated by the respective start address330and stop address332(i.e., if the address indicated by start address configuration register330does not equal the cache minimum address signal316and the address indicated by stop address configuration register332does not equal the cache max address signal310). For clarity, the cache skip switch300,301may be included in hardware generally located in the host processor complex102of the device under test210. For example, cache skip switch300,301may be included within processor104, CPU106, etc. of the device under test210. FIG.6illustrates a flow diagram of cache purge verification process400, in accordance with some embodiments. Cache purge verification process400may be utilized in unison by both the verification system202and the device under test210. When enabled, the device under test210is hardwired to purge a predetermined number of congruency classes. An additional or multiple cache skip switches300,301may be utilized to change the number of congruency classes to be purged or configure a range of congruency classes to be purged. Process400begins with the verification system202setting up the cache purge simulation (block402). For example, verification system202may set a simulation address space and decide whether the skip switch300,301is enabled. In some embodiments, the simulation address space is a mapping of an address to a cache line (data). The simulation address space can range from a handful to thousands of addresses and associated data generated and associated therewith. The simulation address space can be randomly chosen addresses and data, or biased, based on a predefined algorithm, or the like. In a particular implementation, the simulation address space may be biased to generate multiple addresses and associated data only within cache partition232, if skip switch300,301is enabled. Process400may continue with the verification system202determining, setting, or obtaining the cache reference values (block404). For example, verification system202may obtain the reference values from the simulation address space. Further, verification system202may select the addresses and associated data that are to be written into cache108of the device under test210. This selection process can be random, biased, or algorithmic. For example, the simulation address space might define ten addresses and associated data that map to cache CC=0. If the cache has 12 sets, then all entries fit into cache108. However, verification system202at block404might be biased to write only half of the generated addresses and associated data into device under test210. Further, verification system202may select a set into which the cache line (data) is written. This selection process can be random, biased, and/or algorithmically based. Process400may continue with verification system202directly writing reference values to the appropriate storage locations230within device under test210(block428).
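A rough software analogue of the purge sequencing performed by the circuits ofFIG.4andFIG.5is sketched below: an address counter starts at a configured start address and is incremented, purging each storage location, until it reaches a configured stop address, which is either the cache maximum address or the smaller simulation maximum address when the skip switch is enabled. The flat 4096-entry toy cache, the dictionary representation, and the particular constants are illustrative assumptions, not the disclosed hardware.

def purge_sweep(cache, start_address, stop_address):
    """Sketch of the purge address sweep: purge every storage location from
    start_address up to and including stop_address, then signal completion."""
    address = start_address                    # address counter register preloaded
    purged = 0
    while True:
        cache[address] = None                  # purge (invalidate) the storage location
        purged += 1
        if address == stop_address:            # comparator: reached the configured end
            return purged                      # purge done signal
        address += 1                           # incrementor advances the counter

# Full purge versus skipped (partial) purge over a toy cache of 4096 locations:
cache = {addr: f"data{addr}" for addr in range(4096)}
CACHE_MIN, CACHE_MAX = 0, 4095
SIM_MAX = 383                                  # end of the partition when the skip switch is enabled

skip_enabled = True
stop = SIM_MAX if skip_enabled else CACHE_MAX  # multiplexer selected by the skip configuration
print(purge_sweep(cache, CACHE_MIN, stop))     # -> 384 purges instead of 4096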
Alternatively to writing the values directly, verification system202may send the reference values to the device under test210, whereby the device under test210writes the reference value to the associated storage location230within its cache108(block428). The reference values may be written to the entire cache108independent of the state of the skip switch300,301. The simulation address space may be biased to partition232and only contain addresses for partition232if the state of the skip switch300,301is active. The simulation address space may alternatively not be biased and may contain addresses for memory locations230both inside partition232and outside partition232. As such, the verification system202knows or otherwise maps predetermined reference value(s) to an assigned or specific storage location230within the cache108of the device under test210. Process400may continue with the verification system202sending a purge command or instruction to the device under test210(block408), which is received by the device under test210(block432). Process400may continue with the device under test210determining whether a full cache108purge should be completed or whether a skipped or partial cache108purge should be completed in response to the received purge command (block434). The device under test210may make such determination by its internal state designated within register320, by the start address register330and the stop address register332, or the like. If the cache skip switch300is utilized, the skip state is active within register320when only partition232is subjected to cache108purge (block438) and the skip state is inactive within register320when the entire cache108is subjected to cache108purge (block436). If the cache skip switch301is utilized, the skip state is active when the start address and end address within the respective registers define partition232and not the entire cache108, and the partition232is subjected to cache108purge (block438); the skip state is inactive when the start address and end address within the respective registers define the entire cache108, and the entire cache108is subjected to cache108purge (block436). The cache skip switch300,301then purges the appropriate storage locations230within the device under test210as described above, and the device under test210returns a cache purge done signal312to the verification system202(block440). Process400may continue with the verification system202determining whether to expect a full cache108purge or whether a skipped or partial cache108purge should be expected in response to the received purge command (block410). The verification system202may make such determination by reading the internal state of the applicable skip switch300,301, as designated within register320, by the start address register330and the stop address register332, or the like. If the cache skip switch300is utilized, the skip state is active within register320when only partition232is subjected to cache108purge and verification system202should expect a skipped cache purge (block414). For example, verification system202may expect around 384 purges from partition232that includes 32 congruency classes and 12 sets of the device under test210. Further, if the cache skip switch300is utilized, the skip state is inactive within register320when the entire cache108is subjected to cache108purge and verification system202should expect a full cache purge (block412).
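The expected purge counts in these examples follow directly from the number of congruency classes in scope multiplied by the number of sets. A minimal sketch of the arithmetic, using the congruency class and set counts given in this description, is below; the function and variable names are illustrative.

```c
#include <stdio.h>

/* Expected number of purge operations the verification system should
 * observe: congruency classes in scope multiplied by the number of sets.
 * The counts (32 and 8192 congruency classes, 12 sets) follow the
 * examples in the text. */
static unsigned expected_purges(unsigned congruency_classes, unsigned sets)
{
    return congruency_classes * sets;
}

int main(void)
{
    printf("skipped purge: %u\n", expected_purges(32, 12));    /* 384    */
    printf("full purge:    %u\n", expected_purges(8192, 12));  /* 98,304 */
    return 0;
}
```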
As a further example, for a full purge, verification system202may expect around 98,304 purges from the full cache108that includes 8192 congruency classes and 12 sets of the device under test210. If the cache skip switch301is utilized, the skip state is active when the start address and end address within the respective registers define partition232and not the entire cache108; in that case the partition232is subjected to cache108purge and verification system202should expect a skipped cache purge (block414). Further, if the cache skip switch301is utilized, the skip state is inactive when the start address and end address within the respective registers define the entire cache108; in that case the entire cache108is subjected to cache108purge and verification system202should expect a full cache purge (block412). The verification system202may subsequently expect receipt of the associated cache purge done signal312, which may trigger purge check, unloading and verification, or the like (block416). Process400may continue with the verification system202determining whether to expect a full cache108purge or whether a skipped or partial cache108purge should be expected in response to the received purge command (block418). The verification system202may make such determination by reading the internal state of the applicable skip switch300,301, as designated within register320, by the start address register330and the stop address register332, or the like. If the device under test210purged the full cache108, the verification system checks the purge by unloading the purged data and verifying the purged data against the original predetermined reference values (block420). For example, the verification system202unloads the 98,304 entries of the purged data from the full cache108of the device under test and compares the purged data against the original predetermined reference values (block420). If the device under test210purged merely the partition232of cache108, the verification system checks the purge by unloading only the purged data from the partition232storage locations230and verifying the purged data against the original predetermined reference values. For example, the verification system202unloads the 384 entries of the purged data from the partition232of the device under test and compares the purged data against the original predetermined reference values. In other words, if the skip state is active within the cache skip switch300,301, only the predetermined/configured CCs are purged and subsequently checked for valid entries. Process400may continue with further purge verification operations, such as determining whether the purge verification has passed, failed, or the like (block424). FIG.7illustrates a cache purge verification example of cache108partition232, in accordance with some embodiments. The verification example may begin by the verification system202setting the state of skip switch300,301within the device under test210. The state of skip switch300,301may be either a full cache108purge state such that the device under test210purges all storage locations230within cache108or a skipped cache108purge state such that the device under test210purges the subset or partition232of storage locations230within cache108. Further, the verification system202sets the simulation address space with four addresses (e.g., 0x0:00008000, 0x0:00010000, 0x0:00018000, 0x0:00020000) and writes corresponding reference data thereto (block428).
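Before continuing with the example ofFIG.7, the purge check of blocks416to424can be pictured with a minimal sketch: every storage location that was within the purge scope is unloaded and must no longer hold a valid entry. The structure and function names below are assumptions for illustration only.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical view of one unloaded cache storage location 230. */
typedef struct {
    bool     valid;
    uint64_t tag;
    uint64_t data;
} cache_entry;

/* Purge check sketch: every location inside the purge scope (the full
 * cache 108 or only partition 232, per the skip state) must be invalid
 * after the purge. Returns true when the check passes. */
static bool purge_check(const cache_entry *unloaded, size_t n_in_scope)
{
    for (size_t i = 0; i < n_in_scope; i++) {
        if (unloaded[i].valid)
            return false;   /* a preloaded entry survived the purge */
    }
    return true;
}
```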
Continuing with the example ofFIG.7, verification system202may load or write reference values to only memory locations230that are defined by the simulation address space, to all memory locations230that are defined within the partition232, to only some of the memory locations230that are defined within the partition232, or the like. In a particular implementation, verification system202may only load and/or unload data stored in those predefined memory locations230defined within partition232. As indicated herein, either the entire cache108is purged or the partition232is purged depending upon the state of skip switch300,301. In the embodiments where the skip switch300,301is active, after the purge sequence, cache108might still have valid data within some storage locations230that are located outside of the partition232. In this cache purge verification example, the four addresses are mapped within partition232and to set identifier 1, N−1, 0 and N respectively by verification system202(e.g., block404). The four addresses are depicted inFIG.7as storage locations230with a grey background. The verification system202may send a cache108purge signal to the device under test210. Utilizing the cache skip switch300,301, the device under test210identifies and purges memory locations230. For example, as depicted, device under test210may sequentially purge memory locations230. In such example, a first memory location230identified by the CC=0, Set=0 identifier may be initially identified and purged. Next, the set identifier is incremented and the next sequential memory location230in the congruency class identified by the CC=0, Set=1 identifier may be identified and purged. This storage location230was preloaded with a valid entry and block422ensures this storage location230is invalid at the end of the cache purge sequence. In this sequential purging of memory locations230, the device under test210may identify and purge set memory locations230in congruency class 0, may then identify and purge set memory locations230in congruency class 1, and the like, until the last congruency class within the partition232or cache108(as appropriate) is purged. ThoughFIG.7depicts device under test210sequentially purging memory locations230, device under test210may purge memory locations230in other order(s), as is known in the art. When all of the appropriate storage locations230are purged, the device under test210generates and sends the purge done signal312to the verification system202. The verification system202may then check the purged or unloaded entries from the purged storage locations against the known reference model. Such cache purge verification process may be beneficial over current state of the art techniques which utilize the verification system to artificially advance and/or increment a counter or other component that identifies the addressed storage location to be purged, as opposed to the device under test210itself incrementing the counter (e.g., via incrementor308) that identifies the addressed storage location to be purged, as is enabled by the present embodiments. Such state of the art techniques may be disadvantageous in view of a preference for allowing the device under test to reach its purge done state without artificial intervention.
Such state of the art techniques may be further disadvantageous because of the resulting need of the verification environment to read the current state or count of the counter or other component in order for the verification environment to artificially advance and/or increment the counter or other component to point to the intended addressed storage location to be purged. Such state of the art techniques may be further disadvantageous because of timing requirements that the artificial advancement of the counter occur during a specific window of time; otherwise, such artificial advancing may force the device under test into a prohibited or unrealistic state. Additionally, process400may be extended such that after the cache purge command has been received by the device under test210(block432) and either the full cache is purged (block436) or partially purged (block438), other commands may be received by the device under test210that may alter the cache108state and/or the cache line (data) therein. Such commands may modify the cache reference value stored in the cache108or partition232, and the updated reference value may be considered during the cache purge check (block416). FIG.8illustrates a table500of a particular example of the number of cache purges of an exemplary cache108with an enabled or active and a disabled or inactive cache skip switch300,301, in accordance with some embodiments. As is depicted, in this particular cache108, when the cache skip switch300,301is inactive and configured to purge each storage location230within the cache108, there is an average of 1,184,228 purges across various cache purge simulations. However, when the cache skip switch300,301is active and configured to purge only the storage locations230within the partition232of cache108, there is an average of 7,488 purges across various cache purge simulations. In this manner, because partition232includes fewer storage locations230relative to the entire cache108, cache purge simulation application124performs cache purge simulation, verification, and/or test operations on an effectively smaller cache, thereby reducing the number of cache purges and the overall time to complete the cache purge simulation or test. The present invention may be a system, a process, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of processes, apparatus (systems), and computer program products according to embodiments of the invention. 
It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowcharts and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, processes, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. The descriptions of the various embodiments of the present invention have been presented for purposes of illustration but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over those found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
DETAILED DESCRIPTION

FIG.1illustrates a dual scalar/vector datapath processor according to a preferred embodiment of this invention. Processor100includes separate level one instruction cache (L1I)121and level one data cache (L1D)123. Processor100includes a level two combined instruction/data cache (L2)130that holds both instructions and data.FIG.1illustrates connection between level one instruction cache121and level two combined instruction/data cache130(bus142).FIG.1illustrates connection between level one data cache123and level two combined instruction/data cache130(bus145). In the preferred embodiment of processor100level two combined instruction/data cache130stores both instructions to back up level one instruction cache121and data to back up level one data cache123. In the preferred embodiment level two combined instruction/data cache130is further connected to higher level cache and/or main memory in a manner not illustrated inFIG.1. In the preferred embodiment central processing unit core110, level one instruction cache121, level one data cache123and level two combined instruction/data cache130are formed on a single integrated circuit. This single integrated circuit optionally includes other circuits. Central processing unit core110fetches instructions from level one instruction cache121as controlled by instruction fetch unit111. Instruction fetch unit111determines the next instructions to be executed and recalls a fetch packet sized set of such instructions. The nature and size of fetch packets are further detailed below. As known in the art, instructions are directly fetched from level one instruction cache121upon a cache hit (if these instructions are stored in level one instruction cache121). Upon a cache miss (the specified instruction fetch packet is not stored in level one instruction cache121), these instructions are sought in level two combined cache130. In the preferred embodiment the size of a cache line in level one instruction cache121equals the size of a fetch packet. The memory locations of these instructions are either a hit in level two combined cache130or a miss. A hit is serviced from level two combined cache130. A miss is serviced from a higher level of cache (not illustrated) or from main memory (not illustrated). As is known in the art, the requested instruction may be simultaneously supplied to both level one instruction cache121and central processing unit core110to speed use. In the preferred embodiment of this invention, central processing unit core110includes plural functional units to perform instruction specified data processing tasks. Instruction dispatch unit112determines the target functional unit of each fetched instruction. In the preferred embodiment central processing unit110operates as a very long instruction word (VLIW) processor capable of operating on plural instructions in corresponding functional units simultaneously. Preferably a compiler organizes instructions in execute packets that are executed together. Instruction dispatch unit112directs each instruction to its target functional unit. The functional unit assigned to an instruction is completely specified by the instruction produced by a compiler. The hardware of central processing unit core110has no part in this functional unit assignment. In the preferred embodiment instruction dispatch unit112may operate on plural instructions in parallel. The number of such parallel instructions is set by the size of the execute packet. This will be further detailed below.
One part of the dispatch task of instruction dispatch unit112is determining whether the instruction is to execute on a functional unit in scalar datapath side A115or vector datapath side B116. An instruction bit within each instruction called the s bit determines which datapath the instruction controls. This will be further detailed below. Instruction decode unit113decodes each instruction in a current execute packet. Decoding includes identification of the functional unit performing the instruction, identification of registers used to supply data for the corresponding data processing operation from among possible register files and identification of the register destination of the results of the corresponding data processing operation. As further explained below, instructions may include a constant field in place of one register number operand field. The result of this decoding is signals for control of the target functional unit to perform the data processing operation specified by the corresponding instruction on the specified data. Central processing unit core110includes control registers114. Control registers114store information for control of the functional units in scalar datapath side A115and vector datapath side B116in a manner not relevant to this invention. This information could be mode information or the like. The decoded instructions from instruction decode113and information stored in control registers114are supplied to scalar datapath side A115and vector datapath side B116. As a result functional units within scalar datapath side A115and vector datapath side B116perform instruction specified data processing operations upon instruction specified data and store the results in an instruction specified data register or registers. Each of scalar datapath side A115and vector datapath side B116includes plural functional units that preferably operate in parallel. These will be further detailed below in conjunction withFIG.2. There is a datapath117between scalar datapath side A115and vector datapath side B116permitting data exchange. Central processing unit core110includes further non-instruction based modules. Emulation unit118permits determination of the machine state of central processing unit core110in response to instructions. This capability will typically be employed for algorithmic development. Interrupts/exceptions unit119enable central processing unit core110to be responsive to external, asynchronous events (interrupts) and to respond to attempts to perform improper operations (exceptions). Central processing unit core110includes streaming engine125. Streaming engine125supplies two data streams from predetermined addresses typically cached in level two combined cache130to register files of vector datapath side B. This provides controlled data movement from memory (as cached in level two combined cache130) directly to functional unit operand inputs. This is further detailed below. FIG.1illustrates exemplary data widths of busses between various parts. Level one instruction cache121supplies instructions to instruction fetch unit111via bus141. Bus141is preferably a 512-bit bus. Bus141is unidirectional from level one instruction cache121to central processing unit core110. Level two combined cache130supplies instructions to level one instruction cache121via bus142. Bus142is preferably a 512-bit bus. Bus142is unidirectional from level two combined cache130to level one instruction cache121. 
Level one data cache123exchanges data with register files in scalar datapath side A115via bus143. Bus143is preferably a 64-bit bus. Level one data cache123exchanges data with register files in vector datapath side B116via bus144. Bus144is preferably a 512-bit bus. Busses143and144are illustrated as bidirectional supporting both central processing unit core110data reads and data writes. Level one data cache123exchanges data with level two combined cache130via bus145. Bus145is preferably a 512-bit bus. Bus145is illustrated as bidirectional supporting cache service for both central processing unit core110data reads and data writes. Level two combined cache130supplies data of a first data stream to streaming engine125via bus146. Bus146is preferably a 512-bit bus. Streaming engine125supplies data of this first data stream to functional units of vector datapath side B116via bus147. Bus147is preferably a 512-bit bus. Level two combined cache130supplies data of a second data stream to streaming engine125via bus148. Bus148is preferably a 512-bit bus. Streaming engine125supplies data of this second data stream to functional units of vector datapath side B116via bus149. Bus149is preferably a 512-bit bus. Busses146,147,148and149are illustrated as unidirectional from level two combined cache130to streaming engine125and to vector datapath side B116in accordance with the preferred embodiment of this invention. In the preferred embodiment of this invention, both level one data cache123and level two combined cache130may be configured as selected amounts of cache or directly addressable memory in accordance with U.S. Pat. No. 6,606,686 entitled UNIFIED MEMORY SYSTEM ARCHITECTURE INCLUDING CACHE AND DIRECTLY ADDRESSABLE STATIC RANDOM ACCESS MEMORY. FIG.2illustrates further details of functional units and register files within scalar datapath side A115and vector datapath side B116. Scalar datapath side A115includes global scalar register file211, L1/S1 local register file212, M1/N1 local register file213and D1/D2 local register file214. Scalar datapath side A115includes L1 unit221, S1 unit222, M1 unit223, N1 unit224, D1 unit225and D2 unit226. Vector datapath side B116includes global vector register file231, L2/S2 local register file232, M2/N2/C local register file233and predicate register file234. Vector datapath side B116includes L2 unit241, S2 unit242, M2 unit243, N2 unit244, C unit245and P unit246. There are limitations upon which functional units may read from or write to which register files. These will be detailed below. Scalar datapath side A115includes L1 unit221. L1 unit221generally accepts two 64-bit operands and produces one 64-bit result. The two operands are each recalled from an instruction specified register in either global scalar register file211or L1/S1 local register file212. L1 unit221preferably performs the following instruction selected operations: 64-bit add/subtract operations; 32-bit min/max operations; 8-bit Single Instruction Multiple Data (SIMD) instructions such as sum of absolute value, minimum and maximum determinations; circular min/max operations; and various move operations between register files. The result may be written into an instruction specified register of global scalar register file211, L1/S1 local register file212, M1/N1 local register file213or D1/D2 local register file214. Scalar datapath side A115includes S1 unit222. S1 unit222generally accepts two 64-bit operands and produces one 64-bit result. 
The two operands are each recalled from an instruction specified register in either global scalar register file211or L1/S1 local register file212. S1 unit222preferably performs the same type operations as L1 unit221. There optionally may be slight variations between the data processing operations supported by L1 unit221and S1 unit222. The result may be written into an instruction specified register of global scalar register file211, L1/S1 local register file212, M1/N1 local register file213or D1/D2 local register file214. Scalar datapath side A115includes M1 unit223. M1 unit223generally accepts two 64-bit operands and produces one 64-bit result. The two operands are each recalled from an instruction specified register in either global scalar register file211or M1/N1 local register file213. M1 unit223preferably performs the following instruction selected operations: 8-bit multiply operations; complex dot product operations; 32-bit bit count operations; complex conjugate multiply operations; and bit-wise Logical Operations, moves, adds and subtracts. The result may be written into an instruction specified register of global scalar register file211, L1/S1 local register file212, M1/N1 local register file213or D1/D2 local register file214. Scalar datapath side A115includes N1 unit224. N1 unit224generally accepts two 64-bit operands and produces one 64-bit result. The two operands are each recalled from an instruction specified register in either global scalar register file211or M1/N1 local register file213. N1 unit224preferably performs the same type operations as M1 unit223. There may be certain double operations (called dual issued instructions) that employ both the M1 unit223and the N1 unit224together. The result may be written into an instruction specified register of global scalar register file211, L1/S1 local register file212, M1/N1 local register file213or D1/D2 local register file214. Scalar datapath side A115includes D1 unit225and D2 unit226. D1 unit225and D2 unit226generally each accept two 64-bit operands and each produce one 64-bit result. D1 unit225and D2 unit226generally perform address calculations and corresponding load and store operations. D1 unit225is used for scalar loads and stores of 64 bits. D2 unit226is used for vector loads and stores of 512 bits. D1 unit225and D2 unit226preferably also perform: swapping, pack and unpack on the load and store data; 64-bit SIMD arithmetic operations; and 64-bit bit-wise logical operations. D1/D2 local register file214will generally store base and offset addresses used in address calculations for the corresponding loads and stores. The two operands are each recalled from an instruction specified register in either global scalar register file211or D1/D2 local register file214. The calculated result may be written into an instruction specified register of global scalar register file211, L1/S1 local register file212, M1/N1 local register file213or D1/D2 local register file214. Vector datapath side B116includes L2 unit241. L2 unit241generally accepts two 512-bit operands and produces one 512-bit result. The two operands are each recalled from an instruction specified register in either global vector register file231, L2/S2 local register file232or predicate register file234. L2 unit241preferably performs instruction similar to L1 unit221except on wider 512-bit data. 
The result may be written into an instruction specified register of global vector register file231, L2/S2 local register file232, M2/N2/C local register file233or predicate register file234. Vector datapath side B116includes S2 unit242. S2 unit242generally accepts two 512-bit operands and produces one 512-bit result. The two operands are each recalled from an instruction specified register in either global vector register file231, L2/S2 local register file232or predicate register file234. S2 unit242preferably performs instructions similar to S1 unit222except on wider 512-bit data. The result may be written into an instruction specified register of global vector register file231, L2/S2 local register file232, M2/N2/C local register file233or predicate register file234. There may be certain double operations (called dual issued instructions) that employ both L2 unit241and the S2 unit242together. The result may be written into an instruction specified register of global vector register file231, L2/S2 local register file232or M2/N2/C local register file233. Vector datapath side B116includes M2 unit243. M2 unit243generally accepts two 512-bit operands and produces one 512-bit result. The two operands are each recalled from an instruction specified register in either global vector register file231or M2/N2/C local register file233. M2 unit243preferably performs instructions similar to M1 unit223except on wider 512-bit data. The result may be written into an instruction specified register of global vector register file231, L2/S2 local register file232or M2/N2/C local register file233. Vector datapath side B116includes N2 unit244. N2 unit244generally accepts two 512-bit operands and produces one 512-bit result. The two operands are each recalled from an instruction specified register in either global vector register file231or M2/N2/C local register file233. N2 unit244preferably performs the same type operations as M2 unit243. There may be certain double operations (called dual issued instructions) that employ both M2 unit243and the N2 unit244together. The result may be written into an instruction specified register of global vector register file231, L2/S2 local register file232or M2/N2/C local register file233. Vector datapath side B116includes C unit245. C unit245generally accepts two 512-bit operands and produces one 512-bit result. The two operands are each recalled from an instruction specified register in either global vector register file231or M2/N2/C local register file233. C unit245preferably performs: “Rake” and “Search” instructions; up to 512 2-bit PN*8-bit multiplies I/Q complex multiplies per clock cycle; 8-bit and 16-bit Sum-of-Absolute-Difference (SAD) calculations, up to 512 SADs per clock cycle; horizontal add and horizontal min/max instructions; and vector permutes instructions. C unit245includes also contains 4 vector control registers (CUCR0 to CUCR3) used to control certain operations of C unit245instructions. Control registers CUCR0 to CUCR3 are used as operands in certain C unit245operations. Control registers CUCR0 to CUCR3 are preferably used: in control of a general permutation instruction (VPERM); and as masks for SIMD multiple DOT product operations (DOTPM) and SIMD multiple Sum-of-Absolute-Difference (SAD) operations. Control register CUCR0 is preferably used to store the polynomials for Galois Field Multiply operations (GFMPY). Control register CUCR1 is preferably used to store the Galois field polynomial generator function. Vector datapath side B116includes P unit246. 
P unit246performs basic logic operations on registers of local predicate register file234. P unit246has direct access to read from and write to predication register file234. These operations include AND, ANDN, OR, XOR, NOR, BITR, NEG, SET, BITCNT, RMBD, BIT Decimate and Expand. A commonly expected use of P unit246includes manipulation of the SIMD vector comparison results for use in control of a further SIMD vector operation. FIG.3illustrates global scalar register file211. There are 16 independent 64-bit wide scalar registers designated A0 to A15. Each register of global scalar register file211can be read from or written to as 64-bits of scalar data. All scalar datapath side A115functional units (L1 unit221, S1 unit222, M1 unit223, N1 unit224, D1 unit225and D2 unit226) can read or write to global scalar register file211. Global scalar register file211may be read as 32-bits or as 64-bits and may only be written to as 64-bits. The instruction executing determines the read data size. Vector datapath side B116functional units (L2 unit241, S2 unit242, M2 unit243, N2 unit244, C unit245and P unit246) can read from global scalar register file211via crosspath117under restrictions that will be detailed below. FIG.4illustrates D1/D2 local register file214. There are 16 independent 64-bit wide scalar registers designated D0 to D15. Each register of D1/D2 local register file214can be read from or written to as 64-bits of scalar data. All scalar datapath side A115functional units (L1 unit221, S1 unit222, M1 unit223, N1 unit224, D1 unit225and D2 unit226) can write to global scalar register file211. Only D1 unit225and D2 unit226can read from D1/D2 local scalar register file214. It is expected that data stored in D1/D2 local scalar register file214will include base addresses and offset addresses used in address calculation. FIG.5illustrates L1/S1 local register file212. The embodiment illustrated inFIG.5has 8 independent 64-bit wide scalar registers designated AL0 to AL7. The preferred instruction coding (seeFIG.13) permits L1/S1 local register file212to include up to 16 registers. The embodiment ofFIG.5implements only 8 registers to reduce circuit size and complexity. Each register of L1/S1 local register file212can be read from or written to as 64-bits of scalar data. All scalar datapath side A115functional units (L1 unit221, S1 unit222, M1 unit223, N1 unit224, D1 unit225and D2 unit226) can write to L1/S1 local scalar register file212. Only L1 unit221and S1 unit222can read from L1/S1 local scalar register file212. FIG.6illustrates M1/N1 local register file213. The embodiment illustrated inFIG.6has 8 independent 64-bit wide scalar registers designated AM0 to AM7. The preferred instruction coding (seeFIG.13) permits M1/N1 local register file213to include up to 16 registers. The embodiment ofFIG.6implements only 8 registers to reduce circuit size and complexity. Each register of M1/N1 local register file213can be read from or written to as 64-bits of scalar data. All scalar datapath side A115functional units (L1 unit221, S1 unit222, M1 unit223, N1 unit224, D1 unit225and D2 unit226) can write to M1/N1 local scalar register file213. Only M1 unit223and N1 unit224can read from M1/N1 local scalar register file213. FIG.7illustrates global vector register file231. There are 16 independent 512-bit wide scalar registers. Each register of global vector register file231can be read from or written to as 64-bits of scalar data designated B0 to B15. 
Each register of global vector register file231can be read from or written to as 512-bits of vector data designated VB0 to VB15. The instruction type determines the data size. All vector datapath side B116functional units (L2 unit241, S2 unit242, M2 unit243, N2 unit244, C unit245and P unit246) can read or write to global vector register file231. Scalar datapath side A115functional units (L1 unit221, S1 unit222, M1 unit223, N1 unit224, D1 unit225and D2 unit226) can read from global vector register file231via crosspath117under restrictions that will be detailed below. FIG.8illustrates P local register file234. There are 8 independent 64-bit wide registers designated P0 to P7. Each register of P local register file234can be read from or written to as 64-bits of scalar data. Vector datapath side B116functional units L2 unit241, S2 unit242, C unit245and P unit246can write to P local register file234. Only L2 unit241, S2 unit242and P unit246can read from P local scalar register file234. A commonly expected use of P local register file234includes: writing one bit SIMD vector comparison results from L2 unit241, S2 unit242or C unit245; manipulation of the SIMD vector comparison results by P unit246; and use of the manipulated results in control of a further SIMD vector operation. FIG.9illustrates L2/S2 local register file232. The embodiment illustrated inFIG.9has 8 independent 512-bit wide scalar registers. The preferred instruction coding (seeFIG.13) permits L2/S2 local register file232to include up to 16 registers. The embodiment ofFIG.9implements only 8 registers to reduce circuit size and complexity. Each register of L2/S2 local vector register file232can be read from or written to as 64-bits of scalar data designated BL0 to BL7. Each register of L2/S2 local vector register file232can be read from or written to as 512-bits of vector data designated VBL0 to VBL7. The instruction type determines the data size. All vector datapath side B116functional units (L2 unit241, S2 unit242, M2 unit243, N2 unit244, C unit245and P unit246) can write to L2/S2 local vector register file232. Only L2 unit241and S2 unit242can read from L2/S2 local vector register file232. FIG.10illustrates M2/N2/C local register file233. The embodiment illustrated inFIG.10has 8 independent 512-bit wide scalar registers. The preferred instruction coding (seeFIG.13) permits M2/N2/C local register file233to include up to 16 registers. The embodiment ofFIG.10implements only 8 registers to reduce circuit size and complexity. Each register of M2/N2/C local vector register file233can be read from or written to as 64-bits of scalar data designated BM0 to BM7. Each register of M2/N2/C local vector register file233can be read from or written to as 512-bits of vector data designated VBM0 to VBM7. All vector datapath side B116functional units (L2 unit241, S2 unit242, M2 unit243, N2 unit244, C unit245and P unit246) can write to M2/N2/C local vector register file233. Only M2 unit243, N2 unit244and C unit245can read from M2/N2/C local vector register file233. The provision of global register files accessible by all functional units of a side and local register files accessible by only some of the functional units of a side is a design choice. This invention could be practiced employing only one type of register file corresponding to the disclosed global register files. Crosspath117permits limited exchange of data between scalar datapath side A115and vector datapath side B116.
During each operational cycle one 64-bit data word can be recalled from global scalar register file A211for use as an operand by one or more functional units of vector datapath side B116and one 64-bit data word can be recalled from global vector register file231for use as an operand by one or more functional units of scalar datapath side A115. Any scalar datapath side A115functional unit (L1 unit221, S1 unit222, M1 unit223, N1 unit224, D1 unit225and D2 unit226) may read a 64-bit operand from global vector register file231. This 64-bit operand is the least significant bits of the 512-bit data in the accessed register of global vector register file231. Plural scalar datapath side A115functional units may employ the same 64-bit crosspath data as an operand during the same operational cycle. However, only one 64-bit operand is transferred from vector datapath side B116to scalar datapath side A115in any single operational cycle. Any vector datapath side B116functional unit (L2 unit241, S2 unit242, M2 unit243, N2 unit244, C unit245and P unit246) may read a 64-bit operand from global scalar register file211. If the corresponding instruction is a scalar instruction, the crosspath operand data is treated as any other 64-bit operand. If the corresponding instruction is a vector instruction, the upper 448 bits of the operand are zero filled. Plural vector datapath side B116functional units may employ the same 64-bit crosspath data as an operand during the same operational cycle. Only one 64-bit operand is transferred from scalar datapath side A115to vector datapath side B116in any single operational cycle. The streaming engine125transfers data in certain restricted circumstances. The streaming engine125controls two data streams in the illustrated embodiment. A stream consists of a sequence of elements of a particular type. Programs that operate on streams read the data sequentially, operating on each element in turn. Every stream has the following basic properties. The stream data have a well-defined beginning and ending in time. The stream data have fixed element size and type throughout the stream. The stream data have fixed sequence of elements. Thus programs cannot seek randomly within the stream. The stream data is read-only while active. Programs cannot write to a stream while simultaneously reading from it. Once a stream is opened streaming engine125: calculates the address; fetches the defined data type from level two unified cache (which may require cache service from a higher level memory); performs data type manipulation such as zero extension, sign extension, data element sorting/swapping such as matrix transposition; and delivers the data directly to the programmed data register file within central processing unit core110. Streaming engine125is thus useful for real-time digital filtering operations on well-behaved data. Streaming engine125frees these memory fetch tasks from the corresponding central processing unit core110enabling other processing functions. Streaming engine125provides the following benefits. Streaming engine125permits multi-dimensional memory accesses. Streaming engine125increases the available bandwidth to the functional units. Streaming engine125minimizes the number of cache miss stalls since the stream buffer bypasses level one data cache123. Streaming engine125reduces the number of scalar operations required to maintain a loop. Streaming engine125manages address pointers. 
Streaming engine125handles address generation automatically freeing up the address generation instruction slots and D1 unit225and D2 unit226for other computations. Central processing unit core110operates on an instruction pipeline. Instructions are fetched in instruction packets of fixed length further described below. All instructions require the same number of pipeline phases for fetch and decode, but require a varying number of execute phases. FIG.11illustrates the following pipeline phases: program fetch phase1110, dispatch and decode phases1120and execution phases1130. Program fetch phase1110includes three stages for all instructions. Dispatch and decode phases1120include three stages for all instructions. Execution phase1130includes one to four stages dependent on the instruction. Fetch phase1110includes program address generation stage1111(PG), program access stage1112(PA) and program receive stage1113(PR). During program address generation stage1111(PG), the program address is generated in central processing unit core110and the read request is sent to the memory controller for the level one instruction cache L1I. During the program access stage1112(PA) the level one instruction cache L1I processes the request, accesses the data in its memory and sends a fetch packet to the central processing unit core110boundary. During the program receive stage1113(PR) central processing unit core110registers the fetch packet. Instructions are always fetched sixteen 32-bit wide slots, constituting a fetch packet, at a time.FIG.12illustrates 16 instructions1201to1216of a single fetch packet. Fetch packets are aligned on 512-bit (16-word) boundaries. The preferred embodiment employs a fixed 32-bit instruction length. Fixed length instructions are advantageous for several reasons. Fixed length instructions enable easy decoder alignment. A properly aligned instruction fetch can load plural instructions into parallel instruction decoders. Such a properly aligned instruction fetch can be achieved by predetermined instruction alignment when stored in memory (fetch packets aligned on 512-bit boundaries) coupled with a fixed instruction packet fetch. An aligned instruction fetch permits operation of parallel decoders on instruction-sized fetched bits. Variable length instructions require an initial step of locating each instruction boundary before they can be decoded. A fixed length instruction set generally permits more regular layout of instruction fields. This simplifies the construction of each decoder which is an advantage for a wide issue VLIW central processor. The execution of the individual instructions is partially controlled by a p bit in each instruction. This p bit is preferably bit 0 of the 32-bit wide slot. The p bit determines whether an instruction executes in parallel with a next instruction. Instructions are scanned from lower to higher address. If the p bit of an instruction is 1, then the next following instruction (higher memory address) is executed in parallel with (in the same cycle as) that instruction. If the p bit of an instruction is 0, then the next following instruction is executed in the cycle after the instruction. Central processing unit core110and level one instruction cache L1I121pipelines are de-coupled from each other. Fetch packet returns from level one instruction cache L1I can take different number of clock cycles, depending on external circumstances such as whether there is a hit in level one instruction cache121or a hit in level two combined cache130. 
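As an aside, the p bit rule described above for grouping instructions into execute packets can be illustrated with a short scan over one fetch packet. The sketch below is illustrative only; it assumes the p bit is bit 0 of each 32-bit slot, as stated for the preferred embodiment, and the function name is hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

/* Scan a fetch packet of sixteen 32-bit instruction slots and print the
 * execute packet boundaries implied by the p bit (bit 0): if p = 1 the
 * next instruction executes in parallel with this one; if p = 0 the
 * current execute packet ends here. */
static void print_execute_packets(const uint32_t fetch_packet[16])
{
    int start = 0;
    for (int i = 0; i < 16; i++) {
        uint32_t p = fetch_packet[i] & 0x1u;
        if (p == 0 || i == 15) {           /* execute packet ends here */
            printf("execute packet: slots %d..%d\n", start, i);
            start = i + 1;
        }
    }
}
```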
Because the central processing unit core110and level one instruction cache pipelines are de-coupled as noted above, program access stage1112(PA) can take several clock cycles instead of 1 clock cycle as in the other stages. The instructions executing in parallel constitute an execute packet. In the preferred embodiment an execute packet can contain up to sixteen instructions. No two instructions in an execute packet may use the same functional unit. A slot is one of five types: 1) a self-contained instruction executed on one of the functional units of central processing unit core110(L1 unit221, S1 unit222, M1 unit223, N1 unit224, D1 unit225, D2 unit226, L2 unit241, S2 unit242, M2 unit243, N2 unit244, C unit245and P unit246); 2) a unitless instruction such as a NOP (no operation) instruction or multiple NOP instruction; 3) a branch instruction; 4) a constant field extension; and 5) a condition code extension. Some of these slot types will be further explained below. Dispatch and decode phases1120include instruction dispatch to appropriate execution unit stage1121(DS), instruction pre-decode stage1122(DC1), and instruction decode, operand reads stage1123(DC2). During instruction dispatch to appropriate execution unit stage1121(DS), the fetch packets are split into execute packets and assigned to the appropriate functional units. During the instruction pre-decode stage1122(DC1), the source registers, destination registers and associated paths are decoded for the execution of the instructions in the functional units. During the instruction decode, operand reads stage1123(DC2), more detailed unit decodes are done, as well as reading operands from the register files. Execution phases1130include execution stages1131to1135(E1 to E5). Different types of instructions require different numbers of these stages to complete their execution. These stages of the pipeline play an important role in understanding the device state at central processing unit core110cycle boundaries. During execute 1 stage1131(E1) the conditions for the instructions are evaluated and operands are operated on. As illustrated inFIG.11, execute 1 stage1131may receive operands from a stream buffer1141and one of the register files shown schematically as1142. For load and store instructions, address generation is performed and address modifications are written to a register file. For branch instructions, the branch fetch packet in PG phase1111is affected. As illustrated inFIG.11, load and store instructions access memory here shown schematically as memory1151. For single-cycle instructions, results are written to a destination register file. This assumes that any conditions for the instructions are evaluated as true. If a condition is evaluated as false, the instruction does not write any results or have any pipeline operation after execute 1 stage1131. During execute 2 stage1132(E2) load instructions send the address to memory. Store instructions send the address and data to memory. Single-cycle instructions that saturate results set the SAT bit in the control status register (CSR) if saturation occurs. For 2-cycle instructions, results are written to a destination register file. During execute 3 stage1133(E3) data memory accesses are performed. Any multiply instructions that saturate results set the SAT bit in the control status register (CSR) if saturation occurs. For 3-cycle instructions, results are written to a destination register file. During execute 4 stage1134(E4) load instructions bring data to the central processing unit core110boundary. For 4-cycle instructions, results are written to a destination register file.
During execute 5 stage1135(E5) load instructions write data into a register. This is illustrated schematically inFIG.11with input from memory1151to execute 5 stage1135. FIG.13illustrates an example of the instruction coding1300of functional unit instructions used by this invention. Those skilled in the art would realize that other instruction codings are feasible and within the scope of this invention. Each instruction consists of 32 bits and controls the operation of one of the individually controllable functional units (L1 unit221, S1 unit222, M1 unit223, N1 unit224, D1 unit225, D2 unit226, L2 unit241, S2 unit242, M2 unit243, N2 unit244, C unit245and P unit246). The bit fields are defined as follows. The creg field1301(bits 29 to 31) and the z bit1302(bit 28) are optional fields used in conditional instructions. These bits are used for conditional instructions to identify the predicate register and the condition. The z bit1302(bit 28) indicates whether the predication is based upon zero or not zero in the predicate register. If z=1, the test is for equality with zero. If z=0, the test is for nonzero. The case of creg=0 and z=0 is treated as always true to allow unconditional instruction execution. The creg field1301and the z field1302are encoded in the instruction as shown in Table 1.

TABLE 1

Conditional        creg          z
Register        31  30  29      28
Unconditional    0   0   0       0
Reserved         0   0   0       1
A0               0   0   1       z
A1               0   1   0       z
A2               0   1   1       z
A3               1   0   0       z
A4               1   0   1       z
A5               1   1   0       z
Reserved         1   1   x       x

Execution of a conditional instruction is conditional upon the value stored in the specified data register. This data register is in the global scalar register file211for all functional units. Note that “z” in the z bit column refers to the zero/not zero comparison selection noted above and “x” is a don't care state. This coding can only specify a subset of the 16 global registers as predicate registers. This selection was made to preserve bits in the instruction coding. Note that unconditional instructions do not have these optional bits. For unconditional instructions these bits in fields1301and1302(28 to 31) are preferably used as additional opcode bits. The dst field1303(bits 23 to 27) specifies a register in a corresponding register file as the destination of the instruction results. The src2/cst field1304(bits 18 to 22) has several meanings depending on the instruction opcode field (bits 3 to 12 for all instructions and additionally bits 28 to 31 for unconditional instructions). The first meaning specifies a register of a corresponding register file as the second operand. The second meaning is an immediate constant. Depending on the instruction type, this is treated as an unsigned integer and zero extended to a specified data length or is treated as a signed integer and sign extended to the specified data length. The src1 field1305(bits 13 to 17) specifies a register in a corresponding register file as the first source operand. The opcode field1306(bits 3 to 12) for all instructions (and additionally bits 28 to 31 for unconditional instructions) specifies the type of instruction and designates appropriate instruction options. This includes unambiguous designation of the functional unit used and operation performed. A detailed explanation of the opcode is beyond the scope of this invention except for the instruction options detailed below. The e bit1307(bit 2) is only used for immediate constant instructions where the constant may be extended. If e=1, then the immediate constant is extended in a manner detailed below. If e=0, then the immediate constant is not extended.
In that case the immediate constant is specified by the src2/cst field1304(bits 18 to 22). Note that this e bit1307is used for only some instructions. Accordingly, with proper coding this e bit1307may be omitted from instructions which do not need it and this bit used as an additional opcode bit. The s bit1308(bit 1) designates scalar datapath side A115or vector datapath side B116. If s=0, then scalar datapath side A115is selected. This limits the functional unit to L1 unit221, S1 unit222, M1 unit223, N1 unit224, D1 unit225and D2 unit226and the corresponding register files illustrated inFIG.2. Similarly, s=1 selects vector datapath side B116limiting the functional unit to L2 unit241, S2 unit242, M2 unit243, N2 unit244, P unit246and the corresponding register file illustrated inFIG.2. The p bit1309(bit 0) marks the execute packets. The p-bit determines whether the instruction executes in parallel with the following instruction. The p-bits are scanned from lower to higher address. If p=1 for the current instruction, then the next instruction executes in parallel with the current instruction. If p=0 for the current instruction, then the next instruction executes in the cycle after the current instruction. All instructions executing in parallel constitute an execute packet. An execute packet can contain up to twelve instructions. Each instruction in an execute packet must use a different functional unit. There are two different condition code extension slots. Each execute packet can contain one each of these unique 32-bit condition code extension slots which contains the 4-bit creg/z fields for the instructions in the same execute packet.FIG.14illustrates the coding for condition code extension slot 0 andFIG.15illustrates the coding for condition code extension slot 1. FIG.14illustrates the coding for condition code extension slot 0 (1400) having 32 bits. Field1401(bits 28 to 31) specify 4 creg/z bits assigned to the L1 unit221instruction in the same execute packet. Field1402(bits 27 to 24) specify 4 creg/z bits assigned to the L2 unit241instruction in the same execute packet. Field1403(bits 20 to 23) specify 4 creg/z bits assigned to the S1 unit222instruction in the same execute packet. Field1404(bits 16 to 19) specify 4 creg/z bits assigned to the S2 unit242instruction in the same execute packet. Field1405(bits 12 to 15) specify 4 creg/z bits assigned to the D1 unit225instruction in the same execute packet. Field1406(bits 8 to 11) specify 4 creg/z bits assigned to the D2 unit226instruction in the same execute packet. Field1407(bits 6 and 7) is unused/reserved. Field1408(bits 0 to 5) are coded a set of unique bits (CCEX0) to identify the condition code extension slot 0. Once this unique ID of condition code extension slot 0 is detected, the corresponding creg/z bits are employed to control conditional execution of any L1 unit221, L2 unit241, S1 unit222, S2 unit242, D1 unit225and D2 unit226instruction in the same execution packet. These creg/z bits are interpreted as shown in Table 1. If the corresponding instruction is conditional (includes creg/z bits) the corresponding bits in the condition code extension slot 0 override the condition code bits in the instruction. Note that no execution packet can have more than one instruction directed to a particular execution unit. No execute packet of instructions can contain more than one condition code extension slot 0. Thus the mapping of creg/z bits to functional unit instruction is unambiguous. 
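The p-bit grouping of instructions into execute packets and the creg/z predication coding of Table 1 can be summarized with a short sketch. The following Python fragment is illustrative only: the field positions are taken from the coding of FIG.13and Table 1, and the instruction words and register values it operates on are hypothetical.

    # Sketch: split a fetch packet into execute packets using the p bit (bit 0)
    # and evaluate the creg/z condition of a single instruction (Table 1 coding).
    def split_execute_packets(instruction_words):
        packets, current = [], []
        for word in instruction_words:        # scanned from lower to higher address
            current.append(word)
            if (word & 0x1) == 0:             # p = 0: the next instruction starts a new cycle
                packets.append(current)
                current = []
        if current:
            packets.append(current)
        return packets

    def condition_true(word, a_regs):
        creg = (word >> 29) & 0x7             # bits 29 to 31 select the predicate register
        z = (word >> 28) & 0x1                # bit 28 selects the zero / not-zero test
        if creg == 0:
            return z == 0                     # creg=0, z=0 is always true; creg=0, z=1 is reserved
        value = a_regs[creg - 1]              # creg = 1..6 selects A0..A5; creg = 7 is reserved
        return (value == 0) if z == 1 else (value != 0)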
Setting the creg/z bits equal to “0000” makes the instruction unconditional. Thus a properly coded condition code extension slot 0 can make some corresponding instructions conditional and some unconditional. FIG.15illustrates the coding for condition code extension slot 1 (1500) having 32 bits. Field1501(bits 28 to 31) specify 4 creg/z bits assigned to the M1 unit223instruction in the same execute packet. Field1502(bits 27 to 24) specify 4 creg/z bits assigned to the M2 unit243instruction in the same execute packet. Field1503(bits 19 to 23) specify 4 creg/z bits assigned to the C unit245instruction in the same execute packet. Field1504(bits 16 to 19) specify 4 creg/z bits assigned to the N1 unit224instruction in the same execute packet. Field1505(bits 12 to 15) specify 4 creg/z bits assigned to the N2 unit244instruction in the same execute packet. Field1506(bits 6 to 11) is unused/reserved. Field1507(bits 0 to 5) are coded a set of unique bits (CCEX1) to identify the condition code extension slot 1. Once this unique ID of condition code extension slot 1 is detected, the corresponding creg/z bits are employed to control conditional execution of any M1 unit223, M2 unit243, C unit245, N1 unit224and N2 unit244instruction in the same execution packet. These creg/z bits are interpreted as shown in Table 1. If the corresponding instruction is conditional (includes creg/z bits) the corresponding bits in the condition code extension slot 1 override the condition code bits in the instruction. Note that no execution packet can have more than one instruction directed to a particular execution unit. No execute packet of instructions can contain more than one condition code extension slot 1. Thus the mapping of creg/z bits to functional unit instruction is unambiguous. Setting the creg/z bits equal to “0000” makes the instruction unconditional. Thus a properly coded condition code extension slot 1 can make some instructions conditional and some unconditional. It is feasible for both condition code extension slot 0 and condition code extension slot 1 to include a p bit to define an execute packet as described above in conjunction withFIG.13. In the preferred embodiment, as illustrated inFIGS.14and15, code extension slot 0 and condition code extension slot 1 preferably have bit 0 (p bit) always encoded as 1. Thus neither condition code extension slot 0 nor condition code extension slot 1 can be in the last instruction slot of an execute packet. There are two different constant extension slots. Each execute packet can contain one each of these unique 32-bit constant extension slots which contains 27 bits to be concatenated as high order bits with the 5-bit constant field1305to form a 32-bit constant. As noted in the instruction coding description above only some instructions define the src2/cst field1304as a constant rather than a source register identifier. At least some of those instructions may employ a constant extension slot to extend this constant to 32 bits. FIG.16illustrates the fields of constant extension slot 0 (1600). Each execute packet may include one instance of constant extension slot 0 and one instance of constant extension slot 1.FIG.16illustrates that constant extension slot 01600includes two fields. Field1601(bits 5 to 31) constitute the most significant 27 bits of an extended 32-bit constant including the target instruction scr2/cst field1304as the five least significant bits. Field1602(bits 0 to 4) are coded a set of unique bits (CSTX0) to identify the constant extension slot 0. 
In the preferred embodiment constant extension slot 01600can only be used to extend the constant of one of an L1 unit221instruction, data in a D1 unit225instruction, an S2 unit242instruction, an offset in a D2 unit226instruction, an M2 unit243instruction, an N2 unit244instruction, a branch instruction, or a C unit245instruction in the same execute packet. Constant extension slot 1 is similar to constant extension slot 0 except that bits 0 to 4 are coded a set of unique bits (CSTX1) to identify the constant extension slot 1. In the preferred embodiment constant extension slot 1 can only be used to extend the constant of one of an L2 unit241instruction, data in a D2 unit226instruction, an S1 unit222instruction, an offset in a D1 unit225instruction, an M1 unit223instruction or an N1 unit224instruction in the same execute packet. Constant extension slot 0 and constant extension slot 1 are used as follows. The target instruction must be of the type permitting constant specification. As known in the art this is implemented by replacing one input operand register specification field with the least significant bits of the constant as described above with respect to src2/cst field1304. Instruction decoder113determines this case, known as an immediate field, from the instruction opcode bits. The target instruction also includes one constant extension bit (e bit1307) dedicated to signaling whether the specified constant is not extended (preferably constant extension bit=0) or the constant is extended (preferably constant extension bit=1). If instruction decoder113detects a constant extension slot 0 or a constant extension slot 1, it further checks the other instructions within that execute packet for an instruction corresponding to the detected constant extension slot. A constant extension is made only if one corresponding instruction has a constant extension bit (e bit1307) equal to 1. FIG.17is a partial block diagram1700illustrating constant extension.FIG.17assumes that instruction decoder113detects a constant extension slot and a corresponding instruction in the same execute packet. Instruction decoder113supplies the 27 extension bits from the constant extension slot (bit field1601) and the 5 constant bits (bit field1305) from the corresponding instruction to concatenator1701. Concatenator1701forms a single 32-bit word from these two parts. In the preferred embodiment the 27 extension bits from the constant extension slot (bit field1601) are the most significant bits and the 5 constant bits (bit field1305) are the least significant bits. This combined 32-bit word is supplied to one input of multiplexer1702. The 5 constant bits from the corresponding instruction field1305supply a second input to multiplexer1702. Selection of multiplexer1702is controlled by the status of the constant extension bit. If the constant extension bit (e bit1307) is 1 (extended), multiplexer1702selects the concatenated 32-bit input. If the constant extension bit is 0 (not extended), multiplexer1702selects the 5 constant bits from the corresponding instruction field1305. Multiplexer1702supplies this output to an input of sign extension unit1703. Sign extension unit1703forms the final operand value from the input from multiplexer1702. Sign extension unit1703receives control inputs Scalar/Vector and Data Size. The Scalar/Vector input indicates whether the corresponding instruction is a scalar instruction or a vector instruction.
The functional units of data path side A115(L1 unit221, S1 unit222, M1 unit223, N1 unit224, D1 unit225and D2 unit226) can only perform scalar instructions. Any instruction directed to one of these functional units is a scalar instruction. Data path side B functional units L2 unit241, S2 unit242, M2 unit243, N2 unit244and C unit245may perform scalar instructions or vector instructions. Instruction decoder113determines whether the instruction is a scalar instruction or a vector instruction from the opcode bits. P unit246may only perform scalar instructions. The Data Size may be 8 bits (byte B), 16 bits (half-word H), 32 bits (word W) or 64 bits (double word D). Table 2 lists the operation of sign extension unit1703for the various options.

TABLE 2
Instruction   Operand    Constant
Type          Size       Length     Action
Scalar        B/H/W/D    5 bits     Sign extend to 64 bits
Scalar        B/H/W/D    32 bits    Sign extend to 64 bits
Vector        B/H/W/D    5 bits     Sign extend to operand size and replicate across whole vector
Vector        B/H/W      32 bits    Replicate 32-bit constant across each 32-bit (W) lane
Vector        D          32 bits    Sign extend to 64 bits and replicate across each 64-bit (D) lane

It is feasible for both constant extension slot 0 and constant extension slot 1 to include a p bit to define an execute packet as described above in conjunction withFIG.13. In the preferred embodiment, as in the case of the condition code extension slots, constant extension slot 0 and constant extension slot 1 preferably have bit 0 (p bit) always encoded as 1. Thus neither constant extension slot 0 nor constant extension slot 1 can be in the last instruction slot of an execute packet. It is technically feasible for an execute packet to include a constant extension slot 0 or 1 and more than one corresponding instruction marked constant extended (e bit=1). For constant extension slot 0 this would mean more than one of an L1 unit221instruction, data in a D1 unit225instruction, an S2 unit242instruction, an offset in a D2 unit226instruction, an M2 unit243instruction or an N2 unit244instruction in an execute packet have an e bit of 1. For constant extension slot 1 this would mean more than one of an L2 unit241instruction, data in a D2 unit226instruction, an S1 unit222instruction, an offset in a D1 unit225instruction, an M1 unit223instruction or an N1 unit224instruction in an execute packet have an e bit of 1. Supplying the same constant extension to more than one instruction is not expected to be a useful function. Accordingly, in one embodiment instruction decoder113may determine this case to be an invalid operation that is not supported. Alternately, this combination may be supported with extension bits of the constant extension slot applied to each corresponding functional unit instruction marked constant extended. Special vector predicate instructions use registers in predicate register file234to control vector operations. In the current embodiment all these SIMD vector predicate instructions operate on selected data sizes. The data sizes may include byte (8 bit) data, half word (16 bit) data, word (32 bit) data, double word (64 bit) data, quad word (128 bit) data and half vector (256 bit) data. Each bit of the predicate register controls whether a SIMD operation is performed upon the corresponding byte of data. The operations of P unit246permit a variety of compound vector SIMD operations based upon more than one vector comparison. For example a range determination can be made using two comparisons.
A candidate vector is compared with a first vector reference having the minimum of the range packed within a first data register. A second comparison of the candidate vector is made with a second reference vector having the maximum of the range packed within a second data register. Logical combinations of the two resulting predicate registers would permit a vector conditional operation to determine whether each data part of the candidate vector is within range or out of range. L1 unit221, S1 unit222, L2 unit241, S2 unit242and C unit245often operate in a single instruction multiple data (SIMD) mode. In this SIMD mode the same instruction is applied to packed data from the two operands. Each operand holds plural data elements disposed in predetermined slots. SIMD operation is enabled by carry control at the data boundaries. Such carry control enables operations on varying data widths. FIG.18illustrates the carry control. AND gate1801receives the carry output of bit N within the operand wide arithmetic logic unit (64 bits for scalar datapath side A115functional units and 512 bits for vector datapath side B116functional units). AND gate1801also receives a carry control signal which will be further explained below. The output of AND gate1801is supplied to the carry input of bit N+1 of the operand wide arithmetic logic unit. AND gates such as AND gate1801are disposed between every pair of bits at a possible data boundary. For example, for 8-bit data such an AND gate will be between bits 7 and 8, bits 15 and 16, bits 23 and 24, etc. Each such AND gate receives a corresponding carry control signal. If the data size is the minimum, then each carry control signal is 0, effectively blocking carry transmission between the adjacent bits. The corresponding carry control signal is 1 if the selected data size requires both arithmetic logic unit sections. Table 3 below shows example carry control signals for the case of a 512 bit wide operand such as used by vector datapath side B116functional units which may be divided into sections of 8 bits, 16 bits, 32 bits, 64 bits, 128 bits or 256 bits. In Table 3 the upper 32 bits control the upper bits (bits 128 to 511) carries and the lower 32 bits control the lower bits (bits 0 to 127) carries. No control of the carry output of the most significant bit is needed, thus only 63 carry control signals are required. TABLE 3Data SizeCarry Control Signals8 bits (B)−000 0000 0000 0000 0000 0000 0000 00000000 0000 0000 0000 0000 0000 0000 000016 bits (H)−101 0101 0101 0101 0101 0101 0101 01010101 0101 0101 0101 0101 0101 0101 010132 bits (W)−111 0111 0111 0111 0111 0111 0111 01110111 0111 0111 0111 0111 0111 0111 011164 bits (D)−111 1111 0111 1111 0111 1111 0111 11110111 1111 0111 1111 0111 1111 0111 0111128 bits−111 1111 1111 1111 0111 1111 1111 11110111 1111 1111 1111 0111 1111 1111 1111256 bits−111 1111 1111 1111 1111 1111 1111 11110111 1111 1111 1111 1111 1111 1111 1111 It is typical in the art to operate on data sizes that are integral powers of 2 (2N). However, this carry control technique is not limited to integral powers of 2. One skilled in the art would understand how to apply this technique to other data sizes and other operand widths. FIG.19illustrates a conceptual view of the streaming engines of this invention.FIG.19illustrates the process of a single stream. Streaming engine1900includes stream address generator1901. 
Stream address generator1901sequentially generates addresses of the elements of the stream and supplies these element addresses to system memory1910. Memory1910recalls data stored at the element addresses (data elements) and supplies these data elements to data first-in-first-out (FIFO) memory1902. Data FIFO1902provides buffering between memory1910and CPU1920. Data formatter1903receives the data elements from data FIFO memory1902and provides data formatting according to the stream definition. This process will be described below. Streaming engine1900supplies the formatted data elements from data formatter1903to the CPU1920. The program on CPU1920consumes the data and generates an output. Stream elements typically reside in normal memory. The memory itself imposes no particular structure upon the stream. Programs define streams and therefore impose structure by specifying the following stream attributes: address of the first element of the stream; size and type of the elements in the stream; formatting for data in the stream; and the address sequence associated with the stream. The streaming engine defines an address sequence for elements of the stream in terms of a pointer walking through memory. A multiple-level nested loop controls the path the pointer takes. An iteration count for a loop level indicates the number of times that level repeats. A dimension gives the distance between pointer positions of that loop level. In a basic forward stream the innermost loop always consumes physically contiguous elements from memory. The implicit dimension of this innermost loop is 1 element. The pointer itself moves from element to element in consecutive, increasing order. In each level outside the inner loop, that loop moves the pointer to a new location based on the size of that loop level's dimension. This form of addressing allows programs to specify regular paths through memory in a small number of parameters. Table 4 lists the addressing parameters of a basic stream.

TABLE 4
Parameter     Definition
ELEM_BYTES    Size of each element in bytes
ICNT0         Number of iterations for the innermost loop level 0. At loop level 0 all elements are physically contiguous. DIM0 is ELEM_BYTES
ICNT1         Number of iterations for loop level 1
DIM1          Number of bytes between the starting points for consecutive iterations of loop level 1
ICNT2         Number of iterations for loop level 2
DIM2          Number of bytes between the starting points for consecutive iterations of loop level 2
ICNT3         Number of iterations for loop level 3
DIM3          Number of bytes between the starting points for consecutive iterations of loop level 3
ICNT4         Number of iterations for loop level 4
DIM4          Number of bytes between the starting points for consecutive iterations of loop level 4
ICNT5         Number of iterations for loop level 5
DIM5          Number of bytes between the starting points for consecutive iterations of loop level 5

The definition above maps consecutive elements of the stream to increasing addresses in memory. This works well for most algorithms but not all. Some algorithms are better served by reading elements in decreasing memory address order, referred to as reverse stream addressing. For example, a discrete convolution computes vector dot-products, as per the formula:

(f*g)[t] = Σ_{x=−∞}^{+∞} f[x] g[t−x]

In most DSP code, f[ ] and g[ ] represent arrays in memory. For each output, the algorithm reads f[ ] in the forward direction, but reads g[ ] in the reverse direction. Practical filters limit the range of indices for [x] and [t-x] to a finite number of elements.
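The basic (non-transposed) address sequence defined by the Table 4 parameters can be modeled as a nested loop over the iteration counts, with each outer level adding its dimension to the address computed at the start of that level. The sketch below is only an illustration of the addressing model described above, assuming forward addressing; the parameter values shown are hypothetical.

    # Sketch: enumerate element addresses of a basic multi-level stream
    # (Table 4 parameters). DIM0 is implicitly ELEM_BYTES.
    def stream_addresses(base, elem_bytes, icnt, dim):
        # icnt = [ICNT0 .. ICNT5], dim = [DIM1 .. DIM5] in bytes
        addrs = []
        for i5 in range(icnt[5]):
            for i4 in range(icnt[4]):
                for i3 in range(icnt[3]):
                    for i2 in range(icnt[2]):
                        for i1 in range(icnt[1]):
                            for i0 in range(icnt[0]):
                                addrs.append(base + i0 * elem_bytes
                                             + i1 * dim[0] + i2 * dim[1] + i3 * dim[2]
                                             + i4 * dim[3] + i5 * dim[4])
        return addrs

    # Hypothetical example: four contiguous 8-byte elements per row, three rows
    # whose starting points are 64 bytes apart:
    # stream_addresses(0x1000, 8, [4, 3, 1, 1, 1, 1], [64, 0, 0, 0, 0])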
To support this reverse access pattern, the streaming engine supports reading elements in decreasing address order. Matrix multiplication presents a unique problem to the streaming engine. Each element in the matrix product is a vector dot product between a row from the first matrix and a column from the second. Programs typically store matrices all in row-major or column-major order. Row-major order stores all the elements of a single row contiguously in memory. Column-major order stores all elements of a single column contiguously in memory. Matrices typically get stored in the same order as the default array order for the language. As a result, only one of the two matrices in a matrix multiplication maps onto the streaming engine's 2-dimensional stream definition. In a typical example a first index steps through columns of the first array but rows of the second array. This problem is not unique to the streaming engine. Matrix multiplication's access pattern fits poorly with most general-purpose memory hierarchies. Some software libraries transpose one of the two matrices, so that both get accessed row-wise (or column-wise) during multiplication. The streaming engine supports implicit matrix transposition with transposed streams. Transposed streams avoid the cost of explicitly transforming the data in memory. Instead of accessing data in strictly consecutive-element order, the streaming engine effectively interchanges the inner two loop dimensions in its traversal order, fetching elements along the second dimension into contiguous vector lanes. This algorithm works, but is impractical to implement for small element sizes. Some algorithms work on matrix tiles which are multiple columns and rows together. Therefore, the streaming engine defines a separate transposition granularity. The hardware imposes a minimum granularity. The transpose granularity must also be at least as large as the element size. Transposition granularity causes the streaming engine to fetch one or more consecutive elements from dimension 0 before moving along dimension 1. When the granularity equals the element size, this results in fetching a single column from a row-major array. Otherwise, the granularity specifies fetching 2, 4 or more columns at a time from a row-major array. This is also applicable for column-major layout by exchanging row and column in the description. A parameter GRANULE indicates the transposition granularity in bytes. Another common matrix multiplication technique exchanges the innermost two loops of the matrix multiply. The resulting inner loop no longer reads down the column of one matrix while reading across the row of another. For example the algorithm may hoist one term outside the inner loop, replacing it with the scalar value. On a vector machine, the innermost loop can be implemented very efficiently with a single scalar-by-vector multiply followed by a vector add. The central processing unit core110of this invention lacks a scalar-by-vector multiply. Programs must instead duplicate the scalar value across the length of the vector and use a vector-by-vector multiply. The streaming engine of this invention directly supports this and related use models with an element duplication mode. In this mode, the streaming engine reads a granule smaller than the full vector size and replicates that granule to fill the next vector output.
The streaming engine treats each complex number as a single element with two sub-elements that give the real and imaginary (rectangular) or magnitude and angle (polar) portions of the complex number. Not all programs or peripherals agree what order these sub-elements should appear in memory. Therefore, the streaming engine offers the ability to swap the two sub-elements of a complex number with no cost. This feature swaps the halves of an element without interpreting the contents of the element and can be used to swap pairs of sub-elements of any type, not just complex numbers. Algorithms generally prefer to work at high precision, but high precision values require more storage and bandwidth than lower precision values. Commonly, programs will store data in memory at low precision, promote those values to a higher precision for calculation and then demote the values to lower precision for storage. The streaming engine supports this directly by allowing algorithms to specify one level of type promotion. In the preferred embodiment of this invention every sub-element may be promoted to a larger type size with either sign or zero extension for integer types. It is also feasible that the streaming engine may support floating point promotion, promoting 16-bit and 32-bit floating point values to 32-bit and 64-bit formats, respectively. The streaming engine defines a stream as a discrete sequence of data elements; the central processing unit core110consumes data elements packed contiguously in vectors. Vectors resemble streams inasmuch as they contain multiple homogeneous elements with some implicit sequence. Because the streaming engine reads streams, but the central processing unit core110consumes vectors, the streaming engine must map streams onto vectors in a consistent way. Vectors consist of equal-sized lanes, each lane containing a sub-element. The central processing unit core110designates the rightmost lane of the vector as lane 0, regardless of the device's current endian mode. Lane numbers increase right-to-left. The actual number of lanes within a vector varies depending on the length of the vector and the data size of the sub-element. FIG.20illustrates the sequence of the formatting operations of formatter1903. Formatter1903includes three sections: input section2010; formatting section2020; and output section2030. Input section2010receives the data recalled from system memory1910as accessed by stream address generator1901. This data could be via linear fetch stream2011or transposed fetch stream2012. Formatting section2020includes various formatting blocks. The formatting performed by formatter1903by these blocks will be further described below. Complex swap block2021optionally swaps two sub-elements forming a complex number element. Type promotion block2022optionally promotes each data element into a larger data size. Promotion includes zero extension for unsigned integers and sign extension for signed integers. Decimation block2023optionally decimates the data elements. In the preferred embodiment decimation can be 2:1, retaining every other data element, or 4:1, retaining every fourth data element. Element duplication block2024optionally duplicates individual data elements. In the preferred embodiment this data element duplication is an integer power of 2 (2^N, where N is an integer) including 2×, 4×, 8×, 16×, 32× and 64×. In the preferred embodiment data duplication can extend over plural destination vectors. Vector length masking/group duplication block2025has two primary functions.
An independently specified vector length VECLEN controls the data elements supplied to each output data vector. When group duplication is off, excess lanes in the output data vector are zero filled. When group duplication is on, input data elements of the specified vector length are duplicated to fill the output data vector. Output section2030holds the data for output to the corresponding functional units. Register and buffer for CPU2031stores a formatted vector of data to be used as an operand by the functional units of central processing unit core110. FIG.21illustrates a first example of lane allocation in a vector. Vector2100is divided into 8 64-bit lanes (8×64 bits=512 bits the vector length). Lane 0 includes bits 0 to 63; lane 1 includes bits 64 to 127; lane 2 includes bits 128 to 191; lane 3 includes bits 192 to 255, lane 4 includes bits 256 to 319, lane 5 includes bits 320 to 383, lane 6 includes bits 384 to 447 and lane 7 includes bits 448 to 511. FIG.22illustrates a second example of lane allocation in a vector. Vector2210is divided into 16 32-bit lanes (16×32 bits=512 bits the vector length). Lane 0 includes bits 0 to 31; lane 1 includes bits 32 to 63; lane 2 includes bits 64 to 95; lane 3 includes bits 96 to 127; lane 4 includes bits 128 to 159; lane 5 includes bits 160 to 191; lane 6 includes bits 192 to 223; lane 7 includes bits 224 to 255; lane 8 includes bits 256 to 287; lane 9 occupied bits 288 to 319; lane 10 includes bits 320 to 351; lane 11 includes bits 352 to 383; lane 12 includes bits 384 to 415; lane 13 includes bits 416 to 447; lane 14 includes bits 448 to 479; and lane 15 includes bits 480 to 511. The streaming engine maps the innermost stream dimension directly to vector lanes. It maps earlier elements within that dimension to lower lane numbers and later elements to higher lane numbers. This is true regardless of whether this particular stream advances in increasing or decreasing address order. Whatever order the stream defines, the streaming engine deposits elements in vectors in increasing-lane order. For non-complex data, it places the first element in lane 0 of the first vector central processing unit core110fetches, the second in lane 1, and so on. For complex data, the streaming engine places the first element in lanes 0 and 1, second in lanes 2 and 3, and so on. Sub-elements within an element retain the same relative ordering regardless of the stream direction. For non-swapped complex elements, this places the sub-elements with the lower address of each pair in the even numbered lanes, and the sub-elements with the higher address of each pair in the odd numbered lanes. Swapped complex elements reverse this mapping. The streaming engine fills each vector central processing unit core110fetches with as many elements as it can from the innermost stream dimension. If the innermost dimension is not a multiple of the vector length, the streaming engine pads that dimension out to a multiple of the vector length with zeros. Thus for higher-dimension streams, the first element from each iteration of an outer dimension arrives in lane 0 of a vector. The streaming engine always maps the innermost dimension to consecutive lanes in a vector. For transposed streams, the innermost dimension consists of groups of sub-elements along dimension 1, not dimension 0, as transposition exchanges these two dimensions. Two dimensional streams exhibit greater variety as compared to one dimensional streams. 
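Before turning to two dimensional streams, the formatting behavior described above, promotion of integer sub-elements and packing of a stream into vector lanes with zero padding, can be sketched as follows. This is a behavioral illustration only (group duplication off, little-endian lane order assumed); the function names and example values are not part of the specification.

    # Sketch: promote an integer sub-element by a power-of-2 factor with zero or
    # sign extension, then pack one inner-dimension row of elements into vectors,
    # lane 0 (rightmost) first, zero filling the excess lanes of the final vector.
    def promote(value, from_bits, factor, signed):
        to_bits = from_bits * factor
        if signed and value >= (1 << (from_bits - 1)):       # negative value: sign extend
            value -= (1 << from_bits)
        return value & ((1 << to_bits) - 1)                  # result occupies a wider lane

    def pack_into_vectors(elements, lanes_per_vector):
        vectors = []
        for i in range(0, len(elements), lanes_per_vector):
            chunk = list(elements[i:i + lanes_per_vector])
            chunk += [0] * (lanes_per_vector - len(chunk))   # pad out to the vector length
            vectors.append(chunk)             # chunk[0] corresponds to lane 0
        return vectors

    # Hypothetical example: promote(0xFF, 8, 2, signed=True) -> 0xFFFF, while
    # promote(0xFF, 8, 2, signed=False) -> 0x00FF; packing nine promoted elements
    # into 8-lane vectors leaves seven zero-filled lanes in the second vector.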
A basic two dimensional stream extracts a smaller rectangle from a larger rectangle. A transposed 2-D stream reads a rectangle column-wise instead of row-wise. A looping stream, where the second dimension overlaps the first, executes either finite impulse response (FIR) filter taps which loop repeatedly or FIR filter samples which provide a sliding window of input samples. FIG.23illustrates a basic two dimensional stream. The inner two dimensions, represented by ELEM_BYTES, ICNT0, DIM1 and ICNT1, give sufficient flexibility to describe extracting a smaller rectangle2320having dimensions2321and2322from a larger rectangle2310having dimensions2311and2312. In this example rectangle2320is a 9 by 13 rectangle of 64-bit values and rectangle2310is a larger 11 by 19 rectangle. The following stream parameters define this stream:

ICNT0=9
ELEM_BYTES=8
ICNT1=13
DIM1=88 (11 times 8)

Thus the iteration count in the 0 dimension 2321 is 9. The iteration count in the 1 direction2322is 13. Note that the ELEM_BYTES only scales the innermost dimension. The first dimension has ICNT0 elements of size ELEM_BYTES. The stream address generator does not scale the outer dimensions. Therefore, DIM1=88, which is 11 elements scaled by 8 bytes per element. FIG.24illustrates the order of elements within this example stream. The streaming engine fetches elements for the stream in the order illustrated in order2400. The first 9 elements come from the first row of rectangle2320, left-to-right in hops 1 to 8. The 10th through 18th elements come from the second row, and so on. When the stream moves from the 9th element to the 10th element (hop 9 inFIG.24), the streaming engine computes the new location based on the pointer's position at the start of the inner loop, not where the pointer ended up at the end of the first dimension. This makes DIM1 independent of ELEM_BYTES and ICNT0. DIM1 always represents the distance between the first bytes of each consecutive row. Transposed streams access along dimension 1 before dimension 0. The following examples illustrate a couple of transposed streams, varying the transposition granularity.FIG.25illustrates extracting a smaller rectangle2520(12×8) having dimensions2521and2522from a larger rectangle2510(14×13) having dimensions2511and2512. InFIG.25ELEM_BYTES equals 2. FIG.26illustrates how the streaming engine would fetch the stream of this example with a transposition granularity of 4 bytes. Fetch pattern2600fetches pairs of elements from each row (because the granularity of 4 is twice the ELEM_BYTES of 2), but otherwise moves down the columns. Once it reaches the bottom of a pair of columns, it repeats this pattern with the next pair of columns. FIG.27illustrates how the streaming engine would fetch the stream of this example with a transposition granularity of 8 bytes. The overall structure remains the same. The streaming engine fetches 4 elements from each row (because the granularity of 8 is four times the ELEM_BYTES of 2) before moving to the next row in the column as shown in fetch pattern2700. The streams examined so far read each element from memory exactly once. A stream can read a given element from memory multiple times, in effect looping over a piece of memory. FIR filters exhibit two common looping patterns. FIRs re-read the same filter taps for each output. FIRs also read input samples from a sliding window. Two consecutive outputs will need inputs from two overlapping windows. FIG.28illustrates the details of streaming engine2800.
Streaming engine2800contains three major sections: Stream 02810; Stream 12820; and Shared L2 Interfaces2830. Stream 02810and Stream 12820both contain identical hardware that operates in parallel. Stream 02810and Stream 12820both share L2 interfaces2830. Each stream2810and2820provides central processing unit core110with up to 512 bits/cycle, every cycle. The streaming engine architecture enables this through its dedicated stream paths and shared dual L2 interfaces. Streaming engine2800includes dedicated 6-dimensional stream address generators2811/2821that can each generate one new non-aligned request per cycle. Address generators2811/2821output 512-bit aligned addresses that overlap the elements in the sequence defined by the stream parameters. This will be further described below. Each address generator2811/2821connects to a dedicated micro table look-aside buffer (μTLB)2812/2822. The μTLB2812/2822converts a single 48-bit virtual address to a 44-bit physical address each cycle. Each μTLB2812/2822has 8 entries, covering a minimum of 32 kB with 4 kB pages or a maximum of 16 MB with 2 MB pages. Each address generator2811/2821generates 2 addresses per cycle. The μTLB2812/2822only translates 1 address per cycle. To maintain throughput, streaming engine2800takes advantage of the fact that most stream references will be within the same 4 kB page. Thus the address translation does not modify bits 0 to 11 of the address. If aout0 and aout1 lie in the same 4 kB page (aout0[47:12] are the same as aout1[47:12]), then the μTLB2812/2822only translates aout0 and reuses the translation for the upper bits of both addresses. Translated addresses are queued in command queue2813/2823. These addresses are aligned with information from the corresponding Storage Allocation and Tracking block2814/2824. Streaming engine2800does not explicitly manage μTLB2812/2822. The system memory management unit (MMU) invalidates μTLBs as necessary during context switches. Storage Allocation and Tracking2814/2824manages the stream's internal storage, discovering data reuse and tracking the lifetime of each piece of data. This will be further described below. Reference queue2815/2825stores the sequence of references generated by the corresponding address generator2811/2821. This information drives the data formatting network so that it can present data to central processing unit core110in the correct order. Each entry in reference queue2815/2825contains the information necessary to read data out of the data store and align it for central processing unit core110. Reference queue2815/2825maintains the following information listed in Table 5 in each slot:

TABLE 5
Data Slot Low     Slot number for the lower half of data associated with aout0
Data Slot High    Slot number for the upper half of data associated with aout1
Rotation          Number of bytes to rotate data to align next element with lane 0
Length            Number of valid bytes in this reference

Storage allocation and tracking2814/2824inserts references in reference queue2815/2825as address generator2811/2821generates new addresses. Storage allocation and tracking2814/2824removes references from reference queue2815/2825when the data becomes available and there is room in the stream head registers. As storage allocation and tracking2814/2824removes slot references from reference queue2815/2825and formats data, it checks whether the references represent the last reference to the corresponding slots.
Storage allocation and tracking2814/2824compares reference queue2815/2825removal pointer against the slot's recorded Last Reference. If they match, then storage allocation and tracking2814/2824marks the slot inactive once it's done with the data. Streaming engine2800has data storage2816/2826for an arbitrary number of elements. Deep buffering allows the streaming engine to fetch far ahead in the stream, hiding memory system latency. The right amount of buffering might vary from product generation to generation. In the current preferred embodiment streaming engine2800dedicates 32 slots to each stream. Each slot holds 64 bytes of data. Butterfly network2817/2827consists of a 7 stage butterfly network. Butterfly network2817/2827receives 128 bytes of input and generates 64 bytes of output. The first stage of the butterfly is actually a half-stage. It collects bytes from both slots that match a non-aligned fetch and merges them into a single, rotated 64-byte array. The remaining 6 stages form a standard butterfly network. Butterfly network2817/2827performs the following operations: rotates the next element down to byte lane 0; promotes data types by a power of 2, if requested; swaps real and imaginary components of complex numbers, if requested; converts big endian to little endian if central processing unit core110is presently in big endian mode. The user specifies element size, type promotion and real/imaginary swap as part of the stream's parameters. Streaming engine2800attempts to fetch and format data ahead of central processing unit core110's demand for it, so that it can maintain full throughput. Stream head registers2818/2828provide a small amount of buffering so that the process remains fully pipelined. Holding registers2818/2828are not directly architecturally visible, except for the fact that streaming engine2800provides full throughput. The two streams2810/2820share a pair of independent L2 interfaces2830: L2 Interface A (IFA)2833and L2 Interface B (IFB)2834. Each L2 interface provides 512 bits/cycle throughput direct to the L2 controller for an aggregate bandwidth of 1024 bits/cycle. The L2 interfaces use the credit-based multicore bus architecture (MBA) protocol. The L2 controller assigns each interface its own pool of command credits. The pool should have sufficient credits so that each interface can send sufficient requests to achieve full read-return bandwidth when reading L2 RAM, L2 cache and multicore shared memory controller (MSMC) memory (described below). To maximize performance, both streams can use both L2 interfaces, allowing a single stream to send a peak command rate of 2 requests/cycle. Each interface prefers one stream over the other, but this preference changes dynamically from request to request. IFA2833and IFB2834always prefer opposite streams, when IFA2833prefers Stream 0, IFB2834prefers Stream 1 and vice versa. Arbiter2831/2832ahead of each interface2833/2834applies the following basic protocol on every cycle it has credits available. Arbiter2831/2832checks if the preferred stream has a command ready to send. If so, arbiter2831/2832chooses that command. Arbiter2831/2832next checks if an alternate stream has at least two requests ready to send, or one command and no credits. If so, arbiter2831/2832pulls a command from the alternate stream. If either interface issues a command, the notion of preferred and alternate streams swap for the next request. 
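The arbitration protocol just described can be stated compactly in code form. The sketch below is a simplified single-decision model under assumed bookkeeping (queues of ready commands and integer credit counts); the reading of the second rule in particular is an interpretation of the description above rather than a definitive statement of the hardware behavior.

    # Sketch: one arbitration decision for one L2 interface between its preferred
    # and alternate streams. Rule three (swapping the preferred and alternate
    # designations after a command issues) is left to the caller.
    def arbitrate(iface_credits, preferred_queue, alternate_queue, alternate_credits):
        if iface_credits == 0:
            return None                        # no credits: nothing can issue this cycle
        if preferred_queue:
            return preferred_queue.pop(0)      # rule one: the preferred stream goes first
        if len(alternate_queue) >= 2 or (len(alternate_queue) == 1 and alternate_credits == 0):
            return alternate_queue.pop(0)      # rule two: borrow the otherwise idle interface
        return None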
Using this simple algorithm, the two interfaces dispatch requests as quickly as possible while retaining fairness between the two streams. The first rule ensures that each stream can send a request on every cycle that has available credits. The second rule provides a mechanism for one stream to borrow the other's interface when the second interface is idle. The third rule spreads the bandwidth demand for each stream across both interfaces, ensuring neither interface becomes a bottleneck by itself. Coarse Grain Rotator2835/2836enables streaming engine2800to support a transposed matrix addressing mode. In this mode, streaming engine2800interchanges the two innermost dimensions of its multidimensional loop. This accesses an array column-wise rather than row-wise. Rotator2835/2836is not architecturally visible, except as enabling this transposed access mode. The stream definition template provides the full structure of a stream that contains data. The iteration counts and dimensions provide most of the structure, while the various flags provide the rest of the details. For all data-containing streams, the streaming engine defines a single stream template. All stream types it supports fit this template. The streaming engine defines a six-level loop nest for addressing elements within the stream. Most of the fields in the stream template map directly to the parameters in that algorithm.FIG.29illustrates stream template register2900. The numbers above the fields are bit numbers within a 256-bit vector. Table 6 shows the stream field definitions of a stream template.

TABLE 6
            FIG. 29
Field       Reference                                  Size
Name        Number      Description                    Bits
ICNT0       2901        Iteration count for loop 0     16
ICNT1       2902        Iteration count for loop 1     16
ICNT2       2903        Iteration count for loop 2     16
ICNT3       2904        Iteration count for loop 3     16
ICNT4       2905        Iteration count for loop 4     16
ICNT5       2906        Iteration count for loop 5     16
DIM1        2922        Signed dimension for loop 1    16
DIM2        2923        Signed dimension for loop 2    16
DIM3        2924        Signed dimension for loop 3    16
DIM4        2925        Signed dimension for loop 4    32
DIM5        2926        Signed dimension for loop 5    32
FLAGS       2911        Stream modifier flags          48

Loop0is the innermost loop and loop5is the outermost loop. In the current example DIM0 is always equal to ELEM_BYTES, defining physically contiguous data. Thus the stream template register2900does not define DIM0. Streaming engine2800interprets all iteration counts as unsigned integers and all dimensions as unscaled signed integers. The template above fully specifies the type of elements, length and dimensions of the stream. The stream instructions separately specify a start address. This would typically be by specification of a scalar register in scalar register file211which stores this start address. This allows a program to open multiple streams using the same template. FIG.30illustrates sub-field definitions of the flags field2911. As shown inFIG.30the flags field2911is 6 bytes or 48 bits.FIG.30shows bit numbers of the fields. Table 7 shows the definition of these fields.
TABLE 7
            FIG. 30
Field       Reference                                        Size
Name        Number      Description                          Bits
ELTYPE      3001        Type of data element                 4
TRANSPOSE   3002        Two dimensional transpose mode       3
PROMOTE     3003        Promotion mode                       3
VECLEN      3004        Stream vector length                 3
ELDUP       3005        Element duplication                  3
GRDUP       3006        Group duplication                    1
DECIM       3007        Element decimation                   2
THROTTLE    3008        Fetch ahead throttle mode            2
DIMFMT      3009        Stream dimensions format             3
DIR         3010        Stream direction                     1
                        0 forward direction
                        1 reverse direction
CBK0        3011        First circular block size number     4
CBK1        3012        Second circular block size number    4
AM0         3013        Addressing mode for loop 0           2
AM1         3014        Addressing mode for loop 1           2
AM2         3015        Addressing mode for loop 2           2
AM3         3016        Addressing mode for loop 3           2
AM4         3017        Addressing mode for loop 4           2
AM5         3018        Addressing mode for loop 5           2

The Element Type (ELTYPE) field3001defines the data type of the elements in the stream. The coding of the four bits of the ELTYPE field3001is defined as shown in Table 8.

TABLE 8
          Real/               Sub-element    Total Element
ELTYPE    Complex             Size Bits      Size Bits
0000      real                8              8
0001      real                16             16
0010      real                32             32
0011      real                64             64
0100      reserved
0101      reserved
0110      reserved
0111      reserved
1000      complex, no swap    8              16
1001      complex, no swap    16             32
1010      complex, no swap    32             64
1011      complex, no swap    64             128
1100      complex, swapped    8              16
1101      complex, swapped    16             32
1110      complex, swapped    32             64
1111      complex, swapped    64             128

Real/Complex Type determines whether the streaming engine treats each element as a real number or two parts (real/imaginary or magnitude/angle) of a complex number. This field also specifies whether to swap the two parts of complex numbers. Complex types have a total element size that is twice their sub-element size. Otherwise, the sub-element size equals total element size. Sub-Element Size determines the type for purposes of type promotion and vector lane width. For example, 16-bit sub-elements get promoted to 32-bit sub-elements or 64-bit sub-elements when a stream requests type promotion. The vector lane width matters when central processing unit core110operates in big endian mode, as it always lays out vectors in little endian order. Total Element Size determines the minimal granularity of the stream. In the stream addressing model, it determines the number of bytes the stream fetches for each iteration of the innermost loop. Streams always read whole elements, either in increasing or decreasing order. Therefore, the innermost dimension of a stream spans ICNT0×total-element-size bytes. The TRANSPOSE field3002determines whether the streaming engine accesses the stream in a transposed order. The transposed order exchanges the inner two addressing levels. The TRANSPOSE field3002also indicates the granularity at which it transposes the stream. The coding of the three bits of the TRANSPOSE field3002is defined as shown in Table 9 for normal 2D operations.

TABLE 9
Transpose    Meaning
000          Transpose disabled
001          Transpose on 8-bit boundaries
010          Transpose on 16-bit boundaries
011          Transpose on 32-bit boundaries
100          Transpose on 64-bit boundaries
101          Transpose on 128-bit boundaries
110          Transpose on 256-bit boundaries
111          Reserved

Streaming engine2800may transpose data elements at a different granularity than the element size. This allows programs to fetch multiple columns of elements from each row. The transpose granularity must be no smaller than the element size. The TRANSPOSE field3002interacts with the DIMFMT field3009in a manner further described below. The PROMOTE field3003controls whether the streaming engine promotes sub-elements in the stream and the type of promotion. When enabled, streaming engine2800promotes types by powers-of-2 sizes. The coding of the three bits of the PROMOTE field3003is defined as shown in Table 10.
TABLE 10PRO-PromotionPromotionResulting Sub-element SizeMOTEFactorType8-bit16-bit32-bit64-bit0001xN/A8-bit16-bit32-bit64-bit0012xzero extend16-bit32-bit64-bitInvalid0104xzero extend32-bit64-bitInvalidInvalid0118xzero extend64-bitInvalidInvalidInvalid100reserved1012xsign extend16-bit32-bit64-bitInvalid1104xsign extend32-bit64-bitInvalidInvalid1118xsign extend64-bitInvalidInvalidInvalid When PROMOTE is 000, corresponding to a 1× promotion, each sub-element is unchanged and occupies a vector lane equal in width to the size specified by ELTYPE. When PROMOTE is 001, corresponding to a 2× promotion and zero extend, each sub-element is treated as an unsigned integer and zero extended to a vector lane twice the width specified by ELTYPE. A 2× promotion is invalid for an initial sub-element size of 64 bits. When PROMOTE is 010, corresponding to a 4× promotion and zero extend, each sub-element is treated as an unsigned integer and zero extended to a vector lane four times the width specified by ELTYPE. A 4× promotion is invalid for an initial sub-element size of 32 or 64 bits. When PROMOTE is 011, corresponding to an 8× promotion and zero extend, each sub-element is treated as an unsigned integer and zero extended to a vector lane eight times the width specified by ELTYPE. An 8× promotion is invalid for an initial sub-element size of 16, 32 or 64 bits. When PROMOTE is 101, corresponding to a 2× promotion and sign extend, each sub-element is treated as a signed integer and sign extended to a vector lane twice the width specified by ELTYPE. A 2× promotion is invalid for an initial sub-element size of 64 bits. When PROMOTE is 110, corresponding to a 4× promotion and sign extend, each sub-element is treated as a signed integer and sign extended to a vector lane four times the width specified by ELTYPE. A 4× promotion is invalid for an initial sub-element size of 32 or 64 bits. When PROMOTE is 111, corresponding to an 8× promotion and zero extend, each sub-element is treated as a signed integer and sign extended to a vector lane eight times the width specified by ELTYPE. An 8× promotion is invalid for an initial sub-element size of 16, 32 or 64 bits. The VECLEN field3004defines the stream vector length for the stream in bytes. Streaming engine2800breaks the stream into groups of elements that are VECLEN bytes long. The coding of the three bits of the VECLEN field3004is defined as shown in Table 11. TABLE 11VECLENStream Vector Length0001 byte0012 bytes0104 bytes0118 bytes10016 bytes10132 bytes11064 bytes111Reserved VECLEN must be greater than or equal to the product of the element size in bytes and the duplication factor. As shown in Table 11, the maximum VECLEN of 64 bytes equals the preferred vector size of vector datapath side B116. When VECLEN is shorter than the native vector width of central processing unit core110, streaming engine2800pads the extra lanes in the vector provided to central processing unit core110. The GRDUP field3006determines the type of padding. The VECLEN field3004interacts with ELDUP field3005and GRDUP field3006in a manner detailed below. The ELDUP field3005specifies a number of times to duplicate each element. The element size multiplied with the element duplication amount must not exceed the 64 bytes. The coding of the three bits of the ELDUP field3005is defined as shown in Table 12. 
TABLE 12ELDUPDuplication Factor000No Duplication0012 times0104 times0118 times10016 times10132 times11064 times111Reserved The ELDUP field3005interacts with VECLEN field3004and GRDUP field3006in a manner detailed below. Tables 13A to 13D show connections between input bytes and output bytes for an ELEM_BYTES value of 1 and various values of ELDUP from 1 to 64. The columns of Tables 13A to 13D list the output bytes from byte 0 to byte 63. The body of Tables 13A to 13D list the corresponding input bytes for the element duplication factor of that row. TABLE 13AELDUP012345678910111213141510123456789101112131415200112233445566774000011112222333380000000011111111160000000000000000320000000000000000640000000000000000 TABLE 13BELDUP16171819202122232425262728293031116171819202122232425262728293031266991010111112121313141415154444455556666777782222222233333333161111111111111111320000000000000000640000000000000000 TABLE 13CELDUP32333435363738394041424344454647132333435363738394041424344454647216161717181819192020212122222323488889999101010101111111184444444455555555162222222222222222321111111111111111640000000000000000 TABLE 13DELDUP4849505152535455565758596061626314849505152535455565758596061626322424252526262727282829293030313141212121213131313141414141515151586666666677777777163333333333333333321111111111111111640000000000000000 The row for ELDUP=000 (indicating no duplication) of Tables 13A to 13D show that the output bytes are the corresponding input bytes. The row for ELDUP=110 (indicating a duplication factor of 64) of Tables 13A to 13D show that all the output bytes equal input byte 0. Tables 14A to 14D show connections between input bytes and output bytes for an ELEM_BYTES value of 2 and various values of ELDUP from 1 to 64. A duplication value of 64 for an ELEM_BYTES value of 2 extends beyond the vector size of vector datapath side B116into the next vector. TABLE 14AELDUP012345678910111213141510123456789101112131415201012323445456774010101012323232380101010101010101160101010101010101320101010101010101640101010101010101 TABLE 14BELDUP16171819202122232425262728293031116171819202122232425262728293031289891011101112131213141415154454545456767676782323232323232323160101010101010101320101010101010101640101010101010101 TABLE 14CELDUP32333435363738394041424344454647132333435363738394041424344454647216171617181918192021202122232223489898989101110111011101184545454545454545162323232323232323320101010101010101640101010101010101 TABLE 14DELDUP4849505152535455565758596061626314849505152535455565758596061626322425242526272627282928293031303141213121312131213141514151415141586767676767676767162323232323232323320101010101010101640101010101010101 In Tables 14A to 14D each element is two bytes and occupies as adjacent pair of input bytes 0/1, 2/3, 4/5 . . . 62/63. The row for ELDUP=000 (indicating no duplication) of Tables 14A to 14D show that the output bytes are the corresponding input bytes. This is the same as the ELEM_BYTES=1 case. The rows for ELDUP=101 (indicating a duplication factor of 32) and for ELDUP=110 (indicating a duplication factor of 64) of Tables 14A to 14D show that the output bytes are repeats of the first input element from input bytes 0/1. Tables 15A to 15D show connections between input bytes and output bytes for an ELEM_BYTES value of 4 and various values of ELDUP from 1 to 64. A duplication value of 32 or 64 for an ELEM_BYTES value of 4 extends beyond the vector size of vector datapath side B116into the next vector. 
TABLE 15AELDUP012345678910111213141510123456789101112131415201230123456745674012301230123012380123012301230123160123012301230123320123012301230123640123012301230123 TABLE 15BELDUP16171819202122232425262728293031116171819202122232425262728293031289101189101112131415121314154456745674567456780123012301230123160123012301230123320123012301230123640123012301230123 TABLE 15CELDUP32333435363738394041424344454647132333435363738394041424344454647216171819161718192021222320212223489101189101189101189101184567456745674567160123012301230123320123012301230123640123012301230123 TABLE 15DELDUP4849505152535455565758596061626314849505152535455565758596061626322425262724252627282930312829303141213141512131415121314151213141584567456745674567160123012301230123320123012301230123640123012301230123 In Tables 15A to 15D each element is four bytes and occupies a quad of input bytes 0/1/2/3, 4/5/6/7 . . . 60/61/62/63. The row for ELDUP=000 (indicating no duplication) of Tables 15A to 15D show that the output bytes are the corresponding input bytes. This is the same as the ELEM_BYTES=1 case. The rows for ELDUP=011 (indicating a duplication factor of 16), for ELDUP=101 (indicating a duplication factor of 32) and for ELDUP=110 (indicating a duplication factor of 64) of Tables 15A to 15D show that the output bytes are repeats of the first input element from input bytes 0/1/2/3. Tables 16A to 16D show connections between input bytes and output bytes for an ELEM_BYTES value of 8 and various values of ELDUP from 1 to 64. A duplication value of 16, 32 or 64 for an ELEM_BYTES value of 8 extends beyond the vector size of vector datapath side B116into the next vector. TABLE 16AELDUP012345678910111213141510123456789101112131415201234567012345674012345670123456780123456701234567160123456701234567320123456701234567640123456701234567 TABLE 16BELDUP16171819202122232425262728293031116171819202122232425262728293031289101112131415891011121314154012345670123456780123456701234567160123456701234567320123456701234567640123456701234567 TABLE 16CELDUP3233343536373839404142434445464713233343536373839404142434445464721617181916171819202122232021222344567891011456789101180123456701234567160123456701234567320123456701234567640123456701234567 TABLE 16DELDUP4849505152535455565758596061626314849505152535455565758596061626322425262724252627282930312829303144567891011456789101180123456701234567160123456701234567320123456701234567640123456701234567 In Tables 16A to 16D each element is eight bytes and occupies a set of eight input bytes 0/1/2/3/4/5/6/7 . . . 56/57/58/59/60/61/62/63. The row for ELDUP=000 (indicating no duplication) of Tables 16A to 16D show that the output bytes are the corresponding input bytes. This is the same as the ELEM_BYTES=1 case. The rows for ELDUP=011 (indicating a duplication factor of 8), for ELDUP=011 (indicating a duplication factor of 16), for ELDUP=101 (indicating a duplication factor of 32) and for ELDUP=110 (indicating a duplication factor of 64) of Tables 16A to 16D show that the output bytes are repeats of the first input element from input bytes 0/1/2/3/4/5/6/7. Tables 17A to 17D show connections between input bytes and output bytes for an ELEM_BYTES value of 16 and various values of ELDUP from 1 to 64. A duplication value of 8, 16, 32 or 64 for an ELEM_BYTES value of 16 extends beyond the vector size of vector datapath side B116into the next vector. 
TABLE 17AELDUP012345678910111213141510123456789101112131415201234567891011121314154012345678910111213141580123456789101112131415160123456789101112131415320123456789101112131415640123456789101112131415 TABLE 17BELDUP16171819202122232425262728293031116171819202122232425262728293031201234567891011121314154012345678910111213141580123456789101112131415160123456789101112131415320123456789101112131415640123456789101112131415 TABLE 17CELDUP323334353637383940414243444546471323334353637383940414243444546472161718192021222324252627282930314012345678910111213141580123456789101112131415160123456789101112131415320123456789101112131415640123456789101112131415 TABLE 17DELDUP484950515253545556575859606162631484950515253545556575859606162632161718192021222324252627282930314012345678910111213141580123456789101112131415160123456789101112131415320123456789101112131415640123456789101112131415 In Tables 17A to 17D each element is sixteen bytes and occupies a set of sixteen input bytes 0/1 . . . 14/15 . . . 32/33 . . . 62/63. The row for ELDUP=000 (indicating no duplication) of Tables 17A to 17D show that the output bytes are the corresponding input bytes. This is the same as the ELEM_BYTES=1 case. The rows for ELDUP=010 (indicating a duplication factor or 4), for ELDUP=011 (indicating a duplication factor of 8), for ELDUP=011 (indicating a duplication factor of 16), for ELDUP=101 (indicating a duplication factor of 32) and for ELDUP=110 (indicating a duplication factor of 64) of Tables 16A to 16D show that the output bytes are repeats of the first input element from input bytes 0/1 . . . 14/15. Tables 18A to 18D show connections between input bytes and output bytes for an ELEM_BYTES value of 32 and various values of ELDUP from 1 to 64. A duplication value of 4, 8, 16, 32 or 64 for an ELEM_BYTES value of 32 extends beyond the vector size of vector datapath side B116into the next vector. TABLE 18AELDUP012345678910111213141510123456789101112131415201234567891011121314154012345678910111213141580123456789101112131415160123456789101112131415320123456789101112131415640123456789101112131415 TABLE 18BELDUP16171819202122232425262728293031116171819202122232425262728293031216171819202122232425262728293031416171819202122232425262728293031816171819202122232425262728293031161617181920212223242526272829303132161718192021222324252627282930316416171819202122232425262728293031 TABLE 18CELDUP32333435363738394041424344454647132333435363738394041424344454647201234567891011121314154012345678910111213141580123456789101112131415160123456789101112131415320123456789101112131415640123456789101112131415 TABLE 18DELDUP48495051525354555657585960616263148495051525354555657585960616263216171819202122232425262728293031416171819202122232425262728293031816171819202122232425262728293031161617181920212223242526272829303132161718192021222324252627282930316416171819202122232425262728293031 In Tables 18A to 18D each element is thirty-two bytes and occupies a set of thirty-two input bytes 0/1 . . . 30/31 and 32/33 . . . 62/63. The row for ELDUP=000 (indicating no duplication) of Tables 18A to 18D show that the output bytes are the corresponding input bytes. This is the same as the ELEM_BYTES=1 case. 
The rows for ELDUP=001 (indicating a duplication factor of 2), for ELDUP=010 (indicating a duplication factor of 4), for ELDUP=011 (indicating a duplication factor of 8), for ELDUP=011 (indicating a duplication factor of 16), for ELDUP=101 (indicating a duplication factor of 32) and for ELDUP=110 (indicating a duplication factor of 64) of Tables 16A to 16D show that the output bytes are repeats of the first input element from input bytes 0/1 . . . 30/31. Tables 19A to 19D show connections between input bytes and output bytes for an ELEM_BYTES value of 64 and various values of ELDUP from 1 to 64. A duplication value of 2, 4, 8, 16, 32 or 64 for an ELEM_BYTES value of 64 extends beyond the vector size of vector datapath side B116into the next vector. TABLE 19AELDUP012345678910111213141510123456789101112131415201234567891011121314154012345678910111213141580123456789101112131415160123456789101112131415320123456789101112131415640123456789101112131415 TABLE 19BELDUP16171819202122232425262728293031116171819202122232425262728293031216171819202122232425262728293031416171819202122232425262728293031816171819202122232425262728293031161617181920212223242526272829303132161718192021222324252627282930316416171819202122232425262728293031 TABLE 19CELDUP32333435363738394041424344454647132333435363738394041424344454647232333435363738394041424344454647432333435363738394041424344454647832333435363738394041424344454647163233343536373839404142434445464732323334353637383940414243444546476432333435363738394041424344454647 TABLE 19DELDUP48495051525354555657585960616263148495051525354555657585960616263248495051525354555657585960616263448495051525354555657585960616263848495051525354555657585960616263164849505152535455565758596061626332484950515253545556575859606162636448495051525354555657585960616263 In Tables 19A to 19D each element is sixty-four bytes and occupies a set of sixty-four input bytes 0/1 . . . 62/63. The row for ELDUP=000 (indicating no duplication) of Tables 19A to 19D show that the output bytes are the corresponding input bytes. This is the same as the ELEM_BYTES=1 case. All other rows show that the output bytes are repeats of the first input element from input bytes 0/1 . . . 62/63. FIG.31illustrates an exemplary embodiment of element duplication block2024. Input register3100receives a vector input from decimation block2023. Input register3100includes 64 bytes arranged in 64 1-byte blocks byte0 to byte63. Note that bytes byte0 to byte63 are each equal in length to the minimum of ELEM_BYTES. A plurality of multiplexers3101to3163couple input bytes from source register3100to output register3170. Each multiplexer3101to3163supplies input to a corresponding byte1 to byte63 of output register3170. Not all input bytes byte0 to byte63 of input register3100are coupled to every multiplexer3101to3163. Note there is no multiplexer supplying byte0 of output register3170. Byte0 of output register3170is always supplied by byte0 of input register3100. Multiplexers3101to3163are controlled by multiplexer control encoder3180. Multiplexer control encoder3180receives ELEM_BYTES and ELDUP input signals and generates corresponding control signals for multiplexers3101to3163. Tables 13A-13D, 14A-14D, 15A-15D, 16A-16D, 17A-17D, 18A-18D and 19A-19D translate directly into multiplexer control of multiplexers3101to3163to achieve the desired element duplication. Inspection of tables 13A-13D, 14A-14D, 15A-15D, 16A-16D, 17A-17D, 18A-18D and 19A-19D show that not all input bytes can supply each output byte. 
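By way of illustration only, the following C sketch captures the byte selection rule tabulated in Tables 13A to 19D and implemented by multiplexer control encoder3180: each group of ELDUP consecutive output elements is fed from the same input element of input register3100. The function and its interface are illustrative assumptions, not part of the described hardware; output byte numbers of 64 and above correspond to bytes of the following destination vector(s).

```c
/*
 * Illustrative sketch (assumed interface, not the hardware of FIG.31):
 * returns the input byte of the 64-byte source register that supplies a
 * given output byte under element duplication, matching Tables 13A to 19D.
 * elem_bytes is the element size in bytes and eldup is the decoded
 * duplication factor (1, 2, 4, 8, 16, 32 or 64).
 */
static unsigned element_dup_source_byte(unsigned out_byte,
                                        unsigned elem_bytes,
                                        unsigned eldup)
{
    unsigned out_elem    = out_byte / elem_bytes;  /* element lane at the output              */
    unsigned src_elem    = out_elem / eldup;       /* each input element repeats eldup times  */
    unsigned byte_offset = out_byte % elem_bytes;  /* byte position within the element        */
    return src_elem * elem_bytes + byte_offset;    /* byte number in input register 3100      */
}
```

For example, with ELEM_BYTES of 2 and a duplication factor of 2, output bytes 0 to 7 are supplied by input bytes 0, 1, 0, 1, 2, 3, 2, 3, which matches the corresponding row of Table 14A.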
Thus tables 13A-13D, 14A-14D, 15A-15D, 16A-16D, 17A-17D, 18A-18D and 19A-19D show the needed connections between input register3100and multiplexers3101to3163inFIG.31. Further, tables 13A-13D, 14A-14D, 15A-15D, 16A-16D, 17A-17D, 18A-18D and 19A-19D show the encoding needed in multiplexer control encoder3180. As noted above, there are combinations of element size and duplication factor (ELDUP) which produce more bits than the size (preferably 512 bits) of the destination register3170. In this event, once destination register3170is full, this data is supplied from element duplication block2024to vector length masking/group duplication block2025for eventual supply to the corresponding stream head register2818/2828. The rate of data movement in formatter1903is set by the rate of consumption of data by central processing unit core110via stream read and advance instructions described below. Element duplication block2024then supplies duplicated elements to another destination vector. The nature of the relationship between the permitted element size, element duplication factor and destination vector length requires that a duplicated element that overflows the first destination register will fill an integer number of destination registers upon completion of duplication. The data of these additional destination registers eventually supplies the corresponding stream head register2818/2828. Upon completion of duplication of a first data element, the next data element is rotated down to the least significant bits of source register3100, discarding the first data element. The process then repeats for this new data element. The GRDUP bit3006determines whether group duplication is enabled. If GRDUP bit3006is 0, then group duplication is disabled. If the GRDUP bit3006is 1, then group duplication is enabled. When enabled by GRDUP bit3006, streaming engine2800duplicates a group of elements to fill the vector width. VECLEN field3004defines the length of the group to replicate. When VECLEN field3004is less than the vector length of central processing unit core110and GRDUP bit3006enables group duplication, streaming engine2800fills the extra lanes (seeFIGS.20and21) with additional copies of the stream vector. Because stream vector length and vector length of central processing unit core110are always integral powers of two, group duplication always produces an integral power of two number of duplicate copies. Note GRDUP and VECLEN do not specify the number of duplications. The number of duplications performed is based upon the ratio of VECLEN to the native vector length, which is 64 bytes/512 bits in the preferred embodiment. The GRDUP field3006specifies how streaming engine2800pads stream vectors for bits following the VECLEN length out to the vector length of central processing unit core110. When GRDUP bit3006is 0, streaming engine2800fills the extra lanes with zeros and marks these extra vector lanes invalid. When GRDUP bit3006is 1, streaming engine2800fills extra lanes with copies of the group of elements in each stream vector. Setting GRDUP bit3006to 1 has no effect when VECLEN is set to the native vector width of central processing unit core110. VECLEN must be at least as large as the product of ELEM_BYTES and the element duplication factor ELDUP. That is, VECLEN cannot split an element or the duplicated copies of an element across vectors. Group duplication operates only to the destination vector size.
Group duplication does not change the data supplied when the product of the element size ELEM_BYTES and element duplication factor ELDUP equals or exceeds the destination vector width. Under these conditions the state of the GRDUP bit3006and the VECLEN field3004have no effect on the supplied data. The set of examples below illustrates the interaction between VECLEN and GRDUP. Each of the following examples shows how the streaming engine maps a stream onto vectors across different stream vector lengths and the vector size of vector datapath side B116. The stream of this example consists of 29 elements (E0 to E28), each element 64 bits/8 bytes. This stream could be a linear stream of 29 elements or an inner loop of 29 elements. These tables illustrate 8 byte lanes such as shown inFIG.20. Each illustrated vector is stored in the corresponding stream head register2818/2828in turn. Table 20 illustrates how the example stream maps onto bits within the 64-byte CPU vectors when VECLEN is 64 bytes. TABLE 20CPUVectorsLane 7Lane 6Lane 5Lane 4Lane 3Lane 2Lane 1Lane 01E7E6E5E4E3E2E1E02E15E14E13E12E11E10E9E83E23E22E21E20E19E18E17E164000E28E27E26E25E24 As shown in Table 20 the stream extends over 4 vectors. As previously described, the lanes within vector 4 that extend beyond the stream are zero filled. When VECLEN has a size equal to the native vector length, it does not matter whether GRDUP is 0 or 1. No duplication can take place with such a VECLEN. Table 21 shows the same parameters as shown in Table 20, except with VECLEN of 32 bytes. Group duplicate is disabled (GRDUP=0). TABLE 21CPUVectorsLane 7Lane 6Lane 5Lane 4Lane 3Lane 2Lane 1Lane 010000E3E2E1E020000E7E6E5E430000E11E10E9E840000E15E14E13E1250000E19E18E17E1660000E23E22E21E2070000E27E26E25E2480000000E28 The 29 elements of the stream are distributed over lanes 0 to 3 in 8 vectors. Extra lanes 4 to 7 in vectors 1 to 7 are zero filled. In vector 8 only lane 0 has a stream element (E28), all other lanes are zero filled. Table 22 shows the same parameters as shown in Table 20, except with VECLEN of 16 bytes. Group duplicate is disabled (GRDUP=0). TABLE 22CPUVectorsLane 7Lane 6Lane 5Lane 4Lane 3Lane 2Lane 1Lane 01000000E1E02000000E3E23000000E5E44000000E7E65000000E9E86000000E11E107000000E13E128000000E15E149000000E17E1610000000E19E1811000000E21E2012000000E23E2213000000E25E2414000000E27E26150000000E28 The 29 elements of the stream are distributed over lanes 0 to 1 in 15 vectors. Extra lanes 2 to 7 in vectors 1 to 14 are zero filled. In vector 15 only lane 0 has a stream element (E28), all other lanes are zero filled. Table 23 shows the same parameters as shown in Table 20, except with VECLEN of 8 bytes. Group duplicate is disabled (GRDUP=0). TABLE 23CPUVectorsLane 7Lane 6Lane 5Lane 4Lane 3Lane 2Lane 1Lane 010000000E020000000E130000000E240000000E350000000E460000000E570000000E680000000E790000000E8100000000E9110000000E10120000000E11130000000E12140000000E13150000000E14160000000E15170000000E16180000000E17190000000E18200000000E19210000000E20220000000E21230000000E22240000000E23250000000E24260000000E25270000000E26280000000E27290000000E28 The 29 elements of the stream all appear on lane 0 in 29 vectors. Extra lanes 1 to 7 in vectors 1 to 29 are zero filled. Table 24 shows the same parameters as shown in Table 20, except with VECLEN of 32 bytes and group duplicate is enabled (GRDUP=1).
TABLE 24CPUVectorsLane 7Lane 6Lane 5Lane 4Lane 3Lane 2Lane 1Lane 01E3E2E1E0E3E2E1E02E7E6E5E4E7E6E5E43E11E10E9E8E11E10E9E84E15E14E13E12E15E14E13E125E19E18E17E16E19E18E17E166E23E22E21E20E23E22E21E207E27E26E25E24E27E26E25E248000E28000E28 The 29 elements of the stream are distributed over lanes 0 to 7 in 8 vectors. Each of vectors 1 to 7 includes four elements duplicated twice. The duplication factor (2) results because VECLEN (32 bytes) is half the native vector length of 64 bytes. In vector 8 lane 0 has a stream element (E28) and lanes 1 to 3 are zero filled. Lanes 4 to 7 of vector 8 duplicate this pattern. Table 25 shows the same parameters as shown in Table 20, except with VECLEN of 16 bytes. Group duplicate is enabled (GRDUP=1). TABLE 25CPUVectorsLane 7Lane 6Lane 5Lane 4Lane 3Lane 2Lane 1Lane 01E1E0E1E0E1E0E1E02E3E2E3E2E3E2E3E23E5E4E5E4E5E4E5E44E7E6E7E6E7E6E7E65E9E8E9E8E9E8E9E86E11E10E11E10E11E10E11E107E13E12E13E12E13E12E13E128E15E14E15E14E15E14E15E149E17E16E17E16E17E16E17E1610E19E18E19E18E19E18E19E1811E21E20E21E20E21E20E21E2012E23E22E23E22E23E22E23E2213E25E24E25E24E25E24E25E2414E27E26E27E26E27E26E27E26150E280E280E280E28 The 29 elements of the stream are distributed over lanes 0 to 7 in 15 vectors. Each of vectors 1 to 14 includes two elements duplicated four times. The duplication factor (4) results because VECLEN (16 bytes) is one quarter the native vector length of 64 bytes. In vector 15 lane 0 has a stream element (E28) and lane 1 is zero filled. This pattern is duplicated in lanes 2 and 3, lanes 4 and 5 and lanes 6 and 7 of vector 15. Table 26 shows the same parameters as shown in Table 20, except with VECLEN of 8 bytes. Group duplicate is enabled (GRDUP=1). TABLE 26CPUVectorsLane 7Lane 6Lane 5Lane 4Lane 3Lane 2Lane 1Lane 01E0E0E0E0E0E0E0E02E1E1E1E1E1E1E1E13E2E2E2E2E2E2E2E24E3E3E3E3E3E3E3E35E4E4E4E4E4E4E4E46E5E5E5E5E5E5E5E57E6E6E6E6E6E6E6E68E7E7E7E7E7E7E7E79E8E8E8E8E8E8E8E810E9E9E9E9E9E9E9E911E10E10E10E10E10E10E10E1012E11E11E11E11E11E11E11E1113E12E12E12E12E12E12E12E1214E13E13E13E13E13E13E13E1315E14E14E14E14E14E14E14E1416E15E15E15E15E15E15E15E1517E16E16E16E16E16E16E16E1618E17E17E17E17E17E17E17E1719E18E18E18E18E18E18E18E1820E19E19E19E19E19E19E19E1921E20E20E20E20E20E20E20E2022E21E21E21E21E21E21E21E2123E22E22E22E22E22E22E22E2224E23E23E23E23E23E23E23E2325E24E24E24E24E24E24E24E2426E25E25E25E25E25E25E25E2527E26E26E26E26E26E26E26E2628E27E27E27E27E27E27E27E2729E28E28E28E28E28E28E28E28 The 29 elements of the stream all appear on lanes 0 to 7 in 29 vectors. Each of vectors 1 to 29 includes one element duplicated eight times. The duplication factor (8) results because VECLEN (8 bytes) is one eighth the native vector length of 64 bytes. Thus each lane is the same in vectors 1 to 29. FIG.32illustrates an exemplary embodiment of vector length masking/group duplication block2025. Input register3200receives a vector input from element duplication block2024. Input register3200includes 64 bytes arranged in 64 1-byte blocks byte0 to byte63. Note that bytes byte0 to byte63 are each equal in length to the minimum of ELEM_BYTES. A plurality of multiplexers3201to3263couple input bytes from source register3200to output register3270. Each multiplexer3201to3263supplies input to a corresponding byte1 to byte63 of output register3270. Not all input bytes byte0 to byte63 of input register3200are coupled to every multiplexer3201to3263. Note there is no multiplexer supplying byte0 of output register3270. Byte0 of output register3270is always supplied by byte0 of input register3200.
Multiplexers3201to3263are controlled by multiplexer control encoder3280. Multiplexer control encoder3280receives ELEM_BYTES, ELDUP, VECLEN and GRDUP input signals and generates corresponding control signals for multiplexers3201to3263. ELEM_BYTES and ELDUP are supplied to multiplexer control encoder3280to check to see that VECLEN is at least as great as the product of ELEM_BYTES and ELDUP. In operation, multiplexer control encoder3280controls multiplexers3201to3263to transfer least significant bits equal in number to VECLEN from input register3200to output register3270. If GRDUP=0 indicating group duplication disabled, then multiplexer control encoder3280controls the remaining multiplexers3201to3263to transfer zeros to all bits in the remaining most significant lanes of output register3270. If GRDUP=1 indicating group duplication enabled, then multiplexer control encoder3280controls the remaining multiplexers3201to3263to duplicate the VECLEN number of least significant bits of input register3200into the most significant lanes of output register3270. This control is similar to the element duplication control described above; a sketch of this lane filling appears below. This fills the output register3270with a first vector. For the next vector, data within input register3200is rotated down by VECLEN, discarding the previous VECLEN least significant bits. The rate of data movement in formatter1903is set by the rate of consumption of data by central processing unit core110via stream read and advance instructions described below. This group duplication formatting repeats as long as the stream includes additional data elements. Element duplication (ELDUP) and group duplication (GRDUP) are independent. Note these features include independent specification and parameter setting. Thus element duplication and group duplication may be used together or separately. Because of how these are specified, element duplication permits overflow to the next vector while group duplication does not. The DECIM field3007controls data element decimation of the corresponding stream. Streaming engine2800deletes data elements from the stream upon storage in stream head registers2818/2828for presentation to the requesting functional unit. Decimation always removes whole data elements, not sub-elements. The DECIM field3007is defined as listed in Table 27. TABLE 27DECIMDecimation Factor00No Decimation012 times104 times11Reserved If DECIM field3007equals 00, then no decimation occurs. The data elements are passed to the corresponding stream head registers2818/2828without change. If DECIM field3007equals 01, then 2:1 decimation occurs. Streaming engine2800removes odd-numbered elements from the data stream upon storage in the stream head registers2818/2828. Limitations in the formatting network require 2:1 decimation to be employed with data promotion by at least 2× (PROMOTE cannot be 000), ICNT0 must be a multiple of 2 and the total vector length (VECLEN) must be large enough to hold a single promoted, duplicated element. For transposed streams (TRANSPOSE≠0), the transpose granule must be at least twice the element size in bytes before promotion. If DECIM field3007equals 10, then 4:1 decimation occurs. Streaming engine2800retains every fourth data element, removing three elements from the data stream upon storage in the stream head registers2818/2828.
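As a purely illustrative aid, the following C sketch (an assumed software model, not the multiplexer hardware of FIG.32) reproduces the lane filling described above and shown in Tables 20 to 26: the least significant VECLEN bytes pass through, and the remaining lanes are either zero filled (GRDUP=0) or filled with copies of the VECLEN-byte group (GRDUP=1).

```c
#include <stdint.h>

#define NATIVE_VECTOR_BYTES 64u   /* 512-bit native vector assumed above */

/* Software model of vector length masking/group duplication block 2025. */
static void vector_length_mask_group_dup(const uint8_t in[NATIVE_VECTOR_BYTES],
                                         uint8_t out[NATIVE_VECTOR_BYTES],
                                         unsigned veclen_bytes, /* power of two, <= 64 */
                                         int grdup)             /* GRDUP bit           */
{
    for (unsigned i = 0; i < NATIVE_VECTOR_BYTES; i++) {
        if (i < veclen_bytes)
            out[i] = in[i];                 /* least significant VECLEN bytes pass through */
        else if (grdup)
            out[i] = in[i % veclen_bytes];  /* duplicate the group into the upper lanes    */
        else
            out[i] = 0;                     /* zero fill; these lanes are marked invalid   */
    }
}
```

With veclen_bytes of 32 and grdup of 1, for example, this reproduces the duplicated halves shown in Table 24.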
Limitations in the formatting network require 4:1 decimation to be employed with data promotion by at least 4× (PROMOTE cannot be 000, 001 or 101), ICNT0 must be a multiple of 4 and the total vector length (VECLEN) must be large enough to hold a single promoted, duplicated element. For transposed streams (TRANSPOSE≠0), decimation always removes columns, and never removes rows. Thus the transpose granule must be: at least twice the element size in bytes before promotion for 2:1 decimation (GRANULE≥2×ELEM_BYTES); and at least four times the element size in bytes before promotion for 4:1 decimation (GRANULE≥4×ELEM_BYTES). The THROTTLE field3008controls how aggressively the streaming engine fetches ahead of central processing unit core110. The coding of the two bits of this field is defined as shown in Table 28. TABLE 28THROTTLEDescription00Minimum throttling, maximum fetch ahead01Less throttling, more fetch ahead10More throttling, less fetch ahead11Maximum throttling, minimum fetch ahead THROTTLE does not change the meaning of the stream, and serves only as a hint. The streaming engine may ignore this field. Programs should not rely on the specific throttle behavior for program correctness, because the architecture does not specify the precise throttle behavior. THROTTLE allows programmers to provide hints to the hardware about the program's own behavior. By default, the streaming engine attempts to get as far ahead of central processing unit core110as it can to hide as much latency as possible (equivalent THROTTLE=11), while providing full stream throughput to central processing unit core110. While several key applications need this level of throughput, it can lead to bad system level behavior for others. For example, the streaming engine discards all fetched data across context switches. Therefore, aggressive fetch-ahead can lead to wasted bandwidth in a system with large numbers of context switches. Aggressive fetch-ahead only makes sense in those systems if central processing unit core110consumes data very quickly. The DIMFMT field3009enables redefinition of the loop count fields ICNT02801, ICNT12802, ICNT22803, ICNT32804, ICNT42805and ICNT52806, the loop dimension fields DIM12855, DIM22823, DIM32824, DIM42825and DIM52826and the addressing mode fields AM03013, AM13014, AM23015, AM33016, AM43017and AM53018(part of FLAGS field2811) of the stream template register2800. This permits some loop dimension fields and loop counts to include more bits at the expense of fewer loops. Table 29 lists the size of the loop dimension fields for various values of the DIMFMT field3009. TABLE 29NumberDIMFMTof LoopsDIM5DIM4DIM3DIM2DIM10003unused32 bitsunused32 bitsunused0014unused32 bitsunused16 bits16 bits0104unused32 bits16 bits16 bitsunused0115unused32 bits32 bits32 bits16 bits100reserved101reserved110616 bits16 bits32 bits32 bits32 bits111632 bits32 bits16 bits16 bits32 bits Note that DIM0 always equals ELEM_BYTES, the data element size. Table 30 lists the size of the loop count fields for various values of the DIMFMT field3009.
TABLE 30NumberDIMFMTof LoopsICNT5ICNT4ICNT3ICNT2ICNT1ICNT00003unused32 bitsunused32 bitsunused32 bits0014unused32 bitsunused32 bits16 bits16 bits0104unused32 bits16 bits16 bitsunused32 bits0115unused32 bits16 bits16 bits16 bits16 bits100reserved101reserved110616 bits16 bits16 bits16 bits16 bits16 bits111616 bits16 bits16 bits16 bits16 bits16 bits DIMFMT field3009effectively defines the loop dimension and loop count bits of stream template register2800.FIG.28illustrates the default case when DIMFMT is 111. FIGS.33to37illustrate the definition of bits of the stream template register for other values of DIMFMT. Note the location and meaning of the FLAGS field (3311,3411,3511,3611and3711) are the same for all values of DIMFMT. FIG.33illustrates the definition of bits of the stream template register3300for a DIMFMT value of 000. For a DIMFMT value of 000, there are three loops: loop0, loop2 and loop4. For loop0 ICNT0 field3301includes bits 0 to 31 and DIM0 field equals ELEM_BYTES. For loop2 ICNT2 field3302includes bits 32 to 63 and DIM2 field3321includes bits 160 to 191. For loop4 ICNT4 field3303includes bits 64 to 95 and DIM4 field3322includes bits 192 to 223. FIG.34illustrates the definition of bits of the stream template register3400for a DIMFMT value of 001. For a DIMFMT value of 001, there are four loops: loop0, loop1, loop2 and loop4. For loop0 ICNT0 field3401includes bits 0 to 15 and DIM0 field equals ELEM_BYTES. For loop1 ICNT1 field3402includes bits 16 to 31 and DIM1 field3423includes bits 224 to 255. For loop2 ICNT2 field3403includes bits 32 to 63 and DIM2 field3421includes bits 160 to 191. For loop4 ICNT4 field3404includes bits 64 to 95 and DIM4 field3422includes bits 192 to 223. FIG.35illustrates the definition of bits of the stream template register3500for a DIMFMT value of 010. For a DIMFMT value of 010, there are four loops: loop0, loop2, loop3 and loop4. For loop0 ICNT0 field3501includes bits 0 to 31 and DIM0 field equals ELEM_BYTES. For loop2 ICNT2 field3502includes bits 32 to 47 and DIM2 field3521includes bits 160 to 191. For loop3 ICNT3 field3503includes bits 48 to 63 and DIM3 field3523includes bits 224 to 255. For loop4 ICNT4 field3504includes bits 64 to 95 and DIM4 field3522includes bits 192 to 223. FIG.36illustrates the definition of bits of the stream template register3600for a DIMFMT value of 011. For a DIMFMT value of 011, there are five loops: loop0, loop1, loop2, loop3 and loop4. For loop0 ICNT0 field3701includes bits 0 to 15 and DIM0 field equals ELEM_BYTES. For loop1 ICNT1 field3702includes bits 16 to 31 and DIM1 field3721includes bits 144 to 159. For loop2 ICNT2 field3703includes bits 32 to 47 and DIM2 field3221includes bits 160 to 191. For loop3 ICNT3 field3204includes bits 48 to 63 and DIM3 field3724includes bits 224 to 255. For loop4 ICNT4 field3705includes bits 64 to 95 and DIM4 field3723includes bits 192 to 223. FIG.37illustrates the definition of bits of the stream template register3700for a DIMFMT value of 110. For a DIMFMT value of 110, there are six loops: loop0, loop1, loop2, loop3, loop4 and loop5. For loop0 ICNT0 field3701includes bits 0 to 15 and DIM0 field equals ELEM_BYTES. For loop1 ICNT1 field3702includes bits 16 to 31 and DIM1 field3721includes bits 144 to 159. For loop2 ICNT2 field3703includes bits 32 to 47 and DIM2 field3722includes bits 160 to 191. For loop3 ICNT3 field3704includes bits 48 to 63 and DIM3 field3725includes bits 224 to 255. For loop4 ICNT4 field3705includes bits 64 to 79 and DIM4 field3723includes bits 192 to 207.
For loop5 ICNT5 field3706includes bits 80 to 95 and DIM5 field3724includes bits 208 to 223. The DIR bit3010determines the direction of fetch of the inner loop (Loop0). If the DIR bit3010is 0 then Loop0 fetches are in the forward direction toward increasing addresses. If the DIR bit3010is 1 then Loop0 fetches are in the backward direction toward decreasing addresses. The fetch direction of other loops is determined by the signs of the corresponding loop dimensions DIM1, DIM2, DIM3, DIM4 and DIM5, which are signed integers. The CBK0 field3011and the CBK1 field3012control the circular block size upon selection of circular addressing. The manner of determining the circular block size will be more fully described below. The AM0 field3013, AM1 field3014, AM2 field3015, AM3 field3016, AM4 field3017and AM5 field3018control the addressing mode of a corresponding loop. This permits the addressing mode to be independently specified for each loop. Each of AM0 field3013, AM1 field3014, AM2 field3015, AM3 field3016, AM4 field3017and AM5 field3018is three bits and is decoded as listed in Table 31. TABLE 31AMx fieldMeaning00Linear addressing01Circular addressing block size set by CBK010Circular addressing block size set byCBK0 + CBK1 + 111reserved In linear addressing the address advances according to the address arithmetic whether forward or reverse. In circular addressing the address remains within a defined address block. Upon reaching the end of the circular address block the address wraps around to the other limit of the block. Circular addressing blocks are typically limited to 2N addresses, where N is an integer. Circular address arithmetic may operate by cutting the carry chain between bits and not allowing a selected number of most significant bits to change. Thus arithmetic beyond the end of the circular block changes only the least significant bits. The block size is set as listed in Table 32. TABLE 32Encoded BlockBlockSize CBK0 orSizeCBK0 + CBK1 + 1(bytes)051211K22K34K48K516K632K764K8128K9256K10512K111M122M134M148M1516M1632M1764M18128M19256M20512M211 G222 G234 G248 G2516 G2632 G2764 G28Reserved29Reserved30Reserved31Reserved In the preferred embodiment the circular block size is set by the number encoded by CBK0 (first circular address mode 01) or the number encoded by CBK0+CBK1+1 (second circular address mode 10). For example, for the first circular address mode, the circular address block size can be from 512 bytes to 16 M bytes. For the second circular address mode, the circular address block size can be from 1 K bytes to 64 G bytes. Thus the encoded block size is 2(B+9)bytes, where B is the encoded block number which is CBK0 for the first block size (AMx of 01) and CBK0+CBK1+1 for the second block size (AMx of 10), as illustrated in the sketch below. The central processing unit core110exposes the streaming engine to programs through a small number of instructions and specialized registers. A STROPEN instruction opens a stream. The STROPEN command specifies a stream number indicating opening stream 0 or stream 1. The STROPEN specifies a stream template register which stores the stream template as described above. The arguments of the STROPEN instruction are listed in Table 33. TABLE 33ArgumentDescriptionStream Start Address RegisterScalar register storing stream start addressStream NumberStream 0 or Stream 1Stream Template RegisterVector register storing stream template data The stream start address register is preferably a scalar register in global scalar register file211. The STROPEN instruction specifies stream 0 or stream 1 by its opcode.
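As a concrete illustration of the circular addressing arithmetic described above, the following C sketch (one possible software rendering offered only as an assumption, not the described hardware) computes the block size of 2(B+9)bytes and wraps an address by holding the most significant bits constant.

```c
#include <stdint.h>

/* Block size in bytes for the selected circular addressing mode:
 * B = CBK0 for AMx=01, or CBK0 + CBK1 + 1 for AMx=10; size = 2^(B+9) bytes. */
static uint64_t circular_block_size(unsigned amx, unsigned cbk0, unsigned cbk1)
{
    unsigned b = (amx == 1) ? cbk0 : (cbk0 + cbk1 + 1);
    return 512ull << b;            /* encoded value 0 -> 512 bytes, 5 -> 16K, ... */
}

/* Advance an address within a circular block by cutting the carry chain:
 * only the least significant bits change; the upper bits are held constant. */
static uint64_t circular_advance(uint64_t addr, int64_t step, uint64_t block_size)
{
    uint64_t mask = block_size - 1;                 /* block size is a power of two */
    return (addr & ~mask) | ((addr + (uint64_t)step) & mask);
}
```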
The stream template register is preferably a vector register in global vector register file231. If the specified stream is active, the STROPEN instruction closes the prior stream and replaces the stream with the specified stream. A STRCLOSE instruction closes a stream. The STRCLOSE command specifies the stream number of the stream to be closed. A STRSAVE instruction captures sufficient state information of a specified stream to restart that stream in the future. A STRRSTR instruction restores a previously saved stream. A STRSAVE instruction does not save any of the data of the stream. A STRSAVE instruction saves only metadata. The stream re-fetches data in response to a STRRSTR instruction. The streaming engine is in one of three states: Inactive; Active; or Frozen. When inactive the streaming engine does nothing. Any attempt to fetch data from an inactive streaming engine is an error. Until the program opens a stream, the streaming engine is inactive. After the program consumes all the elements in the stream or the program closes the stream, the streaming engine also becomes inactive. Programs which use streams explicitly activate and inactivate the streaming engine. The operating environment manages streams across context-switch boundaries via the streaming engine's implicit freeze behavior, coupled with its own explicit save and restore actions. Active streaming engines have a stream associated with them. Programs can fetch new stream elements from active streaming engines. Streaming engines remain active until one of the following occurs. When the last element of the stream has been fetched, the streaming engine becomes inactive. When the program explicitly closes the stream, the streaming engine becomes inactive. When central processing unit core110responds to an interrupt or exception, the streaming engine freezes. Frozen streaming engines capture all the state necessary to resume the stream where it was when the streaming engine froze. The streaming engines freeze in response to interrupts and exceptions. This combines with special instructions to save and restore the frozen stream context, so that operating environments can cleanly switch contexts. Frozen streams reactivate when central processing unit core110returns to the interrupted context. FIG.38is a partial schematic diagram3800illustrating the stream input operand coding described above.FIG.38illustrates decoding src1 field1305of one instruction for a corresponding src1 input of functional unit3820. These same circuits are duplicated for src2/cst field1304and the src2 input of functional unit3820. In addition, these circuits are duplicated for each instruction within an execute packet that can be dispatched simultaneously. Instruction decoder113receives bits 13 to 17 comprising src1 field1305of an instruction. The opcode field (bits 4 to 12 for all instructions and additionally bits 28 to 31 for unconditional instructions) unambiguously specifies a corresponding functional unit3820. In this embodiment functional unit3820could be L2 unit241, S2 unit242, M2 unit243, N2 unit244or C unit245. The relevant part of instruction decoder113illustrated inFIG.38decodes src1 bit field1305. Sub-decoder3811determines whether src1 bit field1305is in the range from 00000 to 01111. If this is the case, sub-decoder3811supplies a corresponding register number to global vector register file231. In this example this register field is the four least significant bits of src1 bit field1305.
Global vector register file231recalls data stored in the register corresponding to this register number and supplies this data to the src1 input of functional unit3820. This decoding is generally known in the art. Sub-decoder3812determines whether src1 bit field1305is in the range from 10000 to 10111. If this is the case, sub-decoder3812supplies a corresponding register number to the corresponding local vector register file. If the instruction is directed to L2 unit241or S2 unit242, the corresponding local vector register file is local vector register field232. If the instruction is directed to M2 unit243, N2 unit244or C unit245, the corresponding local vector register file is local vector register field233. In this example this register field is the three least significant bits of src1 bit field1305. The corresponding local vector register file232/233recalls data stored in the register corresponding to this register number and supplies this data to the src1 input of functional unit3820. This decoding is generally known in the art. Sub-decoder3813determines whether src1 bit field1305is 11100. If this is the case, sub-decoder3813supplies a stream 0 read signal to streaming engine2800. Streaming engine2800then supplies stream 0 data stored in holding register2818to the src1 input of functional unit3820. Sub-decoder3814determines whether src1 bit field1305is 11101. If this is the case, sub-decoder3814supplies a stream 0 read signal to streaming engine2800. Streaming engine2800then supplies stream 0 data stored in holding register2818to the src1 input of functional unit3820. Sub-decoder3814also supplies an advance signal to stream 0. As previously described, streaming engine2800advances to store the next sequential vector of data elements of stream 0 in holding register2818. Sub-decoder3815determines whether src1 bit field1305is 11110. If this is the case, sub-decoder3815supplies a stream 1 read signal to streaming engine2800. Streaming engine2800then supplies stream 1 data stored in holding register2828to the src1 input of functional unit3820. Sub-decoder3816determines whether src1 bit field1305is 11111. If this is the case, sub-decoder3816supplies a stream 1 read signal to streaming engine2800. Streaming engine2800then supplies stream 1 data stored in holding register2828to the src1 input of functional unit3820. Sub-decoder3816also supplies an advance signal to stream 1. As previously described, streaming engine2800advances to store the next sequential vector of data elements of stream 1 in holding register2828. Similar circuits are used to select data supplied to the src2 input of functional unit3820in response to the bit coding of src2/cst field1304. The src2 input of functional unit3820may be supplied with a constant input in a manner described above. The exact number of instruction bits devoted to operand specification and the number of data registers and streams are design choices. Those skilled in the art would realize that other number selections than those described in the application are feasible. In particular, the specification of a single global vector register file and omission of local vector register files is feasible. This invention employs a bit coding of an input operand selection field to designate a stream read and another bit coding to designate a stream read and advancing the stream.
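The src1 coding just described can be summarized by the following C sketch; the enumeration and function names are illustrative assumptions that simply restate the sub-decoder ranges of FIG.38 (codings 11000 to 11011 are not described above and are treated as reserved here).

```c
enum src1_source {
    SRC_GLOBAL_VREG,            /* global vector register file 231     */
    SRC_LOCAL_VREG,             /* local vector register file 232/233  */
    SRC_STREAM0_READ,           /* stream 0 holding register 2818      */
    SRC_STREAM0_READ_ADVANCE,   /* stream 0 read, then advance         */
    SRC_STREAM1_READ,           /* stream 1 holding register 2828      */
    SRC_STREAM1_READ_ADVANCE,   /* stream 1 read, then advance         */
    SRC_RESERVED
};

/* Decode the 5-bit src1 field; *regnum is meaningful only for register sources. */
static enum src1_source decode_src1(unsigned src1_field, unsigned *regnum)
{
    if (src1_field <= 0x0F) {               /* 00000-01111: global register */
        *regnum = src1_field & 0xF;
        return SRC_GLOBAL_VREG;
    }
    if (src1_field <= 0x17) {               /* 10000-10111: local register  */
        *regnum = src1_field & 0x7;
        return SRC_LOCAL_VREG;
    }
    switch (src1_field) {
    case 0x1C: return SRC_STREAM0_READ;          /* 11100 */
    case 0x1D: return SRC_STREAM0_READ_ADVANCE;  /* 11101 */
    case 0x1E: return SRC_STREAM1_READ;          /* 11110 */
    case 0x1F: return SRC_STREAM1_READ_ADVANCE;  /* 11111 */
    default:   return SRC_RESERVED;              /* 11000-11011 */
    }
}
```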
While this specification contains many specifics, these should not be construed as limitations on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results unless such order is recited in one or more claims. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments.
136,060
11860791
DETAILED DESCRIPTION A clustered network environment100that may implement one or more aspects of the technology described and illustrated herein is shown inFIG.1. The clustered network environment100includes data storage apparatuses102(1)-102(n) that are coupled over a cluster fabric104facilitating communication between the data storage apparatuses102(1)-102(n) (and one or more modules, components, etc. therein, such as, node computing devices106(1)-106(n), for example), although any number of other elements or components can also be included in the clustered network environment100in other examples. This technology provides a number of advantages including methods, non-transitory computer readable media, and devices that more effectively and efficiently handle storage devices with managing input-output operations in a zone translation layer architecture. The disclosed technology eliminates the flash translation layer (FTL) that is present within the SSDs and replaces the functions of the FTL with the zone translation layer (ZTL) that is present within the host device, such as node computing device. By configuring the ZTL to perform the functions of the FTL and removing the FTL from the SSDs, the disclosed technology is able to provide the end user substantially more usable storage when compared to existing storage system technologies. In this example, node computing devices106(1)-106(n) can be primary or local storage controllers or secondary or remote storage controllers that provide client devices108(1)-108(n), with access to data stored within data storage devices110(1)-110(n). The data storage apparatuses102(1)-102(n) and/or node computing device106(1)-106(n) of the examples described and illustrated herein are not limited to any particular geographic areas and can be clustered locally and/or remotely. Thus, in one example the data storage apparatuses102(1)-102(n) and/or node computing device106(1)-106(n) can be distributed over a plurality of storage systems located in a plurality of geographic locations. In another example, a clustered network can include data storage apparatuses102(1)-102(n) and/or node computing device106(1)-106(n) residing in a same geographic location (e.g., in a single onsite rack). In the illustrated example, one or more of the client devices108(1)-108(n), which may be, for example, personal computers (PCs), computing devices or storage (e.g., storage servers), and other computers or peripheral devices, are coupled to the respective data storage apparatuses102(1)-102(n) by storage network connections112(1)-112(n). Network connections112(1)-112(n) may include a local area network (LAN) or wide area network (WAN), for example, that utilizes Network Attached Storage (NAS) protocols, such as a Common Internet File System (CIFS) protocol or a Network File System (NFS) protocol to exchange data packets, a Storage Area Network (SAN) protocol, such as Small Computer System Interface (SCSI) or Fiber Channel Protocol (FCP), an object protocol, such as S3, etc. Illustratively, the client devices108(1)-108(n) may be general-purpose computers running applications, and may interact with the data storage apparatuses102(1)-102(n) using a client/server model for exchange of information. 
That is, the client devices108(1)-108(n) may request data from the data storage apparatuses102(1)-102(n) (e.g., data on one of the data storage devices110(1)-110(n) managed by a network storage control configured to process I/O commands issued by the client devices108(1)-108(n)), and the data storage apparatuses102(1)-102(n) may return results of the request to the client devices108(1)-108(n) via the storage network connections112(1)-112(n). The node computing devices106(1)-106(n) of the data storage apparatuses102(1)-102(n) can include network or host nodes that are interconnected as a cluster to provide data storage and management services, such as to an enterprise having remote locations, cloud storage (e.g., a storage endpoint may be stored within a data cloud), etc., for example. Such a node computing device106(1)-106(n) can be a device attached to the fabric104as a connection point, redistribution point, or communication endpoint, for example. One or more of the node computing devices106(1)-106(n) may be capable of sending, receiving, and/or forwarding information over a network communications channel, and could comprise any type of device that meets any or all of these criteria. In an example, the node computing device106(1) may be located on a first storage site and the node computing device106(n) may be located at a second storage site. The node computing devices106(1) and106(n) may be configured according to a disaster recovery configuration whereby a surviving node provides switchover access to the storage devices110(1)-110(n) in the event a disaster occurs at a disaster storage site (e.g., the node computing device106(1) provides client device108(n) with switchover data access to storage devices110(n) in the event a disaster occurs at the second storage site). In other examples, the node computing device106(n) can be configured according to an archival configuration and/or the node computing devices106(1)-106(n) can be configured based on another type of replication arrangement (e.g., to facilitate load sharing). Additionally, while two node computing devices106are illustrated inFIG.1, any number of node computing devices or data storage apparatuses can be included in other examples in other types of configurations or arrangements. As illustrated in the clustered network environment100, node computing devices106(1)-106(n) can include various functional components that coordinate to provide a distributed storage architecture. For example, the node computing devices106(1)-106(n) can include network modules114(1)-114(n) and disk modules116(1)-116(n). Network modules114(1)-114(n) can be configured to allow the node computing devices106(1)-106(n) (e.g., network storage controllers) to connect with client devices108(1)-108(n) over the storage network connections112(1)-112(n), for example, allowing the client devices108(1)-108(n) to send input-output operations to the node computing devices106(1)-106(n). Further, the network modules114(1)-114(n) can provide connections with one or more other components through the cluster fabric104. For example, the network module114(1) of node computing device106(1) can access the data storage device110(n) by sending a request via the cluster fabric104through the disk module116(n) of node computing device106(n) when the node computing device106(n) is available. Alternatively, when the node computing device106(n) fails, the network module114(1) of node computing device106(1) can access the data storage device110(n) directly via the cluster fabric104. 
The cluster fabric104can include one or more local and/or wide area computing networks embodied as Infiniband, Fibre Channel (FC), or Ethernet networks, for example, although other types of networks supporting other protocols can also be used. Disk modules116(1)-116(n) can be configured to connect data storage devices110(1)-110(n), such as disks or arrays of disks, SSDs, flash memory, or some other form of data storage, to the node computing devices106(1)-106(n). Often, disk modules116(1)-116(n) communicate with the data storage devices110(1)-110(n) according to the SAN protocol, such as SCSI, FCP, SAS, NVMe, NVMe-oF for example, although other protocols can also be used. Thus, as seen from an operating system on either of node computing devices106(1)-106(n), the data storage devices110(1)-110(n) can appear as locally attached. In this manner, different node computing devices106(1)-106(n), etc. may access data blocks through the operating system, rather than expressly requesting abstract files. While the clustered network environment100illustrates an equal number of network modules114(1)-114(n) and disk modules116(1)-116(n), other examples may include a differing number of these modules. For example, there may be a plurality of network and disk modules interconnected in a cluster that do not have a one-to-one correspondence between the network and disk modules. That is, different node computing devices can have a different number of network and disk modules, and the same node computing device can have a different number of network modules than disk modules. Further, one or more of the client devices108(1)-108(n) can be networked with the node computing devices106(1)-106(n) in the cluster, over the storage connections112(1)-112(n). As an example, respective client devices108(1)-108(n) that are networked to a cluster may request services (e.g., exchanging of information in the form of data packets) of node computing devices106(1)-106(n) in the cluster, and the node computing devices106(1)-106(n) can return results of the requested services to the client devices108(1)-108(n). In one example, the client devices108(1)-108(n) can exchange information with the network modules114(1)-114(n) residing in the node computing devices106(1)-106(n) (e.g., network hosts) in the data storage apparatuses102(1)-102(n). In one example, the storage apparatuses102(1)-102(n) host aggregates corresponding to physical local and remote data storage devices, such as local flash or disk storage in the data storage devices110(1)-110(n), for example. One or more of the data storage devices110(1)-110(n) can include mass storage devices, such as disks of a disk array. The disks may comprise any type of mass storage devices, including but not limited to magnetic disk drives, flash memory, SSDs, storage class memories and any other similar media adapted to store information, including, for example, data (D) and/or parity (P) information. The aggregates include volumes118(1)-118(n) in this example, although any number of volumes can be included in the aggregates. The volumes118(1)-118(n) are virtual data stores that define an arrangement of storage and one or more file systems within the clustered network environment100. Volumes118(1)-118(n) can span a portion of a disk or other storage device, a collection of disks, or portions of disks, for example, and typically define an overall logical arrangement of file storage. 
In one example volumes118(1)-118(n) can include stored data as one or more files or objects that reside in a hierarchical directory structure within the volumes118(1)-118(n). Volumes118(1)-118(n) are typically configured in formats that may be associated with particular storage systems, and respective volume formats typically comprise features that provide functionality to the volumes118(1)-118(n), such as providing an ability for volumes118(1)-118(n) to form clusters. In one example, to facilitate access to data stored on the disks or other structures of the data storage device110(1)-110(n), a file system (e.g., write anywhere file system (WAFL)) may be implemented that logically organizes the information as a hierarchical structure of directories and files. In this example, respective files may be implemented as a set of disk blocks configured to store information, whereas directories may be implemented as specially formatted files in which information about other files and directories are stored. Data can be stored as files or objects within a physical volume and/or a virtual volume, which can be associated with respective volume identifiers, such as file system identifiers (FSIDs). The physical volumes correspond to at least a portion of physical storage devices, such as the data storage device110(1)-110(n) (e.g., a Redundant Array of Independent (or Inexpensive) Disks (RAID system)) whose address, addressable space, location, etc. does not change. Typically the location of the physical volumes does not change in that the (range of) address(es) used to access it generally remains constant. Virtual volumes, in contrast, are stored over an aggregate of disparate portions of different physical storage devices. Virtual volumes may be a collection of different available portions of different physical storage device locations, such as some available space from disks, for example. It will be appreciated that since the virtual volumes are not “tied” to any one particular storage device, virtual volumes can be said to include a layer of abstraction or virtualization, which allows them to be resized and/or flexible in some regards. Further, virtual volumes can include one or more logical unit numbers (LUNs), directories, Qtrees, and/or files. Among other things, these features, but more particularly the LUNS, allow the disparate memory locations within which data is stored to be identified, for example, and grouped as a data storage unit. As such, the LUNs may be characterized as constituting a virtual disk or drive upon which data within the virtual volumes is stored within an aggregate. For example, LUNs are often referred to as virtual disks, such that they emulate a hard drive, while they actually comprise data blocks stored in various parts of a volume. In one example, the data storage devices110(1)-110(n) can have one or more physical ports, wherein each physical port can be assigned a target address (e.g., SCSI target address). To represent respective volumes, a target address on the data storage devices110(1)-110(n) can be used to identify one or more of the LUNs. Thus, for example, when one of the node computing devices106(1)-106(n) connects to a volume, a connection between the one of the node computing devices106(1)-106(n) and one or more of the LUNs underlying the volume is created. In one example, respective target addresses can identify multiple of the LUNs, such that a target address can represent multiple volumes. 
The I/O interface, which can be implemented as circuitry and/or software in a storage adapter or as executable code residing in memory and executed by a processor, for example, can connect to volumes by using one or more addresses that identify the one or more of the LUNs. Referring toFIG.2A, node computing device106(1) in this particular example includes processor(s)200, a memory202, a network adapter204, a cluster access adapter206, and a storage adapter208interconnected by a system bus210. The node computing device106also includes a storage operating system212installed in the memory206that can, for example, implement a Redundant Array of Independent (or Inexpensive) Disks (RAID) data loss protection and recovery scheme to optimize a reconstruction process of data of a failed disk or drive in an array. In some examples, the node computing device106(n) is substantially the same in structure and/or operation as node computing device106(1), although the node computing device106(n) can include a different structure and/or operation in one or more aspects than the node computing device106(1) in other examples. The storage operating system212can also manage communications for the node computing device106(1) among other devices that may be in a clustered network, such as attached to a cluster fabric104. Thus, the node computing device106(1) can respond to client device requests to manage data on one of the data storage devices110(1)-110(n) (e.g., or additional clustered devices) in accordance with the client device requests. The storage operating system212can also establish one or more file systems including software code and data structures that implement a persistent hierarchical namespace of files and directories, for example. As an example, when a new data storage device (not shown) is added to a clustered network system, the storage operating system212is informed where, in an existing directory tree, new files associated with the new data storage device are to be stored. This is often referred to as “mounting” a file system. In the example node computing device106(1), memory202can include storage locations that are addressable by the processor(s)200and adapters204,206, and208for storing related software application code and data structures. The processor(s)200and adapters204,206, and208may, for example, include processing elements and/or logic circuitry configured to execute the software code and manipulate the data structures. The storage operating system212, portions of which are typically resident in the memory202and executed by the processor(s)200, invokes storage operations in support of a file service implemented by the node computing device106(1). Other processing and memory mechanisms, including various computer readable media, may be used for storing and/or executing application instructions pertaining to the techniques described and illustrated herein. For example, the storage operating system212can also utilize one or more control files (not shown) to aid in the provisioning of virtual machines. Additionally, the memory202of the node computing device106(1) includes a zone translation layer216that assists performing input-output operations on the solid state drives (SSDs) portion of the data storage devices110(1)-110(n) (FIG.1), although the input-output operations can be performed on other types of storage devices. 
In one example, the zone translation layer216manages mapping and reading/writing of logical blocks to zones within the SSDs, although the zone translation layer216can perform other types or amounts of other operations. FIG.2Bis a block diagram of an exemplary data storage device110that is an SSD according to embodiments of the present disclosure. As illustrated inFIG.2B, in this example, the SSDs in the data storage devices110(1)-110(n) are arranged in a zoned namespace (ZNS) configuration (where the logical address space of the namespace is divided into zones), although the SSDs can be arranged in other configurations. Further, as illustrated inFIG.2B, the ZNS SSDs include dual namespaces, i.e., a conventional namespace and a zoned namespace. Here, a namespace relates to a logical grouping of SSDs and a zoned namespace relates to dividing the logical address space of a namespace into zones. Accordingly, the conventional namespace within the ZNS SSD includes a data structure, such as a mapping table by way of example, to correlate a logical block to a physical block, although the mapping table can include other types or amounts of information. In this example, the zoned namespace within the ZNS SSDs includes a data structure, such as a mapping table by way of example, to correlate a logical zone to a physical zone, although the mapping table can include other types or amounts of information. Accordingly, the examples may be embodied as one or more non-transitory computer readable media having machine or processor-executable instructions stored thereon for one or more aspects of the present technology, as described and illustrated by way of the examples herein, which when executed by the processor(s)200(FIG.2A), cause the processor(s)200to carry out the steps necessary to implement the methods of this technology, as described and illustrated with the examples herein. In some examples, the executable instructions are configured to perform one or more steps of a method, such as one or more of the exemplary methods described and illustrated later with reference toFIGS.3-8, for example. Referring again toFIG.2A, the network adapter204in this example includes the mechanical, electrical and signaling circuitry needed to connect the node computing device106(1) to one or more of the client devices108(1)-108(n) over storage network connections112(1)-112(n), which may comprise, among other things, a point-to-point connection or a shared medium, such as a local area network. In some examples, the network adapter204further communicates (e.g., using TCP/IP) via the fabric104and/or another network (e.g. a WAN) (not shown) with cloud storage devices to process storage operations associated with data stored thereon. The storage adapter208cooperates with the storage operating system212executing on the node computing device106(1) to access information requested by one of the client devices108(1)-108(n) (e.g., to access data on a data storage device110(1)-110(n) managed by a network storage controller). The information may be stored on any type of attached array of writeable media such as magnetic disk drives, SSDs, and/or any other similar media adapted to store information. In the exemplary data storage devices110(1)-110(n), information can be stored in data blocks on disks.
The storage adapter208can include input/output (I/O) interface circuitry that couples to the disks over an I/O interconnect arrangement, such as a storage area network (SAN) protocol (e.g., Small Computer System Interface (SCSI), iSCSI, hyperSCSI, Fiber Channel Protocol (FCP)), or NVMe/NVMeoF. The information is retrieved by the storage adapter208and, if necessary, processed by the processor(s)200(or the storage adapter208itself) prior to being forwarded over the system bus210to the network adapter204(and/or the cluster access adapter206if sending to another node computing device in the cluster) where the information is formatted into a data packet and returned to a requesting one of the client devices108(1)-108(n), or alternatively sent to another node computing device attached via the cluster fabric104. In some examples, a storage driver214in the memory202interfaces with the storage adapter to facilitate interactions with the data storage devices110(1)-110(n), as described and illustrated in more detail later with reference toFIGS.3-8. An exemplary method for managing input-output operations in a zone translation layer architecture for storage devices will now be illustrated and described with reference toFIGS.3-8. Referring more specifically toFIGS.3-4, the exemplary method begins at step305where in this illustrative example the node computing device106(1) receives a write request from a client device108(1), although the node computing device106(1) can receive other types or numbers of requests. While this example illustrates the node computing device106(1) performing the steps illustrated inFIGS.3-8, it is to be understood that other node computing devices in the plurality of node computing devices106(1)-106(n) can perform the steps illustrated inFIGS.3-8. Additionally in this example, the write request also includes a logical zone to which the data is to be written, although the received input-output operation can include other types or amounts of information. In this example, a zone relates to a portion of the zoned namespace SSD with contiguous logical block addresses and specific write access rules. For example, the write request can include the logical zone (e.g., 10) to which the data has to be written. Next in step310, the zone translation layer216within the node computing device106(1) maps the logical zone present in the received write request to the corresponding physical zone by referring to the mapping table. In this example and as illustrated inFIG.4, the zone translation layer216within the node computing device106(1) includes a random mapping data structure that assists with random writes that are received from the client device108(1) (in this example) and a sequential mapping data structure that assists with sequential write operations. Accordingly, when the received write request is a sequential write request, the zone translation layer216within the node computing device106(1) uses the sequential mapping data structure (e.g., a sequential mapping table) to map the logical zone to a physical sequential write zone. Alternatively, when the received write request is a random write request, the zone translation layer216within the node computing device106(1) uses the random mapping data structure (e.g., a random mapping table) to map the logical block to the physical block. In this example, a block is the smallest unit of data stored within a zone in an SSD. 
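A minimal sketch of the step 310 mapping, assuming the two ZTL mapping structures are plain dictionaries. The table contents use the FIG. 5 example values where available and are otherwise illustrative assumptions.

```python
# Hedged sketch of step 310: choose the ZTL mapping structure based on the
# request type and resolve the logical target. The dictionaries stand in for
# the sequential and random mapping tables of FIG. 5; their contents are
# example values, not the actual tables.

ztl_sequential_map = {10: 0}    # logical zone -> physical sequential write zone
ztl_random_map = {7: 700}       # logical block -> physical block

def resolve_write_target(request):
    if request["kind"] == "sequential":
        return ztl_sequential_map[request["logical_zone"]]
    return ztl_random_map[request["logical_block"]]

if __name__ == "__main__":
    print(resolve_write_target({"kind": "sequential", "logical_zone": 10}))  # -> 0
    print(resolve_write_target({"kind": "random", "logical_block": 7}))      # -> 700
```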
By way of example,FIG.5illustrates the random mapping table and the sequential mapping table in the zone translation layer (ZTL)216that includes the logical zone and the corresponding physical zone, although the ZTL mapping table can include other types or amounts of information. For example, the logical zone 10 received in the write request (e.g., as a sequential write request) corresponds to physical sequential write zone 0 in the sequential mapping table. Furthermore, in this example, the zone translation layer216within the node computing device106(1) determines whether the received write request is a sequential write request or a random write request based on the logical zone or block to which the data has to be written. By way of example and for purpose of further illustration, if the node computing device106(1) receives a write operation to zones or blocks that are sequentially next to each other within the SSD, then the received write request is determined to be a sequential write request. Alternatively, if the node computing device receives a write operation to zones or blocks that are not sequential, or randomly distributed across the SSD, then the received write request is classified as a random write request. In step315, the zone translation layer216within the node computing device106(1) identifies the physical zone corresponding to the logical zone based on the mapping table present in the zoned namespace of the SSDs in the data storage devices110(1)-110(n). Although in other examples, different parameters or techniques can be used to identify the physical zone corresponding to the logical zone. In this illustrative example, the zoned namespace of the SSDs includes a current mapping table that includes a map between the logical zone and the physical zone and the logical block to the physical block. In this example, the zone translation layer216within the node computing device106(1) communicates with the zoned namespace SSDs using the non-volatile memory express (NVMe) protocol to identify the physical zone, although other type protocols could be used for the communication. Further, the zoned namespace SSDs are dual namespace devices and therefore the zoned namespace SSDs include both the conventional namespace and the zoned namespace. In this example, the zoned namespace includes a sequential mapping table that assists with correlating a logical zone to a physical zone; and the conventional namespace includes the random mapping table that assists with correlating a logical block to a physical block. By way of example,FIG.6illustrates the mapping table that correlates the logical zone to the physical zone present within the zoned namespace of the ZNS SSDs. For purpose of further illustration, logical zone 0 correlates to physical zone 10, for example. In step320, the zone translation layer216within the node computing device106(1) generates a zone write request to write the data into the identified physical zones. In step325, the zone translation layer216within the node computing device106(1) sequentially writes the data into the physical zone identified in step315, although the zone translation layer216can write at other memory locations. By way of example, here the zone translation layer216writes the data to physical zone 10 illustrated inFIG.7. In step330, the zone translation layer216within the node computing device106(1) performs a close operation on the identified physical zone to which the data is written and the exemplary method ends at step335. 
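Putting steps 305-335 together, the sketch below classifies the request by contiguity of its targets, resolves the zone first through the ZTL table and then through the zoned-namespace table (using the FIG. 5 and FIG. 6 example values), performs the write, and closes the zone. The function names and the in-memory stand-in for the SSD are assumptions for illustration, not the patented implementation.

```python
# Illustrative write path (steps 305-335), under the assumption that both
# mapping levels are simple dictionaries and the SSD is modeled as a dict.

ztl_sequential_map = {10: 0}   # FIG. 5 example: logical zone 10 -> zone 0
zns_zone_map = {0: 10}         # FIG. 6 example: zone 0 -> physical zone 10
media = {}                     # stands in for the physical zones on the SSD

def is_sequential(targets):
    # Contiguous targets -> sequential write; scattered targets -> random write.
    return all(b == a + 1 for a, b in zip(targets, targets[1:]))

def write_sequential(logical_zone, data):
    ztl_zone = ztl_sequential_map[logical_zone]   # step 310
    physical_zone = zns_zone_map[ztl_zone]        # step 315
    media[physical_zone] = data                   # steps 320-325 (zone write request)
    # step 330: the zone would be closed here; modeled as a no-op in this sketch
    return physical_zone

if __name__ == "__main__":
    assert is_sequential([10, 11, 12]) and not is_sequential([3, 17, 9])
    print("wrote to physical zone", write_sequential(10, b"payload"))  # -> 10 (FIG. 7)
```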
While the technology illustrated above describes a sequential write operation, it is to be understood that a random write operation may also be performed using the technique illustrated above. Next, an exemplary method for managing a read operation in a zone translation layer architecture for storage devices will now be illustrated with reference toFIGS.4-8. Referring more specifically toFIGS.4and8, the exemplary method begins at step805where in this illustrative example the node computing device106(1) receives a read request from a client device108(1), although the node computing device106(1) can receive other types or numbers of requests. Additionally in this example, the received read request also includes a logical zone from which the data is required to be read, although the read request can include other types or amounts of information. For example, the read request can include logical zone 10 from which the data is required to be read. Next in step810, the zone translation layer216within the node computing device106(1) maps the logical zone present in the received read request to the corresponding physical zone by referring to the mapping table. In this example and as illustrated inFIG.4, the zone translation layer216within the node computing device106(1) includes a random mapping data structure that assists with random read requests that are received from the client device108(1) and a sequential mapping data structure that assists with sequential read requests. Accordingly, when the received read request is a sequential read request, the zone translation layer216within the node computing device106(1) uses the sequential mapping data structure (e.g., a mapping table) to map the logical zone to the physical zone. Alternatively, when the received read request is a random read request, the zone translation layer216within the node computing device106(1) uses the random mapping data structure (e.g., a random mapping table) to map the logical block to the physical block. By way of example,FIG.5illustrates the random mapping table and the sequential mapping table in the zone translation layer (ZTL)216that include the logical zone to the corresponding physical zone (sequential mapping table) and the logical block to the corresponding physical block (random mapping table), although the ZTL mapping tables can include other types or amounts of information. For example, the logical zone 10 received in the read request corresponds to physical zone 0 in the sequential mapping table. Furthermore, in this example, the zone translation layer216within the node computing device106(1) determines whether the received read request is a sequential read request or a random read request based on the logical zone from which the data is to be read, although other techniques or parameters can be used to make the determination. In step815, the zone translation layer216within the node computing device106(1) identifies the physical zone corresponding to the logical zone based on the mapping table present in the zoned namespace of the SSDs in the data storage devices110(1)-110(n), although in other examples, different parameters or techniques can be used to identify the physical zone corresponding to the logical zone. In this example, the zone translation layer216within the node computing device106(1) communicates with the zoned namespace SSDs using the non-volatile memory express (NVMe) protocol to identify the physical zone, although other types of protocols could be used for the communication. 
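The read path mirrors the write-path lookup. A compact sketch of steps 810-825 follows, again using the FIG. 5 and FIG. 6 example values and an in-memory stand-in for the SSD; all names are illustrative assumptions.

```python
# Hedged sketch of the read path: resolve the logical zone through the ZTL
# sequential mapping table and the zoned-namespace mapping table, then read
# from the resolved physical zone. Table contents are illustrative.

ztl_sequential_map = {10: 0}            # FIG. 5 example
zns_zone_map = {0: 10}                  # FIG. 6 example
media = {10: b"previously written"}     # physical zone -> stored data

def read_sequential(logical_zone):
    physical_zone = zns_zone_map[ztl_sequential_map[logical_zone]]  # steps 810-820
    return media[physical_zone]                                     # step 825

if __name__ == "__main__":
    print(read_sequential(10))
```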
Further, the zoned namespace SSDs are dual namespace devices and therefore the zoned namespace SSDs include both the conventional namespace and the zoned namespace. In this example, the zoned namespace includes a sequential mapping table that assists with correlating a logical zone to a physical zone; and the conventional namespace includes the random mapping table that assists with correlating a logical block to a physical block. By way of example,FIG.6illustrates the mapping table that correlates the logical zone to the physical zone present within the zoned namespace of the ZNS SSDs. For purpose of further illustration, logical zone 0 correlates to physical zone 10, for example. In step820, the zone translation layer216within the node computing device106(1) generates a zone read request to read the data from the identified physical zones. In step825, the zone translation layer216within the node computing device106(1) sequentially reads the data from the physical zone identified in step815, although the zone translation layer216can read at other memory locations. By way of example, here the zone translation layer216reads the data from physical zone 10 illustrated inFIG.7. In step830, the zone translation layer216within the node computing device106(1) provides the requesting client device108(1) with the data read from the physical zone and the exemplary method ends at step835. While the above illustrated example describes a sequential read request, it is to be understood that a random read request may be handled using the techniques illustrated above. As illustrated and described by way of examples herein, this technology provides a number of advantages including methods, non-transitory computer readable media, and devices that more effectively and efficiently manage input-output operations for storage devices in a zone translation layer architecture. The disclosed technology eliminates the flash translation layer (FTL) that is present within the SSDs and replaces the functions of the FTL with the zone translation layer (ZTL) that is present within the host device, such as a node computing device. By configuring the ZTL to perform the functions of the FTL and removing the FTL from the SSDs, the disclosed technology is able to provide the end user substantially more usable storage when compared to the existing storage system technologies. Having thus described the basic concept of the technology, it will be rather apparent to those skilled in the art that the foregoing detailed disclosure is intended to be presented by way of example only, and is not limiting. Various alterations, improvements, and modifications will occur to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested hereby, and are within the spirit and scope of the technology. Additionally, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefor, is not intended to limit the claimed processes to any order except as may be specified in the claims. Accordingly, the technology is limited only by the following claims and equivalents thereto.
DETAILED DESCRIPTION Described herein are systems and methods for improving memory access handling for peripheral component interconnect (PCI) devices. A PCI device is an external computer hardware device that connects to a computer system. In some instances, the PCI device may be coupled to a physical bus of the host machine. In some instances, the hypervisor may abstract the PCI device by assigning particular port ranges of the PCI device to the virtual machine and presenting the assigned port ranges to the virtual machine as the virtual device. The PCI device may be capable of direct memory access (DMA). DMA allows the PCI device to access the system memory for reading and/or writing independently of the central processing unit (CPU). PCI devices that are capable of performing DMA include disk drive controllers, graphics cards, network interface cards (NICs), sound cards, or any other input/output (I/O) device. While the hardware device is performing the DMA, the CPU can engage in other operations. A PCI device having DMA-capable devices often uses an input/output memory management unit (IOMMU) to manage address translations. An IOMMU is a hardware memory management unit (MMU) that resides on the input/output (I/O) path connecting the device to the memory. The IOMMU may map the device address space (e.g., a bus address) that is relevant to the I/O bus into the physical address space (e.g., host physical address) that is relevant to the memory bus. The IOMMU may include an IOMMU page table, which includes a set of page table entries, such that each page table entry translates a guest physical address of a guest memory pages to a host physical address of the host memory. The IOMMU may also include extra information associated with the address space mapping such as read and write permissions. During the runtime of a virtual machine, the hypervisor can intercept I/O communications (e.g., memory access requests, interrupt requests, etc.) from the PCI device and handle each I/O communication by forwarding the communication to an appropriate physical device at the host machine (e.g., the underlying I/O device, a memory device at the host machine, etc.). In one example, the hypervisor may intercept a guest memory access request from the PCI device, forward it to a host system IOMMU for translation (e.g., guest memory address(es) to host memory address(es)), and provide host memory access using the translated memory access request. In another example, the hypervisor may intercept an interrupt request from the PCI device, translate the interrupt request via the host system IOMMU, and provide the event associated with the interrupt request to the virtual machine. In current systems, PCI devices in communication with the system IOMMU have drawbacks, such as a bottleneck caused by the system looking up address translations, for memory access requests, in the IOMMU translation table. As a solution, some PCI devices use an on-device IOMMU. The on-device IOMMU may poll (using a poll operation) the host system to check the status of the virtual machine (whether new data is available), translate memory access to guest memory, and initiate memory accesses directly to the host memory. The PCI device may then retrieves memory page(s) from the host memory and stores the memory page(s) in the on-device IOMMU's cache. During these operations performed by the PCI device, the system IOMMU may be set to pass-through mode. 
In pass-through mode, the PCI device may access the host memory without the hypervisor trapping the PCI device's communication (e.g., memory access request) for translation by the system IOMMU. However, while in pass-through mode, an interrupt request from the PCI device will result in the virtual machine temporarily exiting to the hypervisor (e.g., by a VMExit event) and being subsequently restarted (e.g., by a VMEnter instruction). This process causes the VM to pause its processing threads, uses additional processing resources (e.g., central processing unit (CPU) resources), and hinders performance of the virtual machine. On the other hand, interrupt requests from the PCI device have low bandwidth (as compared to memory access requests) and may be translated without causing a performance bottleneck. In current systems, the IOMMU can either translate memory access requests and interrupt requests from a PCI device or be set to pass-through mode. However, as discussed above, memory access requests from a PCI device are more efficiently processed by the on-device IOMMU, while interrupt requests may be processed by the host system IOMMU without causing a performance bottleneck. Thus, systems and methods capable of selectively enabling and disabling pass-through mode by the host system IOMMU are desirable. Aspects of the present disclosure address the above-noted and other deficiencies by providing technology that improves memory request handling for PCI devices. In particular, aspects of the present disclosure enable a host system to cause the host system IOMMU to enter pass-through mode in response to receiving translated memory access requests from the PCI device. Aspects of the present disclosure further enable the host system to cause the host system IOMMU to process interrupt requests from the PCI device. In an embodiment, a hypervisor may first enable the host system IOMMU to receive translated memory access requests. A translated memory access request may allow the PCI device to directly access the host memory. The hypervisor may then enable the PCI device to access virtual device memory using host physical addresses. The PCI device may, thus, request address translations (guest physical memory to host physical memory) from the host system IOMMU. The PCI device may then receive the address translations from the host system IOMMU and store the address translations in the on-device IOMMU page table. The on-device IOMMU page table may include a set of page table entries where each page table entry translates a guest physical address of guest memory page(s) to a host physical address of the host memory. The PCI device may use the on-device IOMMU to translate the guest memory address of memory access requests initiated by the PCI device. For example, in response to the PCI device issuing a memory access request for guest memory, the PCI device may translate the memory access request using the address translations stored in the on-device IOMMU page table. The PCI device may further set an address translation flag for each memory access request that is translated by the on-device IOMMU. The address translation flag may specify a host address space associated with the memory address of the memory access request. The host system IOMMU may pass-through translated memory access requests with an enabled address translation flag (e.g., a translated-request bit set to a value of 1) to the host memory. 
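A minimal sketch of that selective pass-through decision at the host system IOMMU, assuming the request is represented as a small dictionary and the system IOMMU page table as a plain mapping; both are illustrative assumptions, not the data structures of any particular IOMMU.

```python
# Hedged sketch of the selective pass-through decision at the host system IOMMU:
# requests carrying an enabled translated-request bit bypass translation, while
# untranslated requests (bit absent or set to 0) are translated first.

SYSTEM_IOMMU_TABLE = {0x1000: 0x8000_1000}  # guest physical -> host physical (example)

def handle_request(request):
    if request.get("translated_bit") == 1:
        # Pass-through: the address is already a host physical address.
        return request["address"]
    # Trap and translate via the system IOMMU page table.
    return SYSTEM_IOMMU_TABLE[request["address"]]

if __name__ == "__main__":
    print(hex(handle_request({"address": 0x8000_1000, "translated_bit": 1})))  # pass-through
    print(hex(handle_request({"address": 0x1000})))                            # translated
```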
Thus, in response to receiving a memory access request from the PCI device, the system IOMMU may determine that the memory access request includes an enabled address translation flag, and allow direct access to the host memory without first translating the memory access request. As such, the memory access request bypasses translation by the system IOMMU, thus preventing a performance bottleneck. The PCI device may communicate interrupt requests to the host system IOMMU. Interrupt requests may not include an address translation flag, or may include a disabled address translation flag (set to a value of 0). As such, the host system IOMMU may translate the interrupt requests and send the translated interrupt request to the virtual central processing unit (vCPU) of the virtual machine. Accordingly, the interrupt request is not handled by the hypervisor, and, thus, the virtual machine does not need to temporarily exit to the hypervisor (e.g., by a VMExit event) and be subsequently restarted (e.g., by a VMEnter instruction). This may prevent performance issues associated with VM exits. Various aspects of the above referenced methods and systems are described in details herein below by way of examples, rather than by way of limitation. The examples provided below discuss a virtualized computer system where hardware and software configuration and memory movement may be initiated by aspects of a hypervisor, a host operating system, a virtual machine, a PCI device, or a combination thereof. In other examples, the ware and software configuration and memory movement may be performed in a non-virtualized computer system that is absent a hypervisor or other virtualization features discussed below. FIG.1depicts an illustrative architecture of elements of a computer system100, in accordance with an embodiment of the present disclosure. It should be noted that other architectures for computer system100are possible, and that the implementation of a computing device utilizing embodiments of the disclosure are not necessarily limited to the specific architecture depicted. Computer system100may be a single host machine or multiple host machines arranged in a cluster and may include a rackmount server, a workstation, a desktop computer, a notebook computer, a tablet computer, a mobile phone, a palm-sized computing device, a personal digital assistant (PDA), etc. In one example, computer system100may be a computing device implemented with x86 hardware. In another example, computer system100may be a computing device implemented with PowerPC®, SPARC®, or other hardware. In the example shown inFIG.1, computer system100may include virtual machine110, hypervisor120, hardware devices130, a network140, and a Peripheral Component Interconnect (PCI) device150. Virtual machine110may execute guest executable code that uses an underlying emulation of the physical resources. The guest executable code may include a guest operating system, guest applications, guest device drivers, etc. Virtual machines110may support hardware emulation, full virtualization, para-virtualization, operating system-level virtualization, or a combination thereof. Virtual machine110may have the same or different types of guest operating systems, such as Microsoft® Windows®, Linux®, Solaris®, etc. Virtual machine110may execute guest operating system112that manages guest memory116. Guest memory116may be any virtual memory, logical memory, physical memory, other portion of memory, or a combination thereof for storing, organizing, or accessing data. 
Guest memory116may represent the portion of memory that is designated by hypervisor120for use by virtual machine110. Guest memory116may be managed by guest operating system112and may be segmented into guest pages. The guest pages may each include a contiguous or non-contiguous sequence of bytes or bits and may have a page size that is the same or different from a memory page size used by hypervisor120. Each of the guest page sizes may be a fixed-size, such as a particular integer value (e.g., 4 KB, 2 MB) or may be a variable-size that varies within a range of integer values. In one example, the guest pages may be memory blocks of a volatile or non-volatile memory device and may each correspond to an individual memory block, multiple memory blocks, or a portion of a memory block. Host memory124(e.g., hypervisor memory) may be the same or similar to the guest memory but may be managed by hypervisor120instead of a guest operating system. Host memory124may include host pages, which may be in different states. The states may correspond to unallocated memory, memory allocated to guests, and memory allocated to hypervisor. The unallocated memory may be host memory pages that have not yet been allocated by host memory124or were previously allocated by hypervisor120and have since been deallocated (e.g., freed) by hypervisor120. The memory allocated to guests may be a portion of host memory124that has been allocated by hypervisor120to virtual machine110and corresponds to guest memory116. Other portions of hypervisor memory may be allocated for use by hypervisor120, a host operating system, hardware device, other module, or a combination thereof. Hypervisor120may also be known as a virtual machine monitor (VMM) and may provide virtual machine110with access to one or more features of the underlying hardware devices130. In the example shown, hypervisor120may run directly on the hardware of computer system100(e.g., bare metal hypervisor). In other examples, hypervisor120may run on or within a host operating system (not shown). Hypervisor120may manage system resources, including access to hardware devices130. In the example shown, hypervisor120may include a configuration component122and host memory124. Configuration component122may execute configuration operations on on-device IOMMU152and system IOMMU160. In particular, configuration component122may enable PCI device150to access guest memory116using host physical addresses, configure on-device IOMMU152to translate all memory access requests associated with the virtual machine110and store the translations in IOMMU page table154, and to enable a translated-request bit for each memory access request that is translated by on-device IOMMU152. Configuration component122may further enable the system IOMMU160to enter pass-through mode in response to receiving a memory access request with an enabled translated-request bit, and to process interrupt requests from the PCI device150via the host system IOMMU. Configuration component122is discussed in more detail in regards toFIG.2. Hardware devices130may provide hardware resources and functionality for performing computing tasks. Hardware devices130may include one or more physical storage devices132, one or more physical processing devices134, system IOMMU160, other computing devices, or a combination thereof. One or more of hardware devices130may be split up into multiple separate devices or consolidated into one or more hardware devices. 
Some of the hardware device shown may be absent from hardware devices130and may instead be partially or completely emulated by executable code. Physical storage devices132may include any data storage device that is capable of storing digital data and may include volatile or non-volatile data storage. Volatile data storage (e.g., non-persistent storage) may store data for any duration of time but may lose the data after a power cycle or loss of power. Non-volatile data storage (e.g., persistent storage) may store data for any duration of time and may retain the data beyond a power cycle or loss of power. In one example, physical storage devices132may be physical memory and may include volatile memory devices (e.g., random access memory (RAM)), non-volatile memory devices (e.g., flash memory, NVRAM), and/or other types of memory devices. In another example, physical storage devices132may include one or more mass storage devices, such as hard drives, solid state drives (SSD)), other data storage devices, or a combination thereof. In a further example, physical storage devices132may include a combination of one or more memory devices, one or more mass storage devices, other data storage devices, or a combination thereof, which may or may not be arranged in a cache hierarchy with multiple levels. Physical processing devices134may include one or more processors that are capable of executing the computing tasks. Physical processing devices134may be a single core processor that is capable of executing one instruction at a time (e.g., single pipeline of instructions) or may be a multi-core processor that simultaneously executes multiple instructions. The instructions may encode arithmetic, logical, or I/O operations. In one example, physical processing devices134may be implemented as a single integrated circuit, two or more integrated circuits, or may be a component of a multi-chip module (e.g., in which individual microprocessor dies are included in a single integrated circuit package and hence share a single socket). A physical processing device may also be referred to as a central processing unit (“CPU”). System IOMMU160may manage address translations in response to receiving memory access requests, interrupt requests, or any other data requests and/or commands. System IOMMU160may include page table162and translation component168. Page table162is a data structure used to store a mapping of addresses of the guest memory116to addresses of the host memory124. Accordingly, address translation is handled using the page table(s). For example, page table162may translate guest physical addresses166of guest memory116pages to host physical addresses164of host memory124. Page table162may include one or more page tables such as a protected page table or an unprotected page table. In an example, host page table162may be an extended page table (EPT) translating guest physical addresses to host physical addresses. In another example, the page table162may be the shadow page table translating the guest virtual addresses to host physical addresses. In another example, page table162may be the hypervisor page table, translating the guest physical addresses to hypervisor virtual addresses. Translation component168may determine whether to translate, using page table162, a memory access request and/or an interrupt request. 
In some embodiments, translation component168may determine to translate a memory access request and/or an interrupt request in response to failing to detect or detecting a disabled (set to a value of 0) address translation flag appended to the memory access request. Otherwise, in response to detecting an enabled (set to a value of 1) address translation flag appended to a memory access request or an interrupt request, translation component168may enable the memory access request to pass-through system IOMMU160(e.g., set system IOMMU160to pass-through mode) and access host memory124. In other embodiments, translation component168may determine whether to translate a memory access request and/or an interrupt request in response to detecting a specific address range associated with the memory access request. For example, in response to detecting a specific address range associated with the memory access request, translation component168may enable the request to pass-through system IOMMU160. Network140may be a public network (e.g., the internet), a private network (e.g., a local area network (LAN), a wide area network (WAN)), or a combination thereof. In one example, network140may include a wired or a wireless infrastructure, which may be provided by one or more wireless communications systems, such as a wireless fidelity (WiFi) hotspot connected with the network140and/or a wireless carrier system that can be implemented using various data processing equipment, communication towers, etc. PCI device150may be a computer hardware device that plugs directly into a PCI slot of the computer system100. PCI device150may be assigned to the guest operation system112of the virtual machine110and may communicate with the guest operation system112. PCI device150may include DMA (direct memory access) capabilities, which allow PCI device150to access system memory (e.g., physical storage devices132) for reading and/or writing independently of a system CPU (e.g., physical processing devices134). For example, the PCI device150may transfer its input/output (I/O) data directly to and from physical storage devices132. The PCI device150may include on-device IOMMU152to manage address translations, and memory management component162. On-device IOMMU152may map, to a page table, the device address space (e.g., a bus address) that is relevant to the I/O bus into the physical address space (e.g., host physical address) that is relevant to the memory bus. On-device IOMMU152may include extra information associated with the address space mapping, such as read and write permissions for the memory page. On-device IOMMU152may include an IOMMU page table154. IOMMU page table154may translate guest physical addresses158of guest memory116pages to host physical addresses156of host memory124. For example, on-device IOMMU152may retrieve or receive mapping data from hypervisor120and/or system IOMMU160via a memory access request, a polling operation, etc. On-device IOMMU152may then cache the mapping data and generate records in IOMMU page table154, where each record maps a guest physical address158and a host physical address156. Memory management component162may issue memory access requests for guest memory116and interrupt requests to virtual machine110, hypervisor120, and/or system IOMMU160. In some embodiments, memory management component162may translate a memory access request using, for example, IOMMU page table154. 
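The caching behavior attributed to on-device IOMMU152 and IOMMU page table154 can be sketched as a small lookaside cache keyed by guest physical address. The class name and the request_translation callable are assumptions for illustration; the callable stands in for whatever interface the device uses to obtain mappings from the hypervisor or system IOMMU160.

```python
# Illustrative sketch of an on-device IOMMU page table: cached guest-physical
# to host-physical entries together with permissions. Cache misses fall back
# to a supplied translation source (e.g., the hypervisor or system IOMMU).

class OnDeviceIommu:
    def __init__(self, request_translation):
        self.page_table = {}                      # gpa -> (hpa, permissions)
        self.request_translation = request_translation

    def translate(self, gpa):
        if gpa not in self.page_table:
            # Cache miss: fetch the mapping and store it for later reuse.
            self.page_table[gpa] = self.request_translation(gpa)
        return self.page_table[gpa]

if __name__ == "__main__":
    backing = {0x2000: (0x9000_2000, "rw")}       # pretend system IOMMU mappings
    dev_iommu = OnDeviceIommu(lambda gpa: backing[gpa])
    print(dev_iommu.translate(0x2000))            # miss, then cached
    print(dev_iommu.translate(0x2000))            # served from the on-device cache
```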
Further, memory management component162may append an address translation flag to a translated memory access request, and set the address translation flag to a value of 1. Thus, the translated memory access request may pass-through system IOMMU160and access host memory124. The features of memory management component162are discussed in more detail in regards toFIG.2. FIG.2is a block diagram illustrating example components and modules of computer system200, in accordance with one or more aspects of the present disclosure. Computer system200may comprise executable code that implements one or more of the components and modules and may be implemented within a hypervisor, a host operating system, a guest operating system, hardware firmware, or a combination thereof. More or fewer components or modules may be included without loss of generality. For example, two or more of the modules may be combined into a single module, or features of a module may be divided into two or more modules. In the example shown, computer system200may include configuration component122, memory management component162, virtual machine110, system IOMMU160, and data storage240. Configuration component122may provide instructions to the PCI device150and to the system IOMMU160to enable memory request handling by PCI device150and system IOMMU160. As illustrated, configuration component122may include PCI device configuration module212and system IOMMU configuration module214. PCI device configuration module212may send instructions to PCI device150to enable PCI device150to perform memory translations using memory management component162. System IOMMU configuration module214may send instructions to translation component168to determine whether a received memory request (e.g., memory access request, interrupt request, etc.) includes a translated-request bit, and in response to detecting the translated-request bit set to a value of 1, enter pass-through mode. PCI device configuration module212and system IOMMU configuration module214may enable the PCI device150and translation component168, respectively, using a software packet(s), a firmware packet(s), a device driver, virtual hardware, or any combination thereof. Memory management component162may include memory mapping module222, memory access module224, and interrupt request module226. In some embodiments, IOMMU configuration module214may configure translation component168of system IOMMU160. PCI device150may access virtual device memory (e.g., guest memory116) using host physical addresses. For example, memory mapping module222may request address translations (guest physical memory to host physical memory) from the system IOMMU160. Memory mapping module222may receive the address translations from system IOMMU160and store the address translations in IOMMU page table154. Specifically, memory mapping module222may append a set of page table entries to IOMMU page table154, where each page table entry translates a guest physical address of guest memory116to a host physical address of the host memory124. Memory management component162may use the on-device IOMMU152to translate memory access requests associated with virtual machine110(instead of using system IOMMU160for memory access request translations). For example, in response to memory management component162receiving or initiating a memory access request for one or more memory pages from guest memory116, memory access module224may translate the memory access request using the address translations stored in IOMMU page table154. 
Thus, memory management component162may attempt to access the requested memory page directly from the host memory124. By accessing the requested memory page directly from the host memory124, the memory access module224does not need to translate the memory access request using system IOMMU160, thus avoiding potentially causing a performance bottleneck. To indicate to system IOMMU160whether the memory access request issued by memory access module224needs to be translated, memory management component162may use a translated-request bit. In particular, memory access module224may append an address translation flag to each memory access request issued by memory management component162. For memory access requests that are translated using IOMMU page table154, memory access module224may enable the address translation flag by setting the address translation flag to a value of 1. For memory access requests that are not translated using IOMMU page table154(thus, need to be translated using system IOMMU160), memory access module224may disable the address translation flag by setting the translated-request bit to a value of 0. Thus, the translated memory access request may pass-through system IOMMU160and access host memory124. On the other hand, if a memory access request is unable to be translated by on-device IOMMU152(e.g., due to missing translation records in IOMMU page table154), the memory access request may be translated by system IOMMU160and then access host memory124. Accordingly, memory mapping module may then cache the translation data associated with the memory access request and update IOMMU page table154. System IOMMU160may enter pass-through mode in response to detecting an enabled address translation flag appended to a memory access request. In an example, in response to receiving a memory access request from PCI device150, translation component168may determine whether the memory access request is appended with the address translation flag. Responsive to detecting the address translation flag and determining that the address translation flag is set to a value of 1, translation component168may enable the memory access request to access the host memory without trapping the memory access request for translation by the system IOMMU160. Responsive to failing to detect the address translation flag or detecting the address translation flag and determining that the address translation flag is set to a value of 0, translation component168may trap the memory access request in a queue for translation by the system IOMMU160. Memory management component162may communicate, via interrupt request module226, interrupt requests to the system IOMMU160. In some embodiments, interrupt requests do not include an address translation flag. In other embodiments, interrupt requests may be appended with the address translation flag, and the address translation flag may be set to a value of 0. In response to receiving an interrupt request from PCI device150, translation component168may determine whether the memory access request is appended with an address translation flag. Responsive to failing to detect the address translation flag or detecting the address translation flag and determining that the address translation flag is set to a value of 0, translation component168may trap the memory access request in a queue for translation by the system IOMMU160. Once translated, translation component168may send the translated interrupt request to the virtual central processing unit (vCPU) of virtual machine110. 
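The device-side behavior described for memory access module224 (translate when a mapping is cached and set the translated-request bit, otherwise leave the guest address in place for the system IOMMU) can be sketched as follows. The request fields and table contents are illustrative assumptions, not the module's actual interface.

```python
# Hedged sketch: build a memory access request on the PCI device. If the
# on-device page table holds a mapping, emit a host physical address with the
# translated-request bit set to 1; otherwise emit the guest physical address
# with the bit set to 0 so the system IOMMU will translate it.

ON_DEVICE_PAGE_TABLE = {0x3000: 0xA000_3000}   # gpa -> hpa (example entry)

def build_memory_access_request(gpa, payload):
    if gpa in ON_DEVICE_PAGE_TABLE:
        return {"address": ON_DEVICE_PAGE_TABLE[gpa], "translated_bit": 1, "data": payload}
    # Missing translation record: fall back to translation by the system IOMMU.
    return {"address": gpa, "translated_bit": 0, "data": payload}

if __name__ == "__main__":
    print(build_memory_access_request(0x3000, b"hit"))   # translated, bit set to 1
    print(build_memory_access_request(0x4000, b"miss"))  # untranslated, bit set to 0
```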
In some embodiments, translation component168may determine whether to translate a memory access request and/or an interrupt request in response to detecting a specific address range associated with the request. For example, in response to detecting a specific address range associated with the request, translation component168may enable the request to pass-through system IOMMU160and access the host memory. By way of illustrative example, a hypervisor may manage a virtual machine in communication with a PCI device. One or more memory pages related to a task running on the virtual machine may be loaded into the guest memory of the virtual machine. The PCI device may receive a packet, from the network, to store in the guest memory. The packet may include a guest physical address. The PCI device may translate the guest physical address associated with the packet to a host physical address, append an enabled address translation flag to the translated memory access request (to write the data associated with the packet onto the onto host memory), and send the translated memory access request to the host system IOMMU. The host system IOMMU may determine that the memory access request includes the address translation flag set to a value of 1, and allow the memory access request to write data to the address space associated with the host physical address without trapping the memory access request for translation by the host system IOMMU. To notify the virtual machine about the packet, the PCI device may send an interrupt request (e.g., a message signal interrupt) to the guest physical address without enabling the address translation flag appended to the interrupt request. The system IOMMU may intercept the interrupt request and determine that the interrupt request includes a disabled address translation flag. The system IOMMU may then translate the interrupt request to determine which virtual machine is associated with the interrupt request, and send the interrupt request to the corresponding virtual machine. FIG.3depicts a flow diagram of an illustrative example of a method300for PCI device memory management, in accordance with one or more aspects of the present disclosure. Method300and each of its individual functions, routines, subroutines, or operations may be performed by one or more processors of the computer device executing the method. In certain implementations, method300may be performed by a single processing thread. Alternatively, method300may be performed by two or more processing threads, each thread executing one or more individual functions, routines, subroutines, or operations of the method. In an illustrative example, the processing threads implementing method300may be synchronized (e.g., using semaphores, critical sections, and/or other thread synchronization mechanisms). Alternatively, the processes implementing method300may be executed asynchronously with respect to each other. For simplicity of explanation, the methods of this disclosure are depicted and described as a series of acts. However, acts in accordance with this disclosure can occur in various orders and/or concurrently, and with other acts not presented and described herein. Furthermore, not all illustrated acts may be required to implement the methods in accordance with the disclosed subject matter. In addition, those skilled in the art will understand and appreciate that the methods could alternatively be represented as a series of interrelated states via a state diagram or events. 
Additionally, it should be appreciated that the methods disclosed in this specification are capable of being stored on an article of manufacture to facilitate transporting and transferring such methods to computing devices. The term "article of manufacture," as used herein, is intended to encompass a computer program accessible from any computer-readable device or storage media. In one implementation, method300may be performed by a kernel of a hypervisor as shown inFIG.1or by an executable code of a host machine (e.g., host operating system or firmware), a virtual machine (e.g., guest operating system or virtual firmware), an external device (e.g., a PCI device), other executable code, or a combination thereof. Method300may be performed by processing devices of a server device or a client device and may begin at block302. A host computing system may run a hypervisor managing a virtual machine in communication with a PCI device. The PCI device may include an IOMMU. At block302, the host computing system may receive a memory access request initiated by a peripheral component interconnect (PCI) device. The memory access request may include a memory address and an address translation flag specifying an address space associated with the memory address. At block304, the host system may determine whether the address translation flag is set to a first value (e.g., a value of 1) indicating a host address space. At block306, responsive to determining that the address translation flag is set to the first value indicating a host address space, the host computing system may cause the host system input/output memory management unit (IOMMU) to pass-through the memory access request. In an example, the PCI device may translate memory access requests using an on-device IOMMU. The PCI device may further append an address translation flag set to the first value to each memory access request translated by the on-device IOMMU. At block308, responsive to determining that the address translation flag is set to a second value (e.g., a value of 0) indicating a device address space, the host computing system may cause the host IOMMU to translate the memory address specified by the memory access request. The device address space may include a guest address space, an address space used by a virtual machine, an address space used by the PCI device, etc. Once translated, the host computing system may send the translated interrupt request to a virtual central processing unit (vCPU) of a virtual machine. In some embodiments, the host computing system may receive another memory access request initiated by the PCI device. The memory access request may include a memory address. Responsive to determining that the memory address is within a specific address range, the system IOMMU may pass-through the memory access request. Responsive to completing the operations described herein above with reference to block308, the method may terminate. FIG.4depicts a block diagram of a computer system400operating in accordance with one or more aspects of the present disclosure. Computer system400may be the same or similar to computer system200and computer system100and may include one or more processing devices and one or more memory devices. In the example shown, computer system400may include a translation component410and memory420. Translation component410may receive a memory access request initiated by a peripheral component interconnect (PCI) device. 
The memory access request may include a memory address and an address translation flag specifying an address space associated with the memory address. Translation component410may then determine whether the address translation flag is set of a first value (e.g., a value of 1) indicating a host address space. Responsive to determining that the address translation flag is set to the first value indicating a host address space, translation component410may cause the host system input/output memory management unit (IOMMU) to pass-through the memory access request. In an example, the PCI device may translate memory access requests using an on-device IOMMU. The PCI device may further append an address translation flag set to the first value to each memory access request translated by the on-device IOMMU. In some embodiments, translation component410may receive another memory access request initiated by the PCI device. The memory access request may include a memory address. Responsive to determining that the memory address is within a specific address range, translation component410may pass-through the memory access request. FIG.5depicts a flow diagram of one illustrative example of a method500for PCI device memory management, in accordance with one or more aspects of the present disclosure. Method500may be similar to method300and may be performed in the same or a similar manner as described above in regards to method300. Method500may be performed by processing devices of a server device or a client device and may begin at block502. At block502, the processing device may receive a memory access request initiated by a peripheral component interconnect (PCI) device. The memory access request may include a memory address and an address translation flag specifying an address space associated with the memory address. At block504, the processing device may determine whether the address translation flag is set of a first value (e.g., a value of 1) indicating a host address space. At block506, responsive to determining that the address translation flag is set to the first value indicating a host address space, the processing device may cause the host system input/output memory management unit (IOMMU) to pass-through the memory access request. In an example, the PCI device may translate memory access requests using an on-device IOMMU. The PCI device may further append an address translation flag set to the first value to each memory access request translated by the on-device IOMMU. At block508, responsive to determining that the address translation flag is set to a second value (e.g., a value of 0) indicating a device address space, the processing device may translate the memory address specified by the memory address request. The device address space may include a guest address space, an address space used by a virtual machine, and address space used by the PCI device, etc. Once translated, the host computing system may send the translated interrupt request to a virtual central processing unit (vCPU) of a virtual machine. In some embodiments, the processing device may receive another memory access request initiated by the PCI device. The memory access request may include a memory address. Responsive to determining that the memory address is within a specific address range, the processing device may pass-through the memory access request. Responsive to completing the operations described herein above with references to block508, the method may terminate. 
FIG.6depicts a block diagram of a computer system operating in accordance with one or more aspects of the present disclosure. In various illustrative examples, computer system1000may correspond to computing device100ofFIG.1or computer system200ofFIG.2. The computer system may be included within a data center that supports virtualization. Virtualization within a data center results in a physical system being virtualized using virtual machines to consolidate the data center infrastructure and increase operational efficiencies. A virtual machine (VM) may be a program-based emulation of computer hardware. For example, the VM may operate based on computer architecture and functions of computer hardware resources associated with hard disks or other such memory. The VM may emulate a physical computing environment, but requests for a hard disk or memory may be managed by a virtualization layer of a computing device to translate these requests to the underlying physical computing hardware resources. This type of virtualization results in multiple VMs sharing physical resources. In certain implementations, computer system600may be connected (e.g., via a network, such as a Local Area Network (LAN), an intranet, an extranet, or the Internet) to other computer systems. Computer system600may operate in the capacity of a server or a client computer in a client-server environment, or as a peer computer in a peer-to-peer or distributed network environment. Computer system600may be provided by a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any device capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that device. Further, the term “computer” shall include any collection of computers that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methods described herein. In a further aspect, the computer system600may include a processing device602, a volatile memory604(e.g., random access memory (RAM)), a non-volatile memory606(e.g., read-only memory (ROM) or electrically-erasable programmable ROM (EEPROM)), and a data storage device616, which may communicate with each other via a bus608. Processing device602may be provided by one or more processors such as a general purpose processor (such as, for example, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a microprocessor implementing other types of instruction sets, or a microprocessor implementing a combination of types of instruction sets) or a specialized processor (such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), or a network processor). Computer system600may further include a network interface device622. Computer system600also may include a video display unit610(e.g., an LCD), an alphanumeric input device612(e.g., a keyboard), a cursor control device614(e.g., a mouse), and a signal generation device620. 
Data storage device616may include a non-transitory computer-readable storage medium624on which may store instructions626encoding any one or more of the methods or functions described herein, including instructions for implementing methods300or500and for translation component168, execution component122(not shown), and modules illustrated inFIGS.1and2. Instructions626may also reside, completely or partially, within volatile memory1004and/or within processing device602during execution thereof by computer system600, hence, volatile memory604and processing device602may also constitute machine-readable storage media. While computer-readable storage medium624is shown in the illustrative examples as a single medium, the term “computer-readable storage medium” shall include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of executable instructions. The term “computer-readable storage medium” shall also include any tangible medium that is capable of storing or encoding a set of instructions for execution by a computer that cause the computer to perform any one or more of the methods described herein. The term “computer-readable storage medium” shall include, but not be limited to, solid-state memories, optical media, and magnetic media. The methods, components, and features described herein may be implemented by discrete hardware components or may be integrated in the functionality of other hardware components such as ASICS, FPGAs, DSPs or similar devices. In addition, the methods, components, and features may be implemented by firmware modules or functional circuitry within hardware devices. Further, the methods, components, and features may be implemented in any combination of hardware devices and computer program components, or in computer programs. Unless specifically stated otherwise, terms such as “initiating,” “transmitting,” “receiving,” “analyzing,” or the like, refer to actions and processes performed or implemented by computer systems that manipulates and transforms data represented as physical (electronic) quantities within the computer system registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. Also, the terms “first,” “second,” “third,” “fourth,” etc. as used herein are meant as labels to distinguish among different elements and may not have an ordinal meaning according to their numerical designation. Examples described herein also relate to an apparatus for performing the methods described herein. This apparatus may be specially constructed for performing the methods described herein, or it may comprise a general purpose computer system selectively programmed by a computer program stored in the computer system. Such a computer program may be stored in a computer-readable tangible storage medium. The methods and illustrative examples described herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used in accordance with the teachings described herein, or it may prove convenient to construct more specialized apparatus to perform methods300or500and one or more of its individual functions, routines, subroutines, or operations. Examples of the structure for a variety of these systems are set forth in the description above. 
The above description is intended to be illustrative, and not restrictive. Although the present disclosure has been described with references to specific illustrative examples and implementations, it will be recognized that the present disclosure is not limited to the examples and implementations described. The scope of the disclosure should be determined with reference to the following claims, along with the full scope of equivalents to which the claims are entitled.
45,591
11860793
DETAILED DESCRIPTION With reference to the attached drawings, a description will be given below of a computing system and a method of creating and searching a page table entry for the computing system according to embodiments. FIG.1is a block diagram illustrating a computing system, according to an embodiment. A computing system500may include a controller100and a main memory200. The controller100may be a central processing unit (CPU), and include a processor110, a memory management unit (MMU)120, and a translation lookaside buffer (TLB)122. The main memory200may include a physical memory210. The main memory200may be implemented by a dynamic random-access memory (DRAM), a static random-access memory (SRAM), or a non-volatile memory, but is not limited thereto. The MMU120may be implemented by a hardware and/or software module controlled by the processor110. The function of each of the blocks illustrated inFIG.1will be described below. The processor110may create a page table entry including mapping information for translating a virtual address VA of a virtual memory (not shown) to a physical address PA of the physical memory210, and generate the virtual address VA and data (e.g., a page table entry included in a page table) by an operating system (OS). To store, by the OS, a program held in the virtual memory (not shown) and indicated by the virtual address VA into a program area PGMR indicated by a physical address PA of the physical memory210in the main memory200, the processor110may generate the virtual address VA and data (i.e., the program). The MMU120may receive the virtual address VA and the data (e.g., the page table entry), store a part of the page table entry in the TLB122, and transmit a physical address PA and data DATA (i.e., the page table entry included in the page table) to the main memory200by the OS. The MMU120may receive the virtual address VA and the data (i.e., the program) from the processor110, and transmit to the main memory200the physical address PA and the data DATA (i.e., the program) mapped to the virtual address VA by using the page table entry stored in the TLB122. In the absence of the page table entry for the virtual address VA in the TLB122, the MMU120may read the page table entry (e.g., a first page table entry of a subgroup (or group) including the page table entry for the virtual address VA) stored in a page table area PTR of the physical memory210, store the page table entry in the TLB122, and generate the physical address PA mapped to the virtual address VA by the OS. The TLB122may store a part of the page table entry. The TLB122may further include additional information in addition to the part of the page table entry. UnlikeFIG.1, the TLB122may reside separately outside the MMU120. The main memory200may store the data DATA (i.e., the page table entry included in the page table) in the page table area PTR of the physical memory210, indicated by the physical address PA, and store the data DATA (i.e., the program) in the program area PGMR of the physical memory210. While not shown, the OS may be stored in an OS storage area (not shown) of the physical memory210to manage or control the controller100, the main memory200, and programs. That is, the controller100may create a page table including page table entries for the program, store a part of the page table entries in the TLB122, and store the page table in the main memory200.
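For illustration only, the translation flow described above may be sketched in Python as follows: the MMU first consults the TLB and, on a miss, reads a page table entry from the page table area of the physical memory and caches it in the TLB. The dictionary-based TLB and page table, the 4 KB page size, and the function name are assumptions of this sketch rather than elements of the described embodiment.

PAGE_SIZE = 4096  # assumed page/frame size for the sketch

def translate(va, tlb, page_table):
    # Split the virtual address into a virtual page number and a page offset.
    vpn, offset = divmod(va, PAGE_SIZE)
    entry = tlb.get(vpn)
    if entry is None:            # TLB miss: read the entry from the page table area
        entry = page_table[vpn]
        tlb[vpn] = entry         # store (a part of) the page table entry in the TLB
    return entry["PFN"] * PAGE_SIZE + offset

For example, translate(0x2034, {}, {2: {"PFN": 514}}) returns the address of the same offset within physical frame number 514, since virtual page number 2 maps to physical frame number 514 in the assumed page table.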
FIG.2is a diagram illustrating the virtual memory130, the physical memory210, and the TLB122, according to an embodiment. Referring toFIG.2, the virtual memory130may be divided into a plurality of pages and store a plurality of programs, for example, three programs PGM1, PGM2, and PGM3, in a plurality of virtual memory areas, for example, three virtual memory areas VR1, VR2, and VR3. The three programs PGM1, PGM2, and PGM3may have different data sizes (capacities), and may be stored in the three virtual memory areas VR1, VR2, and VR3including different numbers of pages. For example, the program PGM1may be stored in the virtual memory area VR1including four (4) pages, the program PGM2may be stored in the virtual memory area VR2including eight (8) pages, and the program PGM3may be stored in the virtual memory area VR3including 16 pages. Similarly, the physical memory210may be divided into a plurality of frames and store a plurality of programs, for example, the three programs PGM1, PGM2, PGM3in a plurality of physical memory areas, for example, three program areas PR1, PR2, and PR3. The three program areas PR1, PR2, and PR3may be included in the program area PGMR illustrated inFIG.1. As described above, since the three programs PGM1, PGM2, and PGM3have different data sizes (capacities), they may be stored in the three program areas PR1, PR2, and PR3including different numbers of frames. For example, the program PGM1may be stored in the program area PR1including four (4) frames, the program PGM2may be stored in the program area PR2including eight (8) frames, and the program PGM3may be stored in the program area PR3including 16 frames. The physical memory210may further include the page table area PTR and store a page table PT1for the program PGM1, a page table PT2for the program PGM2, and a page table PT3for the program PGM3in the page table area PTR. The sizes of the page tables PT1, PT2, and PT3may be different according to the data sizes (capacities) of the programs. The TLB122may store a part TLT1of page table entries for the program PGM1, a part TLT2of page table entries for the program PGM2, and a part TLT3of page table entries for the program PGM3. FIGS.3A and3Bare diagrams illustrating a page table and a format of a page table entry, according to an embodiment. Referring toFIG.3A, a page table PT may include a variable number of (e.g., j) page table entries PE1to PEj according to a data size of a program. That is, when the program is stored in j pages, j page table entries PE1to PEj may be created. Referring toFIG.3B, each of the page table entries PE may include a virtual page number VPN, a physical frame number PFN, valid information VALID, and size information SIZE. The virtual page number VPN is upper bits of a starting virtual address VA of a page in the virtual memory130, and the physical frame number PFN is upper bits of a starting physical address PA of a frame in the physical memory210. The valid information VALID may include a first predetermined number of bits. The first predetermined number of bits may represent an address translation range or the number of page table entries that may be grouped (the number of page table entry subgroups (or groups)). The size information SIZE may include a second predetermined number of bits, and specify a size represented by each bit of the first predetermined number of bits. 
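As an illustrative aid only, the page table entry format ofFIG.3Bmay be represented by the following Python structure, where the 4-bit valid information and 2-bit size information correspond to the example embodiment described below; the field widths and the class representation are assumptions of the sketch and are not part of the claimed format.

from dataclasses import dataclass

VALID_BITS = 4   # first predetermined number of bits (valid information VALID)
SIZE_BITS = 2    # second predetermined number of bits (size information SIZE)

@dataclass
class PageTableEntry:
    vpn: int    # virtual page number: upper bits of the starting virtual address
    pfn: int    # physical frame number: upper bits of the starting physical address
    valid: int  # VALID bitmap indicating the address translation range / grouping
    size: int   # SIZE field specifying the size represented by each VALID bit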
For example, when each of the size of one page and the size of one frame is 4 KB, the virtual page number VPN is represented by upper bits except for the lower 12 bits of the starting virtual address VA, and the physical frame number PFN may be represented by upper bits except for the lower 12 bits of the starting physical address PA. When the first predetermined number of bits are four (4) bits, whenever one bit of the first predetermined number of bits is changed from a second state (e.g., “0”) to a first state (e.g., “1”), the address translation range may increase by 1, 2, 3, or 4 times. For example, when each bit (corresponding to each page) of the first predetermined number of bits indicates 4 KB, the size information SIZE may be 0 (=“00”), when each bit (i.e., each page) of the first predetermined number of bits indicates 16 KB, the size information SIZE may be 1 (=“01”), and when each bit (i.e., each page) of the first predetermined number of bits indicates 64 KB, the size information SIZE may be 2 (=“10”). FIG.4is a flowchart illustrating a method of creating a page table entry in a controller, according to an embodiment. Referring toFIG.4, in operation400, a page table entry is created by assigning a corresponding virtual page number to a virtual page number VPN, a corresponding physical frame number to a physical frame number PFN, a first state (e.g., “1”) to a selected bit (e.g., the first upper bit) of valid information VALID and a second state (e.g., “0”) to the remaining bits of the valid information VALID, and an initial value (e.g., 0) to size information SIZE. In operation410, whenever a page table entry is created, valid information VALID of the created page table entry is combined with valid information VALID of a previously created page table entry(s). That is, whenever a page table entry is created, the number of bits set to the first state in valid information VALID of the page table entry may increase. In operation420, when all bits of the valid information VALID of the page table entries are set to the first state, a page table entry subgroup may be created, the remaining bits except for the selected bit (e.g., the first upper bit) of the valid information VALID of a first page table entry of the page table entry subgroup may be changed from the first state to the second state, and the size information SIZE may increase to 1. In operation430, whenever a page table entry subgroup is created, valid information VALID of a first page table entry of the created page table entry subgroup is combined with valid information VALID of a first page table entry(s) of a previously created page table entry subgroup(s). That is, whenever a page table entry subgroup is created, the number of bits set to the first state in valid information VALID of the first page table entries of the page table entry subgroups may increase. In operation440, when all bits of the valid information VALID of the first page table entries of the page table entry subgroups are set to the first state, a page table entry group is created, the remaining bits except for the selected bit (e.g., the first upper bit) of the valid information VALID of the first page table entry of the first page table entry subgroup of the page table entry group may be changed from the first state to the second state, and the size information SIZE may increase by 1 and thus become 2. The method of creating the page table entry illustrated inFIG.4may end, when the valid information VALID may not be changed any longer. 
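The creation and combining of valid information in operations 400 to 420 may be sketched in Python as shown below, assuming 4-bit valid information and the bit assignment of the example ofFIGS.6A to6H; grouping of subgroups into groups (operations 430 and 440) repeats the same procedure one level up and is omitted from this illustrative sketch.

VALID_BITS = 4
ALL_SET = (1 << VALID_BITS) - 1          # "1111"

def build_subgroups(mappings):
    # mappings: list of (virtual page number, physical frame number) pairs, in order.
    entries = []
    for i, (vpn, pfn) in enumerate(mappings):
        pos = i % VALID_BITS                      # position within the current subgroup
        valid = 1 << (VALID_BITS - 1 - pos)       # operation 400: set only the selected bit
        entries.append({"VPN": vpn, "PFN": pfn, "VALID": valid, "SIZE": 0})
        start = i - pos                           # first entry of the current subgroup
        combined = 0                              # operation 410: combine (OR) VALID fields
        for e in entries[start:i + 1]:
            combined |= e["VALID"]
        for e in entries[start:i + 1]:
            e["VALID"] = combined
        if combined == ALL_SET:                   # operation 420: a subgroup is created
            entries[start]["VALID"] = 1 << (VALID_BITS - 1)  # keep only the selected bit
            entries[start]["SIZE"] += 1           # e.g., "00" -> "01"
    return entries

Applied to the first four mappings ofFIG.5, the sketch leaves the second to fourth entries with valid information “1111” and changes the first entry to valid information “1000” with size information “01”, consistent with the result described forFIG.6A.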
In the above-described embodiment, each of the page table entry subgroups may include as many page table entries as the number of bits of the valid information VALID, and the page table entry group may include as many page table entry subgroups as the number of bits of the valid information VALID. While the operations for creating the page table entry group have been described in the above-described embodiment, a similar operation may be performed to create a page table entry super-group larger than the page table entry group by creating as many page table entry groups as the number of bits of the valid information VALID, changing the remaining bits except for a selected bit of the valid information VALID of the first page table entry of the first page table entry group of the page table entry super-group from the first state to the second state, and increasing the size information SIZE by 1 to set the size information SIZE to 3. FIG.5is a diagram illustrating a mapping relationship between virtual page numbers of a virtual memory and physical frame numbers of a physical memory for one program in reference toFIGS.1and2, according to an embodiment. Referring toFIG.5, a program PGM may be stored in 16 virtual pages corresponding to virtual page number 0 to virtual page number 15 in a virtual memory area VR of the virtual memory130. Further, the program PGM may be stored in physical frames corresponding to physical frame number512to physical frame number527in a program area PR of the physical memory210. That is, virtual page number 0 to virtual page number 15 may be mapped to physical frame number512to physical frame number527, respectively. FIGS.6A to6Hare diagrams illustrating a process of creating a page table entry, according to an embodiment.FIGS.6A to6Hillustrate a process of creating a page table by performing operations400to440illustrated inFIG.4, when virtual page numbers 0 to 15 are mapped to physical frame numbers512to527as illustrated inFIG.5, valid information VALID is 4-bit data, and size information SIZE is 2-bit data. Referring toFIGS.4and6A, in operation400, a first page table entry PE11may be created by assigning 0 to the virtual page number VPN, assigning 512 to the physical frame number PFN, setting a selected bit (e.g., the first upper bit) of the valid information VALID to “1” and setting the remaining bits of the valid information VALID to “0” to create “1000”, and assigning “00” to the size information SIZE. Subsequently, in operation400, a second page table entry PE12adjacent to the first page table entry PE11may be created by assigning 1 to the virtual page number VPN, assigning 513 to the physical frame number PFN, setting a selected bit (e.g., the second upper bit) of the valid information VALID to “1” and setting the remaining bits of the valid information VALID to “0” to create “0100”, and setting the size information SIZE to “00”. In operation410, the valid information VALID of the first page table entry PE11and the valid information VALID of the second page table entry PE12are combined (e.g., OR-operated) with each other, so that the valid information VALID of the first page table entry PE11and the valid information VALID of the second page table entry PE12are changed to “1100”. In this manner, a third page table entry PE13and a fourth page table entry PE14may be created, and the valid information VALID of the first page table entry PE11to the fourth page table entry PE14are all combined.
Thus, the valid information VALID of the first page table entry PE11to the fourth page table entry PE14may all be changed to “1111”. In operation420, when all of the valid information VALID of the first page table entry PE11to the valid information VALID of the fourth page table entry PE14are changed to “1111”, a first page table entry subgroup PEG11may be created. In this case, in the first page table entry PE11of the first page table entry subgroup PEG11, the valid information VALID may be changed to “1000” by changing the remaining bits except for the selected bit (e.g., the first upper bit) of “1111” in the valid information VALID to “0”, and the size information SIZE may be changed to “01” by increasing the size information SIZE by “1”. Referring toFIGS.5and6B, a second page table entry subgroup PEG12including page table entries PE21to PE24may be created similarly to the manner described with reference toFIG.6A. In the first page table entry PE21of the second page table entry subgroup PEG12, the valid information VALID may be changed to “0100” by setting a selected bit (e.g., the second upper bit) of the valid information VALID to the first state, and the size information SIZE may be changed to “01” by increasing the size information SIZE by “1”. Referring toFIGS.5and6C, in operation430, the valid information VALID “1000” of the first page table entry PE11of the first page table entry subgroup PEG11and the valid information VALID “0100” of the first page table entry PE21of the second page table entry subgroup PEG12are combined with each other, so that the valid information VALID of the first page table entries PE11and PE21is changed to “1100”. Referring toFIGS.5and6D, a third page table entry subgroup PEG13including page table entries PE31to PE34may be created similarly to the manner described with reference toFIG.6A. In the first page table entry PE31of the third page table entry subgroup PEG13, the valid information VALID may be changed to “0010” by setting a selected bit (e.g., the third upper bit) in the valid information VALID to the first state, and the size information SIZE may increase by “1” to become “01”. Referring toFIGS.5and6E, in operation430, the valid information VALID “1100” of the first page table entry PE11of the first page table entry subgroup PEG11, the valid information VALID “1100” of the first page table entry PE21of the second page table entry subgroup PEG12, and the valid information VALID “0010” of the first page table entry PE31of the third page table entry subgroup PEG13are combined with one another, so that the valid information VALID of the first page table entries PE11, PE21, and PE31of the first page table entry subgroup PEG11to the third page table entry subgroup PEG13are changed to “1110”. Referring toFIGS.5and6F, a fourth page table entry subgroup PEG14including page table entries PE41to PE44may be created similarly to the manner described with reference toFIG.6A. In the first page table entry PE41of the fourth page table entry subgroup PEG14, the valid information VALID may be changed to “0001” by setting a selected bit (e.g., the lowest bit) in the valid information VALID to the first state, and the size information SIZE may increase by “1” to become “01”.
Referring toFIGS.5and6G, in operation430, the valid information VALID “1110” of the first page table entry PE11of the first page table entry subgroup PEG11, the valid information VALID “1110” of the first page table entry PE21of the second page table entry subgroup PEG12, the valid information VALID “1110” of the first page table entry PE31of the third page table entry subgroup PEG13, and the valid information VALID “0001” of the first page table entry PE41of the fourth page table entry subgroup PEG14are combined with each other, so that the valid information VALID of the first page table entries PE11, PE21, PE31, and PE41of the first page table entry subgroup PEG11to the fourth page table entry subgroup PEG14are changed to “1111”. Referring toFIGS.5and6H, in operation440, when all bits of the valid information VALID of the first page table entries PE11, PE21, PE31, and PE41of the first page table entry subgroup PEG11to the fourth page table entry subgroup PEG14are changed to “1111”, a page table entry group PEG1is created. In the first page table entry PE11of the first page table entry subgroup PEG11, the valid information VALID may be changed to “1000” by changing the remaining bits except for the selected bit (e.g., the first upper bit) of the valid information VALID from “1” to “0”, and the size information SIZE may increase by 1 to become “10”. FIG.7is a flowchart illustrating a method of creating a page table entry by a controller, according to an embodiment. InFIG.7, a method of increasing size information SIZE and determining whether to maintain the size information SIZE in operation420ofFIG.4is described. Referring toFIG.7, when the size information SIZE increases in operation420or440ofFIG.4, in operation700, a base physical frame number BPFN is calculated by setting lower M bits of a physical frame number PFN to “0”. The lower M bits may be calculated by the following Equation 1. M=log2(the number of bits of valid information VALID)×(size information SIZE+1)  (1) In operation710, the base physical frame number BPFN is compared with the physical frame number PFN. When the base physical frame number BPFN is the same as the physical frame number PFN, the procedure goes to operation720, and otherwise, the procedure goes to operation730. In operation720, the increased size information SIZE is maintained. In operation730, the increased size information SIZE decreases by 1, and the valid information VALID is changed to the previous value. FIGS.8A and8Bare flowcharts illustrating a process of creating a page table entry in a controller, according to an embodiment. InFIGS.8A and8B, a process of creating a page table entry by performing the operations ofFIG.7is illustrated. Referring toFIGS.4,7, and8A, when virtual page numbers VPN0to VPN3are mapped to physical frame numbers PFN4to PFN7, respectively, a page table entry subgroup PEG11′ including page table entries PE11′ to PE14′ may be created by performing the operations ofFIG.4. Referring toFIGS.7and8A, in operation700, when M is calculated to be 4 by M(=log24×(1+1)) and lower 4 bits of the physical frame number PFN4(=“0100”) is set to “0”, the base physical frame number BPFN may become “0000”. In operation710, when the base physical frame number BPFN “0000” is not the same as the physical frame number PFN “0100” of the first page table entry PE11′, the procedure may go to operation730. 
In operation730, a page table entry PE11″ may be created by changing the size information SIZE of the first page table entry PE11′ to 0 (=“00”) by decreasing the size information SIZE by 1 and changing the valid information VALID to the previous value “1111”. As a consequence, the page table entry subgroup PEG11″ including the page table entries PE11″, PE12″, PE13″, and PE14″ may be created. Referring toFIGS.4,7, and8B, when virtual page numbers VPN0to VPN3are mapped to physical frame numbers PFN512to PFN515, respectively, a page table entry subgroup PEG11including page table entries PE11to PE14may be created by performing the operations ofFIG.4. Referring toFIGS.7and8B, in operation700, when M is calculated to be 4 by M(=log24×(1+1)) and the lower 4 bits of the physical frame number PFN512(=“1000000000”) of the first page table entry PE11are set to “0”, the base physical frame number BPFN may become 512 (=“1000000000”). In operation710, when the base physical frame number BPFN512(“1000000000”) is the same as the physical frame number PFN512(“1000000000”) of the first page table entry PE11, the procedure may go to operation720. In operation720, the size information SIZE 1 (=“01”) of the page table entry PE11may be maintained. As a consequence, the page table entry subgroup PEG11including the page table entries PE11, PE12, PE13, and PE14is maintained. FIG.9is a flowchart illustrating a method of searching a page table entry included in a page table in a main memory by a controller, according to an embodiment. When the controller100searches the TLB122shown inFIG.1and determines the absence of mapping information corresponding to a requested virtual page number VPN, the controller100searches for a first page table entry of a page table entry subgroup (or group) including a page table entry for the requested virtual page number VPN in a page table of the physical memory210in the main memory200in the method ofFIG.9. Referring toFIG.9, in operation900, a page table entry corresponding to a requested virtual page number VPN is read from a page table in the main memory200. In operation910, it is determined whether all bits of valid information VALID of the page table entry are “1” and a virtual page number VPN of the page table entry is different from a base virtual page number BVPN. When this condition is not satisfied, the procedure goes to operation920, and when this condition is satisfied, the procedure goes to operation930. The base virtual page number BVPN may be obtained by calculating N by the following Equation 2, and then, changing lower N bits of the virtual page number VPN to “0”. N=log2(the number of bits in valid information VALID)×(size information SIZE+1)  (2) In operation920, the page table entry is determined to be the first page table entry of a page table entry subgroup (or group) and stored in the TLB122. In operation930, a page table entry corresponding to the base virtual page number BVPN is read from the page table of the physical memory210in the main memory200. Then, the procedure goes to operation910. For example, when a page table is configured with page table entry subgroups PEG11and PEG12as illustrated inFIG.6Cand the requested virtual page number VPN is 6, the controller100searches a page table entry in the following method. Referring toFIGS.6C and9, the page table entry PE23corresponding to the virtual page number VPN6is read in operation900.
In operation910, when all bits of the valid information VALID in the page table entry PE23are “1” and the virtual page number VPN6of the page table entry PE23is different from the base virtual page number BVPN4, the procedure goes to operation930. Specifically, in operation910, N is calculated to be 2 by N(=log24×(0+1)), and the lower two (2) bits of the virtual page number VPN6(=“0110”) of the page table entry PE23are changed to “0”. Thus, the base virtual page number BVPN “0100”, that is, 4 may be obtained. In operation930, the page table entry PE21corresponding to the base virtual page number BVPN4is read. In operation910, when all bits of the valid information VALID in the page table entry PE21are not “1” and the virtual page number VPN4of the page table entry PE21is not the same as the base virtual page number BVPN0, the procedure goes to operation920. The page table entry PE21is stored in the TLB122in operation920. Specifically, in operation910, N is calculated to be 2 by N(=log24×(0+1)), and the lower two (2) bits of the virtual page number VPN4(=“0100”) of the page table entry PE21are changed to “0”. Thus, a base virtual page number BVPN “0000”, that is, 0 may be obtained. In another example, when a page table is configured with the page table entry group PEG1as illustrated inFIG.6Hand the requested virtual page number VPN is 14, the controller100searches a page table entry in the following method. Referring toFIGS.6H and9, a page table entry PE43corresponding to the virtual page number VPN14is read in operation900. In operation910, when all bits of the valid information VALID in the page table entry PE43are “1” and the virtual page number VPN14of the page table entry PE43is different from a base virtual page number BVPN12, the procedure goes to operation930. Specifically, in operation910, N is calculated to be 2 by N(=log24×(0+1)), and the lower two (2) bits of the virtual page number VPN14(=“1110”) are changed to “0”. Thus, the base virtual page number BVPN “1100”, that is, 12 may be obtained. In operation930, a page table entry PE41corresponding to the base virtual page number BVPN12is read, and then the procedure goes to operation910. In operation910, when all bits of the valid information VALID in the page table entry PE41are “1” and the virtual page number VPN12of the page table entry PE41is different from the base virtual page number BVPN0, the procedure goes to operation930. Specifically, in operation910, N is calculated to be 4 by N(=log24×(1+1)), and the lower four (4) bits of the virtual page number VPN12(=“1100”) are changed to “0”. Thus, a base virtual page number BVPN “0000”, that is, 0 may be obtained. In operation930, the page table entry PE11corresponding to the base virtual page number BVPN0is read. In operation910, when all bits of the valid information VALID in the page table entry PE11are not “1” and the virtual page number VPN0of the page table entry PE11is the same as the base virtual page number BVPN0, the procedure goes to operation920. The page table entry PE11is stored in the TLB122. Specifically, in operation910, N is calculated to be 6 by N(=log24×(2+1)), and the lower six (6) bits of the virtual page number VPN0(=“0000”) of the page table entry PE11are changed to “0”. Thus, the base virtual page number BVPN “0000”, that is, 0 may be obtained. FIG.10is a flowchart illustrating a method of searching a page table entry stored in a TLB by a controller, according to an embodiment.
A physical frame number corresponding to a virtual page number is obtained by using a stored page table entry in the method ofFIG.10. In operation1000, a base virtual page number BVPN, an offset OFFSET, and an index INDEX for each of a requested virtual page number VPN and the virtual page number VPN of a page table entry stored in the TLB122are obtained. The base virtual page number BVPN is obtained by calculating N by Equation 2 and changing the lower N bits of the virtual page number VPN to “0”. When N is calculated, the size information SIZE of the stored page table entry is used as the size information SIZE of the requested virtual page number VPN. The offset OFFSET is the value of the lower N bits of the virtual page number VPN before they are changed to “0” to obtain the base virtual page number BVPN, the index INDEX is the value of the upper K bits of the offset OFFSET, and K=log2(the number of bits in valid information VALID). In operation1010, when the base virtual page number BVPN of the requested virtual page number VPN is the same as the base virtual page number BVPN of the virtual page number VPN of the stored page table entry, and a bit of the valid information VALID of the stored page table entry corresponding to the index INDEX of the requested virtual page number VPN is “1”, it is determined that the stored page table entry is available for use to obtain a physical frame number PFN of the requested virtual page number VPN. In operation1020, the offset OFFSET is added to a value obtained by setting the lower N bits of the physical frame number PFN of the stored page table entry to “0” to obtain the physical frame number PFN for the requested virtual page number VPN. For example, when the first page table entry PE21of the page table entry subgroup PEG12illustrated inFIG.6Cis stored in the TLB122, and the requested virtual page number VPN is 7, the controller100searches a physical frame number PFN corresponding to a virtual page number VPN7by using the page table entry PE21in the following method. Referring toFIGS.6C and10, in operation1000, a base virtual page number BVPN, an offset OFFSET, and an index INDEX for the requested virtual page number VPN7are calculated. The base virtual page number BVPN for the virtual page number VPN7is a value obtained by changing the lower N bits of the virtual page number VPN7(=“0111”) to “0”. Because N is 4 (=log24×(1+1), where the size information SIZE of the stored page table entry PE21is “01”), the base virtual page number BVPN may be 0 (=“0000”). The offset OFFSET for the virtual page number VPN7is the value of the lower N bits before they are changed to “0” to obtain the base virtual page number BVPN. Thus, the offset OFFSET may be “0111”. The index INDEX is a value obtained by taking the upper K bits of the offset OFFSET. Since K is 2, the index INDEX may be “01”. Further, a base virtual page number BVPN, an offset OFFSET, and an index INDEX for the virtual page number VPN4stored in the page table entry PE21are calculated. The base virtual page number BVPN for the virtual page number VPN4is a value obtained by changing the lower N bits of the virtual page number VPN4(=“0100”) to “0”. Because N is 4 (=log24×(1+1)), the base virtual page number BVPN may be 0 (=“0000”). The offset OFFSET for the virtual page number VPN4is the value of the lower N bits before they are changed to “0” to obtain the base virtual page number BVPN. Thus, the offset OFFSET may be “0100”. The index INDEX is a value obtained by taking the upper K bits of the offset OFFSET. Since K is 2, the index INDEX may be “01”.
In operation1010, the base virtual page number BVPN “0000” for the virtual page number VPN7is the same as the base virtual page number BVPN “0000” for the virtual page number VPN4, the valid information VALID of the page table entry PE21is “1100”, the index INDEX for the virtual page number VPN7is 1 (=“01”), and a bit of the valid information VALID of the page table entry PE21corresponding to the index INDEX of the virtual page number VPN7(that is, the second upper bit in “1100” since the index value INDEX is 1) is “1”. Therefore, the controller100may determine that it may search the physical frame number PFN for the virtual page number VPN7by using the page table entry PE21. In operation1020, the base physical frame number BPFN is obtained by changing the lower M bits of the physical frame number PFN516(=“1000000100”) of the page table entry PE21to “0”. Since M is 4, the lower 4 bits of the physical frame number PFN516(=“1000000100”) are changed to “0” to obtain a base physical frame number BPFN512(=“1000000000”). Then, a physical frame number PFN519is obtained by adding the offset OFFSET7(=“0111”) to the base physical frame number BPFN512(=“1000000000”). As is apparent from the foregoing description, according to embodiments, the size information of a page table entry may vary according to the data size of a program. Further, since the number of bits in the valid information of the page table entry and the number of bits in the size information of the page table entry may vary, the disclosure is applicable to various types of controllers (e.g., a central processing unit (CPU), a graphical processing unit (GPU), a numeric processing unit (NPU), and an accelerator). Further, according to embodiments, once the first page table entry of a page table entry subgroup or group stored in the physical memory of the main memory200is stored in the TLB122, physical frame numbers may be obtained for the virtual page numbers of the remaining page table entries of the stored page table entry subgroup or group without searching a page table stored in the physical memory of the main memory200. Accordingly, the address translation performance of the controller and the computing system500including the controller100may be improved. While the disclosure has been particularly shown and described with reference to example embodiments thereof, it will be apparent to those skilled in the art that various changes in form and detail may be made without departing from the spirit and essential characteristics of the disclosure. The above embodiments are therefore to be construed in all aspects as illustrative and not restrictive.
33,629
11860794
DETAILED DESCRIPTION FIG.1is an example block diagram of a pipelined super-scalar, out-of-order execution microprocessor core100that performs speculative execution of instructions in accordance with embodiments of the present disclosure. Speculative execution of an instruction means execution of the instruction during a time when at least one instruction older in program order than the instruction has not completed execution such that a possibility exists that execution of the older instruction will result in an abort, i.e., flush, of the instruction. The core100includes a cache memory subsystem that employs physical address proxies (PAP) to attain cache coherence as described herein. Although a single core100is shown, the PAP cache coherence techniques described herein are not limited to a particular number of cores. Generally, the PAP cache coherence embodiments may be employed in a processor conforming to various instruction set architectures (ISA), including but not limited to, x86, ARM, PowerPC, SPARC, MIPS. Nevertheless, some aspects of embodiments are described with respect to the microprocessor100conforming to the RISC-V ISA, as described in specifications set forth in Volumes I and II of “The RISC-V Instruction Set Manual,” Document Version 20191213, promulgated by the RISC-V Foundation. These two volumes are herein incorporated by reference for all purposes. However, the embodiments of the PAP cache coherence techniques are not generally limited to RISC-V. The core100has an instruction pipeline140that includes a front-end110, mid-end120, and back-end130. The front-end110includes an instruction cache101, a predict unit (PRU)102, a fetch block descriptor (FBD) FIFO104, an instruction fetch unit (IFU)106, and a fetch block (FBlk) FIFO108. The mid-end120includes a decode unit (DEC)112. The back-end130includes a level-1 (L1) data cache103, a level-2 (L2) cache107, register files105, a plurality of execution units (EU)114, and load and store queues (LSQ)125. In one embodiment, the register files105include an integer register file, a floating-point register file and a vector register file. In one embodiment, the register files105include both architectural registers and microarchitectural registers. In one embodiment, the EUs114include integer execution units (IXU)115, floating point units (FXU)119, and a load-store unit (LSU)117. The LSQ125holds speculatively executed load/store micro-operations, or load/store Ops, until the Op is committed. More specifically, the load queue125holds a load operation until it is committed, and the store queue125holds a store operation until it is committed. The store queue125may also forward store data that it holds to other dependent load Ops. When a load/store Op is committed, the load queue125and store queue125may be used to check for store forwarding violations. When a store Op is committed, the store data held in the associated store queue125entry is written into the L1 data cache103at the store address held in the store queue125entry. In one embodiment, the load and store queues125are combined into a single memory queue structure rather than separate queues. The DEC112allocates an entry of the LSQ125in response to decode of a load/store instruction. The core100also includes a memory management unit (MMU)147coupled to the IFU106and LSU117. The MMU147includes a data translation lookaside buffer (DTLB)141, an instruction translation lookaside buffer (ITLB)143, and a table walk engine (TWE)145.
In one embodiment, the core100also includes a memory dependence predictor (MDP)111coupled to the DEC112and LSU117. The MDP111makes store dependence predictions that indicate whether store-to-load forwarding should be performed. The LSU117includes a write combining buffer (WCB)109that buffers write requests sent by the LSU117to the DTLB141and to the L2 cache107. In one embodiment, the L1 data cache103is a virtually-indexed virtually-tagged write-through cache. In the case of a store operation, when there are no older operations that could cause the store operation to be aborted, the store operation is ready to be committed, and the store data is written into the L1 data cache103. The LSU117also generates a write request to “write-through” the store data to the L2 cache107and update the DTLB141, e.g., to set a page dirty, or page modified, bit. The write request is buffered in the WCB109. Eventually, at a relatively low priority, the store data associated with the write request will be written to the L2 cache107. However, entries of the write combining buffer109are larger (e.g., 32 bytes) than the largest load and store operations (e.g., eight bytes). When possible, the WCB109combines, or merges, multiple write requests into a single entry of the WCB109such that the WCB109may make a potentially larger single write request to the L2 cache107that encompasses the store data of multiple store operations that have spatial locality. The merging, or combining, is possible when the starting physical memory address and size of two or more store operations align and fall within a single entry of the WCB109. For example, assume a first 8-byte store operation to 32-byte aligned physical address A, a second 4-byte store operation to physical address A+8, a third 2-byte store operation to physical address A+12, and a fourth 1-byte store operation to physical address A+14. The WCB109may combine the four store operations into a single entry and perform a single write request to the L2 cache107of the fifteen bytes at address A. By combining write requests, the WCB109may free up bandwidth of the L2 cache107for other requests, such as cache line fill requests from the L1 data cache103to the L2 cache107or snoop requests. The microprocessor100may also include other blocks not shown, such as a load buffer, a bus interface unit, and various levels of cache memory above the instruction cache101and L1 data cache103and L2 cache107, some of which may be shared by other cores of the processor. Furthermore, the core100may be multi-threaded in the sense that it includes the ability to hold architectural state (e.g., program counter, architectural registers) for multiple threads that share the back-end130, and in some embodiments the mid-end120and front-end110, to perform simultaneous multithreading (SMT). The core100provides virtual memory support. Each process, or thread, running on the core100may have its own address space identified by an address space identifier (ASID). The core100may use the ASID to perform address translation. For example, the ASID may be associated with the page tables, or translation tables, of a process. The TLBs (e.g., DTLB141and ITLB143) may include the ASID in their tags to distinguish entries for different processes. In the x86 ISA, for example, an ASID may correspond to a processor context identifier (PCID). The core100also provides machine virtualization support. Each virtual machine running on the core100may have its own virtual machine identifier (VMID).
The TLBs may include the VMID in their tags to distinguish entries for different virtual machines. Finally, the core100provides different privilege modes (PM), or privilege levels. The PM of the core100determines, among other things, whether or not privileged instructions may be executed. For example, in the x86 ISA there are four PMs, commonly referred to as Ring 0 through Ring 3. Ring 0 is also referred to as Supervisor level and Ring 3 is also referred to as User level, which are the two most commonly used PMs. For another example, in the RISC-V ISA, PMs may include Machine (M), User (U), Supervisor (S) or Hypervisor Supervisor (HS), Virtual User (VU), and Virtual Supervisor (VS). In the RISC-V ISA, the S PM exists only in a core without virtualization supported or enabled, whereas the HS PM exists when virtualization is enabled, such that S and HS are essentially non-distinct PMs. For yet another example, the ARM ISA includes exception levels (EL0, EL1, EL2 and EL3). As used herein and as shown inFIG.1, a translation context (TC) of the core100(or of a hardware thread in the case of a multi-threaded core) is a function of the ASID, VMID, and/or PM or a translation regime (TR), which is based on the PM. In one embodiment, the TR indicates whether address translation is off (e.g., M mode) or on, whether one level of address translation is needed (e.g., U mode, S mode and HS mode) or two levels of address translation is needed (VU mode and VS mode), and what form of translation table scheme is involved. For example, in a RISC-V embodiment, the U and S privilege modes (or U and HS, when the hypervisor extension is active) may share a first TR in which one level of translation is required based on the ASID, VU and VS share a second TR in which two levels of translation are required based on the ASID and VMID, and M privilege level constitutes a third TR in which no translation is performed, i.e., all addresses are physical addresses. Pipeline control logic (PCL)132is coupled to and controls various aspects of the pipeline140which are described in detail herein. The PCL132includes a ReOrder Buffer (ROB)122, interrupt handling logic149, abort and exception-handling logic134, and control and status registers (CSR)123. The CSRs123hold, among other things, the PM199, VMID197, and ASID195of the core100, or one or more functional dependencies thereof (such as the TR and/or TC). In one embodiment (e.g., in the RISC-V ISA), the current PM199does not reside in a software-visible CSR123; rather, the PM199resides in a microarchitectural register. However, the previous PM199is readable by a software read of a CSR123in certain circumstances, such as upon taking of an exception. In one embodiment, the CSRs123may hold a VMID197and ASID195for each TR or PM. The pipeline units may signal a need for an abort, as described in more detail below, e.g., in response to detection of a mis-prediction (e.g., by a branch predictor of a direction or target address of a branch instruction, or of a mis-prediction that store data should be forwarded to a load Op in response to a store dependence prediction, e.g., by the MDP111) or other microarchitectural exception, architectural exception, or interrupt. 
Examples of architectural exceptions include an invalid opcode fault, debug breakpoint, or illegal instruction fault (e.g., insufficient privilege mode) that may be detected by the DEC112, a page fault, permission violation or access fault that may be detected by the LSU117, and an attempt to fetch an instruction from a non-executable page or a page the current process does not have permission to access that may be detected by the IFU106. In response, the PCL132may assert flush signals to selectively flush instructions/Ops from the various units of the pipeline140. Conventionally, exceptions are categorized as either faults, traps, or aborts. The term “abort” as used herein is not limited by the conventional categorization of exceptions. As used herein, “abort” is a microarchitectural mechanism used to flush instructions from the pipeline140for many purposes, which encompasses interrupts, faults and traps. Purposes of aborts include recovering from microarchitectural hazards such as a branch mis-prediction or a store-to-load forwarding violation. The microarchitectural abort mechanism may also be used to handle architectural exceptions and for architecturally defined cases where changing the privilege mode requires strong in-order synchronization. In one embodiment, the back-end130of the processor100operates under a single PM, while the PM for the front-end110and mid-end120may change (e.g., in response to a PM-changing instruction) while older instructions under an older PM continue to drain out of the back-end130. Other blocks of the core100, e.g., DEC112, may maintain shadow copies of various CSRs123to perform their operations. The PRU102maintains the program counter (PC) and includes predictors that predict program flow that may be altered by control flow instructions, such as branch instructions. In one embodiment, the PRU102includes a next index predictor (NIP), a branch target buffer (BTB), a main conditional branch predictor (CBP), a secondary conditional branch predictor (BMP), an indirect branch predictor (IBP), and a return address predictor (RAP). As a result of predictions made by the predictors, the core100may speculatively execute instructions in the instruction stream of the predicted path. The PRU102generates fetch block descriptors (FBD) that are provided to the FBD FIFO104in a first-in-first-out manner. Each FBD describes a fetch block (FBlk or FB). An FBlk is a sequential set of instructions. In one embodiment, an FBlk is up to sixty-four bytes long and may contain as many as thirty-two instructions. An FBlk ends with either a branch instruction to be predicted, an instruction that causes a PM change or that requires heavy abort-based synchronization (aka “stop” instruction), or an indication that the run of instructions continues sequentially into the next FBlk. An FBD is essentially a request to fetch instructions. An FBD may include the address and length of an FBlk and an indication of the type of the last instruction. The IFU106uses the FBDs to fetch FBlks into the FBlk FIFO108, which feeds fetched instructions to the DEC112. The FBD FIFO104enables the PRU102to continue predicting FBDs to reduce the likelihood of starvation of the IFU106. Likewise, the FBlk FIFO108enables the IFU106to continue fetching FBlks to reduce the likelihood of starvation of the DEC112. The core100processes FBlks one at a time, i.e., FBlks are not merged or concatenated. 
By design, the last instruction of an FBlk can be a branch instruction, a privilege-mode-changing instruction, or a stop instruction. Instructions may travel through the pipeline140from the IFU106to the DEC112as FBlks, where they are decoded in parallel. The DEC112decodes architectural instructions of the FBlks into micro-operations, referred to herein as Ops. The DEC112dispatches Ops to the schedulers121of the EUs114. The schedulers121schedule and issue the Ops for execution to the execution pipelines of the EUs, e.g., IXU115, FXU119, LSU117. The EUs114receive operands for the Ops from multiple sources including: results produced by the EUs114that are directly forwarded on forwarding busses—also referred to as result busses or bypass busses—back to the EUs114and operands from the register files105that store the state of architectural registers as well as microarchitectural registers, e.g., renamed registers. In one embodiment, the EUs114include four IXU115for executing up to four Ops in parallel, two FXU119, and an LSU117that is capable of executing up to four load/store Ops in parallel. The instructions are received by the DEC112in program order, and entries in the ROB122are allocated for the associated Ops of the instructions in program order. However, once dispatched by the DEC112to the EUs114, the schedulers121may issue the Ops to the individual EU114pipelines for execution out of program order. The PRU102, IFU106, DEC112, and EUs114, along with the intervening FIFOs104and108, form a concatenated pipeline140in which instructions and Ops are processed in mostly sequential stages, advancing each clock cycle from one stage to the next. Each stage works on different instructions in parallel. The ROB122and the schedulers121together enable the sequence of Ops and associated instructions to be rearranged into a data-flow order and to be executed in that order rather than program order, which may minimize idling of EUs114while waiting for an instruction requiring multiple clock cycles to complete, e.g., a floating-point Op or cache-missing load Op. Many structures within the core100address, buffer, or store information for an instruction or Op by reference to an FBlk identifier. In one embodiment, checkpoints for abort recovery are generated for and allocated to FBlks, and the abort recovery process may begin at the first instruction of the FBlk containing the abort-causing instruction. In one embodiment, the DEC112converts each FBlk into a series of up to eight OpGroups. Each OpGroup consists of either four sequential Ops or, if there are fewer than four Ops in the FBlk after all possible four-op OpGroups for an FBlk have been formed, the remaining Ops of the FBlk. Ops from different FBlks are not concatenated together into the same OpGroup. Because some Ops can be fused from two instructions, an OpGroup may correspond to up to eight instructions. The Ops of the OpGroup may be processed in simultaneous clock cycles through later DEC112pipe stages, including rename and dispatch to the EU114pipelines. In one embodiment, the MDP111provides up to four predictions per cycle, each corresponding to the Ops of a single OpGroup. Instructions of an OpGroup are also allocated into the ROB122in simultaneous clock cycles and in program order. The instructions of an OpGroup are not, however, necessarily scheduled for execution together. In one embodiment, each of the EUs114includes a dedicated scheduler121. 
In an alternate embodiment, a scheduler121common to all the EUs114(and integrated with the ROB122according to one embodiment) serves all the EUs114. In one embodiment, each scheduler121includes an associated buffer (not shown) that receives Ops dispatched by the DEC112until the scheduler121issues the Op to the relevant EU114pipeline for execution, namely when all source operands upon which the Op depends are available for execution and an EU114pipeline of the appropriate type to execute the Op is available. The PRU102, IFU106, DEC112, each of the execution units114, and PCL132, as well as other structures of the core100, may each have their own pipeline stages in which different operations are performed. For example, in one embodiment, the DEC112has a pre-decode stage, an extract stage, a rename stage, and a dispatch stage. The PCL132tracks instructions and the Ops into which they are decoded throughout their lifetime. The ROB122supports out-of-order instruction execution by tracking Ops from the time they are dispatched from DEC112to the time they retire. In one embodiment, the ROB122has entries managed as a FIFO, and the ROB122may allocate up to four new entries per cycle at the dispatch stage of the DEC112and may deallocate up to four oldest entries per cycle at Op retire. In one embodiment, each ROB entry includes an indicator that indicates whether the Op has completed its execution and another indicator that indicates whether the result of the Op has been committed to architectural state. More specifically, load and store Ops may be committed subsequent to completion of their execution. Still further, an Op may be committed before it is retired. Embodiments of a cache subsystem are described herein that advantageously enable cache coherency attainment with higher performance and/or reduced size using PAPs. FIG.2is an example block diagram of a cache entry201of L1 data cache103ofFIG.1that employs PAPs to accomplish cache coherence in accordance with embodiments of the present disclosure. The L1 data cache entry201is used in the L1 data cache103embodiment ofFIG.3described in more detail below. The L1 data cache entry201includes cache line data202, a virtual address tag204, a status field206, a hashed tag field208, and a diminutive physical address proxy (dPAP) field209. The cache line data202is the copy of the data brought into the L1 data cache103from system memory indirectly through a higher level of the cache memory hierarchy, namely the L2 cache107. The tag204is upper bits (e.g., tag bits322ofFIG.3) of the virtual memory address (e.g., virtual load/store address321ofFIG.3) specified by the operation that brought the cache line into the L1 data cache103, e.g., the virtual memory address specified by a load/store operation. That is, when an entry201in the L1 data cache103is allocated, the tag bits322of the virtual memory address321are written to the virtual address tag204of the entry201. When the L1 data cache103is subsequently accessed (e.g., by a subsequent load/store operation), the virtual address tag204is used to determine whether the access hits in the L1 data cache103. Generally speaking, the L1 data cache103uses lower bits (e.g., set index bits326ofFIG.3) of the virtual memory address to index into the L1 data cache103and uses the remaining bits of the virtual address321above the set index bits326as the tag bits. 
To illustrate by way of example, assume a 64 kilobyte (KB) L1 data cache103arranged as a 4-way set associative cache having 64-byte cache lines; address bits [5:0] are an offset into the cache line, virtual address bits [13:6] (set index bits) are used as the set index, and virtual address bits [N−1:14] (tag bits) are used as the tag, where N is the number of bits of the virtual memory address, where N is 63 in the embodiment ofFIG.3. The status206indicates the state of the cache line. More specifically, the status206indicates whether the cache line data is valid or invalid. Typically, the status206also indicates whether the cache line has been modified since it was brought into the L1 data cache103. The status206may also indicate whether the cache line is exclusively held by the L1 data cache103or whether the cache line is shared by other cache memories in the system. An example protocol used to maintain cache coherency defines four possible states for a cache line: Modified, Exclusive, Shared, Invalid (MESI). The hashed tag208may be a hash of the tag bits322ofFIG.3of the virtual memory address321, as described in more detail below. Advantageously, the hashed tag208may be used to generate a predicted early miss indication, e.g., miss328ofFIG.3, and may be used to generate a predicted early way select signal, e.g., way select342ofFIG.3, as described in more detail with respect toFIG.3. The dPAP209is all or a portion of a physical address proxy (PAP), e.g., PAP699ofFIG.6. As described herein, the L2 cache107is inclusive of the L1 data cache103. That is, each cache line of memory allocated into the L1 data cache103is also allocated into the L2 cache107, and when the L2 cache107evicts the cache line, the L2 cache107also causes the L1 data cache103to evict the cache line. A PAP is a forward pointer to the unique entry in the L2 cache107(e.g., L2 entry401ofFIG.4) that holds a copy of the cache line held in the entry201of the L1 data cache103. For example, in the embodiments ofFIGS.6and9, the dPAP209is the PAP less the untranslated physical address PA[11:6] bits that are used in the L1 set index. That is, the dPAP is the L2 way and the translated physical address bits PA[16:12] of the set index of the L2 cache107set containing the entry401that holds the copy of the L1 data cache103cache line. For another example, in the embodiment ofFIG.11, the dPAP is the entire PAP, e.g., all the bits of the L2 way and L2 set index that point to the entry401in the L2 cache107that holds the copy of the L1 data cache103cache line. Uses of the dPAP209and PAP are described in more detail herein. FIG.3is an example block diagram illustrating the L1 data cache103ofFIG.1that employs PAPs to accomplish cache coherence in accordance with embodiments of the present disclosure. In the embodiment ofFIG.3, the L1 data cache103is a virtual cache, i.e., it is virtually-indexed and virtually-tagged. In the embodiment ofFIG.3, the DTLB141ofFIG.1is a second-level TLB, and the processor100includes no first-level TLB. The L1 data cache103includes a tag array332, a data array336, a hashed tag array334, a multiplexer342, a comparator344, a multiplexer346, and tag hash logic312. The LSU117generates a virtual load/store address VA[63:0] and provides to the L1 data cache103a portion thereof VA[63:6]321used to specify a line of memory that may be stored in the L1 data cache103. The virtual address321includes a tag322portion (e.g., bits [63:14]) and a set index326portion (e.g., bits [13:6]). 
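A small Python sketch of the example address breakdown given above (a 64 KB, 4-way set-associative cache with 64-byte cache lines) follows; the 64-bit virtual address width is taken from the VA[63:0] notation ofFIG.3, and the constant and function names are assumptions of this illustrative sketch rather than elements of the described embodiment.

LINE_BITS = 6    # 64-byte cache lines: VA[5:0] is the byte offset into the line
SET_BITS = 8     # 256 sets (64 KB / 64 B per line / 4 ways): VA[13:6] is the set index

def split_virtual_address(va):
    offset = va & ((1 << LINE_BITS) - 1)                    # VA[5:0]
    set_index = (va >> LINE_BITS) & ((1 << SET_BITS) - 1)   # VA[13:6]
    tag = va >> (LINE_BITS + SET_BITS)                      # remaining upper bits, the tag
    return tag, set_index, offset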
The L1 data cache103also includes an allocate way input308for allocating an entry into the L1 data cache103. The L1 data cache103also includes a data in input325for writing data into the L1 data cache103, e.g., during a store commit operation and during a cache line allocation. The L1 data cache103also includes a hit output352, early miss prediction328, and a data out output227. The tag array332and data array336are random access memory arrays. In the embodiment ofFIG.3, the L1 data cache103is arranged as a 4-way set associative cache; hence, the tag array332and data array336are arranged as 4-way set associative memory arrays. However, other embodiments are contemplated in which the associativity has a different number of ways than four, including direct-mapped and fully associative embodiments. The set index326selects the set of entries on each allocation or access, e.g., load/store operation. In the embodiment ofFIG.3, each entry of the L1 data cache103is structured as the entry201ofFIG.2, having cache line data202, a tag204, a status206, a hashed tag208, and a dPAP209. The data array336holds the cache line data202associated with each of the entries201of the L1 data cache103. The tag array332holds the tag204associated with each of the entries201of the L1 data cache103. The hashed tag array334, also referred to as a hashed address directory334, holds the hashed tag208and dPAP209associated with each of the entries201of the L1 data cache103. In one embodiment, the status206of each entry is also stored in the tag array332, whereas in another embodiment the L1 data cache103includes a separate memory array for storing the status206of the entries. Although in the embodiment ofFIG.3the data array336and tag array332are separate, other embodiments are contemplated in which the data and tag (and status) reside in the same memory array. The tag hash logic312hashes the tag322portion of the virtual load/store address321to generate the hashed tag324. That is, the tag322is an input to a hash function performed by tag hash logic312that outputs the hashed tag324. The hash function performs a logical and/or arithmetic operation on its input bits to generate output bits. For example, in one embodiment, the hash function is a logical exclusive-OR on at least a portion of the tag322bits. The number of output bits of the hash function is the size of the hashed tag324and the hashed tag field208of the data cache entry201. The hashed tag324is provided as an input to the hashed tag array334for writing into the hashed tag208of the selected entry201of the hashed tag array334, e.g., during an allocation. Similarly, a dPAP323obtained from the L2 cache107during an allocation (as described with respect toFIG.7) are written into the dPAP209of the selected entry201of the hashed tag array334during an allocation. The set index326selects the set of entries of the hashed tag array334. In the case of an allocation, the hashed tag324and dPAP323are written into the hashed tag208and dPAP209of the entry201of the way selected by an allocate way input308of the selected set. In the case of an access, comparator348compares the hashed tag324with each of the hashed tags208of the selected set. If there is a valid match, the early miss signal328is false and the way select341indicates the matching way; otherwise, the early miss signal328is true. The dPAP323stored in the dPAP field202of the L1 entry201is used to process a snoop request to attain cache coherency, as described in more detail with respect toFIGS.6through12. 
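A minimal sketch of the hashed-tag lookup just described, assuming a 16-bit hashed tag and an exclusive-OR fold of the virtual tag bits (the text names an exclusive-OR of at least a portion of the tag bits only as one example hash function); the helper names below are hypothetical:

    #include <cstdint>
    #include <optional>

    struct HashedTagEntry { bool valid; uint16_t hashed_tag; };  // per-way slice of the hashed tag array 334

    // Hypothetical XOR-fold of the virtual tag bits down to 16 bits.
    inline uint16_t hash_tag(uint64_t vtag) {
        uint16_t h = 0;
        while (vtag) { h ^= static_cast<uint16_t>(vtag & 0xFFFF); vtag >>= 16; }
        return h;
    }

    struct EarlyLookup {
        bool early_miss;              // predicted miss (328): always correct when true
        std::optional<unsigned> way;  // early way select (341) when a hashed tag matched
    };

    // Compare the hashed tag 324 against the hashed tags 208 of the four ways of
    // the selected set (comparator 348). No hash match proves a miss; a hash match
    // only predicts a hit, which the full tag compare (comparator 344) later confirms.
    inline EarlyLookup early_lookup(const HashedTagEntry (&set)[4], uint16_t hashed_tag) {
        for (unsigned w = 0; w < 4; ++w) {
            if (set[w].valid && set[w].hashed_tag == hashed_tag) return {false, w};
        }
        return {true, std::nullopt};
    }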
Because the hashed tag324and the hashed tags208are small (e.g., 16 bits as an illustrative example) relative to the tag322and tags204(e.g., 54 bits as an illustrative example), the comparison performed by comparator348may be faster than the comparison performed by comparator344(described more below), for example. Therefore, the way select341may be signaled by an earlier stage in the L1 data cache103pipeline than an embodiment that relies on a comparison of the tags204of the tag array332to generate a way select. This may be advantageous because it may shorten the time to data out227. Additionally, the early miss prediction328may be signaled by an earlier stage than the stage that signals the hit indicator352. This may be advantageous because it may enable a cache line fill requestor (not shown) to generate a cache line fill request to fill a missing cache line earlier than an embodiment that would rely on a comparison of the tags204in the tag array332to detect a miss. Thus, the hashed tag array334may enable a high performance, high frequency design of the processor100. It is noted that due to the nature of the hashed tag324, if the early miss indicator328indicates a false value, i.e., indicates a hit, the hit indication may be incorrect, i.e., the hit indicator352may subsequently indicate a false value, i.e., a miss. Thus, the early miss indicator328is a prediction, not necessarily a correct miss indicator. This is because differing tag322values may hash to the same value. However, if the early miss indicator328indicates a true value, i.e., indicates a miss, the miss indication is correct, i.e., the hit indicator352will also indicate a miss, i.e., will indicate a false value. This is because if two hash results are not equal (assuming they were hashed using the same hash algorithm), then they could not have been generated from equal inputs, i.e., matching inputs. The tag322is provided as an input to the tag array332for writing into the tag204field of the selected entry of the tag array332, e.g., during an allocation. The set index326selects the set of entries of the tag array332. In the case of an allocation, the tag322is written into the tag204of the entry of the way selected by the allocate way input308of the selected set. In the case of an access (e.g., a load/store operation), the mux342selects the tag204of the way selected by the early way select341, and the comparator344compares the tag322with the tag204of the selected set. If there is a valid match, the hit signal352is true; otherwise, the hit signal352is false. In one embodiment, the cache line fill requestor advantageously uses the early miss prediction328provided by the hashed tag array334in order to generate a fill request as soon as possible, rather than waiting for the hit signal352. However, in embodiments of the LSU117that employ the L1 data cache103ofFIG.3, the cache line fill requestor is also configured to examine both the early miss prediction328and the hit indicator352, detect an instance in which the early miss prediction328predicted a false hit, and generate a fill request accordingly. The data array336receives the data in input325for writing into the cache line data202field of the selected entry of the data array336, e.g., during a cache line allocation or a store commit operation. The set index326selects the set of entries of the data array336. 
In the case of an allocation, the way of the selected set is selected by the allocate way input308, and in the case of a memory access operation (e.g., load/store operation) the way is selected by the way select signal341. In the case of a read operation (e.g., load operation), the mux346receives the cache line data202of all four ways and selects one of the ways based on the way select signal341, and the cache line data202selected by the mux346is provided on the data out output227. FIG.4is an example block diagram of a cache entry401of L2 cache107ofFIG.1that employs PAPs to accomplish cache coherence in accordance with embodiments of the present disclosure. The L2 cache entry401is used in the physically-indexed physically-tagged L2 cache107embodiment ofFIG.5described in more detail below. That is, the tag field404holds a physical address tag, rather than a virtual address tag. Also, the cache entry401ofFIG.4does not include a hashed tag field208nor a dPAP field209as inFIG.2. Otherwise, the cache entry401ofFIG.4is similar in many respects to the cache entry201ofFIG.2, e.g., the status field406is similar to the status field206ofFIG.2. FIG.5is an example block diagram illustrating the L2 cache107ofFIG.1that employs PAPs to accomplish cache coherence in accordance with embodiments of the present disclosure. The DTLB141ofFIG.1receives the virtual load/store address321ofFIG.2and provides to the L2 cache107a physical memory line address PA[51:6]521that is the translation of the virtual load/store address321. More specifically, physical memory line address521bits PA[51:12] are translated from the virtual load/store address321bits [63:12]. The physical memory line address521comprises a tag522portion and a set index526portion. In some respects, the L2 cache107ofFIG.5is similar and operates similarly to the L1 data cache103ofFIG.3in that it analogously includes a tag array532, a data array536, a comparator544, a multiplexer546, an allocate way input508for allocating an entry into the L2 cache107, and a data in input525for writing data into the L2 cache107. However, the L2 cache107does not analogously include the tag hash logic312, hashed tag array334, comparator348, nor multiplexer342ofFIG.3. The L2 cache107is physically-indexed and physically-tagged. That is, tag522is the tag portion (e.g., bits [51:17]) of the physical memory line address521, and the set index526is the index portion (e.g., bits [16:6]) of the physical memory line address521. Finally, the comparator544compares the tag522with the tag404of all ways of the selected set. If there is a valid match, the hit signal552is true and a way select signal542, which indicates the matching way, is provided to mux546; otherwise, the hit signal552is false. As described herein, a cache line of memory associated with a physical memory line address can only reside in one entry401of the L2 cache107, and a PAP points to the one entry401of the L2 cache107that holds the copy of the cache line associated with the physical memory line address for the which the PAP is a proxy. FIG.6is an example block diagram of a cache subsystem600that employs PAPs to accomplish cache coherence in accordance with embodiments of the present disclosure. The cache subsystem600includes the L2 cache107ofFIG.5that includes entries401ofFIG.4and the L1 data cache103ofFIG.3that includes entries201ofFIG.2. 
The cache subsystem600has an inclusive allocation policy such that each cache line of memory allocated into the L1 data cache103is also allocated into the L2 cache107, and when the L2 cache107evicts the cache line, the L2 cache107also causes the L1 data cache103to evict the cache line. Because the L2 cache107is a physically-indexed physically-tagged cache, a cache line of memory may reside only in a single entry of the L2 cache107. As described herein, each valid L1 entry201of the L1 data cache103includes a field, referred to as the dPAP209ofFIG.2. The dPAP209, along with relevant bits of the L1 set index used to select the set of the L1 data cache103that includes the L1 entry201, points to the entry401of the L2 cache107that holds a copy of the cache line of memory allocated into the L1 entry201. The dPAP209along with the relevant bits of the L1 set index are referred to herein as the physical address proxy (PAP)699ofFIG.6, which may be considered a forward pointer to the L2 cache107that holds a copy of the cache line of memory allocated into the L1 entry201. The PAP699is used to accomplish cache coherency in a more efficient manner, both in terms of timing and storage space, than using a full physical memory line address to accomplish cache coherency, as described herein. The inclusive allocation policy is further described with respect toFIG.7. In the embodiment ofFIG.6, the L2 cache107is a 512 KB 4-way set associative cache memory whose entries each store a 64-byte cache line. Thus, the L2 cache107includes an 11-bit L2 set index602that receives physical address bits PA[16:6] to select one of 2048 sets. However, other embodiments are contemplated in which the L2 cache107has a different cache line size, different set associativity, and different size. In the embodiment ofFIG.6, the L1 data cache103is a 64 KB 4-way set associative cache memory whose entries each store a 64-byte cache line. Thus, the L1 data cache103includes an 8-bit L1 set index612to select one of 256 sets. However, other embodiments are contemplated in which the L1 data cache103has a different cache line size, different set associativity, and different size. In the embodiment ofFIG.6, the lower six bits [5:0] of the L1 set index612receive physical address bits PA[11:6]. The upper two bits [7:6] are described in more detail below. In particular, in the example ofFIG.6, the lower six bits [5:0] of the L1 set index612correspond to untranslated virtual address bits VA[11:6] that are mathematically equivalent to untranslated physical address bits PA[11:6] which correspond to the lower six bits [5:0] of the L2 set index602. FIG.6illustrates aspects of processing of a snoop request601by the cache subsystem600, which is also described inFIG.8, to ensure cache coherency between the L2 cache107, L1 data cache103and other caches of a system that includes the core100ofFIG.1, such as a multi-processor or multi-core system. The snoop request601specifies a physical memory line address PA[51:6], of which PA[16:6] correspond to the L2 set index602to select a set of the L2 cache107. Comparators604compare a tag portion603of the snoop request601against the four tags605of the selected set. The tag portion603corresponds to physical address bits PA[51:17]. Each of the four tags605is tag404ofFIG.4, which is the physical address bits PA[51:17] stored during an allocation into the L2 cache107. 
If there is a tag match of a valid entry401, the hit entry401is indicated on an L2 way number606, which is preferably a two-bit value encoded to indicate one of four ways, which is provided to snoop forwarding logic607. The snoop forwarding logic607forwards the snoop request601to the L1 data cache103as forwarded snoop request611. The forwarded snoop request611is similar to the snoop request601except that the physical memory line address PA[51:6] is replaced with the PAP699. The PAP699points to the snoop request601hit entry401in the L2 cache107. That is, the PAP699is the physical address bits PA[16:6] that select the set of the L2 cache107that contains the hit entry401combined with the L2 way number606of the hit entry401. The PAP699is significantly fewer bits than the physical memory line address PA[51:6], which may provide significant advantages such as improved timing and reduced storage requirements, as described in more detail below. In the embodiment ofFIG.6, the PAP699is thirteen bits, whereas the physical memory line address is 46 bits, for a saving of 33 bits per entry of the L1 data cache103, although other embodiments are contemplated in which the different bit savings are enjoyed. In the embodiment ofFIG.6, the untranslated address bits PA[11:6] are used as the lower six bits [5:0] of the L1 set index612. During a snoop request, the upper two bits [7:6] of the L1 set index612are generated by the L1 data cache103. More specifically, for the upper two bits [7:6] of the L1 set index612, the L1 data cache103generates all four possible combinations of the two bits. Thus, four sets of the L1 data cache103are selected in the embodiment ofFIG.6. The upper two bits [7:6] of the L1 set index612for processing of the forwarded snoop request611correspond to virtual address bits VA[13:12] of a load/store address during an allocation or lookup operation. Comparators614compare a dPAP613portion of the PAP699of the forwarded snoop request611against the dPAPs209of each entry201of each way of each of the four selected sets of the L1 data cache103. In the embodiment ofFIG.6, sixteen dPAPs209are compared. The dPAP613portion of the PAP699is physical address bits PA[16:12] used to select the set of the L2 cache107that contains the hit entry401combined with the L2 way number606of the hit entry401. The sixteen dPAPs209are the dPAPs209of the sixteen selected entries201. If there is a dPAP match of one or more valid entries201, the hit entries201are indicated on an L1 hit indicator616, received by control logic617, that specifies each way of each set having a hit entry201. Because the L1 data cache103is a virtually-indexed virtually-tagged cache, it may be holding multiple copies of the cache line being snooped and may therefore detect multiple snoop hits. In one embodiment, the L1 hit indicator616comprises a 16-bit vector. The control logic617uses the L1 hit indicator616to reply to the L2 cache107, e.g., to indicate a miss or to perform an invalidation of each hit entry201, as well as a write back of any modified cache lines to memory. In one embodiment, the multiple sets (e.g., four sets in the embodiment ofFIG.6) are selected in a time sequential fashion as are the tag comparisons performed by the comparators614. 
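Setting aside for a moment the sequential-selection variant just mentioned, the composition of the PAP699and its dPAP613portion can be illustrated with a short sketch, assuming the FIG.6geometry (4-way, 2048-set L2, so the PAP is the 2-bit L2 way plus the 11-bit L2 set index PA[16:6], and the dPAP drops the PA[11:6] bits reused by the L1 set index); the packing order and helper names are assumptions for illustration only:

    #include <cstdint>

    // Assumed FIG. 6 geometry: L2 is 512 KB, 4-way, 64-byte lines => 2048 sets,
    // L2 set index = PA[16:6]; the L1 reuses PA[11:6] as its low set-index bits.
    struct PAP {                // 13-bit physical address proxy (699)
        uint8_t  l2_way;        // 2 bits: way of the hit/allocated L2 entry (606)
        uint16_t l2_set_index;  // 11 bits: PA[16:6] (602)
    };

    struct DPAP {               // 7-bit diminutive PAP stored in the L1 entry (209)
        uint8_t l2_way;         // 2 bits
        uint8_t pa_16_12;       // 5 bits: L2 set-index bits not reused by the L1 set index
    };

    inline PAP make_pap(uint64_t pa, unsigned l2_way) {
        return PAP{static_cast<uint8_t>(l2_way),
                   static_cast<uint16_t>((pa >> 6) & 0x7FF)};    // PA[16:6]
    }

    inline DPAP dpap_from_pap(const PAP& pap) {
        return DPAP{pap.l2_way,
                    static_cast<uint8_t>(pap.l2_set_index >> 6)};  // drop PA[11:6]
    }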
For example, rather than having four set index inputs612as shown inFIG.6, the L1 data cache103may have a single set index input612, and each of the four L1 set index values corresponding to the four different possible values of the two VA[13:12] bits are used to access the L1 data cache103in a sequential fashion, e.g., over four different clock cycles, e.g., in a pipelined fashion. Such an embodiment may have the advantage of less complex hardware in exchange for potentially reduced performance. The smaller PAP (i.e., smaller than the physical memory line address PA[51:6]), as well as even smaller dPAPs, may improve timing because the comparisons that need to be performed (e.g., by comparators614) are considerably smaller than conventional comparisons. To illustrate, assume a conventional processor whose first-level data cache stores and compares physical address tags, e.g., approximately forty bits. In contrast, the comparisons of dPAPs may be much smaller, e.g., seven bits in the embodiment ofFIG.6. Thus, the comparisons made by the comparators614of the embodiment ofFIG.6may be approximately an order of magnitude smaller and therefore much faster than a conventional processor, which may improve the cycle time for a processor that compares dPAPs rather than full physical addresses. Second, there may be a significant area savings due to less logic, e.g., smaller comparators, and less storage elements, e.g., seven bits to store a dPAP in an L1 cache entry201rather than a large physical address tag. Still further, the much smaller dPAP comparisons may be sufficiently faster and smaller to make feasible an embodiment in which the comparisons of the ways of multiple selected sets are performed in parallel (e.g., sixteen parallel comparisons in the embodiment ofFIG.6). Finally, the smaller PAPs may further improve timing and area savings in other portions of the core100in which PAPs may be used in place of physical memory line addresses for other purposes, such as in entries of the load/store queue125for making decisions whether to perform a speculative store-to-load forward operation and for performing store-to-load forwarding violation checking at load/store commit time, or in entries of the write combine buffer109to determine whether store data of multiple store operations may be combined in an entry of the write combine buffer109. FIG.7is an example flowchart illustrating operation of the cache subsystem600ofFIG.6to process a miss in the L1 data cache103in furtherance of an inclusive cache policy in accordance with embodiments of the present disclosure. Operation begins at block702. At block702, a virtual address (e.g., VA321ofFIG.2of a load/store operation) misses in the L1 data cache103. In response, the cache subsystem600generates a cache line fill request to the L2 cache107. The fill request specifies a physical address that is a translation of the missing virtual address obtained from the DTLB141ofFIG.1, which obtains the physical address from the TWE145ofFIG.1if the physical address is missing in the DTLB141. Operation proceeds to block704. At block704, the L2 cache107looks up the physical address to obtain the requested cache line that has been allocated into the L2 cache107. (If the physical address is missing, the L2 cache107fetches the cache line at the physical address from memory (or from another cache memory higher in the cache hierarchy) and allocates the physical address into an entry401of the L2 cache107.) 
The L2 cache107then returns a copy of the cache line to the L1 data cache103as well as the dPAP (e.g., dPAP323ofFIG.3) of the entry401of the L2 cache107into which the cache line is allocated. The L1 data cache103writes the returned cache line and dPAP into the respective cache line data202and dPAP209ofFIG.2of the allocated entry201. Operation proceeds to block706. At block706, at some time later, when the L2 cache107subsequently evicts its copy of the cache line (e.g., in response to a snoop request or when the L2 cache107decides to replace the entry401and allocate it to a different physical address), the L2 cache107also causes the L1 data cache103to evict its copy of the cache line. Thus, in the manner ofFIG.7, the L2 cache107is inclusive of the L1 data cache103. Stated alternatively, as long as the cache line remains in the L1 data cache103, the L2 cache107also keeps its copy of the cache line. FIG.8is an example flowchart illustrating operation of the cache subsystem600ofFIG.6to process a snoop request in accordance with embodiments of the present disclosure. Operation begins at block802. At block802, a physically-indexed physically-tagged set associative L2 cache (e.g., L2 cache107ofFIG.6) that is inclusive of a lower-level data cache (e.g., L1 data cache103ofFIG.6) receives a snoop request (e.g., snoop request601) that specifies a physical memory line address. Operation proceeds to block804. At block804, the L2 cache107determines whether the physical memory line address hits in any of its entries401. If so, operation proceeds to block806; otherwise, operation proceeds to block805at which the L2 cache107does not forward the snoop request to the L1 data cache103. At block806, the snoop request is forwarded to the L1 data cache103, e.g., as a forwarded snoop request (e.g., forwarded snoop request611). The forwarded snoop request replaces the physical memory line address of the original snoop request (e.g., PA[51:6] ofFIG.6) with the PAP (e.g., PAP699ofFIG.6) of the entry401of the L2 cache107that was hit, i.e., the way number (e.g., L2 way606ofFIG.6) and the set index (e.g., L2 set index602ofFIG.6) that together point to the hit entry401of the L2 cache107. Operation proceeds to block808. At block808, the L1 data cache103uses N bits of the PAP (e.g., N=6 untranslated address bits such as PA[11:6] ofFIG.6) as lower set index bits to select one or more (S) sets of the L1 data cache103. As described above with respect toFIG.6, for the upper bits of the set index (e.g., two upper bits inFIG.6), the L1 data cache103generates all possible combinations of the upper bits. The upper bits correspond to translated virtual address bits that are used to allocate into the L1 data cache103, e.g., during a load/store operation (e.g., VA [13:12]321ofFIG.3). The L1 data cache103also uses the remaining bits of the PAP (i.e., not used in the L1 set index), which is the dPAP613portion of the PAP699ofFIG.6, to compare against the dPAPs209stored in each valid entry201of the selected sets to determine whether any snoop hits occurred in the L1 data cache103in response to the forwarded snoop request (e.g., as indicated on L1hit indicator616ofFIG.6). To process the forwarded snoop request, the L1 data cache103also performs an invalidation of each hit entry201as well as a write back of any modified cache lines to memory. FIG.9is an example block diagram of a cache subsystem900that employs PAPs to accomplish cache coherence in accordance with embodiments of the present disclosure. 
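Before turning to the FIG.9variant, the snoop flow of FIG.8(blocks802through808) for the FIG.6arrangement might be sketched as follows; the cache-model types and helper names are hypothetical, and only the flow follows the description above:

    #include <cstdint>
    #include <vector>

    struct ForwardedSnoop {     // forwarded snoop request 611 carries a PAP, not PA[51:6]
        uint8_t  l2_way;        // 2-bit way of the hit L2 entry (606)
        uint16_t l2_set_index;  // PA[16:6] (602)
    };

    struct L1SnoopEntry { bool valid; uint8_t dpap; };  // dpap packed as {L2 way, PA[16:12]}

    // Block 808: use PA[11:6] of the PAP as the low L1 set-index bits, generate all
    // four combinations of the upper bits (corresponding to VA[13:12]), and compare
    // the dPAP of every way of the four selected sets against the dPAP portion of
    // the forwarded PAP. Hits are then invalidated and written back if modified.
    inline std::vector<unsigned> l1_snoop_hits(const L1SnoopEntry (&l1)[256][4],
                                               const ForwardedSnoop& s) {
        uint8_t low6 = s.l2_set_index & 0x3F;                            // PA[11:6]
        uint8_t dpap = static_cast<uint8_t>((s.l2_way << 5) | (s.l2_set_index >> 6));
        std::vector<unsigned> hits;                                      // encoded {set, way}
        for (uint8_t upper = 0; upper < 4; ++upper) {                    // VA[13:12] combinations
            unsigned set = (upper << 6) | low6;
            for (unsigned way = 0; way < 4; ++way) {
                if (l1[set][way].valid && l1[set][way].dpap == dpap) {
                    hits.push_back(set * 4 + way);
                }
            }
        }
        return hits;
    }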
The cache subsystem900ofFIG.9is similar in many respects to the cache subsystem600ofFIG.6. However, in the cache subsystem900ofFIG.9, to process the forwarded snoop request611, a single set of the L1 data cache103is selected rather than multiple sets. More specifically, the L1 data cache103uses untranslated bits (e.g., PA[11:6]) of the PAP699of the forwarded snoop request611that correspond to all bits of the L1 set index912to select a single set; the dPAP613is then used by comparators614to compare with the dPAPs209stored in each of the four ways of the single selected set to determine whether any snoop hits occurred in entries201of the L1 data cache103in response to the forwarded snoop request as indicated on L1hit indicator916, as described in block1008ofFIG.10in which operation flows to block1008from block806ofFIG.8(rather than to block808). In one embodiment, the L1 hit indicator616comprises a 4-bit vector. The embodiment ofFIG.9may be employed when the L1 data cache103is sufficiently small and its cache lines size and set associative arrangement are such that the number of set index bits912are less than or equal to the number of untranslated address bits (excluding the cache line offset bits) such that corresponding bits of the L1 and L2 set indices correspond to untranslated address bits of the L1 data cache103virtual address321and the L2 cache107physical memory line address521such that a single set of the L1 data cache103may be selected to process a snoop request. For example, in the embodiment ofFIG.9, the L1 data cache103is a 16 KB cache memory having 4 ways that each store a 64-byte cache line; therefore, the L1 data cache103has 64 sets requiring a set index912of 6 bits that correspond to untranslated virtual address bits VA[11:6] that are mathematically equivalent to untranslated physical address bits PA[11:6] that correspond to the lower 6 bits of the L2 set index602. FIG.11is an example block diagram of a cache subsystem1100that employs PAPs to accomplish cache coherence in accordance with embodiments of the present disclosure. The cache subsystem1100ofFIG.11is similar in many respects to the cache subsystem600ofFIG.6. However, in the cache subsystem1100ofFIG.11, all bits of the PAP699are used as the dPAP1113for processing snoop requests. More specifically, the dPAP209stored in an allocated entry of the L1 data cache103(e.g., at block704ofFIG.7) is the full PAP, no bits of the PAP699are used in the L1 set index1112to select sets to process a forwarded snoop request611, and all bits of the PAP699provided by the forwarded snoop request611, i.e., the dPAP1113, are used by comparators614to compare with the dPAP209stored in the entries201of the L1 data cache103. That is, in the embodiment ofFIG.11, the dPAP and the PAP are equivalent. Furthermore, in the embodiment ofFIG.11, all bits of the PAP stored in the dPAP field209ofFIG.2of all sets of the L1 data cache103are compared by comparators614with the dPAP1113, which is the PAP699of the forwarded snoop request611, and the L1hit indicator1116specifies the hit entries201, as described in block1208ofFIG.12in which operation flows to block1208from block806ofFIG.8(rather than to block808). In one embodiment, the L1 hit indicator1116comprises a 1024-bit vector. 
The embodiment ofFIG.11may be employed when the address bits that correspond to the set index326used to access the L1 data cache103during an allocation operation (e.g., load/store operation) are not mathematically equivalent to the address bits that correspond to the set index526used to access the L2 cache107. For example, the address bits that correspond to the set index326used to access the L1 data cache103during an allocation operation may be virtual address bits and/or a hash of virtual address bits or other bits such as a translation context of the load/store operation. The embodiments described herein may enjoy the following advantages. First, the use of PAPs may improve timing because the comparisons that need to be performed are considerably smaller than conventional comparisons. To illustrate, assume a conventional processor that compares physical memory line address tags, e.g., on the order of forty bits. In contrast, the comparisons of PAPs or diminutive PAPs may be much smaller, e.g., single-digit number of bits. Thus, the comparisons may be much smaller and therefore much faster, which may improve the cycle time for a processor that compares PAPs or diminutive PAPs rather than physical cache line address tags. Second, there may be a significant area savings due to less logic, e.g., smaller comparators, and less storage elements, e.g., fewer bits to store a PAP or diminutive PAP rather than a physical memory line address in a cache entry, load/store queue entry, write combine buffer, etc. Store-to-Load Forwarding Using PAPs Embodiments are now described in which PAPs are used to make determinations related to store-to-load forwarding. Store-to-load forwarding refers to an operation performed by processors to increase performance and generally may be described as follows. Typically, when a load instruction is executed, the load unit looks up the load address in the cache, and if a hit occurs the cache data is provided to the load instruction. However, there may be an outstanding store instruction that is older than the load instruction and that has not yet written the store data to the cache for the same memory address as the load address. In this situation, if the cache data is provided to the load instruction it would be stale data. That is, the load instruction would be receiving the wrong data. One solution to solving this problem is to wait to execute the load instruction until all older store instructions have written their data to the cache. However, a higher performance solution is to hold the store data of outstanding store instructions (i.e., that have not yet written their store data into the cache) in a separate structure, typically referred to as a store queue. During execution of the load instruction the store queue is checked to see if the load data requested by the load instruction is present in the store queue. If so, the store data in the store queue is “forwarded” to the load instruction rather than the stale cache data. Load and store instructions specify virtual load and store addresses. If forwarding is performed without comparing physical load and store addresses, i.e., forwarding based solely on virtual address comparisons, the forwarded store data may not be the correct requested load data since two different virtual addresses may be aliases of the same physical address. However, there are reasons to avoid comparing physical addresses for store-to-load forwarding purposes. 
First, the physical addresses are large and would require a significant amount of additional storage space per entry of the store queue. Second, timing is critical in high performance processors, and the logic to compare a large physical address is relatively slow. Historically, high performance processors speculatively perform store-to-load forwarding based on virtual address comparisons and use much fewer than the entire virtual addresses for fast comparisons, e.g., using only untranslated address bits of the virtual addresses. These high performance processors then perform checks later, either late in the execution pipeline or when the load instruction is ready to retire, to determine whether the incorrect data was forwarded to it. Third, even if the store physical addresses were held in the store queue, the load physical address is typically not available early in the load unit pipeline for use in comparing with the store physical addresses in the store queue thus resulting in a longer execution time of the load instruction, more specifically resulting in a longer load-to-use latency of the processor, which is highly undesirable with respect to processor performance. FIG.13is an example block diagram of a store queue (SQ) entry1301of the SQ125ofFIG.1that holds PAPs to accomplish store-to-load forwarding in accordance with embodiments of the present disclosure. The SQ entry1301includes store data1302, a store PAP1304, lower physical address bits PA[5:3]1306, a byte mask1308, and a valid bit1309. The valid bit1309is true if the SQ entry1301is valid, i.e., the SQ entry1301has been allocated to a store instruction and its fields are populated with valid information associated with the store instruction. The store data1302is the data that is specified by the store instruction to be stored to memory. The store data is obtained from the register file105specified by the store instruction. The population of the SQ entry1301is described in more detail below with respect toFIG.15. The store PAP1304is a physical address proxy for a store physical line address to which the store data1302is to be written. The store instruction specifies a store virtual address. The store physical line address is a translation of a portion of the store virtual address, namely upper address bits (e.g., bits12and above in the case of a 4 KB page size). As described above, when a cache line is brought into the L2 cache107from a physical line address, e.g., by a load or store instruction, the upper address bits of the load/store virtual address specified by the load/store instruction are translated into a load/store physical line address, e.g., by the MMU147ofFIG.1. The cache line is brought into, i.e., allocated into, an entry of the L2 cache107, which has a unique set index and way number, as described above. The store PAP1304specifies the set index and the way number of the entry in the L2 cache107into which the cache line was allocated, i.e., the cache line specified by the physical line address of the load/store instruction that brought the cache line into the L2 cache107, which physical line address corresponds to the store physical line address that is a translation of the upper bits of the store virtual address. The lower bits of the store virtual address (e.g., bits [11:0] in the case of a 4 KB page size) are untranslated address bits, i.e., the untranslated bits of the virtual and physical addresses are identical, as described above. 
The store physical address bits PA[5:3]1306correspond to the untranslated address bits [5:3] of the store virtual address. The store instruction also specifies a size of the store data to be written. In the example embodiment, the largest size of store data (and load data) is eight bytes. Hence, in the embodiment ofFIG.13, the size of the store data1302is up to eight bytes, and the store physical address bits PA[5:3]1306narrows down the location of the store data1302within a 64-byte cache line, for example. The store size and bits [2:0] of the store address may be used to generate the store byte mask1308that specifies, or encodes, which of the eight bytes are being written by the store instruction. Other embodiments are contemplated in which the bytes written by the store instruction are specified in a different manner, e.g., the size itself and bits [2:0] of the store address may be held in the SQ entry1301rather than the byte mask1308. Advantageously, each entry of the SQ125holds the store PAP1304rather than the full store physical line address, as described in more detail below. In the embodiment ofFIG.13, because in the example embodiment the L2 cache107is 4-way set associative, the store PAP1304specifies the 2 bits of the way number of the entry in the L2 cache107into which the cache line specified by the physical line address is allocated. Furthermore, in the embodiment ofFIG.13, because in the example embodiment the L2 cache107has 2048 sets, the store PAP1304specifies the eleven bits of the set index of the set of the entry in the L2 cache107into which the cache line specified by the physical line address is allocated, which corresponds to physical line address bits PA[16:6] in the embodiment. Thus, in the embodiment ofFIG.13, the store PAP1304is thirteen bits, in contrast to a full store physical line address, which may be approximately forty-six bits in some implementations, as described above, and in other implementations there may be more. Advantageously, a significant savings may be enjoyed both in terms of storage space within the SQ125and in terms of timing by providing the ability to compare PAPs rather than full physical line addresses when making store-to-load forwarding determinations, as described in more detail below. FIG.14is an example block diagram of portions of the processor100ofFIG.1used to perform store-to-load forwarding using PAPs in accordance with embodiments of the present disclosure. In the embodiment ofFIG.14, shown are the SQ125, portions of the L1 data cache103(hashed tag array334, tag hash logic312, and comparator348(and mux, not shown, that is controlled based on the result of the comparator348), e.g., ofFIG.3), byte mask logic1491, a mux1446, and forwarding decision logic1499. The byte mask logic1491, mux1446, and forwarding decision logic1499may be considered part of the LSU117ofFIG.1.FIG.14illustrates the processing of a load instruction to which store data may be forwarded from an entry of the SQ125. The load instruction specifies a load virtual address VA[63:0]321(e.g., ofFIG.3) and a load size1489. The byte mask logic1491uses the load VA321and load size1489to generate a load byte mask1493that specifies the eight or less bytes of load data to be read from within an eight-byte aligned memory address range. The load byte mask1493is provided to the forwarding decision logic1499. The load virtual address bits VA[5:3], which are untranslated and identical to the load physical address bits PA[5:3], are also provided to the forwarding decision logic1499. 
The load virtual address bits VA[11:6], which are untranslated and identical to the load physical address bits PA[11:6], are also provided to the forwarding decision logic1499. As described above, the set index326portion of the load VA321selects a set of the hashed tag array334, each way of the selected set is provided to comparator348, and the tag hash logic312uses the load VA321to generate a hashed tag324provided to comparator348for comparison with each of the selected hashed tags208(ofFIG.2). Assuming a valid match, the comparator348provides the dPAP209(ofFIG.2) of the valid matching entry201of the L1 data cache103, as described above. The dPAP209in conjunction with the load PA[11:6] bits form a load PAP1495. In the embodiment ofFIG.13, the load PAP1495specifies the set index and the way number of the entry in the L2 cache107into which the cache line was allocated, i.e., the cache line specified by the physical line address of the load/store instruction that brought the cache line into the L2 cache107, which physical line address corresponds to the load physical line address that is a translation of the upper bits of the load VA321. The load PAP1495is provided to the forwarding decision logic1499. If there is no valid match, then there is no load PAP available for comparison with the store PAP1304and therefore no store-to-load forwarding may be performed, and there is no valid L1 data out327; hence, a cache line fill request is generated, and the load instruction is replayed when the requested cache line and dPAP are returned by the L2 cache107and written into the L1 data cache103. The SQ125provides a selected SQ entry1399. The selected SQ entry1399may be selected in different manners according to different embodiments, e.g., according to the embodiments ofFIGS.18and19. The store data1302of the selected SQ entry1399is provided to mux1446, which also receives the output data of the hitting entry of the L1 data cache103, i.e., L1 data out327, e.g., ofFIG.3. In the case of a hit in the L1 data cache103, a control signal forward1497generated by the forwarding decision logic1499controls mux1446to select either the store data1302from the selected SQ entry1399or the L1 data out327. The store PAP1304, store PA[5:3] bits1306, store byte mask1308and store valid bit1309of the selected SQ entry1399are provided to the forwarding decision logic1499. The forwarding decision logic1499determines whether the store data1302of the selected SQ entry1399overlaps the load data requested by the load instruction. More specifically, the SQ entry selection and forwarding decision logic1499generates a true value on the forward signal1497to control the mux1446to select the store data1302if the store valid bit1309is true, the load PAP1495matches the store PAP1304, the load PA[5:3] matches the store PA[5:3]1306, and the load byte mask1493and the store byte mask1308indicate the store data overlaps the requested load data, i.e., the requested load data is included in the valid bytes of the store data1302of the selected SQ entry1399; otherwise, the forwarding decision logic1499generates a false value on the forward signal1497to control the mux1446to select the L1 data out327. 
Stated alternatively, the store data overlaps the requested load data and may be forwarded if the following conditions are met: (1) the selected SQ entry1399is valid; (2) the load physical address and the store physical address specify the same N-byte-aligned quantum of memory, where N is the width of the store data field1302in a SQ entry1301(e.g., N=8 bytes wide), e.g., the load PAP1495matches the store PAP1304and the load PA[5:3] matches the store PA[5:3]1306; and (3) the valid bytes of the store data1302of the selected SQ entry1399as indicated by the store byte mask1308overlap the load data bytes requested by the load instruction as indicated by the load byte mask1493. To illustrate by example, assuming a valid selected SQ entry1399, a PAP match and a PA[5:3] match, assume the store byte mask1308is a binary value 00111100 and the load byte mask1493is a binary value 00110000; then the store data overlaps the requested load data and the store data will be forwarded. However, assume the load byte mask1493is a binary value 00000011; then the store data does not overlap the requested load data and the store data will not be forwarded; instead, the L1 data out327will be selected. An example of logic that may perform the byte mask comparison is logic that performs a Boolean AND of the load and store byte masks and then indicates overlap if the Boolean result equals the load byte mask. Other embodiments are contemplated in which the entry201of the L1 data cache103also holds other information such as permissions associated with the specified memory location so that the forwarding decision logic1499may also determine whether it is permissible to forward the store data to the load instruction. Although an embodiment is described in which the width of the store queue data field1302equals the largest possible size specified by a store instruction, other embodiments are contemplated in which the width of the store queue data field1302is greater than the largest possible size specified by a store instruction. Advantageously, the forwarding decision logic1499may compare the load PAP1495against the store PAP1304since they are proxies for the respective load physical line address and store physical line address, which alleviates the need for the forwarding decision logic1499to compare the load physical line address and store physical line address themselves. Comparing the PAPs may result in a significantly faster determination (reflected in the value of the forward control signal1497) of whether to forward the store data1302and may even improve the load-to-use latency of the processor100. Additionally, each SQ entry1301holds the store PAP1304rather than the store physical line address, and each L1 data cache103entry201holds the load PAP1495(or at least a portion of it, i.e., the dPAP209) rather than the load physical line address, which may result in a significant savings in terms of storage space in the processor100. Finally, unlike conventional approaches that, for example, make forwarding decisions based merely on partial address comparisons (e.g., of untranslated address bits and/or virtual address bits), the embodiments described herein effectively make a full physical address comparison using the PAPs.
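The three conditions above reduce to a small predicate. A minimal sketch, assuming 13-bit PAPs and 8-bit byte masks as in the FIG.13and FIG.14description (field and parameter names are hypothetical):

    #include <cstdint>

    // Illustrative shape of the SQ entry 1301 fields consulted by the forwarding
    // decision logic 1499 (the store data itself is omitted here).
    struct SqForwardFields {
        bool     valid;       // 1309
        uint16_t store_pap;   // 1304: 13-bit PAP = {L2 way, PA[16:6]}
        uint8_t  store_pa53;  // 1306: PA[5:3]
        uint8_t  store_mask;  // 1308: byte mask within the 8-byte-aligned quantum
    };

    // Forward (signal 1497) only if the entry is valid, the load and store name the
    // same 8-byte-aligned quantum (PAP match plus PA[5:3] match), and every byte the
    // load requests is supplied by the store: (store_mask & load_mask) == load_mask.
    inline bool should_forward(const SqForwardFields& sq,
                               uint16_t load_pap, uint8_t load_pa53, uint8_t load_mask) {
        return sq.valid &&
               sq.store_pap  == load_pap  &&
               sq.store_pa53 == load_pa53 &&
               (sq.store_mask & load_mask) == load_mask;
    }

Applied to the example above, a store byte mask of binary 00111100 satisfies the predicate for a load byte mask of 00110000 but not for 00000011.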
Further advantageously, the provision of the load PAP by the virtually-indexed virtually-tagged L1 data cache103may result in a faster determination of whether to forward the store data because the load PAP is available for comparison with the store PAP sooner than in a physically-accessed cache design in which the virtual load address is first looked up in a translation lookaside buffer. Still further, using the hashed tag array334to hold and provide the PAP for the load instruction may result in the load PAP being available for comparison with the store PAP sooner than if a full tag comparison is performed, again which may result in a faster determination of whether to forward the store data. Finally, a faster determination of whether to forward the store data may be obtained because the SQ125provides a single selected SQ entry1399which enables the load PAP to be compared against a single store PAP rather than having to perform a comparison of the load PAP with multiple store PAPs. These various speedups in the store forwarding determination may, either separately or in combination, improve the load-to-use latency of the processor100, which is an important parameter for processor performance. FIG.15is an example flowchart illustrating processing of a store instruction, e.g., by the processor100ofFIG.14, that includes writing a store PAP into a store queue entry in accordance with embodiments of the present disclosure. As described above, the L2 cache107is inclusive of the L1 data cache103such that when a cache line is brought into an entry of the L1 data cache103, the cache line is also brought into an entry of the L2 cache107(unless the cache line already resides in the L2 cache107). As described above, e.g., with respect toFIG.7, when the cache line is brought into the entry401of the L2 cache107, the dPAP209used to specify the allocated L2 entry401is written into the entry201allocated into the L1 data cache103. As described above, the dPAP209is the PAP that specifies the L2 entry401less any bits of the L2 set index of the PAP used in the set index of the L1 data cache103. Stated alternatively, the dPAP is the L2 way number of the L2 entry401along with any bits of the L2 set index of the entry401not used in the set index of the L1 data cache103. Operation begins at block1502. At block1502, the decode unit112ofFIG.1encounters a store instruction and allocates a SQ entry1301for the store instruction and dispatches the store instruction to the instruction schedulers121ofFIG.1. The store instruction specifies a register of the register file105ofFIG.1that holds the store data to be written to memory. The store instruction also specifies a store virtual address, e.g., store VA321ofFIG.3(the store VA321may include all 64 bits, i.e., including bits [5:0], even thoughFIG.3only indicates bits [63:6]) and a size of the data, e.g., one, two, four, or eight bytes. Operation proceeds to block1504. At block1504, the LSU117executes the store instruction. The store virtual address321hits in the L1 data cache103, at least eventually. If the store virtual address321initially misses in the L1 data cache103(e.g., at block702ofFIG.7), a cache line fill request will be generated to the L2 cache107, which involves the DTLB141translating the store virtual address321into a store physical address. 
A portion of the store physical address is the store physical line address, e.g., store PA[51:6] that is used in the lookup of the L2 cache107to obtain the requested cache line and, if missing in the L2 cache107(and missing in any other higher levels of the cache hierarchy, if present), used to access memory to obtain the cache line. The L2 cache107returns the cache line and the PAP that is a proxy for the store physical line address. More specifically, the PAP specifies the way number and set index that identifies the entry401of the L2 cache107that is inclusively holding the requested cache line. The dPAP portion of the PAP is written along with the cache line to the entry of the L1 data cache103allocated to the store instruction (e.g., at block704ofFIG.7). The store instruction is replayed when the requested cache line and dPAP are returned by the L2 cache107and written into the L1 data cache103. Upon replay, the store virtual address321hits in the L1 data cache103. The hitting entry201of the L1 data cache103provides the store dPAP209that is used along with untranslated bits of the store virtual address321(e.g., VA[11:6], which are identical to store physical address bits PA[11:6]) to form a store PAP that is a physical address proxy of the store physical line address, i.e., the store PAP points to the entry401of the L2 cache107that holds the copy of the cache line held in the entry201of the L1 data cache103hit by the store virtual address321. The store physical line address is the upper bits (e.g., [51:6]) of the store physical address. Operation proceeds to block1506. At block1506, the LSU117obtains the store data from the register file105and writes it into the store data field1302of the SQ entry1301allocated at block1502. The LSU117also forms the store PAP using the store dPAP209obtained from the L1 data cache103at block1504and lower untranslated address bits of the store virtual address321(e.g., store VA[11:6]). The LSU117then writes the store PAP into the store PAP field1304of the allocated SQ entry1301. Finally, the LSU117writes into the allocated SQ entry1301additional information that determines the store physical address and store data size, which in the embodiment ofFIGS.13and14includes writing store address bits [5:3] into the PA[5:3] field1306and writing a store byte mask into the byte mask field1308. The store byte mask indicates which bytes within an eight-byte-aligned quantum of memory are to be written with the store data, in an embodiment in which the store byte mask is eight bits. As described above, the SQ entry1301is configured to hold the store PAP1304rather than the full store physical line address, which advantageously may reduce the amount of storage needed in the SQ125. FIG.16is an example flowchart illustrating processing of a load instruction, e.g., by the processor100ofFIG.14, that includes using a load PAP and a store PAP from a store queue entry to decide whether to forward store data to the load instruction from the store queue entry in accordance with embodiments of the present disclosure. Operation begins at block1602. At block1602, a load instruction is issued to the LSU (e.g.,117). The LSU looks up the load virtual address (e.g.,321) in the L1 data cache (e.g.,103). In the embodiment ofFIG.14(andFIGS.18and19), the lookup includes looking up the load virtual address in the hashed tag array (e.g.,334). In the embodiment ofFIG.20, the lookup includes looking up the load virtual address in the tag array.
Similar to the manner described above with respect to block1504, the load virtual address eventually hits in the L1 data cache. The hit entry (e.g.,201) provides the dPAP (e.g.,209) for the load instruction. The load dPAP along with untranslated bits of the load virtual address (e.g., VA[11:6], which are identical to the load physical address PA[11:6]) are used to form the load PAP (e.g.,1495), e.g., as shown inFIG.14. Additionally, a load byte mask (e.g.,1493ofFIG.14) is generated (e.g., by byte mask logic1491ofFIG.14) from the load data size (e.g.,1489ofFIG.14) and the lowest address bits (e.g., VA[2:0], which are identical to the load physical address PA[2:0]), e.g., as shown inFIG.14. Operation proceeds to block1604. At block1604, the SQ125provides a selected SQ entry (e.g.,1399), which includes the store data (e.g.,1302), store PAP (e.g.,1304), store lower physical address bits (e.g., PA[5:3]), store byte mask (e.g.,1308), and store valid bit (e.g.,1309), e.g., as shown inFIG.14. As described with respect toFIG.14, the SQ entry may be selected in different manners according to different embodiments, e.g., according to the embodiments ofFIGS.18and19. Operation proceeds to block1606. At block1606, the store PAP and load PAP are used (e.g., by forwarding logic1499ofFIG.14)—along with additional information, e.g., the store lower address bits1306and load lower address bits (e.g., PA[5:3]) and store byte mask1308and load byte mask1493ofFIG.14—to determine whether to forward the store data (e.g.,1302) from the selected SQ entry to the load instruction or whether instead the cache data (e.g., L1 data out327) is provided to the load instruction. That is, the store PAP and load PAP and the additional information are used to determine whether the store data of the selected SQ entry overlaps the load data requested by the load instruction. If the store data of the selected SQ entry overlaps the requested load data, then the store data is forwarded; otherwise, the data out of the L1 data cache is provided for the load instruction. Embodiments described herein use the load and store PAPs as proxies for the load and store physical line addresses to determine that the load and store have the same physical line address, which is required for the store data to overlap the requested load data. In contrast, conventional designs may forego a full physical line address comparison because of timing delays (e.g., instead making forwarding decisions based merely on partial address comparisons, e.g., of untranslated address bits and/or virtual address bits), whereas the embodiments described herein effectively make a full physical address comparison using the PAPs, but at a smaller timing cost because of the smaller PAP comparisons. FIG.17is an example block diagram of a SQ entry1701of the SQ125ofFIG.1that holds PAPs to accomplish store-to-load forwarding in accordance with embodiments of the present disclosure. The SQ entry1701ofFIG.17is similar in many respects to the SQ entry1301ofFIG.13. However, the SQ entry1701ofFIG.17further includes a subset of virtual address bits1711. In the embodiment ofFIG.18, the subset of virtual address bits1711is written, along with the other information of the SQ entry1701according to the operation ofFIG.15. 
That is, during execution of the store instruction the LSU117writes a corresponding subset of bits of the store virtual address321to the subset of virtual address bits field1711of the allocated SQ entry1701, e.g., at block1506, for subsequent use as described below with respect toFIG.18. FIG.18is an example block diagram of portions of the processor100ofFIG.1used to perform store-to-load forwarding using PAPs in accordance with embodiments of the present disclosure. The embodiment ofFIG.18is similar in many respects to the embodiment ofFIG.14, except that each entry1701of the SQ125also includes the subset of virtual address bits1711ofFIG.17. Additionally, in the embodiment ofFIG.18, the selected SQ entry1399(described with respect toFIG.14) is selected using a subset of virtual address bits1801of the load virtual address321, as shown. That is, the subset of the load virtual address bits1801are compared with the subset of virtual address bits1711of each valid entry of the SQ125for matches. If no matches are found, then no store-to-load forwarding is performed. The SQ125receives an indicator that indicates which entries1701of the SQ125are associated with store instructions that are older than the load instruction. Using the indicator, if one or more matches are found that are older in program order than the load instruction, logic within the SQ125selects as the selected SQ entry1399the youngest in program order from among the older matching SQ entries1701. In one embodiment, the decode unit112, which dispatches instructions—including all load and store instructions—to the execution units114in program order, generates and provides to the SQ125, as the indicator, a SQ index1879for each load instruction which is the index into the SQ125of the SQ entry1701associated with the youngest store instruction that is older in program order than the load instruction. In an alternate embodiment, the index of the store instruction within the ROB122is held in each entry1701of the SQ125, and the index of the load instruction within the ROB122(rather than the SQ index1879) is provided to the SQ125, as the indicator, for use, in conjunction with the ROB indices of the SQ entries1701, in selecting the SQ entry1701associated with the matching youngest store instruction older in program order than the load instruction, i.e., selected SQ entry1399. The SQ125provides the selected SQ entry1399to the forwarding decision logic1499and to the mux1446, e.g., according to block1604ofFIG.16. That is,FIG.18describes an embodiment for selecting the selected SQ entry1399, i.e., using virtual address bits and the indicator, and otherwise operation proceeds according to the manner described with respect toFIGS.14and16, advantageously that the load and store PAPs, rather than full load and store physical line addresses, are used to determine whether the store data of the selected SQ entry1399overlaps the requested load data and may thus be forwarded. In an alternate embodiment, the load byte mask1493is provided to the SQ125(rather than to the forwarding decision logic1499), and the logic within the SQ125compares the load byte mask1493against the store byte mask1308of each valid SQ entry1701to determine whether there is overlap of the requested load data by the store data1302of SQ entries1701whose subsets of virtual address bits1711match the load subset of virtual address bits1801. That is, the logic within the SQ125additionally uses the byte mask compares to select the selected SQ entry1399. 
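A sketch of the FIG.18selection step under the stated assumptions: each valid SQ entry holds the subset of store virtual address bits1711, the indicator is the SQ index1879of the youngest store older than the load, and the SQ is modeled here as a circular queue whose head is its oldest valid entry (that modeling choice, and the names below, are illustrative assumptions):

    #include <cstdint>
    #include <optional>

    struct SqVaEntry {
        bool     valid;
        uint16_t va_subset;  // subset (or hash) of store virtual address bits (1711)
    };

    // Walk backward from the SQ entry of the youngest store older than the load
    // (index 1879) toward the oldest valid entry (head), returning the first entry
    // whose VA subset matches the load's subset (1801), i.e., the youngest older
    // matching store. Assumes at least one older store exists; indices wrap modulo
    // the SQ size. Helper names are hypothetical.
    inline std::optional<unsigned> select_sq_entry(const SqVaEntry* sq, unsigned sq_size,
                                                   unsigned head, unsigned youngest_older,
                                                   uint16_t load_va_subset) {
        unsigned idx = youngest_older;
        while (true) {
            if (sq[idx].valid && sq[idx].va_subset == load_va_subset) return idx;
            if (idx == head) break;                // oldest outstanding store reached
            idx = (idx + sq_size - 1) % sq_size;   // step to the next-older entry
        }
        return std::nullopt;                       // no match: no store-to-load forwarding
    }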
In one embodiment, the subset of virtual address bits1711may be a hash of bits of the store virtual address321of the store instruction to which the SQ entry1701is allocated, and the subset of load virtual address bits1801used to compare with each valid entry1701of the SQ125may be a hash of bits of the load virtual address321. FIG.19is an example block diagram of portions of the processor100ofFIG.1used to perform store-to-load forwarding using PAPs in accordance with embodiments of the present disclosure. The embodiment ofFIG.19is similar in many respects to the embodiment ofFIG.14, except that the embodiment ofFIG.19uses the memory dependence predictor (MDP)111ofFIG.1to provide a prediction of a store instruction from which to forward store data to the load instruction. In one embodiment, the MDP111receives an instruction pointer (IP)1901value of the load instruction, i.e., the address in memory from which the load instruction is fetched. In another embodiment, the MDP111receives information specifying other characteristics1901of the load instruction, such as a destination register of the store instruction or an addressing mode of the store instruction, i.e., a characteristic of the store instruction that may be used to distinguish the store instruction from other store instructions. The MDP111uses the received load instruction-specific information1901to generate a prediction of the store instruction from which store data should be forwarded to the load instruction. In the embodiment ofFIG.19, the prediction may be an index1903into the SQ125of the entry1301allocated to the predicted store instruction. The predicted SQ entry index1903is provided to the SQ125to select the selected SQ entry1399. The SQ125provides the selected SQ entry1399to the forwarding decision logic1499and to the mux1446, e.g., according to block1604ofFIG.16. That is,FIG.19describes an embodiment for selecting the selected SQ entry1399, i.e., using the MDP111, and otherwise operation proceeds according to the manner described with respect toFIGS.14and16, advantageously that the load and store PAPs, rather than full load and store physical line addresses, are used to determine whether the store data of the selected SQ entry1399overlaps the requested load data and may thus be forwarded. FIG.20is an example block diagram of portions of the processor100ofFIG.1used to perform store-to-load forwarding using PAPs in accordance with embodiments of the present disclosure. The embodiment ofFIG.20is similar in many respects to the embodiment ofFIG.14. However, the embodiment is absent a hashed tag array334. Instead, in the embodiment ofFIG.20, the tag array332holds the dPAPs209, and the tag322of the load VA321is compared with each of the selected tags204(ofFIG.2) to determine which dPAP209to provide for formation into the load PAP1495. Otherwise, operation proceeds according to the manner described with respect toFIGS.14and16, advantageously that the load and store PAPs, rather than full load and store physical line addresses, are used to determine whether the store data of the selected SQ entry1399overlaps the requested load data and may thus be forwarded. FIG.21is an example block diagram of portions of the processor100ofFIG.1used to perform store-to-load forwarding using PAPs in accordance with embodiments of the present disclosure. 
The embodiment ofFIG.21is similar in many respects to the embodiment ofFIG.14, except that rather than using the load PAP to compare with a store PAP of a single selected SQ entry1399to determine whether the store data of the single selected SQ entry1399overlaps with the requested load data as inFIGS.14through20, instead the load PAP is used to compare with the store PAP of all valid entries1301of the SQ125to select a SQ entry1301from which to forward store data to the load instruction. The embodiment ofFIG.21includes similar elements toFIG.14and additionally includes a SQ head/tail2177(i.e., the head and tail pointers that identify the set of valid SQ entries1301), candidate set identification logic2197, SQ entry selection logic2193, and a mux2189. The storage that stores all the SQ entries1301is also shown, the number of entries1301being denoted N inFIG.21. The mux2189receives the stores data1302of all N of the SQ entries1301and selects the store data indicated by a control signal2191generated by the SQ entry selection logic2193as described in more detail below. The candidate set identification logic2197receives all N SQ entries1301of the SQ125. The candidate set identification logic2197also receives the load PAP1495, the load lower address bits PA[5:3], and the load byte mask1493. The candidate set identification logic2197compares the load PAP1495and load lower address bits PA[5:3] and load byte mask1493with the respective store PAP1304and store lower address bits PA[5:3]1306and store byte mask1308of each of the N entries1301of the SQ125to generate a candidate set bit vector2195. The candidate set bit vector2195includes a bit for each of the N SQ entries1301. A bit of the bit vector2195associated with a SQ entry1301is true if its store PAP1304and store lower address bits PA[5:3]1306match the load PAP1495and load lower address bits PA[5:3] and the store byte mask1308overlaps the load byte mask1493. The SQ entry selection logic2193receives the candidate set bit vector2195, head and tail pointers2177of the SQ125, and the SQ index of the most recent store older than the load1879. Using the head and tail pointers2177of the SQ125and the SQ index of the most recent store older than the load1879, the SQ entry selection logic2193selects, and specifies on mux2189control signal2191, the SQ entry1301associated with the youngest store instruction in program order from among the SQ entries1301whose associated bit of the candidate set bit vector2195is true that is older in program order than the load instruction, if such a SQ entry1301exists. If such a SQ entry1301exists, the SQ entry selection logic2193generates the forward control signal1497to select the selected store data2102out of the mux1446; otherwise, the mux1446selects the L1 data out327. In an alternate embodiment, the index of the load instruction within the ROB122(rather than the SQ index1879) is provided, similar to the description with respect toFIG.18, for use by the SQ entry selection logic2193in generating the mux2189control signal2191to select the store data1302from the SQ entry1301associated with the youngest store instruction older in program order than the load instruction from among the SQ entries1301whose associated bit of the candidate set bit vector2195is true. 
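As an illustrative software model of the candidate-set approach just described (the field names, widths, and age bookkeeping are hypothetical simplifications of the hardware), the first routine below builds the candidate set bit vector by comparing the load PAP, PA[5:3], and byte mask against every valid SQ entry, and the second selects the youngest candidate that is older than the load.

#include <stdint.h>
#include <stdbool.h>

#define N_SQ 16

typedef struct {
    uint16_t pap;        /* store PAP (proxy for the store physical line address) */
    uint8_t  pa_5_3;     /* store physical address bits [5:3]                     */
    uint8_t  byte_mask;  /* which of the 8 bytes of the store word are written    */
    bool     valid;
} sq_ent_t;

/* Byte mask from an access size (1, 2, 4, or 8 bytes) and address bits [2:0]. */
static uint8_t make_byte_mask(unsigned size, unsigned addr_2_0)
{
    return (uint8_t)(((1u << size) - 1u) << addr_2_0);
}

/* Build the candidate set: one bit per SQ entry whose store data overlaps
 * the load data, judged by PAP, PA[5:3], and byte-mask comparisons. */
static uint32_t candidate_set(const sq_ent_t sq[N_SQ], uint16_t ld_pap,
                              uint8_t ld_pa_5_3, uint8_t ld_mask)
{
    uint32_t vec = 0;
    for (int i = 0; i < N_SQ; i++)
        if (sq[i].valid && sq[i].pap == ld_pap && sq[i].pa_5_3 == ld_pa_5_3 &&
            (sq[i].byte_mask & ld_mask) != 0)
            vec |= 1u << i;
    return vec;
}

/* Pick the youngest candidate older than the load; 'older' flags such entries
 * and youngest_first[] lists entry indices from youngest to oldest. */
static int select_forwarding_entry(uint32_t candidates, uint32_t older,
                                   const int youngest_first[N_SQ])
{
    uint32_t eligible = candidates & older;
    for (int k = 0; k < N_SQ; k++)
        if (eligible & (1u << youngest_first[k]))
            return youngest_first[k];
    return -1;   /* no eligible entry: the load data comes from the L1 data cache */
}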
FIG.22is an example flowchart illustrating processing of a load instruction by the processor100ofFIG.21that includes using a load PAP and a store PAP of each entry of the store queue to decide whether to forward store data to the load instruction from a store queue entry in accordance with embodiments of the present disclosure. Operation begins at block2202. At block2202, operation is similar to the operation described at block1602ofFIG.16. Operation proceeds to block2204. At block2204, the load PAP (e.g.,1495) and load lower address bits (e.g., PA[5:3]) along with the load byte mask (e.g.,1493) are compared (e.g., by candidate set identification logic2197ofFIG.21) with the store PAP (e.g.,1304) and store lower physical address bits (e.g., PA[5:3]) along with the store byte mask (e.g.,1308) of each valid SQ entry (e.g.,1301) to identify a candidate set of SQ entries whose store data (e.g.,1302) overlaps the load data requested by the load instruction (e.g., indicated by candidate set bit vector2195). Operation proceeds to block2206. At block2206, from among the set of candidate SQ entries is selected (e.g., by mux2189controlled by SQ entry selection logic2193) the store data from the SQ entry associated with the youngest store instruction that is older in program order than the load instruction. Assuming such a SQ entry is found, the selected store data is forwarded to the load instruction; otherwise, the cache data (e.g., L1 data out327) is provided to the load instruction. That is, the store PAP and load PAP and additional information (e.g., load and store lower address bits [5:3] and byte masks) are used to determine whether the store data of any of the SQ entries overlaps the load data requested by the load instruction. If the store data of the store instruction associated with one or more SQ entries overlaps the requested load data, and at least one of the overlapping store instructions is older than the load instruction, then the store data from the youngest of the older store instructions is forwarded; otherwise, the data out of the L1 data cache is provided for the load instruction. Embodiments described herein use the load and store PAPs as proxies for the load and store physical line addresses to determine that the load and candidate stores have the same physical line address, which is required for the store data to overlap the requested load data. In contrast, conventional designs may forego a full physical line address comparison because of timing delays (e.g., instead making forwarding decisions based merely on partial address comparisons, e.g., of untranslated address bits and/or virtual address bits), whereas the embodiments described herein effectively make a full physical address comparison using the PAPs, but at a smaller timing cost because of the smaller PAP comparisons.
Write Combining Using PAPs
One of the most precious resources in the processor is the cache memories. More specifically, the demand for access to the cache memories may often be very high. For this reason, a cache generally includes one or more wide data buses to read and write the cache, e.g., 16, 32, 64 bytes wide. However, the caches must also support the writing of small data, i.e., down to a single byte. This is because the size of the store data specified by some store instructions may be small, e.g., a single byte or two bytes, i.e., smaller than the wide buses to the cache.
Furthermore, a program may perform a burst of small store instructions that specify addresses that are substantially sequential in nature. If each of these small store data is written individually to the cache, each tying up the entire wide cache bus even though only a single byte is being written on the bus, then the bus resources may be used inefficiently and congestion may occur at the cache, which may have a significant negative performance impact. To alleviate the congestion and to improve the efficiency of the cache and of the processor, a technique commonly referred to as write-combining is often employed in high performance processors. Rather than writing each of the small store data to the cache individually, the store data are first written into a buffer before being written from the buffer to the cache. The processor looks for opportunities to combine the individual small store data into a larger block of data within the buffer that can be written from the buffer to the cache, thereby more efficiently using the wide cache bus and reducing congestion at the cache by reducing the number of writes to it. More specifically, the processor looks at the store addresses of the individual store data to determine whether the store addresses are in close enough proximity to be combined into an entry of the buffer. For example, assume a data block in an entry in the buffer is sixteen bytes wide and is expected to be aligned on a 16-byte boundary. Then individual store instructions whose store addresses and store data sizes are such that their store data falls within the same 16-byte aligned block, i.e., 16-byte aligned memory range, may be combined into a given buffer entry. More specifically, the store addresses that must be examined to determine whether they can be combined must be physical addresses because the combined blocks within the buffer are ultimately written to physical memory addresses. As described above, physical addresses can be very large, and comparison of physical addresses may be relatively time consuming and cause an increase in the processor cycle time, which may be undesirable. Additionally, in the case of a processor having a virtually-indexed virtually-tagged first-level data cache memory, conventionally the store addresses held in the store queue are virtual addresses. Consequently, the store physical address is not conventionally available when a decision needs to be made about whether the store data may be combined with other store data in the buffer. As a result, conventionally the store virtual address may need to be translated to the store physical address in order to make the write combining decision. FIG.23is an example block diagram of a store queue entry1301of the store queue (SQ)125ofFIG.1that holds PAPs to accomplish write-combining in accordance with embodiments of the present disclosure. The SQ entry1301is similar to the SQ entry1301ofFIG.13; however, the SQ entry1301ofFIG.23also includes a store virtual address VA[63:12] field2311. The store VA[63:12] field2311is populated with store VA[63:12]321ofFIG.3when the store instruction is executed by the LSU117. The store VA[63:12] field2311is subsequently used when the store instruction is committed, as described in more detail below. 
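The proximity test that motivates write combining, described above (whether a small store's data falls entirely within the same 16-byte aligned write block as previously buffered data), reduces to simple address arithmetic. The sketch below shows that arithmetic on raw physical addresses for clarity; the disclosed embodiments instead perform the equivalent test with PAP and PA[5:4] comparisons, as described below, so this function and its names are illustrative assumptions only.

#include <stdint.h>
#include <stdbool.h>

/* True if a store of 'size' bytes at physical address 'pa' lies entirely
 * within the same 16-byte aligned write block as address 'block_pa'. */
static bool same_write_block(uint64_t pa, unsigned size, uint64_t block_pa)
{
    if ((pa >> 4) != (block_pa >> 4))
        return false;                    /* different 16-byte aligned block */
    return ((pa & 0xFu) + size) <= 16u;  /* store does not cross the block  */
}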
As described above, a store instruction is ready to be committed when there are no older instructions (i.e., older in program order than the store instruction) that could cause the store instruction to be aborted and the store instruction is the oldest store instruction (i.e., store instructions are committed in order), and a store instruction is committed when the store data1302held in the associated SQ entry1301is written into the L1 data cache103based on the store virtual address VA[63:12], PA[11:6] of the store PAP1304, store PA[5:3]1306, and the store byte mask1308held in the SQ entry1301. A store instruction is being committed when the LSU117is writing the store data1302to the L1 data cache103and to the WCB109, as described in more detail below. In one embodiment, only load and store instructions may be committed, whereas all other types of instructions commit and retire simultaneously. FIG.24is an example block diagram of a write combining buffer (WCB) entry2401of the WCB109ofFIG.1that holds PAPs to accomplish write combining in accordance with embodiments of the present disclosure. The WCB entry2401includes write data2402, a write PAP2404, lower physical address bits write PA[5:4]2406, a write byte mask2408, a valid bit2409, a write VA[63:12]2411(virtual write address), and a non-combinable (NC) flag2413. The population of the WCB entry2401is described in detail below with respect toFIGS.25through28. The valid bit2409is true if the WCB entry2401is valid. A WCB entry2401is valid if the relevant information of one or more committed store instructions has been written to the WCB entry2401, and the WCB entry2401has not yet been pushed out to the L2 cache107. The relevant information of a store instruction written to the WCB entry2401is the store data1302, store PAP1304, store PA[5:4]1306, store byte mask1308and store VA[63:12]2311ofFIG.23, which are written to the write data2402, write PAP2404, write PA[5:4]2406, write byte mask2408and write VA[63:12]2411of the WCB entry2401, respectively, e.g., at block2812ofFIG.28, and the valid bit2409is set to a true value. Furthermore, at block2806ofFIG.28, the store data1302is merged into the write data2402, the store byte mask1308is merged into the write byte mask2408, and none of the other fields of the WCB entry2401need be updated. That is, the bytes of the store data1302whose corresponding bit of the store byte mask1308is true overwrite the relevant bytes of the write data2402(and the other bytes of the write data2402are not updated), and a Boolean OR of the store byte mask1308is performed with the appropriate portion of the write byte mask2408, as described below with respect to block2806, which accomplishes correct operation because store instructions are committed in program order. The write data2402is the combined store data1302from the committed one or more store instructions. The write data2402is obtained by the WCB109from the LSU117when a store instruction is committed. The write PAP2404is a physical address proxy for a write physical line address to which the write data2402is to be written. The write physical line address is a physical address aligned to the width of a cache line. The write physical line address is the physical memory address from which a cache line was inclusively brought into the L2 cache107when a copy of the cache line was brought into the L1 data cache103, e.g., during execution of a load or store instruction, as described above. 
The cache line is brought into, i.e., allocated into, an entry of the L2 cache107, which has a unique set index and way number, as described above. The write PAP2404specifies the set index and the way number of the entry401in the L2 cache107into which the cache line was allocated, i.e., the cache line specified by the physical line address of the load/store instruction that brought the cache line into the L2 cache107. The store PAP1304of each of the store instructions combined into a WCB entry2401is identical since, in order to be combined, the store data1302of each of the store instructions must be written to the same cache line of the L2 cache107, i.e., have the same store physical line address, and the store PAP1304is a proxy for the store physical line address. Thus, the WCB entry2401is able to include a single write PAP2404to hold the identical store PAP1304of all of the combined store instructions. Referring briefly toFIG.25, an example block diagram illustrating a relationship between a cache line and write blocks as used in performing writing combining using PAPs in accordance with one embodiment of the present disclosure is shown. Shown inFIG.25is a cache line2502within which are four write blocks2504, denoted write block02504, write block12504, write block22504, and write block32504. In the example ofFIG.25, a cache block2502is 64 bytes wide and is aligned on a 64-byte boundary such that bits PA[5:0] of the physical line address that specifies the cache line2502are all zero. In the example ofFIG.25, a write block2504is sixteen bytes wide and is aligned on a 16-byte boundary such that bits PA[3:0] of the physical address that specifies the write block2504, referred to as a “physical block address,” are all zero. Furthermore, bits PA[5:4] of the physical block address specify which of the four write block locations within the cache line2502the write block2504belongs. More specifically, write block02504has PA[5:4]=00, write block12504PA[5:4]=01, write block22504PA[5:4]=10, and write block32504PA[5:4]=11, as shown. Generally, the width in bytes of the write data2402in a WCB entry2401corresponds to the width in bytes of a write block and is referred to herein as 2{circumflex over ( )}W (i.e., 2 to the power W), and the width in bytes of a cache line of the L2 cache107is referred to herein as 2{circumflex over ( )}C. In the embodiment ofFIGS.24and25, W is four and C is six, i.e., the width 2{circumflex over ( )}W of the write data2402is sixteen bytes and the width 2{circumflex over ( )}C of a cache line in the L2 cache107is 64 bytes, although other embodiments are contemplated in which W is different than four, e.g., five or six, and C is different than six, e.g., seven or eight. However, W is less than or equal to C, and the memory address to which write data2402is written is 2{circumflex over ( )}W-byte aligned. As may be observed, in embodiments in which W is less than C, the write data2402may belong in one of multiple write blocks of a cache line, as in the example ofFIG.25. More specifically, if W is four and C is six, when the write data2402is written through to the L2 cache107, there are four possible 16-byte-aligned 16-byte blocks within the cache line to which the write data2402may be written. 
The possible aligned W-width blocks within the C-width cache line are referred to herein as “write blocks,” and the physical address of a write block is referred to herein as a “physical block address.” In the example embodiment ofFIGS.24and25in which W is four and C is six, there are four possible write blocks and the combination of the write PAP2404and write PA[5:4]2406is a proxy for the write physical block address within the L2 cache107, although other embodiments are contemplated as stated above. That is, the write block within the cache line is determined by the write PA[5:4]2406. Because W is less than or equal to C, each store data2402combined into the write data2402of a WCB entry2401has the same write physical line address and belongs within the same cache line and has the same write physical block address and belongs within the same write block. In one embodiment, W is equal to C, i.e., the width of a WCB entry2401is the same as a cache line, in which case the write PA [5:4] bits2406are not needed to specify a write block within a cache line. Referring again toFIG.24, as described above, the write PA[5:4]2406is written with the store PA[5:4] bits1306of the store instruction for which the WCB entry2401is allocated, i.e., at block2812. As described above, the write PA[5:4] specifies which of the four write blocks (e.g., 16-byte write blocks) within the cache line (e.g., 64-byte cache line) specified by the write PAP2404into which the write data2402is to be written. As described above, store PA[5:4]1306correspond to the untranslated address bits [5:4] of the store virtual address. The store PA[5:4]1306of each of the store instructions combined into a WCB entry2401is identical since, in order to be combined, the store data1302of each of the store instructions must be written to the same write block within the same cache line of the L2 cache107, i.e., have the same store physical block address. Thus, the WCB entry2401is able to include a single write PA[5:4]2406to hold the identical store PA[5:4]1304of all of the combined store instructions. The write byte mask2408indicates, or encodes, which bytes of the write data2402are valid. That is, the write byte mask2408indicates which bytes of the write data2402are to be written to the L2 cache107. In the example embodiment, the size of a write block is sixteen bytes. Hence, in the embodiment ofFIG.24, the width W of the write data2402is sixteen bytes, the write byte mask2408is a 16-bit field, the width C of a cache line is 64 bytes, and the write byte mask2408specifies which bytes within a write block of a cache line of the L2 cache107the write data2402is to be written, and the write block of the cache line of the L2 cache107is specified by the write PA[5:4], as described above. As described above, the write byte mask2408is initially written at block2812ofFIG.28with the store byte mask1308of the store instruction being committed, and the write byte mask2408may be subsequently merged at block2806ofFIG.28with the store byte mask1308of a combining store instruction. The NC flag2413is set to a true value if the WCB entry2401is not allowed to be combined with a store instruction. That is, a store instruction that is being committed may not be combined with a WCB entry2401whose NC flag2413is true. The NC flag2413may be set to true because a store instruction, or some other instruction in the program, indicates that the processor100may not weakly-order writes with respect to the store instruction. 
In other words, the processor100needs to enforce the order in which the store data of the store instruction is written to memory relative to the store data of preceding and/or following store instructions. More specifically, the processor100needs to enforce write ordering to some degree beyond merely enforcing writes in program order that are to the same physical memory address. For example, an instruction that performs an atomic read-modify-write operation may require strict write ordering, e.g., an instruction that atomically adds a value to a memory location. For another example, a fence instruction may indicate that all stores older than the fence must be written before all stores younger than the fence. For another example, the store instruction may indicate that it is to a noncacheable region of memory (in which case its store data1302will not be written to the L1 data cache103nor to the L2 cache107) and should therefore be written in program order with respect to preceding and/or following store instructions. Weakly-ordered writes from the WCB109are described in more detail below with respect toFIG.26. If the store instruction or other program instruction indicates the processor100may not weakly-order writes with respect to the store instruction, the WCB109allocates a WCB entry2401for the store instruction and sets to true the NC flag2413in the allocated WCB entry2401. The WCB109does not attempt to combine a committed store instruction with a WCB entry2401whose NC flag2413is true. Additionally, a true value of the NC flag2413also operates as a fence to prevent the WCB109from combining a committed store instruction with any WCB entry2401that is older than the youngest WCB entry2401whose NC flag2413is true. Stated alternatively, the WCB109only combines a committed store instruction with WCB entries2401that are younger than the youngest WCB entry2401whose NC flag2413is true. The age of a WCB entry2401is described in more detail below, but generally refers to the temporal order in which a WCB entry2401is allocated and de-allocated, rather than to the program order of one or more store instructions written into the WCB entry2401. In one embodiment, the NC flag2413may also be set to true when the entry401of the L2 cache107that is pointed to by the write PAP2404is filled with a new cache line, which may have a physical line address that is different from the physical line address for which the write PAP2404is a proxy. Advantageously, each entry of the WCB109holds the write PAP2404rather than the full physical line address associated with the combined store instructions, as described in more detail below. In the embodiment ofFIG.24, because in the example embodiment the L2 cache107is 4-way set associative, the write PAP2404specifies the 2 bits of the way number of the entry in the L2 cache107into which the cache line specified by the physical line address is allocated. Furthermore, in the embodiment ofFIG.24, because in the example embodiment the L2 cache107has 2048 sets, the write PAP2404specifies the eleven bits of the set index of the set of the entry in the L2 cache107into which the cache line specified by the physical line address is allocated, which correspond to physical line address bits PA[16:6] in the embodiment. Thus, in the embodiment ofFIG.24, the write PAP2404is thirteen bits, in contrast to a full physical line address, which may be approximately forty-six bits in some implementations, as described above, and in other implementations there may be more. 
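As a quick arithmetic check of the PAP width stated above, assuming the example geometry of a 2048-set, 4-way set associative L2 cache:

#include <stdio.h>

int main(void)
{
    int sets = 2048, ways = 4;
    int set_bits = 0, way_bits = 0;
    while ((1 << set_bits) < sets) set_bits++;  /* 11-bit set index */
    while ((1 << way_bits) < ways) way_bits++;  /*  2-bit way number */
    printf("write PAP width = %d bits\n", set_bits + way_bits);  /* 13 bits */
    return 0;
}

The thirteen PAP bits thus stand in for a physical line address of roughly forty-six bits, which is the source of the storage and comparison-width savings discussed next.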
Advantageously, a significant savings may be enjoyed both in terms of storage space within the WCB109and in terms of timing by providing the ability to compare PAPs rather than full physical line addresses when making write-combining determinations, as described in more detail below. FIG.26is an example block diagram illustrating portions of the processor100ofFIG.1that perform write combining using PAPs in accordance with embodiments of the present disclosure.FIG.26includes the ROB122, LSU117, SQ125, L1 data cache103, WCB109, DTLB141, and L2 cache107ofFIG.1. As described above, the ROB122keeps track of the state of processing of each pending instruction and is used to retire instructions in program order. The LSU117is in communication with the ROB122to obtain the state of load and store instructions. More specifically, the LSU117includes logic that detects when load and store instructions are ready to be committed. As described above, a store instruction is ready to be committed when there are no instructions older in program order than the store instruction that could cause the store instruction to be aborted. The LSU117commits a store instruction by writing its store data1302to memory. In one embodiment, writing the store data1302to memory means writing the store data1302to the L1 data cache103and writing the store data1302through to the L2 cache107. The store data1302is written through to the L2 cache107via the WCB109, and the write to the WCB109is performed using the store PAP1304and write PAPs2404, as described herein. In one embodiment, the L1 data cache103is a write-through cache, and if the cache line implicated by the store instruction that is being committed is no longer present in the L1 data cache103, the L1 data cache103is not updated with the store data1302. That is, the LSU117does not generate a fill request for the implicated cache line and does not update the L1 data cache103with the store data1302. In one embodiment, the L2 cache107is a write-back cache, and if the cache line implicated by the store instruction that is being committed is no longer present in the L2 cache107, the L2 cache107generates a fill request to fill the implicated cache line and then updates the filled cache line with the store data1302. The LSU117obtains from the SQ125the SQ entry1301associated with the store instruction that is being committed and then writes the store data1302to the L1 data cache103. In the embodiment ofFIG.26, the LSU117provides the store VA[63:12]2311, untranslated address bits PA[11:6] of the store PAP1304, untranslated store bits PA[5:3], and the store byte mask1308to the L1 data cache103to write the store data1302to memory. The LSU117also writes the store data1302to the L2 cache107via the WCB109. In the embodiment ofFIG.26, the LSU117provides the store data1302, store PAP1304, store PA[5:3]1306, store byte mask1308, and store VA[63:12]2311to the WCB109for either writing into the respective write data2402, write PAP2404, write PA[5:4]2406, write byte mask2408, and write VA[63:12] fields of a newly allocated WCB entry2401(e.g., at block2812ofFIG.28), or for merging the store data1302and store byte mask1308into the respective write data2402and write byte mask2408fields of a matching WCB entry2401(e.g., at block2806ofFIG.28). The WCB109writes out WCB entries2401to the L2 cache107based on the age of the valid WCB entries2401. That is, when the WCB109decides to write out a WCB entry2401to the L2 cache107, the WCB109writes out the oldest WCB entry2401.
The age of a WCB entry2401is determined by the order in which it was allocated. In one embodiment, the WCB109is configured as a first-in-first-out (FIFO) buffer with respect to the age of each WCB entry2401. The age of a WCB entry2401within the WCB109does not (necessarily) correspond to the age in program order of the one or more store instructions merged into it, but instead corresponds to the order in which the WCB entry2401was allocated relative to the other valid WCB entries2401in the WCB109. To illustrate by example, assume three store instructions A, B and C which have the program order A, B, C (which is also the same order in which the LSU117commits them). Assume the WCB109is empty, and A and C are to the same write block, but B is to a different write block. Assume that when A is committed, the WCB109allocates an entry 0 for A, and when B is committed, the WCB109allocates entry 1 for B. When C is committed, the WCB109will combine C with A into entry 0. Now entry 0 has the merged store data of both A and C. That is, even though B is ahead of C in program order, C effectively jumps ahead of B in write order, since entry 0 will be written to the L2 cache107before entry 1. This paradigm of weakly-ordered writes is supported by many instruction set architectures such as RISC-V, x86, and others. That is, writes to different addresses can be performed out of program order unless otherwise indicated by the program, e.g., unless a store instruction specifies that the write of its store data to memory must not be reordered with respect to earlier or later stores in program order. However, writes to the same address must be performed in program order, i.e., may not be weakly ordered. The WCB109compares the store PAP1304of the store instruction being committed with the write PAP2404of each WCB entry2401(e.g., at block2802ofFIG.28) and requires a match as a necessary condition for combining the store instruction with a WCB entry2401. In embodiments in which the width of the write data2402of a WCB entry2401is less than the width of a cache line (e.g., as in the embodiment ofFIGS.24through26), the WCB109compares the store PA[5:4]1306of the store instruction being committed with the write PA[5:4]2406of each WCB entry2401and requires a match as a necessary condition for combining the store instruction with a WCB entry2401. Additionally, the WCB109requires as a necessary condition that a matching WCB entry2401is combinable (e.g., at decision block2804ofFIG.28). More specifically, to be combinable, the NC flag2413of the WCB entry2401must be false and there must not be any younger WCB entries2401whose NC flag2413is true. That is, a store instruction being committed is not allowed to skip over a WCB entry2401whose NC flag2413is true in order to combine with a WCB entry2401older than the WCB entry2401whose NC flag2413is true. Still further, if there are multiple matching and combinable WCB entries2401, the WCB109requires as a necessary condition that the WCB entry2401into which the store data1302is merged is the youngest of the multiple matching WCB entries2401(e.g., at block2806ofFIG.28). If there is exactly one matching and combinable WCB entry2401, it is the youngest matching and combinable entry. Finally, the WCB109requires as a necessary condition that the store instruction itself is combinable (e.g., at decision block2801ofFIG.28), e.g., that strict write ordering is not required for the store instruction.
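The necessary conditions just enumerated can be summarized in the following C sketch, which scans the valid WCB entries from youngest to oldest; the function, its arguments, and the age bookkeeping are illustrative assumptions rather than the disclosed logic.

#include <stdint.h>
#include <stdbool.h>

#define N_WCB 8

typedef struct {
    uint16_t write_pap;  /* write PAP 2404                                  */
    uint8_t  pa_5_4;     /* write PA[5:4] 2406: write block within the line */
    bool     nc;         /* non-combinable (NC) flag 2413                   */
} wcb_ent_t;

/* youngest_first[] lists the indices of the n_valid valid entries from
 * youngest to oldest allocation order.  Returns the entry to combine with,
 * or -1 if a new WCB entry must be allocated for the committed store. */
static int find_combinable_entry(const wcb_ent_t wcb[N_WCB],
                                 const int youngest_first[], int n_valid,
                                 uint16_t st_pap, uint8_t st_pa_5_4,
                                 bool store_combinable)
{
    if (!store_combinable)   /* e.g., strict write ordering required by the store */
        return -1;
    for (int k = 0; k < n_valid; k++) {
        const wcb_ent_t *e = &wcb[youngest_first[k]];
        if (e->nc)
            return -1;       /* NC entry acts as a fence: no combining with it
                                or with anything older than it               */
        if (e->write_pap == st_pap && e->pa_5_4 == st_pa_5_4)
            return youngest_first[k];  /* youngest matching, combinable entry */
    }
    return -1;
}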
If any of the necessary conditions are not met, then the WCB109allocates a WCB entry2401for the store instruction being committed (e.g., at block2812ofFIG.28). Once the WCB109is ready to write the oldest WCB entry2401to the L2 cache107, the WCB109sends the write VA[63:12]2411from the oldest WCB entry2401to the DTLB141for translation into a write PA[51:12]2613, which the DTLB141provides to the WCB109(e.g., at block2814ofFIG.28). The WCB109then generates an L2 write request2601to the L2 cache107that includes the write data2402, the write PA[51:12], bits PA[11:6] of the write PAP2404, the write PA[5:4]2406, and the write byte mask2408of the oldest WCB entry2401(e.g., at block2816ofFIG.28). FIG.27is an example flowchart illustrating operation of the processor100ofFIG.26to commit a store instruction in accordance with embodiments of the present disclosure. As described above, when a store instruction is executed, information about the store instruction is placed into an entry1301in the SQ125. Typically, the store data is not immediately written to the L1 data cache103. One reason is that the store instruction may have been speculatively executed, i.e., the possibility exists that a subsequent event will require the store instruction to be flushed. For example, the processor100may detect that a branch instruction older than the store instruction was mis-predicted, or detect that incorrect data was forwarded to a load instruction that may then have been incorrectly consumed by the store instruction. So, the store instruction is held in an entry1301of the SQ125until the store instruction is ready to be committed, i.e., until there is no longer any possibility that the store instruction will need to be flushed. Store instructions that are ready to be committed are committed in program order. Operation begins at block2702. At block2702, a store instruction needs to be committed. In one embodiment, logic within the LSU117detects that the store instruction associated with a SQ entry1301needs to be committed. The logic may receive information from the ROB122that indicates the store instruction is ready to be committed. The logic commits store instructions in program order. The LSU117obtains the SQ entry1301associated with the store instruction that is being committed. In one embodiment, the LSU117uses an index into the SQ125to obtain the SQ entry1301associated with the store instruction that is being committed. Operation proceeds to block2704. At block2704, the LSU117writes the store data1302from the SQ entry1301to the L1 data cache103, e.g., as data in325ofFIG.3. Additionally, the LSU117writes through the store data1302to the L2 cache107via the WCB109, which is described in more detail below with respect toFIG.28. FIG.28is an example flowchart illustrating operation of the WCB109ofFIG.26to use PAPs to perform write combining in accordance with embodiments of the present disclosure. More specifically,FIG.28illustrates in more detail the portion of block2704ofFIG.27in which the store data is written through to the L2 cache107via the WCB109. Operation begins at decision block2801. At decision block2801, if the store instruction indicates it is not combinable, e.g., needs to be ordered, operation proceeds to decision block2808; otherwise, operation proceeds to block2802. At block2802, the WCB109compares the store PAP1304and store PA[5:4] with the write PAP2404and write PA[5:4] of each valid entry of the WCB109. Operation proceeds to decision block2804.
At decision block2804, if the store PAP1304and store PA[5:4] match the write PAP2404and write PA[5:4] of one or more combinable valid entries2401of the WCB109, operation proceeds to block2806; otherwise, operation proceeds to decision block2808. That is, in addition to the PAP and PA[5:4] matches, an additional condition required for operation to proceed to block2806is that a matching WCB entry2401be combinable. A WCB entry2401is combinable if the NC flag2413is false and there are no younger WCB entries2401whose NC flag2413is true. At block2806, the youngest matching and combinable WCB entry2401is selected for combining with the store instruction. If there is exactly one matching and combinable WCB entry2401, it is selected as the youngest matching and combinable entry. The WCB109combines the store data1302with the selected WCB entry2401by writing each byte of the store data1302having a true-valued corresponding bit of the store byte mask1308to the corresponding byte of the appropriate half of the write data2402, and the WCB109combines the store byte mask1308with the selected WCB entry2401by performing a Boolean OR with the write byte mask2408. At decision block2808, if the WCB109is full (i.e., all entries2401of the WCB109are currently valid), operation proceeds to block2814to free an entry in the WCB109; otherwise, operation proceeds to block2812. At block2812, the WCB109allocates and populates a free WCB entry2401by writing the store data1302, store PAP1304, store PA[5:4]1306, store byte mask1308, and store VA[63:12] to the write data2402, write PAP2404, write PA[5:4]2406, write byte mask2408, and write VA[63:12]. If the store instruction, or some other instruction in the program, indicated the store instruction is not combinable (e.g., at decision block2801), the WCB109sets the NC flag2413to true. At block2814, room needs to be made in the WCB109for the store instruction that is being committed. Therefore, the oldest entry2401in the WCB109needs to be pushed out to the L2 cache107. The WCB109provides the write VA[63:12]2411from the oldest WCB entry2401to the DTLB141for translation into a write PA[51:12]2613, which the DTLB141provides to the WCB109. Operation proceeds to block2816. At block2816, the WCB109pushes out the oldest entry2401of the WCB109to the L2 cache107. That is, the WCB109writes the write data2402to the L2 cache107at the physical address specified by the write PA[51:12]2613, the write PA[11:6] (i.e., bits [11:6] of the write PAP1304), write PA[5:4]2406, and the write byte mask2408. The oldest/pushed out WCB entry2401is now free for use by a new store instruction that is to be committed. Operation proceeds to block2812to populate the newly freed WCB entry2401(which is now the youngest entry2401in the WCB109) with the store instruction that is being committed. In one embodiment, each WCB entry2401also includes a timeout value (not shown) that is initially set to zero and that is periodically incremented (or alternatively initially set to a predetermined value and periodically decremented). When the timeout value of an entry (i.e., the oldest entry) exceeds a predetermined value (or alternatively reaches zero), the WCB109requests the DTLB141to translate the write VA2411of the oldest entry2401into the write PA2613as described above with respect to block2814, and the WCB109pushes the entry2401out of the WCB109to the L2 cache107per block2816. 
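The merge performed at block2806may be pictured with the sketch below, which assumes a 16-byte write block whose 8-byte half is selected by store PA[3]; bytes flagged in the store byte mask overwrite the corresponding write data bytes, and the store byte mask is ORed into the write byte mask. The data layout and names are illustrative assumptions, not the disclosed hardware.

#include <stdint.h>

typedef struct {
    uint8_t  data[16];   /* write data 2402      */
    uint16_t byte_mask;  /* write byte mask 2408 */
} wcb_data_t;

static void wcb_merge(wcb_data_t *e, const uint8_t store_data[8],
                      uint8_t store_byte_mask, unsigned store_pa3)
{
    unsigned base = store_pa3 ? 8u : 0u;   /* which 8-byte half of the write block */
    for (unsigned b = 0; b < 8; b++)
        if (store_byte_mask & (1u << b))
            e->data[base + b] = store_data[b];         /* overwrite only valid bytes */
    e->byte_mask |= (uint16_t)store_byte_mask << base; /* Boolean OR of byte masks   */
}

Because store instructions are committed in program order, later stores that merge into the same entry correctly overwrite earlier bytes.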
As may be observed from the foregoing, holding write PAPs in the WCB to facilitate write-combining may provide various advantages over conventional solutions. First, the comparisons of the write PAPs with the store PAP to make write combining determinations may be significantly faster than the full physical line address comparisons performed by a conventional processor. Second, the write PAPs held in the WCB consume less storage space than a full physical line address. Third, holding write PAPs in the WCB to facilitate write-combining may enable the employment of a virtually-indexed virtually-tagged first level cache, which may have significant advantages, particularly in terms of performance. For example, one solution a conventional processor with a virtual cache may employ is to compare the virtual line address of the store instruction with the virtual line address stored in each entry of the conventional WCB. However, such a solution is burdened with the requirement to deal with the possibility that the multiple virtual line addresses held in the WCB entries may be synonyms of a single physical line address. In contrast, the embodiments described that hold the write PAPs are not burdened with that requirement. For another example, another solution a conventional processor with a virtual cache may employ is to hold physical line addresses in each WCB entry and to translate the store virtual line address to a store physical line address each time a store instruction is being committed to compare the store physical line address with the physical line address held in each WCB entry. In contrast, embodiments described herein facilitate the translation of a single write virtual line address (which is the same as the store virtual line address of each store instruction combined into the WCB entry) when the WCB entry is ready to be written to memory, rather than requiring a virtual to physical translation each time a store instruction is being committed. This is particularly advantageous in that it may reduce the amount of power consumed by the TLB and may be less complex than the conventional solution.
Using PAPs to Perform Store-to-Load Forwarding Correctness Checks
Embodiments will now be described in which PAPs are used to perform store-to-load forwarding correctness checks (also referred to herein as forwarding correctness checks). Embodiments are described in which the LSU117executes a load instruction, which involves making a store-to-load forwarding decision (e.g., using PAPs as described above), and subsequently as store instructions older than the load instruction are committed, a check is made at each store instruction commit, using PAP comparisons rather than full physical memory line address comparisons, to determine whether the forwarding decision was correct for the load instruction relative to the store instruction being committed. Forwarding correctness state within the load queue entry associated with each load instruction may be updated based on the correctness check made for each store instruction as it commits. Once all older store instructions have committed, a final determination of the correctness of the forwarding decision can be observed from the final state of the forwarding correctness state based on the individual forwarding correctness checks associated with the commits of the older store instructions.
Advantageously, comparisons of the PAPs rather than full physical memory line address comparisons may provide significant savings in terms of storage space within the load queue (LQ)125and in terms of timing when making store-to-load forwarding checks. FIG.29is an example block diagram of a load queue (LQ) entry2901of the LQ125ofFIG.1in accordance with embodiments of the present disclosure. Each LQ entry2901holds a PAP to accomplish store-to-load forwarding correctness checks as described in detail below. The LQ entry2901includes the following fields which are described in more detail below: load PAP2904, load PA[5:3]2906, load byte mask2908, Fwd flag2912, NonFwd flag2914, FwdStId2916, FwdingGood flag2922, FwdingViol flag2924, FwdingViolStId2926, lastStId2932, load RobId2934, and Done flag2936. The load PAP2904, load PA[5:3]2906, and load byte mask2908are referred to collectively as the load address/size information. The Fwd flag2912, NonFwd flag2914, and FwdStId2916are referred to collectively as the forwarding behavior information. The FwdingGood flag2922, FwdingViol flag2924, and FwdingViolStId2926are referred to collectively as the forwarding correctness information. The forwarding behavior information and the forwarding correctness information are referred to collectively as the forwarding information. As described above, the load and store queues125ofFIG.1may be separate memory queue structures or they may be combined into a single memory queue structure rather than separate memory queue structures; hence, the term load/store queue may refer to a combined embodiment, and the term load/store queue may also refer to a separate load queue or a separate store queue. A load instruction loads load data received from the L1 data cache103or forwarded from a SQ entry1301into a physical register of the physical register file105that is the destination register specified by the load instruction. The population of some fields the LQ entry2901is performed prior to dispatch of the load instruction, other fields during execution of the load instruction, and other fields while one or more store instructions older than the load instruction are being committed, as described in more detail below. The load instruction specifies a load virtual address, e.g., load VA321ofFIG.3. The load PAP2904is a physical address proxy for a load physical memory line address that is a translation of the load virtual memory line address (i.e., load VA[63:6]321) and specifies the set index and way of the entry401of the L2 cache107into which a cache line specified by the load physical memory line address is allocated. More specifically, the load physical memory line address is a translation of a memory page address portion of the load virtual address, namely upper address bits (e.g., bits12and above in the case of a 4 KB page size), along with the remaining untranslated address bits that specify a memory line within the memory page (e.g., VA[11:6]). As described above, when a cache line is brought into the L2 cache107from a physical memory line address, e.g., by a load or store instruction, the upper address bits of the load/store virtual address specified by the load/store instruction are translated into a load/store physical memory line address, e.g., by the MMU147ofFIG.1. The cache line is brought into, i.e., allocated into, an entry of the L2 cache107, which has a unique set index and way number, as described above. 
The load PAP2904specifies the set index and the way number of the entry401in the L2 cache107into which the cache line was allocated, i.e., the cache line specified by the physical memory line address of the load/store instruction that brought the cache line into the L2 cache107. The lower bits of the load virtual address (e.g., bits [11:0] in the case of a 4 KB page size) are untranslated address bits, i.e., the untranslated bits of the virtual and physical addresses are identical, as described above. The load physical address bits PA[5:3]2906correspond to the untranslated address bits [5:3] of the load virtual address. The load instruction also specifies a size of the load data to be read. In the example embodiment, the largest size of load data is eight bytes. Hence, in the embodiment ofFIG.29, the size of the load data is up to eight bytes, and the load physical address bits PA[5:3]2906narrows down the location of the load data within a 64-byte cache line, for example. That is, the address bits PA[5:3]2906specify the offset of an eight byte-aligned eight-byte data word with a 64-byte-aligned 64-byte memory line. The load size and bits [2:0] of the load address may be used to generate the load byte mask2908that specifies, or encodes, which of the eight bytes are being read by the load instruction. Other embodiments are contemplated in which the bytes read by the load instruction are specified in a different manner, e.g., the size itself and bits [2:0] of the load address may be held in the LQ entry2901rather than the load byte mask2908. Advantageously, each entry of the LQ125holds the load PAP2904rather than the full load physical memory line address. In the embodiment ofFIG.29, because in the example embodiment the L2 cache107is 4-way set associative, the load PAP2904specifies the 2 bits of the way number of the entry in the L2 cache107into which the cache line specified by the physical memory line address is allocated. Furthermore, in the embodiment ofFIG.29, because in the example embodiment the L2 cache107has 2048 sets, the load PAP2904specifies the eleven bits of the set index of the set of the entry in the L2 cache107into which the cache line specified by the physical memory line address is allocated, which corresponds to physical memory line address bits PA[16:6] in the embodiment. Thus, in the embodiment ofFIG.29, the load PAP2904is thirteen bits, in contrast to a full load physical memory line address, which may be approximately forty-six bits in some implementations, as described above, and in other implementations there may be more. Advantageously, a significant savings may be enjoyed both in terms of storage space within the LQ125and in terms of timing by providing the ability to compare PAPs rather than full physical memory line addresses when making store-to-load forwarding checks. The Fwd flag2912is true if the LSU117forwarded store data to the load instruction from a SQ entry1301and is false otherwise. The NonFwd flag2914is true if the LSU117tried to forward store data to the load instruction but failed and instead provided the load data from the L1 data cache103, as described in more detail below with respect toFIG.30, and is false otherwise. The LSU117only sets to true one of Fwd2912and NonFwd2914, never both. If the LSU117did not try to forward from a store instruction, Fwd2912and NonFwd2914are both false. The FwdStId2916specifies the SQ entry1301from which the LSU117forwarded or tried to forward store data to the load instruction. 
In one embodiment, the FwdStId2916is valid if either the Fwd flag2912or the NonFwd flag2914is true. That is, even if the LSU117tried but failed to forward store data and instead provided the load data from the L1 data cache103, the FwdStId2916specifies the SQ entry1301from which the LSU117tried to forward but failed. The FwdingGood flag2922, FwdingViol flag2924, and FwdingViolStId2926may be updated each time a store instruction is committed that is older than the load instruction. The FwdingGood flag2922, if true, tentatively indicates correct forwarding behavior by the load instruction based on the commit of all the older store instructions committed thus far. The FwdingViol flag2924, if true, tentatively indicates incorrect forwarding behavior by the load instruction based on the commit of all the older store instructions committed thus far. As described in more detail below, the FwdingGood flag2922and FwdingViol flag2924may not accurately indicate correct/incorrect forwarding until all older store instructions have committed. The LSU117only sets to true one of FwdingGood2922and FwdingViol2924, never both. The FwdingGood flag2922and FwdingViol flag2924are set to false when the LQ entry2901is allocated. In one embodiment, at execution of the load instruction, the FwdingGood flag2922is set to true and the FwdingViol flag2924is set to false. At store commit time, if one of the FwdingGood flag2922and FwdingViol flag2924is updated to a value, then the other is also updated with the opposite value. The FwdingViolStId2926, if the FwdingViol flag2924is true, specifies the SQ entry1301of the relevant store instruction associated with the store-to-load forwarding violation. In one embodiment, the FwdingViolStId2926may be used to update the predictor that makes store-to-load forwarding predictions. The lastStId2932is populated with the identifier of the SQ entry1301allocated to the youngest store instruction in program order that is older than the load instruction. The load RobId2934is populated with the entry in the ROB122allocated to the load instruction. In one embodiment, the lastStId2932and load RobId2934are populated by the decode unit112before the load instruction is dispatched to the scheduler121. The LSU117sets the Done flag2936when the LSU117completes execution of the load instruction, which includes populating the load address/size information and the forwarding behavior information and providing load data for the load instruction, e.g., via the output of mux1446ofFIG.18. In one embodiment, a LQE2901is valid when it has been allocated for a load instruction and not yet deallocated (which in one embodiment is determined by head and tail pointers of the load queue125) and its Done flag2926is true. FIG.30is an example flowchart illustrating operation of the LSU117to process a load instruction in accordance with embodiments of the present disclosure. To simplify for the purpose of clarity, operation of the LSU117is described inFIG.30from the perspective of a given load instruction; however, as described above, the LSU117may execute multiple load and store instructions concurrently, speculatively, and out-of-order. Operation begins at block3002. At block3002, the LSU117executes a load instruction. The LSU117either obtains the load data for the load instruction from the L1 data cache103or forwards store data from a SQ entry1301to the load instruction as the load data. The latter operation is store-to-load forwarding, as described in detail above. 
In one embodiment, as described above, a predictor (e.g., MDP111) makes a forwarding prediction for each load instruction that indicates either that no store-to-load forwarding should be performed, or that the load instruction should check for and try to forward from a suitable older store instruction. The LSU117then writes the load address/size information and forwarding behavior information to the LQE2901associated with the load instruction. The load PAP2904is populated with the load PAP1495provided by the L1 data cache103in response to the virtual load address321specified by the load instruction, the load PA[5:3]2906is populated with load VA[5:3] specified by the load instruction, and the load byte mask2908is populated with the load byte mask1493, which are described with respect toFIG.14, for example. If the forwarding prediction indicates the LSU117should forward from a store instruction and the LSU117actually forwards store data to the load instruction from a SQ entry1301, the LSU117sets the Fwd flag2912to true and populates the FwdStId2916with the identifier of the SQ entry1301from which the store data was forwarded; otherwise, the LSU117sets the Fwd flag2912to false. If the forwarding prediction indicates the LSU117should forward from a store instruction and the LSU117tries to forward from an older store instruction and fails because it determines the store instruction is not suitable and instead provides the load data from the L1 data cache103, the LSU117sets the NonFwd flag2914to true and populates the FwdStId2916with the identifier of the SQ entry1301from which the LSU117tried to forward store data but failed; otherwise, the LSU117sets the NonFwd flag2914to false. An example situation in which the LSU117tries to forward from the predicted store instruction and fails because it determines the store instruction is not suitable and instead provides the load data from the L1 data cache103is when the store data of the predicted store instruction does not overlap the load data requested by the load instruction. As described above, e.g., with respect toFIG.14, the store data overlaps the requested load data if the selected SQ entry1399is valid, the load PAP1495matches the store PAP1304and the load PA[5:3] matches the store PA[5:3]1306, and the valid bytes of the store data1302of the selected SQ entry1399as indicated by the store byte mask1308overlap the load data bytes requested by the load instruction as indicated by the load byte mask1493, i.e., for each true bit of the load byte mask1493, the corresponding bit of the store byte mask1308is also true. Another example situation in which the LSU117tries to forward from the predicted store instruction and fails because it determines the store instruction is not suitable and instead provides the load data from the L1 data cache103is when the SQ entry1301the LSU117is trying to forward from is not valid (e.g., the valid bit1309is clear, i.e., there is no valid store data1302and no valid store PAP1304, store PA1306and store byte mask1308to compare) when the load instruction is successfully executed. In one embodiment, the FwdStId2916is simply populated with the SQ entry1301identifier associated with the store instruction that the load instruction tried to forward from. In one embodiment, at execution of the load instruction, the FwdingGood flag2922is set to true and the FwdingViol flag2924is set to false. Operation proceeds to decision block3004. 
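A rough software model of the forwarding behavior bookkeeping described above is sketched below; the field names mirror the LQ entry fields ofFIG.29, while the function, its arguments, and the simplified prediction inputs are hypothetical.

#include <stdint.h>
#include <stdbool.h>

typedef struct {
    bool    fwd;          /* Fwd flag 2912        */
    bool    non_fwd;      /* NonFwd flag 2914     */
    uint8_t fwd_st_id;    /* FwdStId 2916         */
    bool    fwding_good;  /* FwdingGood flag 2922 */
    bool    fwding_viol;  /* FwdingViol flag 2924 */
} lq_fwd_info_t;

/* 'predicted' is true if the forwarding predictor told the load to try to
 * forward; 'forwarded' is true if store data was actually forwarded from SQ
 * entry 'st_id'; otherwise the load data came from the L1 data cache. */
static void record_forwarding_behavior(lq_fwd_info_t *lq, bool predicted,
                                       bool forwarded, uint8_t st_id)
{
    lq->fwd       = predicted && forwarded;
    lq->non_fwd   = predicted && !forwarded;  /* tried to forward but failed   */
    lq->fwd_st_id = st_id;                    /* meaningful if fwd or non_fwd  */
    lq->fwding_good = true;                   /* tentatively correct...        */
    lq->fwding_viol = false;                  /* ...until older stores commit  */
}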
At decision block3004, the LSU117determines whether there are any uncommitted store instructions older than the load instruction. If so, operation proceeds to block3006; otherwise, operation proceeds to block3008. At block3006, the LSU117commits the oldest uncommitted store instruction, as described in detail with respect toFIG.31A. Briefly, committing the oldest uncommitted store instruction includes using PAP comparisons (rather than full physical memory line address comparisons) to make a forwarding correctness check and, in most cases, to update the forwarding correctness fields of the LQ entry2901based on the forwarding correctness check. At block3008, the LSU117waits until the load instruction has become the oldest outstanding load instruction. In one embodiment, each clock cycle the LSU117checks the LSQ125head and tail pointers and the entries1301/2901at the head pointers to determine whether there is an outstanding load/store instruction that is ready to be committed. Thus, although the operations at decision block3004and block3008are shown as occurring sequentially, they may be performed concurrently. For example, as soon as the load instruction executes, it may be that there are no outstanding older load/store instructions, in which case the load instruction immediately becomes ready to commit. In one embodiment, the load instruction may be among a group of oldest load instructions that are committed together in the same clock cycle. Operation proceeds to decision block3012. At decision block3012, the LSU117examines the forwarding correctness information to determine whether any forwarding violation occurred. If so, operation proceeds to block3014; otherwise, operation proceeds to block3016. In one embodiment, the LSU117determines that a forwarding violation occurred if the FwdingViol flag2924is true. At block3014, the LSU117signals to the PCL132the need for an abort of the load instruction and all instructions younger than the load instruction. In response, the PCL132initiates an abort process to flush the load instruction and all instructions younger than the load instruction. Subsequently, the PCL132restarts instruction fetch at the load instruction so that the load instruction (and subsequent instructions) may be re-executed. The store-to-load forwarding predictor may also be updated, e.g., based on the forwarding correctness fields from the LQ entry2901. Upon re-execution of the load instruction, typically the store-to-load forwarding behavior will be correct, e.g., since the predictor will have been updated based on the incorrect forwarding behavior of the earlier execution instance of the load instruction. In an alternate embodiment, even if the load instruction has not yet become the oldest outstanding load instruction at block3008, if a forwarding violation occurred with respect to the load instruction and a forwarding violation did not occur for any older load instruction, if any, then the LSU117signals to the PCL132the need for an abort. At block3016, the LSU117commits the load instruction. In one embodiment, committing the load instruction includes signaling to the PCL132(e.g., to update the ROB122entry associated with the load instruction) and deallocating the LQ entry2901previously allocated to the load instruction. 
In one embodiment, committing and retiring the load instruction are not separate events, in which case committing the load instruction also includes committing to architectural state the physical register in the register file105ofFIG.1specified as the destination register of the load instruction. FIG.31Ais an example flowchart illustrating operation of the LSU117to commit a store instruction that is oldest in program order in accordance with embodiments of the present disclosure. Operation begins at block3102. At block3102, a store instruction is ready to be committed. That is, the store instruction has completed execution, does not need to be aborted, and has become the oldest load/store instruction among all outstanding load and store instructions. Committing the store instruction includes the LSU117writing the store data1302from the SQ entry1301to the L1 data cache103, e.g., as described above with respect to block2704ofFIG.27. Operation proceeds to block3104. At block3104, the store instruction that is being committed still has an allocated SQ entry1301. The LSU117compares the store PAP1304, store PA[5:3], and store byte mask1308from the SQ entry1301with the load PAP2904, load PA[5:3]2906, and load byte mask2908of each valid entry2901of the load queue125associated with a load instruction that is younger in program order than the store instruction that is being committed. In one embodiment, the result of the comparison indicates either no match, a full match, or a partial match. A no match result means none of the bytes to be read by the load instruction are available in the store data1302of the SQ entry1301. A no match result may occur because the store PAP1304and the load PAP2904do not match. A no match result may occur because the store PA[5:3]1306and the load PA[5:3]2906do not match. A no match result may occur because none of the true bits of the load byte mask2908have a corresponding true bit in the store byte mask1308. A full match result means all the bytes to be read by the load instruction are available in the store data1302of the SQ entry1301. A full match result occurs when the store PAP1304and the load PAP2904match, the store PA[5:3]1306and the load PA[5:3]2906match, and all of the true bits of the load byte mask2908have a corresponding true bit in the store byte mask1308. A partial match result means at least one but less than all the bytes to be read by the load instruction are available in the store data1302of the SQ entry1301. A partial match result occurs when the store PAP1304and the load PAP2904match, the store PA[5:3]1306and the load PA[5:3]2906match, and at least one but not all of the true bits of the load byte mask2908have a corresponding true bit in the store byte mask1308. In one embodiment, the LSU117is configured such that store-to-load forwarding is not allowed if the store instruction is not able to provide all the requested load data. 
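The no match, full match, and partial match classification made at block3104may be summarized by the following C sketch; the function and parameter names and the mask width are illustrative assumptions rather than the actual comparison logic.

```c
#include <stdint.h>

typedef enum { MATCH_NONE, MATCH_PARTIAL, MATCH_FULL } match_result_t;

/* Classify the overlap between the committing store's bytes and the bytes
 * requested by a younger load, using PAPs rather than full physical memory
 * line addresses. */
static match_result_t classify_match(uint16_t store_pap, uint8_t store_pa_5_3, uint8_t store_byte_mask,
                                     uint16_t load_pap,  uint8_t load_pa_5_3,  uint8_t load_byte_mask)
{
    if (store_pap != load_pap || store_pa_5_3 != load_pa_5_3)
        return MATCH_NONE;
    uint8_t provided = (uint8_t)(store_byte_mask & load_byte_mask);
    if (provided == 0)
        return MATCH_NONE;                      /* no requested byte is available in the store data */
    if ((load_byte_mask & ~store_byte_mask) == 0)
        return MATCH_FULL;                      /* every requested byte is available */
    return MATCH_PARTIAL;                       /* some but not all requested bytes are available */
}
```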
In such an embodiment, when the load instruction is being executed, if the LSU117detects a partial match result between the predicted store PAP1304, store PA[5:3]1306, and store byte mask1308and the load PAP1495, load PA[5:3] and load byte mask1493, then the LSU117replays the load instruction (i.e., the load instruction does not complete its execution) and a memory dependence operand is created in the scheduler121that causes the scheduler121to wait to re-issue the load instruction for execution until the predicted store instruction has committed its store data to the L1 data cache103(or in an alternate embodiment, until the youngest store instruction older than the load instruction has committed its store data to the L1 data cache103), as described in more detail below with respect toFIGS.31C through31F. Advantageously, the comparisons are performed using the store PAP1304of the store instruction being committed and the load PAP2904of each valid younger LQE2901. Comparisons of PAPs are performed rather than comparisons of physical memory line addresses, which has the advantages of reduced storage space within the LSQ125relative to an implementation that stores the full load/store physical memory line address, and of PAP comparisons that are faster than full physical memory line address comparisons, as described above. Operation proceeds to block3106. At block3106, for each valid younger LQ entry2901, the LSU117updates the forwarding correctness information, as needed, based on the result of the associated comparison made at block3104and based on the forwarding behavior information. Recall that for a given load instruction associated with a valid younger LQ entry2901, the whole operation3006ofFIGS.30and31, including the operation at block3106to update the forwarding correctness information, may be performed multiple times since multiple older store instructions may be committed before the load instruction becomes the oldest load/store instruction and is committed. Consequently, the forwarding correctness information may be updated with each store instruction commit, e.g., FwdingViol2924may be set to true and FwdingGood2922may be set to false as the operation at block3106is performed for a first older store instruction that is committed, however FwdingViol2924may be set to false and FwdingGood2922may be set to true as the operation at block3106is performed for a second older store instruction that is committed, and then FwdingViol2924may be set to true and FwdingGood2922may be set to false as the operation at block3106is performed for a third older store instruction that is committed, and this updating may occur multiple times until all older store instructions have been committed. However, it is the resting state of the forwarding correctness information that is ultimately used at block3012ofFIG.30to determine whether a forwarding violation occurred. Updating of the forwarding correctness information for a LQE2901will now be described according to one embodiment. If the comparisons at block3104indicate no match, then the LQ entry2901forwarding correctness fields are not updated. This is because the LSU117will not have forwarded from, although it may have tried to forward from (i.e., the prediction may have indicated to try to forward from), this store instruction because at execution of the load instruction the LSU117will have detected no match. 
If the comparisons at block3104indicate a full match or a partial match, then the LSU117checks for either a forwarding violation or no forwarding violation situation, as described in the next paragraph, by examining Fwd2912and NonFwd2914and comparing FwdStId2916with the SQ entry1301identifier of the store instruction being committed (which is referred to henceforth as CmtStId). The comparison of FwdStId2916and CmtStId may indicate the LSU117forwarded from this store, i.e., from the store instruction being committed (FwdStId2916matches CmtStId), the LSU117forwarded from a younger store than the store instruction being committed (FwdStId2916is younger than CmtStId), or the LSU117forwarded from an older store than the store instruction being committed (FwdStId2916is older than CmtStId). In the case of a forwarding violation, the LSU117sets FwdingGood2922to false, FwdingViol2924to true, and FwdingViolStId2926to CmtStId. If the forwarding violation check indicates no forwarding violation, then the LSU117sets FwdingGood2922to true and FwdingViol2924to false, although in some cases the LSU117simply does not update the LQ entry2901, as described below. If the comparisons at block3104indicate a full match or a partial match, then the following checks are performed. If Fwd2912and NonFwd2914are both false, then a forwarding violation has been detected. If Fwd2912is true and FwdStId2916matches CmtStId, then no forwarding violation is detected. If NonFwd2914is true and FwdStId2916matches CmtStId, then no forwarding violation is detected. This is because, as described above with respect to block3104, the LSU117detected the store instruction is not able to provide all the requested load data (i.e., detected a partial match), set NonFwd2914to true, and replayed the load instruction. If Fwd2912or NonFwd2914is true and the LSU117forwarded from an older store than the store instruction being committed, then a forwarding violation is detected. If NonFwd2914is true and the LSU117forwarded from a younger store than the store instruction being committed, then a forwarding violation is detected. If Fwd2912is true and the LSU117forwarded from a younger store than the store instruction being committed, then the LSU117does not update the forwarding correctness information since the forwarding correctness information will be updated when the younger store instruction is committed. Embodiments have been described in which the LSU117performs store-to-load forwarding behavior correctness checks using load and store PAPs (e.g., load PAP2904and store PAP1304).FIG.31Bdescribed below summarizes the forwarding behavior correctness checks. In the description ofFIG.31B, alternate embodiments will also be described in which the LSU117performs store-to-load forwarding behavior correctness checks similar to the process described above, but in which the LSU117uses load and store physical memory line addresses (PMLAs), rather than load and store PAPs, to perform the forwarding behavior correctness checks. 
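The checks enumerated above may be condensed into the following C sketch. The three-way age relationship between FwdStId and CmtStId is passed in as a parameter because the circular-queue age comparison itself is not shown, and all names are assumptions made for illustration.

```c
#include <stdbool.h>

typedef enum {
    FWDSTID_IS_CMTSTID,           /* FwdStId matches the committing store */
    FWDSTID_OLDER_THAN_CMTSTID,   /* load forwarded/tried from an older store */
    FWDSTID_YOUNGER_THAN_CMTSTID  /* load forwarded/tried from a younger store */
} fwd_rel_t;

typedef enum { FCHK_VIOLATION, FCHK_NO_VIOLATION, FCHK_NO_UPDATE } fwd_check_t;

/* Applied only when the commit-time comparison indicates a full or partial
 * match between the committing store and a younger load's LQ entry. */
static fwd_check_t check_forwarding(bool fwd, bool non_fwd, fwd_rel_t rel)
{
    if (!fwd && !non_fwd)
        return FCHK_VIOLATION;     /* load neither forwarded nor tried, yet this store matches */
    if (rel == FWDSTID_IS_CMTSTID)
        return FCHK_NO_VIOLATION;  /* forwarded from, or correctly declined, this very store */
    if (rel == FWDSTID_OLDER_THAN_CMTSTID)
        return FCHK_VIOLATION;     /* forwarded/tried from a store older than the one committing */
    /* FwdStId is younger than the committing store */
    if (non_fwd)
        return FCHK_VIOLATION;     /* declined to forward, yet this matching older store existed */
    return FCHK_NO_UPDATE;         /* forwarded from a younger store; that store's commit will check */
}
```

In this sketch, a FCHK_VIOLATION result corresponds to setting FwdingGood2922false, FwdingViol2924true, and FwdingViolStId2926to CmtStId; FCHK_NO_VIOLATION corresponds to setting FwdingGood2922true and FwdingViol2924false; and FCHK_NO_UPDATE leaves the entry untouched.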
Advantageously, embodiments described above and below, regardless of whether the processor employs PAPs or PMLAs to perform the forwarding behavior correctness checks, perform forwarding behavior correctness checks at commit of each store instruction, rather than at execution of each store instruction as performed by conventional solutions, and therefore may provide an opportunity to employ a load queue125with fewer content-addressable memory (CAM) ports than a conventional processor, which may reduce the amount of power consumed and area over a conventional solution, as described below. Conventional high-performance superscalar out-of-order execution processors exist that perform forwarding behavior correctness checks at store execution time as follows. When a load instruction executes, the conventional processor uses the load address (e.g., untranslated address bits of the load address and perhaps additional virtual load address bits, since the full physical load address may not be available soon enough) to CAM against the store queue to pick a qualified store instruction to forward from. More specifically, if the CAM indicates the load address matches the store address of a store queue entry of an older outstanding store instruction and the load and store sizes are such that the store can provide store data to the load instruction, then the conventional processor picks the matching store instruction to forward from. In the case of multiple qualified older store instructions, the conventional processor picks the youngest of them. However, it is possible that the picked store instruction was not the correct store instruction to forward from. For example, because of out-of-order execution, it is possible that at the time the load instruction executed there was a qualified store instruction that had not yet executed, and therefore had not written its store address and store size to an entry in the store queue to CAM against, that should have been picked to forward from. For another example, if the CAM of the store queue was not made using the full physical addresses of the load and store instructions, then it is possible the picked store instruction should not have been forwarded from because there was not in fact an address match. Because of the possibility that the picked store instruction was not the correct store instruction to forward from, when each store instruction executes, the conventional processor uses the store address to CAM against the load queue to see if there are any load instructions that should have forwarded from this store instruction but did not. That is, the conventional processor performs its forwarding behavior correctness checks when it executes each store instruction. Because conventional high-performance superscalar processors are designed to execute multiple (N) store instructions per clock cycle, i.e., concurrently, each of the concurrently executed store instructions needs to be able to CAM against the load queue at the same time. This requires N CAM ports in the load queue. For example, a conventional high-performance superscalar processor might execute4store instructions concurrently, in which case the load queue requires at least 4 CAM ports, which may imply a significant amount of power consumption and area. Embodiments are described above and below that may facilitate the inclusion of fewer load queue CAM ports and therefore reduce power consumption and area. 
In one embodiment, the LSU117includes a store commit pipeline used to commit a store instruction, e.g., as described above with respect toFIGS.23-31A, and more specifically blocks3104and3106ofFIG.31A. The store commit pipeline uses a CAM port of the load queue125to CAM store address and size information (e.g., the store PAP1304/store PMLA5305(and store PA[5:3]1306and store byte mask1308) of the store instruction being committed against the load address and size information (e.g., load PAP2904/load PMLA5205(and load PA[5:3]2906and load byte mask2908) of each valid younger entry of the load queue125to detect matches. Advantageously, because the processor100performs store-to-load forwarding checking when a store instruction commits, rather than when a store instruction executes like a conventional processor, and because in one embodiment the processor100is configured to commit fewer store instructions per clock cycle than it is configured to execute (let N be the maximum number of store instructions the processor100is configured to execute per clock cycle and Q be the maximum number of store instructions the processor100is configured to commit per clock cycle), the embodiments of the load queue125need only include Q CAM ports, rather than N. This may result in a significant power and area savings. In one embodiment, in instances in which certain alignment requirements of two program order adjacent store instructions are satisfied, the two store instructions may be committed together as a pair using the same CAM port. FIG.31Bis an example flowchart illustrating operation of the LSU117to commit a store instruction and perform a forwarding behavior correctness check during commit of the store instruction in accordance with embodiments of the present disclosure. Operation begins at block3192. At block3192, during execution of a store instruction, the LSU117records (i.e., populates or writes) store information to a SQ entry1301allocated to the store instruction as described with respect to block1506ofFIG.15. The store information may include the store PAP1304, store PA1306, and store byte mask1308ofFIG.13(and store VA2311ofFIG.23). In an alternate embodiment, rather than recording a store PAP1304, the LSU117records a store physical memory line address (e.g., store PMLA5305ofFIG.53) to the SQ entry1301that may be used to perform forwarding behavior correctness checks, e.g., at block3198below. Operation proceeds to block3194. At block3194, during execution of a load instruction, the LSU117performs forwarding behavior as described with respect to block1606ofFIG.16. That is, the LSU117selectively forwards or does not forward store data1302from a store instruction to the executing load instruction. The forwarding behavior decision may be made based on a load PAP (e.g., load PAP1495ofFIG.14) and one or more store PAPs (e.g., the store PAP1304ofFIG.13) included in the store information recorded at block3192for outstanding store instructions older than the load instruction. In an alternate embodiment, rather than making the forwarding behavior decision based on a load PAP and one or more store PAPs, the LSU117makes the forwarding behavior decision based on a load physical memory line address (e.g., a PMLA provided by a TLB during execution of the load instruction (e.g., by L1 TLB5141ofFIG.51) and one or more store physical memory line addresses (e.g., store PMLA5305ofFIG.53). Operation proceeds to block3196. 
At block3196, during execution of the load instruction, the LSU117records (i.e., populates or writes) load information to a LQ entry2901allocated to the load instruction as described with respect to block3002ofFIG.30. The load information may include the load PAP2904, load PA2906, and load byte mask2908ofFIG.29. In an alternate embodiment, rather than recording a load PAP2904, the LSU117records a load physical memory line address (e.g., load PMLA5205ofFIG.52) to the LQ entry2901that may be used to perform forwarding behavior correctness checks, e.g., at block3198below. The LSU117also records to the LQ entry2901forwarding behavior information (e.g., Fwd flag2912, NonFwd flag2914, FwdStId2916ofFIG.29) that describes the forwarding behavior. Operation proceeds to block3198. At block3198, during commit of a store instruction, the LSU117uses the store information recorded at block3192for the store instruction and the load information recorded at block3196for each outstanding load instruction younger than the store instruction and the forwarding behavior recorded at block3196to check the correctness of the forwarding behavior performed at block3194as described with respect to blocks3104and3106ofFIG.31A. The forwarding behavior correctness check may be performed using a store PAP (e.g., store PAP1304ofFIG.13) and load PAPs (e.g., load PAP2904ofFIG.29). In an alternate embodiment, rather than performing the forwarding behavior correctness check based on a store PAP and load PAPs, the LSU117performs the forwarding behavior correctness check based on a store physical memory line address (e.g., store PMLA5305ofFIG.53) and load physical memory line addresses (e.g., load PMLA5205ofFIG.52). As described above, committing the store instruction includes writing the store data1302to the L1 data cache103and deallocating the SQ entry1301previously allocated to the store instruction. Further, the LSU117performs the forwarding behavior correctness check at block3198not only at commit of a single store instruction, but also at commit of each additional store instruction older than the load instruction, if any, and selectively updates the forwarding correctness information, i.e., depending upon whether the additional older store instruction is relevant to the correctness of the forwarding behavior, as described above, e.g., with respect toFIG.31A. Performing the forwarding behavior correctness check may also include recording to the LQ entry2901forwarding correctness information (e.g., FwdingGood indicator2922, FwdingViol indicator2924, and FwdingViolStId2926ofFIG.29). Further, if after all store instructions older than the load instruction have committed, the accumulated forwarding behavior correctness checks of the older committed store instructions indicate the forwarding behavior decision made at block3194was incorrect, the LSU117signals the need for an abort of the load instruction, as described above, e.g., with respect toFIG.30. Further, the LSU117performs the forwarding behavior correctness check at block3198for each entry2901of the load queue125whose load instruction is younger than the store instruction being committed, as described above, e.g., with respect toFIG.31A. 
FIG.31Cis an example block diagram illustrating an entry3151of a structure, e.g., scheduler121ofFIG.1or other re-execution structure (not shown), of the processor100from which a load instruction may be issuable for re-execution after having been issued and executed and determined to be unforwardable in accordance with embodiments of the present disclosure. An unforwardable load instruction, in the present context, is a load instruction for which it is determined during execution of the load instruction that an entry1301of the store queue125holds store data1302that includes some but not all bytes of load data requested by the load instruction. The re-execution structure entry3151includes a memory dependence operand (MDO)3153, an MDO valid bit3155, and other fields3157. The MDO valid bit3155, if true, indicates the memory dependence operand3153is valid. More specifically, the presence of a valid memory dependence operand3153in a valid re-execution structure entry3151indicates that the instruction associated with the entry3151has a dependence upon the availability of a memory operand, and the instruction is ineligible to be issued for re-execution until the dependence is satisfied. In one embodiment, the entry3151may also include a type field that specifies the type of the memory dependence operand. In the case of an unforwardable load instruction, the memory operand upon which the load instruction is dependent is the store data that is written to the L1 data cache103at commit of a store instruction identified in the memory dependence operand3153. That is, the load instruction is not eligible to re-execute until the store instruction whose identifier is in the memory dependence operand3153updates the L1 data cache103with its store data. In one embodiment, the identifier of the store instruction is an index into the store queue125of the SQ entry1301allocated to the identified store instruction. In an alternate embodiment, the identifier of the store instruction is an index into the ROB122of the ROB entry allocated to the identified store instruction. The other fields3157may include other indications of operand dependencies (e.g., register operand dependencies) of the instruction that must be satisfied before the instruction is eligible to be re-executed. The other fields3157may also include a valid bit (distinct from MDO valid bit3155) that indicates whether the entry3151is valid. In an embodiment in which the re-execution structure is the scheduler121, the other fields3157may include an Issued bit that indicates whether or not the instruction has been issued. The Issued bit is initially clear when the entry3151is allocated to the instruction, the Issued bit is set once the instruction is issued for execution, and the Issued bit is cleared if the instruction does not complete its execution, e.g., its execution is canceled as described below with respect to block3162. In one embodiment, the entry3151is not deallocated, i.e., remains allocated, until the instruction completes its execution; as a result, the instruction remains in the scheduler121to be subsequently re-issued and re-executed until the instruction completes its execution. Use of the re-execution structure entry3151and specifically the memory dependence operand3153and MDO valid bit3155will be described in more detail below with respect toFIG.31D. 
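A C sketch of the re-execution structure entry3151described above is given below; the field names and widths are illustrative assumptions, and the remaining operand-dependency fields are omitted.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model of a re-execution structure (e.g., scheduler) entry. */
typedef struct {
    uint8_t mem_dep_operand;  /* MDO: SQ (or, alternatively, ROB) index of the store being waited on */
    bool    mdo_valid;        /* true: the instruction is ineligible to re-issue until cleared */
    bool    entry_valid;      /* entry holds a live instruction */
    bool    issued;           /* set on issue; cleared again if execution is canceled */
    /* other operand-dependency fields omitted */
} reexec_entry_t;
```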
FIG.31Dis an example flowchart illustrating the handling of an unforwardable load instruction during execution of the load instruction in accordance with embodiments of the present disclosure. Operation begins at block3162. At block3162, during execution of a load instruction, the LSU117makes a determination (e.g., during the store-to-load-forwarding determination made according to the operation described above with respect toFIG.16) that an entry1301of the store queue125is holding store data that includes some but not all bytes of the load data requested by the load instruction, i.e., the load instruction is an unforwardable load instruction. More specifically, the LSU117performs a comparison of the load PAP1495, load PA[5:3], and load byte mask1493of the load instruction (e.g., ofFIG.14) with the store PAP1304, store PA[5:3]1306, and store byte mask1308from the SQ entry1301of the store instruction that the load instruction is predicted to forward from (e.g., by comparison of a subset of virtual address bits as described with respect toFIG.18, or by the MDP111as described with respect toFIG.19) and detects a partial match, i.e., the load PAP1495and the store PAP1304match, the load PA[5:3] and the store PA[5:3]1306match, and at least one but not all of the true bits of the load byte mask1493have a corresponding true bit in the store byte mask1308. In an alternate embodiment, rather than making the comparison and unforwardable determination based on the load PAP1495and the store PAP1304, the LSU117makes the comparison and unforwardable determination based on a load physical memory line address (e.g., a PMLA provided by a TLB during execution of the load instruction (e.g., by L1 TLB5141ofFIG.51) and a store physical memory line address (e.g., store PMLA5305ofFIG.53). As a result of the determination that the load instruction is an unforwardable load instruction, the LSU117cancels execution of the load instruction, i.e., the LSU117does not allow the load instruction to complete execution. Operation proceeds to block3164. At block3164, the LSU117writes the identifier of a store instruction that is older in program order than the load instruction to the memory dependence operand3153of the re-execution structure entry3151associated with the load instruction and sets the MDO valid bit3155to indicate that the load instruction is not eligible to re-execute until the identified older store instruction updates the cache with its store data. In one embodiment, the re-execution structure is the scheduler121, although as described above, the re-execution structure may be a separate structure from the scheduler121. In one embodiment in which the re-execution structure is the scheduler121, the entry3151is the same entry of the scheduler121from which the load instruction is initially issued to the LSU117for execution. In one embodiment, the identifier of the identified older store instruction is the index into the store queue125of the SQ entry1301allocated to the identified older store instruction. In one embodiment, the identified older store instruction is the store instruction with which the load instruction has the partial match as determined at block3162. In an alternate embodiment, the identified older store instruction is the youngest store instruction in program order that is older than the load instruction. 
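The write of the memory dependence operand at block3164might be sketched in C as follows. The choice between the partially matching predicted store and the youngest older store reflects the two embodiments described above; the function and parameter names are assumptions, and pointers stand in for the MDO and MDO valid fields of the re-execution entry.

```c
#include <stdbool.h>
#include <stdint.h>

/* Mark the unforwardable load's re-execution entry as dependent on an older
 * store's commit; the load's execution is assumed to have been canceled. */
static void record_memory_dependence(uint8_t *mdo, bool *mdo_valid,
                                     uint8_t partial_match_st_id, /* predicted store with the partial match */
                                     uint8_t last_st_id,          /* youngest store older than the load */
                                     bool wait_for_youngest_older_store)
{
    *mdo       = wait_for_youngest_older_store ? last_st_id : partial_match_st_id;
    *mdo_valid = true; /* load may not re-issue until this store writes its data to the cache */
}
```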
FIG.31Eis an example flowchart illustrating the handling of an unforwardable load instruction during commit of a store instruction upon which the unforwardable load instruction may be dependent in accordance with embodiments of the present disclosure. Operation begins at block3172. At block3172, the LSU117commits a store instruction. That is, the LSU117writes to the L1 data cache103the store data from the SQ entry1301allocated to the store instruction and deallocates the SQ entry1301. Operation proceeds to block3174. At block3174, in the case of an unforwardable load instruction (e.g., as indicated by the type of the memory dependence operand), the LSU117compares the identifier of the store instruction being committed (e.g., the index of the SQ entry1301that was allocated to the store instruction) with the valid memory dependence operand3153(i.e., has a true MDO valid bit3155) of each valid entry3151of the re-execution structure for a match. Operation proceeds to decision block3176. At decision block3176, if there is a match for a given entry3151, operation proceeds to block3178; otherwise, operation proceeds to block3182. At block3178, the LSU117clears the MDO valid bit3155of the matching entry3151to indicate that the instruction associated with the entry3151, in this case the load instruction whose memory dependence operand3153was written and whose MDO valid bit3155was set at block3164, is no longer ineligible to re-execute because of the previous dependency on the identified older store instruction, which is because the older store instruction identified in the memory dependence operand3153has updated the L1 data cache103with its store data. Operation proceeds to block3182. At block3182, the scheduler121(or other re-execution structure) checks each valid entry3151to see whether all dependencies indicated by the valid entry3151are satisfied, including any dependency indicated by the memory dependence operand3153, in which case the instruction associated with the entry3151is eligible to be re-executed, in this case the load instruction whose memory dependence operand3153was written and whose MDO valid bit3155was set at block3164and whose MDO valid bit3155was cleared at block3178. Additionally, if the load instruction is eligible to be re-executed, the scheduler121checks to see if there is an LSU117pipeline available to execute the load instruction and, if so, issues the load instruction for re-execution to the available LSU117pipeline. Operation proceeds to block3184. At block3184, during re-execution of the load instruction, the LSU117makes a determination that the store instruction from which the load instruction is predicted to forward is no longer outstanding, so the LSU117reads the load data from the L1 data cache103and sets the NonFwd flag2914to true, as described above with respect to block3002ofFIG.30. Advantageously, the embodiments described with respect toFIGS.31C through31E, by identifying a specific older store instruction and then re-executing the unforwardable load instruction after the identified older store instruction has written its store data to cache, may advantageously avoid the need to perform an abort process to remedy a store-to-load forwarding violation, or at least reduce the likelihood of the need to perform the abort process. 
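The wake-up performed at blocks3174through3182may be sketched as follows in C; the structure layout mirrors the illustrative entry given earlier, the queue size is arbitrary, and the eligibility and issue steps are reduced to clearing the MDO valid bit.

```c
#include <stdbool.h>
#include <stdint.h>

#define REEXEC_ENTRIES 64 /* illustrative size */

typedef struct {
    bool    entry_valid;
    bool    mdo_valid;
    uint8_t mem_dep_operand; /* SQ index of the store the load is waiting on */
} reexec_entry_t;

/* At commit of a store (block 3172), compare its SQ entry index against the
 * valid memory dependence operand of each valid entry; on a match, clear the
 * MDO valid bit so the load becomes eligible to be re-issued once its other
 * dependencies are satisfied (blocks 3174-3182). */
static void wake_dependent_loads(reexec_entry_t entries[REEXEC_ENTRIES], uint8_t committing_sq_id)
{
    for (int i = 0; i < REEXEC_ENTRIES; i++) {
        if (entries[i].entry_valid && entries[i].mdo_valid &&
            entries[i].mem_dep_operand == committing_sq_id) {
            entries[i].mdo_valid = false; /* store data is now in the L1 data cache */
        }
    }
}
```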
Avoiding an abort process may be advantageous because re-execution involves consumption of execution bandwidth of an LSU117pipeline a second time and a preceding delay until the identified store instruction writes the cache, which may have a small negative impact on program performance relative to an abort process, whereas an abort process may affect many other instructions besides the load instruction (i.e., all instructions younger than the load instruction), involves consumption of execution bandwidth of an execution pipeline a second time for all the aborted instructions, and involves the delay of re-fetching, re-decoding, re-dispatching, and re-executing all the aborted instructions. In the embodiment in which the identified store instruction is the youngest store instruction older than the load instruction, the load instruction is advantageously guaranteed to not cause a store-to-load forwarding violation and its concomitant abort since upon re-execution all older store instructions will have been committed such that the LSU117will correctly read the load data from the L1 data cache103. The embodiment in which the identified store instruction is the store instruction with which the load instruction has the partial match may have the advantage of reduced re-execution delay relative to the first embodiment and may have the disadvantage that there is still a possibility the load instruction will cause a store-to-load forwarding violation, e.g., by reading the load data from the L1 data cache103and subsequently determining through the store commit-time store-to-load forwarding checking that the load instruction has a partial match or full match with an older store instruction younger than the store instruction with which there was a partial match, i.e., younger than the predicted store instruction.
PAP Reuse Management
FIG.32is an example block diagram illustrating portions of the processor100ofFIG.1that manage PAP reuse in accordance with embodiments of the present disclosure. As described above, a PAP is a proxy for a physical memory line address, and a PAP uniquely identifies an entry401in the L2 cache107into which a line of memory at the physical memory line address is allocated. That is, the set index and way number of the PAP uniquely identify the entry401. Because two different physical memory line addresses may map to the same L2 cache entry401, two different physical memory line addresses may map to a given PAP. This may occur when a first physical memory line address is allocated into an entry of the L2 cache107and a PAP is formed as a proxy for the first physical memory line address, and subsequently the first physical memory line address is removed from the entry of the L2 cache107and a second, i.e., different, physical memory line address is allocated into the L2 cache107. At this point, if the processor100were to begin to use the PAP as a proxy for the second physical memory line address while the same PAP is still being used as a proxy for the first physical memory line address, incorrect results could be generated by the processor100. PAP reuse management refers to the handling of such occurrences by the processor100to assure correct operation, i.e., to the reuse of PAPs. 
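For the embodiment described herein in which the PAP comprises the L2 set index taken from physical address bits PA[16:6] (2048 sets) and a two-bit way number of the four-way L2 cache107, forming the PAP for an allocated entry401might be sketched in C as follows; the packing of the set index and way bits into a single value is an illustrative assumption.

```c
#include <stdint.h>

/* Form a 13-bit PAP from the L2 entry chosen for the line: an 11-bit set
 * index taken from physical address bits PA[16:6] plus a 2-bit way number. */
static uint16_t form_pap(uint64_t physical_line_address, uint8_t l2_way)
{
    uint16_t set_index = (uint16_t)((physical_line_address >> 6) & 0x7FF); /* PA[16:6] */
    return (uint16_t)((set_index << 2) | (l2_way & 0x3));
}
```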
As an example of incorrect operation that could occur if the processor100did not properly perform PAP reuse management, assume a PAP is held as a store PAP1304in a SQ entry1301allocated to a store instruction that has completed its execution and that specifies a virtual memory line address that translates into a first physical memory line address. That is, the store PAP1304is being used as a proxy for the first physical memory line address that specifies the location of the line of memory to which the store data1302held in the SQ entry1301is waiting to be written. Further, assume the processor100were to assign the same PAP as a proxy for a load instruction that specifies a virtual memory line address that translates into a second physical memory line address that is different from the first physical memory line address, and assume the processor100were to store the same PAP into the L1 data cache entry201allocated to the virtual memory line address specified by the load instruction—which the processor100does not do because this could cause incorrect results, but is being assumed in order to illustrate the need for proper PAP reuse management. Still further, assume during execution of the load instruction the LSU117compares the load PAP1495of the load instruction (received from the L1 data cache entry201upon which the load virtual memory line address hits) with the store PAP1304of the store instruction and detects a match and decides to forward the store data1302to the load instruction. This could produce incorrect results because the load instruction would receive the incorrect data since the load and store instructions are referencing two different physical memory line addresses. Similarly, the store-to-load forwarding checks performed when the store instruction commits would fail to catch the fact that the store data was incorrectly forwarded because the store PAP1304would match the load PAP2904, even though their actual physical memory line addresses do not match. PAP reuse management, as described below, prevents such occurrences and assures correct results are obtained, while still enjoying the benefits of the use of PAPs, as described herein, e.g., less space to store smaller PAPs rather than larger physical memory line addresses in the various structures (e.g., L1 data cache103, SQ125, LQ125, WCB109) and faster comparisons of smaller PAPs than larger physical memory line addresses. As another example of incorrect operation that could occur if the processor100did not properly perform PAP reuse management, assume a PAP is held as a write PAP2404in a WCB entry2401and the write PAP2404is a proxy for a first physical memory line address. That is, the write PAP2404is being used as a proxy for the first physical memory line address that specifies the location of the line of memory to which the write data2402held in the WCB entry2401is waiting to be written. Further, assume the processor100were to assign the same PAP as a proxy for a store instruction that specifies a virtual memory line address that translates into a second physical memory line address that is different from the first physical memory line address, and assume the processor100were to store the same PAP into the store PAP1304of the SQ entry1301allocated to the store instruction—which again the processor100does not do because this could cause incorrect results, but is being assumed in order to illustrate the need for proper PAP reuse management. 
Still further, assume during commit of the store instruction the WCB109compares the store PAP1304of the store instruction being committed with the write PAP2404of the WCB entry2401and detects a match and decides to combine the store data1302into the WCB entry2401. This could produce incorrect results because the store instruction being committed and the one or more older store instructions whose store data were previously combined into the WCB entry2401are referencing two different physical memory line addresses. PAP reuse management, as described below, prevents such occurrences and assures correct results are obtained, while still enjoying the benefits of the use of PAPs, as described herein. The processor100ofFIG.32includes the PCL132, front-end110, decode unit112, schedulers, load and store queues (LSQ)125, LSU117, WCB109, DTLB141, L1 data cache103, and L2 cache107ofFIG.1. The LSU117also generates a PAP reuse abort request901to the PCL132. The PCL132generates a global abort signal1115provided to the front-end110, the decode unit112, the schedulers121, the LSQ125, and the LSU117. The PCL132also generates a block dispatch signal3205received by the decode unit112. The PCL132also generates a flush request signal1001received by the schedulers121, LSU117, and LSQ125. The LSU117also generates an LSQ snoop3293received by the LSQ125. The generation of each of these signals and their uses are described in more detail below, including a more detailed description of logic blocks of the PCL132. A load/store instruction is outstanding when the processor100has allocated the resources necessary for it to execute (e.g., ROB122entry and SQ entry1301or LQ entry2901), the decode unit112has dispatched the load/store instruction to the back-end130(i.e., in program order to the schedulers121), and the load/store instruction has not yet been committed. The LSU117generates a ld/st completed signal3207to the PCL132to indicate that a load/store instruction has completed its execution. In response, the PCL132sets a completed flag in the identified entry of the ROB122to true. A load/store instruction has completed execution when it hits in the L1 data cache103and receives a PAP from the L1 data cache103and writes the PAP to the entry2901/1301allocated for it in the load/store queue125, i.e., the PAP held in the allocated entry2901/1301is valid. However, a load/store instruction may execute but not complete execution. That is, the scheduler121may issue a load/store instruction to the LSU117to be executed and the LSU117may execute the load/store instruction; however, the load/store instruction may not complete execution during that execution instance for various reasons. For example, the load/store instruction may miss in the L1 data cache103and need to be replayed, i.e., sent back to the scheduler121until the L1 data cache103has been filled with a cache line implicated by the virtual address specified by the load/store instruction that missed in the L1 data cache103, at which time the load/store instruction will be ready to be re-issued for execution. In other words, just because a load/store instruction is executed does not mean it has completed its execution. The LSU117generates a ld/st committed signal3203to the PCL132to indicate that a load/store instruction has been committed. In response, the PCL132sets a committed flag in the identified entry of the ROB122to true. 
A load/store instruction is ready to be committed when there are no older outstanding instructions (i.e., older in program order than the load/store instruction) that could cause the load/store instruction to be aborted and the load/store instruction is the oldest outstanding load/store instruction (i.e., load/store instructions are committed in program order). In one embodiment, the ld/st committed signal3203and the ld/st completed signal3207each specify the ROB identifier of the committed/completed load/store instruction. As described above, a store instruction that is older than a load instruction can cause the load instruction to need to be aborted. This is because the LSU117may have incorrectly forwarded store data from the store instruction to the load instruction, as determined during store-to-load forwarding checks, as described above. Therefore, a load instruction is not ready to be committed until the youngest store instruction that is older than the load instruction in program order commits. Furthermore, a store instruction that is younger than an uncommitted load instruction is not ready to be committed since the load instruction could still be aborted which would require the store instruction to also be aborted. Thus, the LSU117requires that a load/store instruction must be the oldest outstanding load/store instruction in order to be ready to be committed. To commit a load instruction, the LSU117deallocates the entry in the load queue (LQ)125that has been allocated for the load instruction (e.g., previously by the decode unit112), i.e., the LSU117frees up the entry for use by another load instruction. In one embodiment, the processor100retires a load instruction by promoting to architectural state the destination register specified by the load instruction within the physical register file105. In some instances, retirement of a load/store instruction may occur simultaneously with commitment of the load/store instruction, whereas in other instances, retirement of a load/store instruction may occur after commitment of the load/store instruction. To commit a store instruction, the LSU117performs forwarding correctness checks, as described above. Additionally, the LSU117writes the store data1302held in the associated SQ entry1301to the L1 data cache103, if necessary, e.g., unless the cache line implicated by the store instruction that is being committed is no longer present in the L1 data cache103. Still further, the LSU117writes the store data1302into the WCB109, which may include combining the store data1302with store data of an older store instruction, as described above. (In an embodiment in which the L1 data cache103is a write-back cache, the store data1302need not be written to the WCB109.) Finally, the LSU117deallocates the entry in the store queue (SQ)125that has been allocated for the store instruction (e.g., previously by the decode unit112), i.e., the LSU117frees up the entry for use by another store instruction. The L2 cache107, LSU117, and PCL132operate together to prevent the processor100from updating architectural state based on any comparison of the PAP as a proxy for the second physical memory line address that was made while the PAP is still available for comparison as a proxy for the first physical memory line address (e.g., as described with respect toFIG.33). Stated alternatively, the L2 cache107, LSU117, and PCL132operate together to prevent the processor100from concurrently using a PAP as a proxy for the first and the second physical memory line addresses. 
More specifically, the LSU117generates a PAP reuse abort request901(e.g., as described with respect to block3414ofFIG.34) to the PCL132when the LSU117detects that a second physical memory line address, e.g., at address Y, misses in the L2 cache107and the PAP associated with the entry401of the L2 cache107to be replaced or invalidated in response to the miss is already in use as a proxy for a first physical memory line address, e.g., address X, different from the second physical memory line address by an outstanding load/store instruction that has completed execution (e.g., as described with respect to block3406ofFIG.34). The LSU117generates the PAP reuse abort request901to prevent the processor100from updating architectural state based on any comparison of the PAP as a proxy for the second physical memory line address that was made while the PAP is still available for comparison as a proxy for the first physical memory line address (e.g., as described with respect to block3416ofFIG.34) and to prevent concurrent use of the PAP as a proxy for the first and the second physical memory line addresses. In response to the PAP reuse abort request901, the PCL132performs an abort process that includes non-selectively flushing all instructions from the in-order front-end110and mid-end120of the processor100, restoring microarchitectural state of the processor100to its appropriate state, and selectively flushing from the out-of-order back-end130all instructions younger than a flush boundary1117and, in a first embodiment, temporarily preventing dispatch of instructions until instructions older than the flush boundary1117have committed, as described below in more detail. In an alternate embodiment, the flush boundary1117is selected such that the dispatch prevention is not needed, as described in more detail below. A flush of an instruction includes invalidating, or removing, the instruction (e.g., clearing a valid bit associated with the instruction as it flows down a pipeline and/or sits in a storage structure) from all pipelines (e.g., pipelines of the EUs114) and relevant storage structures (e.g., entries in the scheduler121, entries in the load/store queue125). A flush of an instruction also includes invalidating the entry in the ROB122allocated to the instruction. The PCL132includes prioritization and location logic1102and flush logic1104that are part of the abort and exception-handling logic134. The prioritization and location logic1102receives the PAP reuse abort request901in addition to the oldest outstanding ROB ID1113. The flush logic1104receives the youngest outstanding ROB ID1111as well as the flush boundary1117and the global abort1115from the prioritization and location logic1102. The abort process begins with the prioritization and location logic1102determining and providing the flush boundary1117, asserting the block dispatch signal3205that is received by the decode unit112, and asserting the global abort signal1115that is received by the front-end110, decode unit112, schedulers121, LSU117, and load and store queues125. In response to the global abort1115and flush boundary1117, the flush logic1104generates one or more flush requests1001to the schedulers121, LSU117, and load and store queues125to accomplish the selective flushing of the back-end130, i.e., to flush all instructions younger in program order than the flush boundary1117. The global abort signal1115includes information needed by various units of the processor100to accomplish the abort process. 
The prioritization and location logic1102may concurrently receive abort requests from multiple sources of the processor100, e.g., different abort request types from different execution units114. The prioritization and location logic1102prioritizes the abort requests to select a highest priority abort request. The prioritization and location logic1102also locates the flush boundary1117appropriate for the selected highest priority abort request. The flush boundary1117is a location in between two instructions in the program order. In the case of a PAP reuse abort request901, in a first embodiment, the prioritization and location logic1102locates the flush boundary1117just before the oldest in program order load/store instruction that has not yet completed its execution, as described in more detail below. In an alternate embodiment, the prioritization and location logic1102locates the flush boundary1117just before the oldest in program order load/store instruction that has completed its execution and whose load/store PAP2904/1304matches the PAP of the entry401being removed from the L2 cache107, also referred to as the removal PAP. A removal of an entry in the cache can occur in the following circumstances. First, a removal occurs when the cache replaces the entry with a copy of a line of memory specified by a different physical memory line address. Second, a removal occurs when the cache invalidates the entry, which may occur in response to an external snoop request that specifies the physical memory line address held in the cache entry. In both circumstances, if the cache line has been modified, the cache writes the modified cache line held in the entry back to memory before replacing or invalidating the entry. Third, a cache maintenance instruction may either flush or invalidate a line from the cache, in which a flush cache maintenance instruction writes back the modified cache line before invalidating the cache entry. All instructions younger than the flush boundary1117are flushed during the abort process. Once the prioritization and location logic1102selects the highest priority abort request and locates the flush boundary1117, it generates the global abort signal1115to begin the abort process. In response to the global abort signal1115, the front-end110and the decode unit112non-selectively flush all instructions therein, and the schedulers121stop issuing instructions to the execution units114for execution. In the first flush boundary embodiment, the PCL132continues to generate the block dispatch signal3205to block dispatch of instructions to the back-end130until all load/store instructions after the flush boundary1117are committed. The PCL132generates the flush request signal1001to flush one or more instructions. The flush request1001may include a ROB ID that specifies the location in the ROB122, and thus the instruction's location in program order, of at least one instruction that is requested to be flushed from the back-end130. Embodiments may include a single flush request in which the single instruction specified by the ROB ID is flushed, a flash flush request in which all instructions younger than and including the instruction specified by the ROB ID are flushed, or a hybrid flush request in which the number of instructions specified in the flush request1001that are younger than and including the instruction whose location is specified by the ROB ID are flushed. The abort processing may be performed similarly to abort processing embodiments described in U.S. patent application Ser. 
No. 17/204,662 (VENT.0104) and Ser. No. 17/204,701 (VENT.0123), filed Mar. 17, 2021, each of which is hereby incorporated by reference in its entirety. The L2 cache107sends a PAP reuse snoop request3299to the LSU117followed by a line fill request return3297. The PAP reuse snoop request3299specifies a PAP (e.g., formed at block3404), e.g., the removal PAP. In response to the PAP reuse snoop request3299, the LSU117determines whether the PAP is already in use, i.e., is available for comparison with other PAPs, as a proxy for a physical memory line address different from the physical memory line address that missed in the L2 cache107by snooping the LSQ125, e.g., as described in more detail below with respect to block3406ofFIG.34. If so, the LSU117generates a PAP reuse abort request901to prevent update of architectural state based on a comparison using the PAP as a proxy for the physical memory line address that missed in the L2 cache107when the PAP is already in use as a proxy for a different physical memory line address and to prevent concurrent use of the PAP as a proxy for the first and the second physical memory line addresses, as described in more detail below. The line fill request return3297returns to the L1 data cache103a line of memory at a physical memory line address specified by the line fill request, along with a PAP that is a proxy for the physical memory line address specified by the line fill request. FIG.33is an example flowchart illustrating operation of the processor100ofFIG.1to manage PAP reuse in accordance with embodiments of the present disclosure. Operation begins at block3302. At block3302, the L2 cache107allocates an entry401for a physical memory line address, which in the example will be denoted address X. That is, the L2 cache107selects an entry401, having a unique set index and way combination, into which a cache line at address X will be filled, i.e., written. The L2 cache107forms a PAP for address X from the set index and way of the allocated entry401. In one embodiment, e.g., as described above, the PAP includes physical address bits PA[16:6] and the two bits of the way number L2 way[1:0], although other embodiments are contemplated in which the PAP is formed in other manners. For example, if the L2 cache107has more than four ways, e.g., eight ways, then the PAP includes more bits to specify the way. For another example, if the L2 cache107has more than 2048 sets, then the PAP includes more bits to specify the set index. For yet another example, embodiments are contemplated in which the L2 cache107hashes bits of the virtual address321to generate the set index bits. Operation proceeds to block3304. At block3304, the LSU117makes the PAP formed at block3302available as a proxy for address X for comparison with PAPs that are proxies of other physical memory line addresses. For example, the LSU117may make the PAP available within a SQ entry1301(store PAP1304) for comparison with a load PAP (e.g., PAP1495ofFIG.14) of a load instruction during its execution to determine whether store data of the SQ entry1301should be forwarded to the load instruction, as described above in detail. For another example, the LSU117may make the PAP available within a LQ entry2901(load PAP2904) for comparison with the store PAP1304of a SQ entry1301during commit of a store instruction to perform store-to-load forwarding checking, as described above in detail. 
For another example, the LSU117may make the store PAP1304available from a SQ entry1301for a store instruction that is being committed for comparison with a write PAP2404of a WCB entry2401to determine whether the store data of the store instruction may be combined with the store data of older store instructions before being written to the L2 cache107. In each of these examples, the processor100is making a comparison of PAPs to determine whether there are physical memory line address matches rather than making a comparison of the physical memory line addresses themselves. Advantageously, a PAP comparison is faster than and requires less storage space than a physical memory line address. As described above, e.g., atFIG.7, the PAP formed at block3302is provided by the L2 cache107to the L1 data cache103where it is stored. During execution of a load instruction, the PAP is provided by the L1 data cache103and written into a LQ entry2901. During execution of a store instruction, the PAP is provided by the L1 data cache103and written into a SQ entry1301. The PAP may also be used by the L1 data cache103to service a snoop request received from the L2 cache107, as described above with respect toFIG.8. Operation proceeds to block3306. At block3306, the L2 cache107replaces the same entry401previously allocated for address X at block3302with a cache line of memory at a different physical memory line address, which in the example will be denoted address Y. That is, physical memory line addresses X and Y map to the same set of the L2 cache107and the replacement algorithm of the L2 cache107selected the same way within the selected set for address Y to replace that was selected for address X at block3302. In other words, in the example embodiment, physical address bits PA[16:6] of addresses X and Y are identical, and the replacement algorithm selected the same way in both instances. Consequently, the L2 cache107forms a PAP for address Y from the set index and way of the entry401selected for replacement, which is the same PAP value formed at block3302. Additionally, because the L2 cache107is inclusive of the L1 data cache103, as described above with respect to block706ofFIG.7, the L2 cache107causes the L1 data cache103to evict its copy of the cache line replaced in the L2 cache107here at block3306(e.g., in response to receiving the PAP reuse snoop request at block3406described below). Operation proceeds to block3308. At block3308, the L2 cache107, LSU117, and PCL132operate to prevent update of architectural state based on any comparison of the PAP as a proxy for the physical memory line address Y that was made while the PAP is still available for comparison as a proxy for physical memory line address X and to prevent concurrent use of the PAP as a proxy for physical memory line addresses X and Y. As described in more detail with respect toFIG.34and the remaining Figures, this may involve flushing any load/store instruction for which the LSU117makes a comparison of the PAP as a proxy for address Y that is made while the PAP is still available for comparison as a proxy for address X, e.g., via an abort process initiated by a PAP reuse abort request901made by the LSU117. FIG.34is an example flowchart illustrating operation of the processor100ofFIG.1to manage PAP reuse in accordance with embodiments of the present disclosure. 
More specifically,FIG.34illustrates operation at blocks3306and3308ofFIG.33in more detail in the case of a load/store instruction miss in the L1 data cache103that precipitates the replacement of an L2 cache entry401at block3306. Operation begins at block3402. At block3402, in response to a miss of a virtual address specified by a load/store instruction in the L1 data cache103, the LSU117generates a cache line fill request to the L2 cache107that specifies physical memory line address Y of block3306ofFIG.33into which the virtual address is translated. During processing of the fill request, address Y misses in the L2 cache107. In response to the miss, the LSU117generates a cache line fill request to memory (or a higher-level cache, e.g., L3 cache) that specifies physical memory line address Y. Operation proceeds to block3404. At block3404, the L2 cache107picks a replacement way in the set of the L2 cache107selected by the set index obtained from address Y, e.g., PA[16:6]. The L2 cache107forms a PAP using the set index and way of the entry401selected for replacement. In one embodiment, the operation at block3404is described with respect toFIG.35. Operation proceeds to block3405. At block3405, the L2 cache107then sends a PAP reuse snoop request3299to the LSU117that specifies the PAP formed at block3404so the LSU117can determine whether it needs to generate a PAP reuse abort because the PAP is already in use. In one embodiment, the PAP reuse snoop request3299also instructs the L1 data cache103to evict any entry201of the L1 data cache103having the formed PAP, which is in use as a proxy for the physical memory line address (e.g., physical memory line address X) at which a copy of a line of memory is being removed from the L2 cache107and L1 data cache103(assuming the line at physical memory line address X is in the L1 data cache103), in furtherance of the policy that the L2 cache107is inclusive of the L1 data cache103. Operation proceeds to block3406. At block3406, the LSU117checks to see if the formed PAP specified in the PAP reuse snoop request3299is already in use as a proxy for a physical memory line address different from address Y, e.g., address X, by any outstanding load/store instruction that has completed execution. That is, the LSU117checks to see if the formed PAP is available for comparison as a proxy for a physical memory line address different from address Y by any outstanding load/store instruction that has completed execution. In one embodiment, the LSU117makes the check by snooping the store queue125and load queue125(e.g., LSQ snoop3293ofFIG.32) to compare the formed PAP against the store PAP1304and the load PAP2904of each entry of the load/store queue125that is associated with an outstanding load/store instruction that has completed execution. If the LSU117detects a valid match, then the PAP is already in use, i.e., is available for comparison as a proxy for a physical memory line address different from address Y by an outstanding load/store instruction that has completed execution. The formed PAP, also referred to as the removal PAP, is included in the LSQ snoop3293, and the LSQ125responds to the LSU117with a match indication. Additionally, as described above, the L1 data cache103evicts any copy of the cache line being replaced in the L2 cache107(i.e., the cache line is at physical memory line address X, for which the formed PAP is a proxy), e.g., at block3408. 
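For illustration only, the in-use check at block3406described above can be sketched as follows; the entry fields and function names are hypothetical stand-ins for the load queue and store queue state, not the actual queue implementation.

    #include <cstdint>
    #include <vector>

    // Hypothetical load/store queue entry: its PAP field is considered in use
    // (available for comparison) only once the instruction has completed
    // execution and written its PAP into the entry.
    struct LsqEntry {
        bool     valid;
        bool     executionCompleted;
        uint16_t pap;   // load PAP or store PAP
    };

    // Models the LSQ snoop: the removal PAP carried by the PAP reuse snoop
    // request is compared against every completed entry; a match means the PAP
    // is already in use as a proxy for a different physical memory line
    // address, so a PAP reuse abort request is needed.
    bool papReuseAbortNeeded(const std::vector<LsqEntry>& lsq, uint16_t removalPap) {
        for (const LsqEntry& e : lsq)
            if (e.valid && e.executionCompleted && e.pap == removalPap)
                return true;
        return false;
    }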
In one embodiment, the eviction is performed as part of the PAP reuse snoop request3299, i.e., the LSU117looks up the specified PAP in the L1 data cache103and evicts all matching entries201. In an alternate embodiment, the L2 cache107sends a separate request to evict any copy of the cache line in the L1 data cache103. Operation proceeds concurrently to block3408and to decision block3412. At block3408, the L2 cache107obtains the line of memory specified by address Y (e.g., from system memory or from a higher-level cache) and fills the new line of memory into the entry401of the L2 cache107selected for replacement at block3404. The L2 cache107also returns the obtained line of memory, along with the formed PAP, to the L1 data cache103in response to the cache line fill request generated at block3402. Specifically, the L2 cache107sends the cache line fill request return3297at block3408after sending the PAP reuse snoop request3299at block3405. In one embodiment, the PAP reuse snoop request3299and the cache line fill request return3297are sent on the same bus to the L1 data cache103, which ensures ordering of the PAP reuse snoop request3299and the fill request return3297. The ordering facilitates that at block3416described below, the PCL132can reliably determine the flush boundary needed to ensure that any load/store instructions that use the PAP as proxies for address Y to perform physical memory line address comparisons while the PAP is still available for comparison as a proxy for address X are flushed, which effectively prevents concurrent use of the PAP as a proxy for the removed physical memory line address and the filled physical memory line address. Operation proceeds concurrently to blocks3413and3418. At decision block3412, if the PAP is already in use, i.e., is available for comparison as a proxy for a physical memory line address different from address Y, operation proceeds to block3414; otherwise, operation proceeds to block3413. At block3413, in response to the cache line fill request return3297made by the L2 cache107at block3408, the L1 data cache103fills the returned cache line and returned PAP into an entry201allocated to the virtual address specified by the load/store instruction at block3402. In the case at block3413, the PAP was not already in use, so no PAP reuse abort process needs to be performed. At block3414, the LSU117signals to the PCL132the need for a PAP reuse abort901. Additionally, the LSU117signals to the WCB109to set the NC flag2413to true for any WCB entry2401whose write PAP2404matches the PAP provided by the LSU117, i.e., the PAP received in the PAP reuse snoop request3299. Operation proceeds to block3416. At block3416, the PCL132determines the flush boundary1117in response to the PAP reuse abort901. The flush boundary1117is chosen so as to prevent concurrent use of the formed PAP as a proxy for different physical memory line addresses (e.g., physical memory line addresses X and Y) and to prevent the update of architectural state that is based on any comparison of the PAP as a proxy for address Y that is made while the PAP is still available for comparison as a proxy for address X, i.e., is still in use as a proxy for address X. In a first embodiment, the flush boundary1117is chosen to be before the oldest load/store instruction that has not yet completed execution. A load/store instruction cannot complete execution until it hits in the L1 data cache103and has received its PAP. 
If the load/store instruction misses in the L1 data cache103, the load/store instruction goes back to the scheduler121, which will subsequently re-issue the load/store instruction, e.g., once the missing cache line and PAP are filled. The load/store instruction has completed execution once it receives the PAP from the L1 data cache103and writes the PAP to the entry2901/1301allocated for it in the load/store queue125, i.e., the PAP written to the allocated entry2901/1301is available for comparison, i.e., is in use as a proxy for a physical memory line address. In an alternate embodiment, the flush boundary1117is chosen to be before the oldest load/store instruction that has completed execution and whose load/store PAP2904/1304matches the PAP specified in the PAP reuse snoop request3299and LSQ snoop3293. In one embodiment, the PAP reuse abort request901may specify the ROB identifier of the oldest load/store instruction associated with a LSQ entry2901whose load PAP2904matches the snoop PAP, and the PCL132may determine the flush boundary at before the instruction specified by the received ROB identifier. Below are descriptions of how the embodiments prevent concurrent use of the formed PAP as a proxy for different physical memory line addresses and prevents the update of architectural state that is based on any comparison of the PAP as a proxy for address Y that is made while the PAP is still available for comparison as a proxy for address X. In one embodiment, the LSU117controls the update of both the load and store queues125and effectively the ROB122regarding indications of whether a load/store instruction has completed execution, i.e., whether a load/store instruction has a valid load/store PAP2904/1304that is in use as a proxy for a load/store physical memory line address. That is, the LSU117updates the indications in the load/store queue entries2901/1301of whether or not a load/store instruction has completed execution (e.g., Done flag2936in LQ entry2901that indicates load PAP2904is valid, and similar indicator (not shown) in the SQ entry1301that indicates the store PAP1304is valid). Furthermore, via ld/st completed signal3207, the LSU117effectively controls the update of indications in the ROB122entries of whether or not a load/store instruction has completed execution. Finally, the LSU117and PCL132are configured such that the execution completion status in the load and store queues125viewed by the LSU117at the time the LSQ snoop3293occurs matches the execution completion status in the ROB122viewed by the PCL132at the time the PCL132determines the flush boundary1117in response to the PAP reuse abort request901. Operation proceeds concurrently to blocks3418and3422. At block3418, in response to the cache line fill request return3297made by the L2 cache107at block3408, the L1 data cache103fills the returned cache line and returned PAP into an entry201allocated to the virtual address specified by the load/store instruction at block3402. In one embodiment, the LSQ snoop3293is performed before the fill of the new line into the entry201of the L1 data cache103. In another embodiment, the LSQ snoop3293is performed after but in an atomic manner with the fill of the new line into the entry201of the L1 data cache103. The atomic manner means the snoop and fill are performed such that no load/store instruction is able to hit on the entry201after the fill and before the snoop. 
In one embodiment, the LSQ snoop3293at block3406is performed after the removal (i.e., eviction) of the entry201of the L1 data cache103. In another embodiment, the LSQ snoop3293is performed before but in an atomic manner with the removal of the entry201of the L1 data cache103. The atomic manner means the snoop and removal are performed such that no load/store instruction is able to hit on the entry201after the snoop and before the removal. The performance of the LSQ snoop3293after the removal or atomically therewith and before the fill or atomically therewith ensures that the state of the LSQ125captured by the LSQ snoop3293reflects any uses of the formed PAP as a proxy for physical memory line address X by outstanding load/store instructions that have completed execution and does not reflect any uses of the formed PAP as a proxy for physical memory line address Y by outstanding load/store instructions that could complete execution after the fill, which enables the LSU117to determine whether or not to signal the need for a PAP reuse abort901at block3414to prevent mixing of old and new uses of the PAP. At block3422, the PCL132flushes (e.g., via flush request signal1001) all load/store instructions younger than the flush boundary1117determined at block3416. Additionally, in the first embodiment in which the flush boundary is determined before the oldest load/store instruction that has not completed execution, the PCL132blocks dispatch (e.g., via block dispatch signal3205) of further load/store instructions (e.g., load/store instructions that may be flushed and then re-fetched and decoded) to the back-end130(i.e., to scheduler121) until all load/store instructions older than the flush boundary1117are committed. Once the returned PAP is filled into the entry201of the L1 data cache103at block3418, the PAP is now available to be reused as a proxy for address Y. For example, an illegal PAP reuse event may occur in which, after the PAP is filled into the entry201at block3418, a load/store instruction gets issued to the LSU117for execution, hits in the L1 data cache103, and uses the received PAP as a proxy for physical memory line address Y in PAP comparisons, e.g., for store-to-load forwarding, store-to-load forwarding checks, and store data write combining. So, the L2 cache107, LSU117, and PCL132work in combination to try to prevent an illegal PAP reuse event from happening, e.g., by blocking dispatch of load/store instructions by the decode unit112until all load/store instructions older than the flush boundary are committed. However, in some embodiments there may be a small window, discussed in more detail below, during which an illegal PAP reuse event may occur. In the unlikely illegal PAP reuse event, the PCL132flushes any load/store instruction associated with an illegal PAP reuse event before it updates architectural state. The load/store instruction is instead subsequently re-fetched and re-executed and may then be able to update architectural state on the re-execution. During the re-execution of the load/store instruction, the PAP is no longer in use as a proxy for address X such that the load/store instruction is free to use the PAP as a proxy for physical memory line address Y in PAP comparisons.
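The two flush boundary choices made at block3416can be sketched as follows over a simplified, oldest-first view of the outstanding load/store instructions; the structure and function names below are illustrative assumptions and do not correspond to the ROB122or PCL132interfaces themselves.

    #include <cstdint>
    #include <optional>
    #include <vector>

    // Simplified oldest-first view of the outstanding load/store instructions.
    struct LdStView {
        uint32_t robId;
        bool     executionCompleted;  // has hit in the L1 and received its PAP
        uint16_t pap;                 // meaningful only when executionCompleted
    };

    // First embodiment: boundary just before the oldest load/store instruction
    // that has not yet completed execution; everything younger is flushed.
    std::optional<uint32_t> flushBoundaryFirstEmbodiment(const std::vector<LdStView>& v) {
        for (const LdStView& i : v)
            if (!i.executionCompleted) return i.robId;
        return std::nullopt;   // nothing needs to be flushed
    }

    // Alternate embodiment: boundary just before the oldest completed
    // load/store instruction whose PAP matches the removal PAP carried by the
    // PAP reuse snoop request.
    std::optional<uint32_t> flushBoundaryAlternateEmbodiment(const std::vector<LdStView>& v,
                                                             uint16_t removalPap) {
        for (const LdStView& i : v)
            if (i.executionCompleted && i.pap == removalPap) return i.robId;
        return std::nullopt;
    }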
In the first flush boundary embodiment, the PAP is no longer in use as a proxy for address X because any load/store instruction using the PAP as a proxy for address X either was younger than the flush boundary and therefore flushed, or was older than the flush boundary and therefore was allowed to commit before any load/store instructions that will use the PAP as a proxy for address Y are allowed to be dispatched and executed. In the alternate flush boundary embodiment, the PAP is no longer in use as a proxy for address X because any load/store instruction using the PAP as a proxy for address X was younger than the flush boundary and therefore flushed. The small window alluded to above may occur between the time that the PAP is filled into the L1 data cache103at block3418and the completion of the abort process that begins at block3422. However, the L2 cache107, LSU117and PCL132are designed to determine the flush boundary1117to prevent any load/store instruction associated with an illegal PAP reuse from updating architectural state. As described above, in one embodiment PCL132examines the state of outstanding instructions in the ROB122to determine the flush boundary1117at the point before the oldest load/store instruction that has not completed execution. A load/store instruction cannot be marked in the ROB122as having completed execution until it has hit in the L1 data cache103and received its PAP. Hence, as long as the PCL132determines the flush boundary1117at block3416before the PAP is filled into the L1 data cache103at block3418, any load/store instruction potentially associated with an illegal PAP reuse will not have completed its execution and will therefore be behind the flush boundary1117. In one embodiment, this is accomplished by the L2 cache107, LSU117, and PCL132being designed such that the number of clocks J from when the LSU117receives the PAP reuse snoop request3299from the L2 cache107until the PCL132determines the flush boundary1117in response to the PAP reuse abort request901is no greater than the number of clocks K from the time the LSU117receives the cache line fill request return3297from the L2 cache107until the time the L1 data cache103can provide the returned PAP in response to a hit of a virtual address specified by a load/store instruction that subsequently executes after the returned PAP is written to the L1 data cache103. Choosing the flush boundary1117before the oldest load/store instruction that has not yet completed execution enables forward progress to be made, e.g., to avoid a livelock, while ensuring there is no update of architectural state that is based on a comparison of the PAP as a proxy for address Y made while the PAP is still in use as a proxy for address X. More specifically, any load/store instructions that are using the PAP as a proxy for address X that are older than the flush boundary will be allowed to commit, whereas any load/store instructions that have not completed execution, and therefore could subsequently use the PAP as a proxy for address Y, will be flushed and blocked from dispatch, which prevents them from hitting in the L1 data cache103and receiving the PAP for use as a proxy for address Y, until all load/store instructions using the PAP as a proxy for address X have committed and are therefore are no longer using the PAP as a proxy for address X. In an alternate embodiment, the processor100includes logic to detect a potential livelock condition and to prevent a livelock from occurring. 
For example, the livelock detection and prevention logic may detect that the operation ofFIG.34has occurred a predetermined number of times within a predetermined number of clock cycles. In response, the logic may temporarily place the processor100into a low performance mode (e.g., a mode that does not allow out of order execution). As stated above, in an alternate embodiment, the flush boundary1117is determined to be before the oldest load/store instruction that has completed execution and whose load/store PAP2904/1304matches the PAP specified in the PAP reuse snoop request3299(and LSQ snoop3293). In the alternate embodiment, because the flush boundary1117is determined to be before the oldest matching load/store instruction that has completed execution, all load/store instructions that are using the PAP as a proxy for address X will be flushed (since they have completed execution), and when re-fetched and re-dispatched and re-executed they will use the PAP as a proxy for address Y. Additionally, any load/store instructions younger than the flush boundary that had not completed execution will be flushed and, upon their re-fetch and re-dispatch and re-execution, will use the PAP as a proxy for address Y. As a result, the PCL132need not block dispatch until all load/store instructions using the PAP as a proxy for address X have committed since the load/store instructions using the PAP as a proxy for address X will all have been flushed and will subsequently use the PAP as a proxy for address Y. FIG.35is an example flowchart illustrating operation of the processor100ofFIG.1to manage PAP reuse in accordance with embodiments of the present disclosure. More specifically,FIG.35illustrates operation at block3404ofFIG.34in more detail. Operation begins at block3502. At block3502, per block3402ofFIG.34, physical memory line address Y has missed in the L2 cache. That is, a set index that selects a set of the L2 cache107was obtained from address Y, and the tag portion of address Y did not match the tag404of any of the L2 cache entries401in any way of the selected set. Therefore, the L2 cache107sends the L2 set index to the L1 data cache103. Operation proceeds to block3504. At block3504, for each possible way number of the L2 cache107, the L1 data cache103effectively forms a PAP with the way number and the received L2 set index. For example, in an embodiment in which the L2 cache107has four ways, the L1 data cache103forms four possible PAPs using the four possible way numbers each concatenated with the L2 set index. Operation proceeds to block3506. At block3506, for each of the PAPs formed at block3504, the L1 data cache103determines whether the formed PAP is resident in the L1 data cache103. In one embodiment, the PAP residency determination for each formed PAP is as follows. The untranslated bits PA[11:6] of the L2 set index (e.g., corresponding untranslated bits PA[11:6] of physical memory line address Y), along with the four possible values of the upper two bits of the L1 data cache103set index are used to select four sets of the L1 data cache103(similar to the manner described above with respect toFIG.6), which implicates sixteen entries201of the L1 data cache103. The dPAP209of each of the sixteen entries201is compared against four different formed dPAPs to generate 16×4=64 match results. The four formed dPAPs are formed using the four different possible L2 way values (i.e., 00, 01, 10, 11) concatenated with the upper five bits of the L2 set index sent from the L2 cache107at block3502. 
The sixteen match results associated with each of the four formed dPAPs are Boolean OR'ed together to generate a single-bit PAP residency indicator associated with the formed dPAP. If the single-bit PAP residency indicator is true, this indicates the associated formed PAP is resident in the L1 data cache103, which indicates a high likelihood that the formed PAP is in use as a proxy for a physical memory line address different than address Y, e.g., address X. The four single-bit PAP residency indicators are sent as a 4-bit indicator to the L2 cache107. Operation proceeds to block3508. At block3508, the L2 cache107uses the indicators sent at block3506to pick the way of the L2 cache107to replace that reduces the likelihood that the PAP formed by the picked way and the L2 cache set index is already in use as a proxy for a physical memory line address different from address Y. In one embodiment, the PAP residency determination may be performed according toFIG.36described below. The PAP residency determination may be conceptualized effectively as an approximation of the check performed at block3406ofFIG.34to determine whether the PAP is already in use as a proxy for a physical memory line address different from address Y. Advantageously, typically the time required for the L1 data cache103to make the PAP residency determination is hidden behind the time needed for the L2 cache107to go get the missing cache line from system memory or a third level cache memory. Furthermore, there may be many instances in which the PAP residency determination indicates there is at least one way of the selected L2 cache set for which the formed PAP is not resident in the L1 data cache103, indicating a high likelihood that the formed PAP is not already in use. FIG.36is an example flowchart illustrating operation of the processor100ofFIG.1to manage PAP reuse in accordance with embodiments of the present disclosure. More specifically,FIG.36illustrates operation at block3508ofFIG.35in more detail. Operation begins at decision block3602. At decision block3602, if the PAP residency indicator indicates there is only one way of the L2 cache107that could be used along with the L2 set index sent at block3502to form a PAP that is not resident in the L1 data cache103, operation proceeds to block3604; otherwise, operation proceeds to decision block3606. At block3604, the L2 cache107picks for replacement the single non-resident way indicated in the PAP residency indicator. At decision block3606, if the PAP residency indicator indicates there are no ways of the L2 cache107that could be used along with the L2 set index sent at block3502to form a PAP that is not resident in the L1 data cache103, operation proceeds to block3608; otherwise, operation proceeds to block3612. At block3608, the L2 cache107picks for replacement using its normal replacement policy (e.g., least recently used (LRU)) from among all ways of the set of the L2 cache107selected by the L2 set index. At block3612, the L2 cache107picks for replacement using its normal replacement policy (e.g., least recently used (LRU)) from among only the ways of the set of the L2 cache107selected by the L2 set index that the PAP residency indication indicates are not resident in the L1 data cache103. 
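As a rough illustration of the residency check of blocks3504and3506and the way selection ofFIG.36, the following sketch may be considered; the field widths reflect the example geometry above (a 7-bit dPAP formed from the 2-bit L2 way and PA[16:12]), the stand-in replacement policy is a simplifying assumption, and none of the names correspond to the described circuits.

    #include <array>
    #include <cstdint>

    // Each L1 data cache entry holds a 7-bit dPAP = {L2 way[1:0], PA[16:12]}
    // in this example geometry.
    struct L1EntryView { bool valid; uint8_t dPap; };

    // Residency check over the sixteen implicated L1 entries: bit w of the
    // returned 4-bit indicator is set if the PAP formed with L2 way w appears
    // to be resident in the L1 data cache.
    uint8_t papResidency(const std::array<L1EntryView, 16>& entries, uint16_t l2SetIndex) {
        uint8_t upperSetBits = static_cast<uint8_t>((l2SetIndex >> 6) & 0x1F); // PA[16:12]
        uint8_t indicator = 0;
        for (uint8_t way = 0; way < 4; ++way) {
            uint8_t formedDPap = static_cast<uint8_t>((way << 5) | upperSetBits);
            for (const L1EntryView& e : entries)
                if (e.valid && e.dPap == formedDPap) { indicator |= uint8_t(1u << way); break; }
        }
        return indicator;
    }

    // Stand-in for the cache's normal replacement policy (e.g., LRU); here it
    // simply picks the lowest-numbered allowed way.
    uint8_t normalPolicyPick(uint8_t allowedWaysMask) {
        for (uint8_t w = 0; w < 4; ++w)
            if (allowedWaysMask & (1u << w)) return w;
        return 0;
    }

    // Way selection following the flow of FIG. 36: if at least one way is
    // non-resident, restrict the normal policy to the non-resident ways (the
    // single non-resident way case falls out of this path); otherwise apply
    // the normal policy over all ways of the selected set.
    uint8_t pickReplacementWay(uint8_t residency) {
        uint8_t nonResident = static_cast<uint8_t>(~residency) & 0xF;
        return normalPolicyPick(nonResident ? nonResident : 0xF);
    }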
Thus, for example, if the PAP residency indication indicates ways 0, 1, and 3 are not resident in the L1 data cache103, then the L2 cache107picks one of ways 0, 1, and 3 (e.g., the LRU way among ways 0, 1, and 3) to replace, but does not pick way 2 since it is highly likely the PAP associated with way 2 is already in use as a proxy for a physical memory line address different from address Y. FIG.37is an example flowchart illustrating operation of the processor100ofFIG.1to manage PAP reuse in accordance with embodiments of the present disclosure. More specifically,FIG.37illustrates operation at blocks3306and3308ofFIG.33in more detail in the case of a prefetch request that misses in the L1 data cache103that precipitates the replacement of an L2 cache entry401at block3306. The operation according toFIG.37is similar in many respects to the operation ofFIG.34. Operation begins at block3702. At block3702, in response to a miss of a virtual address specified by a prefetch request in the L1 data cache103, the LSU117generates a cache line fill request to the L2 cache107that specifies physical memory line address Y of block3306ofFIG.33into which the virtual address is translated. During processing of the fill request, address Y misses in the L2 cache107. In response to the miss, the LSU117generates a cache line fill request to memory (or a higher level cache, e.g., L3 cache) that specifies physical memory line address Y. Operation proceeds to block3404and continues through blocks3405through3422similar to the manner described above with respect toFIG.34. FIG.38Ais an example flowchart illustrating operation of the processor100ofFIG.1to manage PAP reuse in accordance with embodiments of the present disclosure. More specifically,FIG.38Aillustrates operation at blocks3306and3308ofFIG.33in more detail in the case of a prefetch request that specifies a physical memory line address Y that misses in the L2 cache107that precipitates the replacement of an L2 cache entry401at block3306. The operation according toFIG.38Ais similar in many respects to the operation ofFIG.34. Operation begins at block3802. At block3802, a prefetch request to the L2 cache107that specifies physical memory line address Y of block3306ofFIG.33misses in the L2 cache107. In response to the miss, the LSU117generates a cache line fill request to memory (or a higher level cache, e.g., L3 cache) that specifies physical memory line address Y. Operation proceeds to block3404and continues through blocks3405,3406,3412,3414,3416, and3422similar to the manner described above with respect toFIG.34. However, from block3405operation proceeds concurrently to block3406and block3808. At block3808, the L2 cache107obtains the line of memory specified by address Y (e.g., from system memory or from a higher level cache) and fills the line of memory into the entry401selected for replacement at block3404. However, unlike operation at block3408ofFIG.34, in the operation ofFIG.38Athe L2 cache107does not return the obtained line of memory, along with the formed PAP, to the L1 data cache103since there was no cache line fill request generated. Additionally,FIG.38Adoes not include block3413or block3418. As described above, a PAP is a proxy for a physical memory line address, and a PAP uniquely identifies an entry401in the L2 cache107into which a line of memory at the physical memory line address is allocated. That is, the set index and way number of the PAP uniquely identify the entry401.
Because the L2 cache107is set associative, at two different instances in time, i.e., not concurrently, a physical memory line address may get allocated into two different entries401of the L2 cache107. Consequently, the two different PAPs of the two different entries401of the L2 cache107may be used as a proxy for the physical memory line address at the two different instances in time. This may occur when the physical memory line address is allocated into a first entry of the L2 cache107(a first way of the selected set, e.g., way 1) and a first PAP is formed as a proxy for the physical memory line address, and subsequently the physical memory line address is removed from the first entry401(e.g., by an external snoop that specifies the physical memory line address), and subsequently the physical memory line address is allocated into a second entry of the L2 cache107(a second way of the selected set, e.g., way 3) and a second PAP is formed as a proxy for the physical memory line address. At this point, if the processor100were to begin to use the second PAP as a proxy for the physical memory line address while the first PAP is still being used as a proxy for the physical memory line address, incorrect results could be generated by the processor100.FIG.38Bdescribes PAP reuse management by the processor100to prevent such occurrences to assure correct operation. FIG.38Bis an example flowchart illustrating operation of the processor100ofFIG.1to manage PAP reuse in accordance with embodiments of the present disclosure. More specifically,FIG.38Billustrates PAP management in the case of an external snoop request that specifies a physical memory line address X that hits in the L2 cache107that precipitates the invalidation of an L2 cache entry401. The operation according toFIG.38Bis similar in many respects to the operation ofFIG.38A. Operation begins at block3801. At block3801, the L2 cache107receives an invalidating external snoop request (e.g., as described above with respect toFIG.6) that specifies a physical memory line address, referred to here as address X, which hits on an entry401in the L2 cache107. Operation proceeds to block3803. At block3803, the L2 cache107invalidates the hit entry401. Additionally, the L2 cache107forms a PAP using the set index and the way of the hit entry401, i.e., the invalidated entry401. Unlike operation at block3404ofFIG.38Ain which the L2 cache107picks a way to replace, at block3803the invalidated entry401is determined by the external snoop request (i.e., physical memory line address X), and the L2 cache107simply forms the PAP based on the set index and way of the invalidated entry401. Operation proceeds to block3405. At block3405, the L2 cache107then sends a PAP reuse snoop request3299to the LSU117that specifies the PAP formed at block3803so the LSU117can determine whether it needs to generate a PAP reuse abort because the PAP is already in use, similar to the manner described with respect to block3405ofFIG.34. Once physical memory line address X is no longer in the L2 cache107(e.g., due to its invalidation at block3803), it is possible that physical memory line address X subsequently will be filled into a different way of the same set of the L2 cache107. In such case, a new PAP may be used as a proxy for physical memory line address X that is different than the old PAP used as a proxy for physical memory line address X prior to the invalidation of the entry401hit upon by the external snoop request. 
Because the possibility exists that uncommitted load/store instructions are still using the old PAP as a proxy for physical memory line address X, the L2 cache107sends the PAP reuse snoop request3299to find out whether that is the case and, if so, to generate a PAP reuse abort. Operation proceeds to block3807. At block3807, the LSU117checks to see if the formed PAP specified in the PAP reuse snoop request3299is already in use as a proxy for physical memory line address X by any outstanding load/store instruction that has completed execution, e.g., via LSQ snoop3293, similar to the manner described above with respect to block3406ofFIG.34. As described above, the L1 data cache103evicts any copy of the cache line at physical memory line address X being invalidated in the L2 cache107. Operation proceeds to decision block3412and proceeds through blocks3414,3416and3422as inFIG.38A, in a manner similar to that described above. UnlikeFIG.38A, there is no block3808inFIG.38B, i.e., there is no fill into the L2 cache107. AlthoughFIG.38Bdescribes the invalidation of an entry401of the L2 cache107caused by an external snoop, a similar process may be performed by the processor100in response to other events that invalidate an entry401of the L2 cache107, such as execution of a cache management operation that invalidates/flushes the entry401or an operation that flushes the entry401for power management purposes.

Generational PAPs

As may be observed from the above description, there may be a performance penalty incurred in instances in which an abort process is performed in response to a need for a PAP reuse abort request. Although the frequency of occurrence of such instances is likely to be relatively small, nevertheless embodiments are now described that may reduce the likelihood. More specifically, the notion of generational PAPs (GPAPs) is described. GPAPs may reduce the likelihood that a PAP is already in use as a proxy for a first physical memory line address when a different second physical memory line address replaces the entry in the L2 cache allocated to the first physical memory line address. Each L2 cache entry is configured to store a generational identifier (GENID) that is incremented each time the entry is replaced, and the GENID is used—along with the set index and way number of the entry—to form the GPAP, as will now be described in more detail. FIG.39is an example block diagram of an alternate embodiment of a cache entry401of L2 cache107ofFIG.1that employs GPAPs in accordance with embodiments of the present disclosure. The L2 cache entry401ofFIG.39is similar in many respects to the L2 cache entry401ofFIG.4. However, the L2 cache entry401ofFIG.39also includes a GENID[1:0] field408, as shown, also referred to as GENID408. In the embodiment ofFIG.39, the GENID408is two bits. However, other embodiments are contemplated in which the GENID408is only one bit or is more than two bits. As described in more detail below, the GENID408is incremented each time the L2 cache entry401is replaced. The GENID408is used to form a GPAP which is used—rather than a PAP—as a proxy for a physical memory line address. Correspondingly, each of the L1 data cache entry201, SQ entry1301, LQ entry2901, and WCB entry2401is also modified to hold a GPAP—rather than a PAP—for comparisons, as described below with respect toFIGS.41,43, and45. In other words, in each place where a PAP was held or compared in the embodiments described with respect toFIGS.1through38B, a GPAP is held and compared instead in order to reduce the PAP reuse abort likelihood.
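A minimal sketch of GPAP formation under the example widths above (a 2-bit GENID per L2 entry, a 2-bit way number, and an 11-bit set index) follows; the names and the exact bit packing shown are illustrative assumptions rather than the described implementation.

    #include <cstdint>

    struct L2EntryView { uint8_t genId; /* tag, status, and other fields omitted */ };

    // On replacement of an L2 entry, its GENID is incremented (wrapping at two
    // bits) and concatenated with the way number and set index, so the new
    // occupant of the same set/way gets a proxy different from the previous
    // occupant's in most cases.
    uint16_t formGpapOnReplacement(L2EntryView& entry, uint8_t l2Way, uint16_t l2SetIndex) {
        entry.genId = static_cast<uint8_t>((entry.genId + 1) & 0x3);
        return static_cast<uint16_t>((static_cast<uint32_t>(entry.genId) << 13) |
                                     (static_cast<uint32_t>(l2Way & 0x3u) << 11) |
                                     (l2SetIndex & 0x7FFu));   // {GENID, way, set index}
    }

The resulting value is then what is stored and compared everywhere a PAP was stored and compared in the earlier embodiments.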
Advantageously, by slightly increasing the amount of storage required to hold the small GENID408, the likelihood of incurring a PAP reuse abort may be decreased. FIG.40is an example block diagram illustrating an alternate embodiment of the L2 cache107ofFIG.1that employs GPAPs in accordance with embodiments of the present disclosure. The L2 cache107ofFIG.40is similar in many respects to the L2 cache107ofFIG.5. However, the tag array532also holds the GENID[1:0] of each L2 cache entry401ofFIG.39, as shown. FIG.41is an example block diagram of an alternate embodiment of a cache entry201of L1 data cache103ofFIG.1in accordance with embodiments of the present disclosure. The L1 data cache entry201ofFIG.41is similar in many respects to the L1 data cache entry201ofFIG.2. However, the L1 data cache entry201ofFIG.41holds a generational dPAP (GdPAP)209rather than a dPAP209as inFIG.2. The GdPAP209is similar to the dPAP209ofFIG.2except that it is concatenated with the GENID[1:0], as shown. FIG.42is an example block diagram illustrating an alternate embodiment of the L1 data cache103ofFIG.1that employs GPAPs in accordance with embodiments of the present disclosure. The L1 data cache103ofFIG.42is similar in many respects to the L1 data cache103ofFIG.3. However, the L1 data cache103stores in each entry201a GdPAP209(rather than a dPAP). That is, similar to the manner described at block704ofFIG.7, when the L2 cache107returns the GdPAP323(rather than the dPAP) to the L1 data cache103in response to a cache line fill request made at block702, the GdPAP323(rather than the dPAP) is written to the GdPAP209of the L1 data cache entry201ofFIG.41. Additionally, when a load/store instruction hits in the L1 data cache103, the L1 data cache103outputs the GdPAP209(rather than the dPAP) of the hit entry201, e.g., similar to the manner described with respect to block1602ofFIG.16or block1504ofFIG.15, respectively. FIG.43is an example block diagram of an alternate embodiment of a cache subsystem600that employs GPAPs in accordance with embodiments of the present disclosure. The cache subsystem600ofFIG.43is similar in many respects to the cache subsystem600ofFIG.6and performs hardware cache coherency in a similar manner in many respects. However, the cache subsystem600ofFIG.43employs GPAPs instead of PAPs. In particular, on a hit in the L2 cache107of a snoop request601, comparators604provide an output606that is the GENID[1:0] concatenated with the L2 way[1:0] (rather than just the L2 way[1:0]). Additionally, similar to the manner described at block806ofFIG.8, the snoop forwarding logic607forwards a GPAP699(rather than a PAP) that includes the GENID[1:0] to the L1 data cache103in the forwarded snoop request611. The forwarded snoop request611includes a GdPAP613portion (rather than a dPAP portion) that includes a GENID[1:0]. As described above with respect toFIG.42, each L1 data cache entry201holds a GdPAP209(rather than a dPAP). Finally, similar to the manner described at block808ofFIG.8, in response to the forwarded snoop request611ofFIG.43that specifies a GPAP699, the L1 data cache103outputs the GdPAP209(rather than the dPAP) of each entry201of each selected set (e.g., of sixteen entries201) for provision to comparators614, and the comparators614compare the sixteen GdPAPs209against the GdPAP613(rather than the dPAP) of the forwarded snoop request611to generate the L1hit signal616.
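The forwarded-snoop comparison just described can be sketched in the same style; the GdPAP width assumes the example 2-bit GENID, 2-bit way, and five upper set-index bits, and the names are again illustrative only.

    #include <array>
    #include <cstdint>

    struct L1GEntryView { bool valid; uint16_t gdPap; };  // {GENID[1:0], way[1:0], PA[16:12]}

    // Models generation of the L1 hit indication for a forwarded snoop that
    // specifies a GPAP: the GdPAP portion of the snoop is compared against the
    // GdPAP held in each of the sixteen implicated L1 entries.
    bool l1SnoopHit(const std::array<L1GEntryView, 16>& entries, uint16_t snoopGdPap) {
        for (const L1GEntryView& e : entries)
            if (e.valid && e.gdPap == snoopGdPap) return true;
        return false;
    }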
Similar modifications may be made in the embodiments ofFIGS.9and11, i.e., to form and provide a GPAP699rather than a PAP699and to compare GdPAPs613/209rather than dPAPs613/209, and hardware cache coherency operations according toFIGS.10and12may similarly be modified to compare GdPAPs rather than dPAPs similar to the manner described above with respect toFIG.43. FIG.44is an example block diagram of an alternate embodiment of a store queue (SQ) entry1301of the SQ125ofFIG.1that holds GPAPs in accordance with embodiments of the present disclosure. The SQ entry1301ofFIG.44is similar in many respects to the SQ entry1301ofFIG.13. However, the SQ entry1301ofFIG.44holds a store GPAP1304rather than a store PAP1304as inFIG.13. The store GPAP1304is similar to the store PAP1304ofFIG.13except that it is concatenated with the GENID[1:0], as shown. Similar to the manner described with respect to block1506ofFIG.15, the store GPAP1304is populated with a GPAP received from the L1 data cache103when the store virtual memory line address specified by a store instruction during its execution hits in the L1 data cache103similar to the manner described with respect to block1504ofFIG.15. FIG.45is an example block diagram of portions of an alternate embodiment of the processor100ofFIG.1used to perform store-to-load forwarding using GPAPs in accordance with embodiments of the present disclosure. The processor100ofFIG.45is similar in many respects to the processor100ofFIG.14. However, in the embodiment ofFIG.45, the compare block348outputs a GdPAP209(rather than a dPAP) in response to a load instruction virtual address, and a load GPAP1495(rather than a load PAP) is formed for provision to the forwarding decision logic1499, e.g., similar to the manner described at block1602ofFIG.16. Additionally, the store queue125provides to the forwarding decision logic1499a store GPAP1304(rather than a store PAP), e.g., similar to the manner described at block1604ofFIG.16. Finally, the forwarding decision logic1499uses the load GPAP1495(rather than a load PAP) and the store GPAP1304(rather than a store PAP) to determine whether to forward store data to the load instruction, e.g., similar to the manner described at block1606ofFIG.16. Similar modifications to the SQ entry1301may be made in the embodiments ofFIGS.17and23, i.e., to hold a store GPAP1304rather than a store PAP1304, and store-to-load forwarding operations according toFIGS.18through22may similarly be modified to compare load GPAPs1495with store GPAPs1304, rather than load PAPs1495with store PAPs1304. FIG.46is an example block diagram of a load queue (LQ) entry2901of the LQ125ofFIG.1in accordance with embodiments of the present disclosure. The LQ entry2901ofFIG.46is similar in many respects to the LQ entry2901ofFIG.29. However, the LQ entry2901ofFIG.46holds a load GPAP2904rather than a load PAP2904as inFIG.29. The load GPAP2904is similar to the load PAP2904ofFIG.29except that it is concatenated with the GENID[1:0], similar to the manner shown inFIG.44for the store GPAP1304. Similar to the manner described above with respect to block3002ofFIG.30, the load GPAP2904is populated with the load GPAP1495ofFIG.45provided by the L1 data cache103in response to the virtual load address321specified by the load instruction. 
Similar to the manner described above with respect to block3006ofFIGS.30,31A and31Band particularly block3104, the store GPAP1304(rather than a store PAP) of the store instruction being committed is compared with the load GPAP2904(rather than a load PAP) of each valid LQ entry2901to determine whether there is no match, a full match, or a partial match in order to perform store-to-load forwarding correctness checks. FIG.47is an example block diagram of an alternate embodiment of a write combining buffer (WCB) entry2401of the WCB109ofFIG.1that holds GPAPs to accomplish write combining in accordance with embodiments of the present disclosure. The WCB entry2401ofFIG.47is similar in many respects to the WCB entry2401ofFIG.24. However, the WCB entry2401ofFIG.47holds a write GPAP2404rather than a write PAP2404as inFIG.24. The write GPAP2404is similar to the write PAP2404ofFIG.24except that it is concatenated with the GENID[1:0], as shown. FIG.48is an example block diagram illustrating an alternate embodiment of portions of the processor100ofFIG.1that perform write combining using GPAPs in accordance with embodiments of the present disclosure. The embodiment ofFIG.48is similar in many respects to the embodiment ofFIG.26. However, similar to the manner described with respect to block2812ofFIG.28, the write GPAP2404is populated with a store GPAP1304rather than a store PAP of a store instruction being committed. Additionally, similar to the manner described above with respect to block2802ofFIG.28, the WCB109compares the committed store GPAP1304(rather than the store PAP) against the write GPAP2404(rather than the write PAP) of all the WCB entries2401of the WCB109to determine whether the store instruction being committed may be combined with any of the WCB entries2401at block2806ofFIG.28. FIG.49is an example flowchart illustrating operation of the processor100ofFIG.1to manage GPAP reuse in accordance with embodiments of the present disclosure. More specifically,FIG.49illustrates operation at blocks3306and3308ofFIG.33in more detail in the case of a load/store instruction miss in the L1 data cache103that precipitates the replacement of an L2 cache entry401at block3306in an embodiment in which each L2 cache entry401holds a GENID408which is used to form GPAPs (rather than PAPs). Generally, GPAP reuse management is performed similar to the manner described with respect to the operation ofFIG.33. However, at blocks3302and3306, a GPAP is formed (rather than a PAP); at block3304, the LSU117makes GPAPs (rather than PAPs) available for comparison with other GPAPs (e.g., held in a SQ entry1301, LQ entry2901, WCB entry2401, L1 data cache entry201); and at block3308, the L2 cache107, LSU117, PCL132operate to prevent update of architectural state based on a comparison of the GPAP as a proxy for the physical memory line address that is made while the GPAP is still available for comparison as a proxy for a different physical memory line address. Operation begins inFIG.49at block3402(similar to the operation at block3402ofFIG.34) and proceeds to block4904. At block4904, the L2 cache107picks a replacement way in the set of the L2 cache107selected by the set index obtained from address Y, e.g., PA[16:6]. The L2 cache107then increments the GENID408held in L2 cache entry401of the way picked for replacement. The L2 cache107then forms a GPAP using the incremented GENID408, the set index, and the way of the entry401selected for replacement. In one embodiment, the operation at block4904is described with respect toFIG.50. 
Operation proceeds from block4904to block3405and proceeds through block3422in a manner similar to that described above with respect toFIG.34. However, at block3405ofFIG.49, the formed GPAP (rather than the formed PAP) is sent in the PAP reuse snoop request; at block3406ofFIG.49, the LSU117checks to see if the GPAP (rather than the PAP) is in use; at decision block3412ofFIG.49, operation proceeds according to whether the GPAP (rather than the PAP) is already in use; and at block3414ofFIG.49, the LSU117signals to the WCB109to set the NC flag for all WCB entries2401with the in-use GPAP (rather than the PAP). FIG.50is an example flowchart illustrating operation of the processor100ofFIG.1to manage GPAP reuse in accordance with embodiments of the present disclosure. More specifically,FIG.50illustrates operation at block4904ofFIG.49in more detail. Operation begins at block5001. At block5001, the L2 cache107reads the GENID408from the L2 cache entry401of each way of the selected set and increments the values read to create an incremented GENID for each of the ways. That is, the L2 cache107does not increment the GENID408itself that is held in the L2 cache entry401of the non-picked way (see below), but instead merely creates the incremented GENIDs for use at block5002. More specifically, the L2 cache107increments only the GENID408held in the L2 cache entry401of the way picked at block5008, according to the operation described above with respect to block4904. At block5002, per block3402ofFIG.34, physical memory line address Y has missed in the L2 cache. That is, a set index that selects a set of the L2 cache107was obtained from address Y, and the tag portion of address Y did not match the tag404of any of the L2 cache entries401in any way of the selected set. Therefore, the L2 cache107sends the L2 set index along with the incremented GENIDs created at block5001to the L1 data cache103. Operation proceeds to block5004. At block5004, for each possible way number of the L2 cache107, the L1 data cache103effectively forms a GPAP with the way number, the received L2 set index, and the respective incremented GENID of the way that was created at block5001and sent at block5002. For example, in an embodiment in which the L2 cache107has four ways, the L1 data cache103forms four possible GPAPs using the four possible way numbers each concatenated with the L2 set index and with the respective incremented GENID of the way. Operation proceeds to block5006. At block5006, for each of the GPAPs formed at block5004, the L1 data cache103determines whether the formed GPAP is resident in the L1 data cache103. In one embodiment, the GPAP residency determination for each formed GPAP is as follows. The untranslated bits PA[11:6] of the L2 set index (e.g., corresponding untranslated bits PA[11:6] of physical memory line address Y), along with the four possible values of the upper two bits of the L1 data cache103set index are used to select four sets of the L1 data cache103(similar to the manner described above with respect toFIG.6), which implicates sixteen entries201of the L1 data cache103. The GdPAP209of each of the sixteen entries201is compared against four different formed GdPAPs to generate 16×4=64 match results. The four formed GdPAPs are formed using the four different possible L2 way values (i.e., 00, 01, 10, 11) concatenated with the upper five bits of the L2 set index sent from the L2 cache107at block5002and further concatenated with the respective incremented GENID of the way.
The sixteen match results associated with each of the four formed GdPAPs are Boolean OR'ed together to generate a single-bit GPAP residency indicator associated with the formed GdPAP. If the single-bit GPAP residency indicator is true, this indicates the associated formed GPAP is resident in the L1 data cache103, which indicates a high likelihood that the formed GPAP is in use as a proxy for a physical memory line address different than address Y, e.g., address X. The four single-bit GPAP residency indicators are sent as a 4-bit indicator to the L2 cache107. Operation proceeds to block5008. At block5008, the L2 cache107uses the indicators sent at block5006to pick the way of the L2 cache107to replace that reduces the likelihood that the GPAP formed by the picked way and the L2 cache set index is already in use as a proxy for a physical memory line address different from address Y. In one embodiment, the GPAP residency determination may be performed according toFIG.36described above. The GPAP residency determination may be conceptualized effectively as an approximation of the check performed at block3406ofFIG.34to determine whether the GPAP is already in use as a proxy for a physical memory line address different from address Y. Advantageously, typically the time required for the L1 data cache103to make the GPAP residency determination is hidden behind the time needed for the L2 cache107to go get the missing cache line from system memory or a third level cache memory. Furthermore, there may be many instances in which the GPAP residency determination indicates there is at least one way of the selected L2 cache set for which the formed GPAP is not resident in the L1 data cache103, indicating a high likelihood that the formed GPAP is not already in use.

Same Address Load-Load Ordering Violation Handling

As described above, in a system that includes multiple processors that share system memory and that each include a cache memory, there is a need for attaining cache coherency, which involves each cache processing snoop requests from the other caches. The presence of external snoop requests introduces a potential for what is referred to herein as a same address load-load ordering violation (SALLOV). A SALLOV may be defined as follows. A first processing core is processing a program that includes older and younger load instructions in program order that read from the same memory address, and there is no instruction that writes to the memory address that intervenes in program order between the older and younger load instructions. A cache memory of the first processing core holds a current copy of the memory line that includes current data at the memory address. The first processing core executes the younger load instruction by reading the current data from the cache memory before the older load instruction has been executed, i.e., the younger load instruction is executed out of program order. A second processing core writes to the memory address new data that is different from the current data held in the current copy of the memory line. More specifically, the second processing core sends the cache memory an invalidating snoop request that specifies the memory address and then writes the new data to the memory address. Thus, after execution of the younger load instruction, the cache memory invalidates the current copy of the memory line implicated by the memory address in response to receiving the snoop request.
After invalidation of the current copy of the memory line, the first processing core attempts to execute the older load instruction resulting in a miss in the cache memory. In response to the cache miss, the first processing core fills the cache memory with a new copy of the memory line implicated by the memory address that includes the new data written by the second processing core. After the fill with the new copy of the memory line, the first processing core successfully executes the older load instruction by reading the new data from cache memory. As a result, the younger load instruction returns the current data that is older than the new data returned by the older load instruction. In this sense, the external snoop request is said to intervene between out of program order execution of the older and younger load instructions that read from the same memory address. The events just described generally occur in the order in which they are listed, but not necessarily. For example, the second processing core may begin the process of writing the new data to the memory address before the first processing core begins to execute the younger load instruction but the invalidating external snoop does not arrive at the cache memory until after the younger load instruction accesses the cache memory yet before the older load instruction accesses the cache memory. Many instruction set architectures disallow SALLOVs. Thus, conventional processors include hardware logic dedicated to the task of explicitly looking for the occurrence of a SALLOV and undoing it before it is committed to architectural state. For example, a conventional processor may do this by performing the following actions. First, when the conventional processor receives an invalidating snoop that specifies a physical address, it may compare the snoop physical address with each physical address in the load queue and mark a flag of all matching entries, which will include the entry of the younger load instruction of a potential SALLOV. Second, when the conventional processor executes any load instruction, e.g., the older load instruction of a potential SALLOV, it may compare the physical address specified by the load instruction with each physical address in the load queue. If any matching entry has a marked flag and is younger than the executing load instruction, this indicates a SALLOV has occurred speculatively. The SALLOV occurrence is speculative because, although the conventional processor has speculatively executed the younger and older load instructions and they have received the wrong load data that would violate the ISA disallowance of a SALLOV (i.e., when the younger load executed first it received the old data in the cache, and when the older load executed second it received the new data filled into the cache after the snoop invalidated the old data), the conventional processor has not yet committed the wrong load data to architectural state. So, before the wrong load data is committed to architectural state, the conventional processor flushes instructions to prevent the architectural state of the processor (e.g., the destination register specified by the younger load) from being updated with the old/wrong data. The process performed by the conventional processor described above requires hardware logic dedicated to the task of detecting a SALLOV speculative occurrence. 
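As an informal sketch of the conventional two-step check described above (the entry fields, the age encoding, and the function names are hypothetical, not an actual load queue interface):

    #include <cstdint>
    #include <vector>

    struct ConvLqEntry {
        bool     valid;
        bool     hasData;           // the load has already read the cache
        bool     snoopHit;          // marked by step one below
        uint64_t physAddr;
        uint32_t age;               // larger value means younger in program order
    };

    // Step one: an invalidating snoop marks every matching load queue entry.
    void markOnInvalidatingSnoop(std::vector<ConvLqEntry>& lq, uint64_t snoopAddr) {
        for (ConvLqEntry& e : lq)
            if (e.valid && e.physAddr == snoopAddr) e.snoopHit = true;
    }

    // Step two: an executing (older) load compares its physical address against
    // every entry; a younger, marked, matching entry that already has its data
    // indicates a speculative SALLOV, so the processor must flush before the
    // wrong data reaches architectural state.
    bool sallovDetectedOnLoadExecution(const std::vector<ConvLqEntry>& lq,
                                       uint64_t loadAddr, uint32_t loadAge) {
        for (const ConvLqEntry& e : lq)
            if (e.valid && e.snoopHit && e.hasData && e.age > loadAge &&
                e.physAddr == loadAddr)
                return true;
        return false;
    }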
The dedicated hardware logic, during execution of all load instructions, compares the physical address specified by the load instruction with the physical address specified in each of the entries of the load queue. In high performance superscalar execution processors designed to execute N load instructions in parallel, the load queue is a content-addressable memory (CAM) and the load queue includes N CAM ports to receive the N physical addresses from the N concurrently executing load instructions to see if they match any of the physical addresses specified in the load queue entries. Each additional CAM port in a load queue may add significantly more size and may consume significantly more power. Rather than including logic, as a conventional processor does, that checks for same address load-load ordering violations during execution of each load instruction, embodiments are described herein that check for the possibility of a SALLOV at the time an entry in the cache is filled with a new copy of a line of memory, which may significantly reduce the number of CAM ports needed in the load queue. FIG.51is an example block diagram illustrating portions of the processor100ofFIG.1that perform SALLOV prevention in accordance with embodiments of the present disclosure. The embodiments of the processor100described with respect toFIGS.51through60do not employ PAPs and therefore do not require PAP reuse management. Otherwise, the processor100shown in the block diagram ofFIG.51may be similar in many respects to the processor100of the block diagram shown inFIG.32. However, differences will now be described. In one embodiment, the L1 data cache103is a physically-indexed and physically-tagged (PIPT) cache103, and the processor100includes a level-1 translation lookaside buffer (L1 TLB)5141coupled to the LSU117, as shown inFIG.51. During execution of a load/store instruction, the LSU117looks up the virtual address specified by the load/store instruction (e.g., load/store VA321ofFIG.3). In the case of a hit in the L1 TLB5141, the L1 TLB5141provides a TLB physical page address (PPA)5509(described in more detail below with respect toFIG.55). The TLB PPA5509is combinable with untranslated bits of the load/store virtual address321(e.g., VA[11:6]) to form a physical memory line address (PMLA)5592, referred to herein as a load/store PMLA5592, into which the load/store virtual address321specified by the load/store instruction is translated. In an alternate embodiment, the L1 data cache103is a virtually-indexed and virtually-tagged (VIVT) cache103(as described above), and the processor100is absent the L1 TLB5141, and the VIVT L1 data cache103provides a cache PPA5609(described in more detail below with respect toFIG.56) that is combinable with the untranslated address bits (e.g., VA[11:6]) to form the PMLA5592during execution of the load/store instruction. Additionally, the LSU117is shown to generate a SALLOV abort request5101, rather than a PAP reuse abort request901, to the PCL132on the interface between the LSU117and PCL132. In response to a SALLOV abort request5101, the PCL132operates similarly to a PAP reuse abort request901, except with respect to only load instructions rather than both load/store instructions. Additionally, inFIG.51a LQ snoop5193, rather than a LSQ snoop3293ofFIG.32, is shown on the interface between the LSU117and LSQ125. Additionally, inFIG.51there is no PAP reuse snoop request3299ofFIG.32. 
Instead, the cache line fill request return3297serves a similar purpose to the PAP reuse snoop request ofFIG.32. That is, the fill return3297operates to trigger the LSU117to perform a LQ snoop5193. The fill return3297specifies a PMLA, rather than a PAP as specified in the PAP reuse snoop request3299ofFIG.32. The LQ snoop5193snoops the load queue125with the PMLA specified in the fill return3297. A LQ snoop5193operates similarly to a LSQ snoop3293, except with respect to only load instructions rather than both load/store instructions. That is, the LQ snoop5193only snoops the load queue125—in contrast the LSQ snoop3293ofFIG.32snoops both the load queue125and the store queue125—in embodiments in which the load queue125and store queue125are distinct entities. The PMLA of the fill return3297is compared with a load PMLA5205(described below with respect toFIG.52) of each LQ entry2901associated with a load instruction that has completed execution. A load instruction has completed execution when it has received a PMLA5592—either from the L1 TLB5141or from the L1 data cache103—and written the received PMLA5592to the load PMLA field5205of the LQ entry2901allocated to the load instruction. The L2 cache107, LSU117, and PCL132operate together to prevent a SALLOV, as described in more detail below. Since the processor100ofFIG.51does not employ PAPs, other differences may include: with respect toFIGS.6and8, the snoop request611from the L2 cache107to the L1 data cache103may specify a physical memory line address rather than a PAP; and with respect toFIG.7, the fill return at block704may include a physical memory line address rather than a PAP. FIG.52is an example block diagram of a load queue (LQ) entry2901of the LQ125ofFIG.1in accordance with embodiments of the present disclosure. The LQ entry2901ofFIG.52is similar to the LQ entry2901ofFIG.29. However, rather than a load PAP2904, the LQ entry2901ofFIG.52has a load PMLA5205. In an embodiment in which a cache line is 64 bytes, the load PMLA5205is PA[51:6], although in other embodiments having different cache line sizes and/or different physical address sizes, different bits of the physical address may be used. During commit of a store instruction, the load PMLA5205is compared with a store PMLA5305(described below with respect toFIG.53) of the store being committed to perform store-to-load forwarding checks, similar to the manner described above that instead uses a load PAP2904and a store PAP1304. FIG.53is an example block diagram of a store queue (SQ) entry1301of the SQ125ofFIG.1in accordance with embodiments of the present disclosure. The SQ entry1301ofFIG.53is similar to the SQ entry1301ofFIG.13. However, rather than a store PAP1304, the SQ entry1301ofFIG.53has a store PMLA5305. During execution of a load instruction, the store PMLA5305is compared with a load PMLA5592(described below with respect toFIG.55) of the load being executed to perform a store-to-load forwarding decision, similar to the manner described above that instead uses a store PAP1304and a load PAP1495. Additionally, during commit of a store instruction, the store PMLA5305is compared with a write PMLA5405(described below with respect toFIG.54) to perform a write-combining decision, similar to the manner described above that instead uses a store PAP1304and a write PAP2404. FIG.54is an example block diagram of a write-combine buffer (WCB) entry2401of the WCB109ofFIG.1in accordance with embodiments of the present disclosure. The WCB entry2401ofFIG.54is similar to the WCB entry2401ofFIG.24.
However, rather than a write PAP2404, the WCB entry2401ofFIG.54has a write PMLA5405. FIG.55is an example block diagram of an entry5501of the L1 TLB5141ofFIG.51that is employed to accomplish SALLOV prevention in accordance with embodiments of the present disclosure. The L1 TLB entry5501includes a tag5504, a status5506and a physical page address (PPA)5509. To look up a load/store virtual address321in the L1 TLB5141, a portion of the load/store VA321is used as a set index to select a set of the L1 TLB5141, and a tag portion of the load/store VA321is compared against the tag5504of each valid entry5501(e.g., indicated in the status5506) of the L1 TLB5141to detect a match (i.e., hit). If the load/store virtual address321hits in the L1 TLB5141, the TLB PPA5509is provided from the hit entry5501to the LSU117for use in further execution of the load/store instruction. The TLB PPA5509is combinable with the untranslated address bits (e.g., VA[11:6]) to form a PMLA during execution of the load/store instruction, referred to herein as the load/store PMLA5592, which may be used in comparisons for various purposes by the processor100(rather than a PAP that is used for similar purposes in embodiments described above), including SALLOV prevention as described in more detail below. If the load/store virtual address321misses in the L1 TLB5141, the TWE145performs a page table walk and returns a physical page address (e.g., PA[51:12]) that is a translation of the corresponding portion of the load/store virtual address321(i.e., the virtual page address). The tag5504is then populated with the tag portion of the load/store virtual address321, the TLB PPA5509is populated with the translated physical page address, and the status5506is updated to indicate the L1 TLB entry5501is valid. FIG.56is an example block diagram of a cache entry201of L1 data cache103ofFIG.1that is employed to accomplish SALLOV prevention in accordance with embodiments of the present disclosure. The L1 data cache103entry201ofFIG.56is similar in many respects to the cache entry201ofFIG.2. However, the cache entry201ofFIG.56specifies a physical page address (PPA)5609, rather than a dPAP209. The cache PPA5609is populated when a cache line is filled into the entry201. The cache entry201embodiment ofFIG.56corresponds to the alternate embodiment in which the L1 TLB5141is not present, and the L1 data cache103provides the cache PPA5609that is combinable with the untranslated address bits (e.g., VA[11:6]) to form the PMLA5592during execution of the load/store instruction. FIG.57is an example flowchart illustrating operation of the processor100ofFIG.1to accomplish SALLOV prevention in accordance with embodiments of the present disclosure. Operation begins at block5702. At block5702, a virtual address misses in the L1 data cache103, and in response the LSU117generates a cache line fill request to the L2 cache107, similar to the manner described with respect to blocks3402and3702ofFIGS.34and37. The virtual address may be specified by a load/store instruction (e.g., load/store virtual address321) or by a prefetch operation. The virtual address is translated into a physical memory line address (PMLA) Y (e.g., by DTLB141ofFIG.1) that is specified in the fill request. The L2 cache107returns PMLA Y and a copy of the line of memory at PMLA Y to the LSU117, similar to the manner described above with respect to block3408ofFIG.34, except that the L2 cache107does not return a PAP, but instead returns a PMLA.
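Referring back to the L1 TLB5141lookup and PMLA formation described above, the following C sketch illustrates the idea under simplified assumptions (a direct-mapped TLB, 4 KB pages, 64-byte cache lines); all names and sizes here are illustrative, not the actual structures of the embodiments:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TLB_SETS 64   /* illustrative: direct-mapped, 64 entries */

typedef struct {
    bool     valid;
    uint64_t vtag;    /* tag portion of the virtual page number   */
    uint64_t ppa;     /* physical page address, i.e., PA[51:12]   */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_SETS];

/* Look up a virtual address; on a hit, return the physical memory line
 * address PA[51:6] formed from the PPA and the untranslated bits VA[11:6]. */
bool tlb_lookup(uint64_t va, uint64_t *pmla)
{
    uint64_t vpn  = va >> 12;            /* virtual page number            */
    unsigned set  = vpn % TLB_SETS;      /* set index from low VPN bits    */
    uint64_t vtag = vpn / TLB_SETS;      /* remaining VPN bits are the tag */

    if (tlb[set].valid && tlb[set].vtag == vtag) {
        uint64_t untranslated = (va >> 6) & 0x3F;     /* VA[11:6]          */
        *pmla = (tlb[set].ppa << 6) | untranslated;   /* PA[51:6]          */
        return true;
    }
    return false;   /* miss: a page table walk would refill the entry */
}

int main(void)
{
    uint64_t va = 0x7f001240, pmla;
    uint64_t vpn = va >> 12;
    /* Pretend a page table walk installed a translation to physical page 0x2345. */
    tlb[vpn % TLB_SETS] = (tlb_entry_t){ true, vpn / TLB_SETS, 0x2345 };
    if (tlb_lookup(va, &pmla))
        printf("PMLA = 0x%llx\n", (unsigned long long)pmla);
    return 0;
}
```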
Additionally, the L2 cache107does not send a PAP reuse snoop request to the LSU117as inFIG.34. Instead, the cache line fill request return3297triggers a LQ snoop5193, as described below. Operation proceeds to block5704. At block5704, in response to the fill return3297, the LSU117performs a LQ snoop5193to snoop the load queue125with PMLA Y, which is the physical memory line address of the copy of the line of memory provided by the fill return3297. That is, PMLA Y is compared against the load PMLA5205of each entry2901of the load queue125that is associated with an outstanding load instruction that has completed execution. Additionally, the LSU117performs a fill of an entry201of the L1 data cache103with the returned copy of the line of memory at PMLA Y and writes the corresponding bits of the returned PMLA Y to the PPA5609of the entry201. In one embodiment, the LQ snoop5193is performed before the fill. In another embodiment, the LQ snoop5193and the fill are performed in an atomic manner with respect to the ability of a filled entry to be hit upon by execution of any load instruction. The atomic manner means the LQ snoop5193and fill are performed such that, even if the LQ snoop5193is performed after the fill, no load instruction is able to hit on the entry201after the fill and before the snoop, and therefore no load instruction is able to obtain the new data from the filled entry201nor write PMLA Y to the load PMLA5205of its associated LQ entry2901before the LQ snoop5193obtains the state of the LQ125. If the LQ snoop5193is performed before the fill, the two will effectively be atomic since any intervening load execution, i.e., before the fill, will miss and then wait for the fill into the L1 data cache103in response to the fill return3297from the L2 cache107. A description of the need for atomicity is described below at the end ofFIG.58. Operation proceeds to block5706. At block5706, based on the results of the LQ snoop5193at block5704, the LSU117determines whether a condition is true. The condition is that the PMLA of the filled entry201(i.e., PMLA Y) matches the load PMLA5205of at least one load instruction that has completed execution and that there is at least one other load instruction that has not completed execution. In one embodiment, the Done flag2936may be used to determine whether a load instruction has completed execution. In one embodiment, the condition further includes that among the load instructions that have completed execution and whose load PMLA5205matches the PMLA of the filled entry201(i.e., PMLA Y), there is at least one of them that is younger than the oldest load instruction that has not completed execution. Such an embodiment may prevent unnecessary SALLOV abort requests5101(in exchange for more complex condition checking logic) since if all the load instructions that have completed execution and whose load PMLA5205matches the PMLA of the filled entry201are older than the oldest load instruction that has not completed execution, then there is no possibility of a SALLOV occurring and therefore no need to generate a SALLOV abort request. Operation proceeds to decision block5708. At decision block5708, if the condition determined at block5706is true, then operation proceeds to block5712. At block5712, the LSU117signals a SALLOV abort request5101to the PCL132. Operation proceeds to block5714. At block5714, in response to the SALLOV abort request5101, the PCL132determines a flush boundary before the oldest load instruction that has not completed execution. 
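A minimal C sketch of the fill-time check and flush-boundary selection described above (blocks5704,5706and5714); the load-queue layout and names are illustrative assumptions, and the optional refinement of the condition is indicated only as a comment:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LQ_ENTRIES 8

/* Illustrative load-queue entry for the PMLA-based embodiment. */
typedef struct {
    bool     valid;      /* allocated to an outstanding load             */
    bool     done;       /* load completed execution (Done flag)          */
    uint64_t load_pmla;  /* PMLA captured when the load completed         */
    unsigned age;        /* smaller value = older in program order        */
} lq_entry_t;

/* LQ snoop on a cache line fill with physical memory line address pmla_y:
 * request an abort if at least one completed load matches the fill PMLA
 * and at least one other load has not yet completed execution.           */
bool sallov_abort_needed(const lq_entry_t *lq, int n, uint64_t pmla_y)
{
    bool completed_match = false;
    bool any_uncompleted = false;

    for (int i = 0; i < n; i++) {
        if (!lq[i].valid)
            continue;
        if (lq[i].done && lq[i].load_pmla == pmla_y)
            completed_match = true;
        if (!lq[i].done)
            any_uncompleted = true;
        /* A refinement could additionally require that some matching
         * completed load be younger than the oldest uncompleted load.   */
    }
    return completed_match && any_uncompleted;
}

/* If an abort is needed, the flush boundary is placed before the oldest
 * load that has not completed execution; all younger loads are flushed. */
unsigned flush_boundary_age(const lq_entry_t *lq, int n)
{
    unsigned oldest_uncompleted = ~0u;   /* ~0u if every load completed */
    for (int i = 0; i < n; i++)
        if (lq[i].valid && !lq[i].done && lq[i].age < oldest_uncompleted)
            oldest_uncompleted = lq[i].age;
    return oldest_uncompleted;
}

int main(void)
{
    lq_entry_t lq[LQ_ENTRIES] = {
        { .valid = true, .done = true,  .load_pmla = 0x40, .age = 7 }, /* younger, completed   */
        { .valid = true, .done = false, .load_pmla = 0,    .age = 3 }, /* older, not completed */
    };
    if (sallov_abort_needed(lq, LQ_ENTRIES, 0x40))
        printf("abort; flush boundary before the load with age %u\n",
               flush_boundary_age(lq, LQ_ENTRIES));
    return 0;
}
```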
In one embodiment, the PCL132is given the flush boundary1117by the LSU117in the SALLOV abort request5101, and the LSU117determines the flush boundary based on the state of the LQ125obtained in the LQ snoop5193. In another embodiment, the PCL132determines the flush boundary1117based on the state of the ROB122, and the LSU117operates (e.g., by controlling the ld/st completed3207and SALLOV abort request5101signals) to cause the state of the ROB122with respect to load instructions to match the state of the LQ125at the time the PCL132determines the flush boundary1117. In one embodiment, similar to the manner described above with respect to block3416ofFIG.34, the LSU117controls the update of both the load queue125and effectively the ROB122regarding indications of whether a load instruction has completed execution, i.e., whether a load instruction has a valid load PMLA5205. More specifically, the LSU117effectively controls, via the ld/st completed signal3207, the update of indications in the ROB122entries of whether or not a load instruction has completed execution. The LSU117and PCL132thereby ensure that the execution completion status in the load queue125viewed by the LSU117at the time the LQ snoop5193occurs matches the execution completion status in the ROB122viewed by the PCL132at the time the PCL132determines the flush boundary1117in response to the SALLOV abort request5101. Operation proceeds to block5716. At block5716, the PCL132flushes all load instructions younger than the flush boundary. FIG.58is an example flowchart illustrating operation of the processor100ofFIG.1to accomplish SALLOV prevention in accordance with embodiments of the present disclosure. Operation begins at block5802. At block5802, an entry201of the L1 data cache103is holding a first copy of a line of memory at a PMLA Y, which is referred to as "old data" in the example. Additionally, the cache PPA5609of the entry201is holding the corresponding bits of PMLA Y. In the alternate embodiment in which the L1 data cache103is physically-indexed and physically-tagged (i.e., the processor100includes the L1 TLB5141ofFIG.51), rather than the L1 data cache103, the TLB PPA5509of an entry of the L1 TLB5141may be holding the corresponding bits of PMLA Y. Operation proceeds to block5804. At block5804, the LSU117executes a younger load instruction that specifies a virtual address that translates into PMLA Y. The execution of the younger load instruction is out of program order with respect to execution of an older load instruction that specifies a virtual address that also translates into the PMLA Y, which presents the possibility of a SALLOV. Operation proceeds to block5806. At block5806, the younger load instruction, during its execution, hits in the L1 data cache103and receives load data, i.e., old data, from the hit entry201. Additionally, the younger load instruction receives PMLA Y, i.e., PMLA5592formed from the hit entry201and the untranslated virtual address bits VA[11:6]. In the alternate embodiment in which the L1 data cache103is physically-indexed and physically-tagged and the processor100includes the L1 TLB5141, the load virtual address hits in the L1 TLB5141, and PMLA5592is formed from the hit entry5501of the L1 TLB5141and the untranslated virtual address bits VA[11:6], i.e., PMLA Y, for provision to the younger load instruction and to the L1 data cache103, and PMLA Y hits in the entry201of the L1 data cache103. Operation proceeds to block5808.
At block5808, the LSU117writes PMLA Y to the load PMLA5205of the LQ entry2901allocated to the younger load instruction and sets the Done flag2936to indicate the load instruction has completed execution, thus making it available for comparison during a subsequent LQ snoop5193, e.g., at block5814below and at block5704ofFIG.57. Operation proceeds to block5812. At block5812, the first copy of the line of memory at PMLA Y is removed from the L1 data cache103. For example, the first copy of the cache line may be removed by an external snoop request that results in invalidation of the entry201holding the cache line. For another example, the first copy of the cache line may be replaced by another cache line, i.e., the entry201holding the first copy of the cache line may be filled with a copy of another line of memory (e.g., at block5814below). The removal of the first copy of the line of memory at PMLA Y creates the possibility of a SALLOV, e.g., in the event that the line of memory at PMLA Y were to be updated by the other processor, and then a copy of the updated line of memory (“new data”) were subsequently filled into the L1 data cache103(e.g., at block5814below), and then the older load instruction was to execute and receive the new data from the filled entry201. Operation proceeds to block5814. At block5814, a virtual address (e.g., specified by a load/store instruction or a prefetch) that translates to PMLA Y misses in the L1 data cache103, which triggers operation ofFIG.57and results in a second copy of a line of memory (new data) at PMLA Y being filled into the L1 data cache103at block5704. In the alternate embodiment in which the L1 data cache103is physically-indexed and physically-tagged (i.e., the processor100includes the L1 TLB5141ofFIG.51), the PMLA Y misses in the L1 data cache103, which triggers operation ofFIG.57and results in a second copy of a line of memory (new data) at PMLA Y being filled into the L1 data cache103. Operation proceeds to block5816. At block5816, in the example, the condition determined at block5706is true. That is, at the time of the LQ snoop5193: (1) the younger load instruction had completed execution and therefore the LQ snoop5193with PMLA Y at block5704matched the load PMLA5205of the LQ entry2901allocated to the younger load instruction, and (2) the older load instruction had not yet completed execution and therefore the older load instruction had not hit in L1 data cache103and had not received PMLA Y and had not written PMLA Y to load PMLA5205of the LQ entry2901allocated to the older load instruction. As a result of operation ofFIG.57(e.g., blocks5712,5714and5716), a flush boundary is determined before the oldest load instruction that has not completed execution, resulting in both the younger and older load instructions being flushed. The older and younger load instructions will subsequently be re-fetched and re-executed during which they will both receive their load data from the new data filled into the L1 data cache103at block5814. As illustrated by the example ofFIG.58, a SALLOV was prevented according to the operation ofFIG.57, i.e., the younger load instruction does not commit to architectural state load data that is older than load data committed to architectural state by the older load instruction. Advantageously, the SALLOV was prevented by the embodiments described without logic to check for SALLOVs during execution of load instructions. 
Instead, the check for SALLOV prevention is performed at the time of the fill return into the L1 data cache103. In one embodiment, a single fill return3297may be performed per clock cycle, which requires a single CAM port to perform the corresponding LQ snoop5193. More specifically, advantageously the SALLOV was prevented by the embodiments described without the need for N CAM ports in the load queue125to enable N different load instructions, during their concurrent execution, to CAM concurrently against the load queue125to prevent a SALLOV as in a conventional processor, as described above. Given that load executions tend to occur much more frequently than cache line fills, the absence of logic to check for SALLOVs during execution of load instructions may result in reduced power consumption relative to a conventional processor. Additionally, the absence of logic to check for SALLOVs during execution of load instructions may also result in reduced size relative to a conventional processor. Finally, the embodiments described, unlike a conventional processor, accomplish SALLOV prevention without the need to mark load queue entries that match an external snoop request physical address, as described above, which may further reduce size and power consumption relative to a conventional processor. To illustrate the need for atomicity as described above with respect to block5704ofFIG.57, assume for the moment that the processor100is configured such that the fill and LQ snoop5193are not atomic, i.e., the LQ snoop5193occurs after the fill and that a load instruction is able to hit on the filled entry201before the LQ snoop5193. Further assume that, before the old data is removed, a first load instruction older than the flush boundary1117completes execution and receives old data and writes PMLA Y to its load PMLA5205and sets the Done flag2936before the LQ snoop5193. Further assume that, after the fill, a second load instruction older than the flush boundary1117and older than the first load instruction completes execution and receives new data and writes PMLA Y to its load PMLA5205and sets the Done flag2936before the LQ snoop5193. In the embodiment ofFIG.57, the flush boundary1117is selected before the oldest load instruction that has not completed execution at block5714. Thus, these two load instructions, being older than the flush boundary1117, would not be flushed and would constitute a SALLOV. However, because the processor100is configured, according to block5704, to either perform the LQ snoop5193before the fill or to perform the fill and the LQ snoop5193atomically, the LQ125state obtained by the LQ snoop5193will indicate the second load in the example above has not completed execution. Therefore, the condition determined at block5706will be satisfied, and the flush boundary1117determined at block5714will be at least before the second load instruction, such that both the first and second load instructions will be flushed and a SALLOV will be prevented. 
Stated alternatively, the performance of the LQ snoop5193before the fill or atomically therewith ensures that the state of the LQ125captured by the LQ snoop5193reflects any outstanding load/store instructions that have completed execution and obtained old data at PMLA Y and does not reflect any outstanding load/store instructions that could complete execution after the fill and obtain new data at PMLA Y, which enables the LSU117to determine whether there is a possibility of a SALLOV and, if so, to signal the need for a SALLOV abort5101at block5712to prevent a SALLOV. FIG.59is an example flowchart illustrating operation of the processor100ofFIG.1to accomplish SALLOV prevention in accordance with an alternate embodiment of the present disclosure. In the embodiment ofFIG.59, a location for the flush boundary is determined that is different from the flush boundary determined inFIG.57. The operation of the processor100according to the embodiment ofFIG.59is similar in many respects to the operation of the embodiment ofFIG.57. Operation ofFIG.59begins at block5702, as inFIG.57, and proceeds to block5904. At block5904, in response to the fill return3297, the LSU117performs a LQ snoop5193to snoop the load queue125with PMLA Y, which is the physical memory line address of the copy of the line of memory provided by the fill return3297. Additionally, the LSU117performs a fill of an entry201of the L1 data cache103with the returned copy of the line of memory at PMLA Y and writes the corresponding bits of the returned PMLA Y to the PPA5609of the entry201. Operation proceeds to block5906. At block5906, based on the results of the LQ snoop5193at block5704, the LSU117determines whether a condition is true. The condition is that the PMLA of the filled entry201(i.e., PMLA Y) matches the load PMLA5205of at least one load instruction that has completed execution (i.e., there is not the additional requirement as at block5706ofFIG.57that there is at least one other load instruction that has not completed execution). Operation proceeds from block5906to decision block5708and then to block5712(assuming the condition is true) as inFIG.57. Operation ofFIG.59proceeds from block5712to block5914. At block5914, in response to the SALLOV abort request5101, the PCL132determines a flush boundary before the oldest load instruction that has completed execution and whose load PMLA5205matches the physical memory line address of the copy of the line of memory provided by the fill return3297at block5702and with which the load queue125was snooped at block5704. Operation proceeds to block5716. At block5716, the PCL132flushes all load instructions younger than the flush boundary. FIG.60is an example flowchart illustrating operation of the processor100ofFIG.1to accomplish SALLOV prevention in accordance with an alternate embodiment of the present disclosure. The operation of the processor100according to the embodiment ofFIG.60is similar in many respects to the operation of the embodiment ofFIG.58. Operation ofFIG.60begins at block5802and proceeds to block5804, then to block5806, then to block5808, then to block5812, as inFIG.58. From block5812, operation ofFIG.60proceeds from block5812to block6014. At block6014, a virtual address (e.g., specified by a load/store instruction or a prefetch) that translates to PMLA Y misses in the L1 data cache103, which triggers operation ofFIG.59and results in a second copy of a line of memory (“new data”) at PMLA Y being filled into the L1 data cache103. 
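Before continuing the example, the alternate condition and flush boundary ofFIG.59described above can be sketched as follows; the load-queue layout mirrors the earlier sketch and is an illustrative assumption rather than the actual design:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LQ_ENTRIES 8

typedef struct {
    bool     valid;
    bool     done;        /* load completed execution                */
    uint64_t load_pmla;   /* PMLA captured when the load completed    */
    unsigned age;         /* smaller value = older in program order   */
} lq_entry_t;

/* Alternate condition: abort whenever any completed load matches the
 * physical memory line address of the fill.                           */
bool sallov_abort_needed_alt(const lq_entry_t *lq, int n, uint64_t pmla_y)
{
    for (int i = 0; i < n; i++)
        if (lq[i].valid && lq[i].done && lq[i].load_pmla == pmla_y)
            return true;
    return false;
}

/* Alternate boundary: before the oldest completed load whose PMLA
 * matches the fill PMLA; all loads younger than it are flushed.       */
unsigned flush_boundary_age_alt(const lq_entry_t *lq, int n, uint64_t pmla_y)
{
    unsigned oldest_match = ~0u;
    for (int i = 0; i < n; i++)
        if (lq[i].valid && lq[i].done && lq[i].load_pmla == pmla_y &&
            lq[i].age < oldest_match)
            oldest_match = lq[i].age;
    return oldest_match;
}

int main(void)
{
    lq_entry_t lq[LQ_ENTRIES] = {
        { .valid = true, .done = true, .load_pmla = 0x40, .age = 7 },
    };
    if (sallov_abort_needed_alt(lq, LQ_ENTRIES, 0x40))
        printf("abort; flush boundary before the load with age %u\n",
               flush_boundary_age_alt(lq, LQ_ENTRIES, 0x40));
    return 0;
}
```

The simpler condition may request an abort in some cases where no SALLOV could actually occur, but the tighter flush boundary may limit how many load instructions are flushed.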
In the alternate embodiment in which the L1 data cache103is physically-indexed and physically-tagged (i.e., the processor100includes the L1 TLB5141ofFIG.51), the PMLA Y misses in the L1 data cache103, which triggers operation ofFIG.57and results in a second copy of a line of memory (“new data”) at PMLA Y being filled into the L1 data cache103. Operation proceeds to block6016. At block6016, in the example, the condition determined at block5906is true. That is, at the time of the LQ snoop5193the younger load instruction had completed execution and therefore the LQ snoop5193with PMLA Y at block5704matched the load PMLA5205of the LQ entry2901allocated to the younger load instruction. As a result of operation ofFIG.59(e.g., blocks5712,5914and5716), a flush boundary is determined before the oldest load instruction that has completed execution and whose load PMLA5205matches the PMLA (i.e., PMLA Y) of the entry201filled at block6014, resulting in the younger load instruction being flushed. The younger load instruction will subsequently be re-fetched and re-executed during which it will receive its load data from the new data filled into the L1 data cache103at block6014. If the older load instruction is older than the flush boundary, it will eventually execute and receive its load data from the new data filled into the L1 data cache103at block6014. If the older load instruction is younger than the flush boundary, it will subsequently be re-fetched and re-executed during which it will receive its load data from the new data filled into the L1 data cache103at block6014. So, in either case, a SALLOV is prevented because both the older and younger load instructions receive the new data. As illustrated by the example ofFIG.60, a SALLOV was prevented according to the operation ofFIG.59, i.e., the younger load instruction does not commit to architectural state load data that is older than load data committed to architectural state by the older load instruction. Advantageously, as described above with respect toFIG.58, the SALLOV was prevented by the embodiments described without the need for logic to check for SALLOVs during execution of load instructions but instead checks for SALLOV prevention at the time of the fill return into the L1 data cache103and accomplishes SALLOV prevention with similar advantages. Embodiments of the processor100described with respect toFIGS.51through60do not employ PAPs and perform SALLOV prevention using physical memory line addresses. Embodiments will now be described in which the processor100uses PAPs and performs SALLOV prevention using PAPs. More specifically, embodiments described above that perform PAP reuse management by performing a PAP reuse snoop request and performing a PAP reuse abort if needed, as described above primarily with respect toFIGS.33through38B, may advantageously operate to prevent a SALLOV without additional logic, as will now be described. FIG.61is an example flowchart illustrating operation of the processor100ofFIG.1to accomplish SALLOV prevention in accordance with embodiments of the present disclosure. Operation begins at block6102. At block6102, an entry401of the L2 cache107and an entry201of the L1 data cache103are holding a first copy of a line of memory at a PMLA Y, which is referred to as “old data” in the example. Additionally, the entry201of the L1 data cache103is holding a PAP, referred to in the example as “PAP Q,” that is a proxy for PMLA Y. 
More specifically, the entry201of the L1 data cache103is holding the dPAP209portion of PAP Q (i.e., the L2 way[1:0] and PA[16:12]), and the remaining bits of PAP Q (i.e., PA[11:6]) are the untranslated bits VA[11:6] of a load/store virtual address321used to access the L1 data cache103. Operation proceeds to block6104. At block6104, the LSU117executes a younger load instruction that specifies a virtual address that translates into PMLA Y. The execution of the younger load instruction is out of program order with respect to execution of an older load instruction that specifies a virtual address that also translates into the PMLA Y, which presents the possibility of a SALLOV. Operation proceeds to block6106. At block6106, the younger load instruction, during its execution, hits in the L1 data cache103and receives load data, i.e., old data, from the hit entry201. Additionally, the younger load instruction receives PAP Q, i.e., the dPAP209of the hit entry201and the untranslated virtual address bits VA[11:6]. Operation proceeds to block6108. At block6108, the LSU117writes PAP Q to the load PAP2904of the LQ entry2901ofFIG.29allocated to the younger load instruction and sets the Done flag2936to indicate the load instruction has completed execution, thus making it available for comparison during a subsequent LSQ snoop3193, e.g., at block6114below and at block3406ofFIG.34orFIG.37. Operation proceeds to block6112. At block6112, the first copy of the line of memory at PMLA Y is removed from the L2 cache107. More specifically, the first copy of the cache line is replaced by another cache line, i.e., the entry401holding the first copy of the cache line is filled with a copy of another line of memory at a fill physical memory line address different from PMLA Y (e.g., at block6114below), which triggers operation of the processor100according toFIG.34. Additionally, consistent with the policy that the L2 cache107is inclusive of the L1 data cache103, the first copy of the line of memory at PMLA Y is also removed (evicted) from the L1 data cache103(e.g., at block3405). The removal of the first copy of the line of memory at PMLA Y creates the possibility of a SALLOV, e.g., in the event that the line of memory at PMLA Y were to be updated by the other processor, and then a copy of the updated line of memory at PMLA Y (“new data”) were subsequently filled into the L1 data cache103(e.g., at block6114below), and then the older load instruction was to execute and receive the new data from the filled entry201. Operation proceeds to block6114. At block6114, a virtual address (e.g., specified by a load/store instruction or a prefetch) that translates to PMLA Y misses in the L1 data cache103, which triggers operation ofFIG.34orFIG.37and results in a second copy of a line of memory (new data) at PMLA Y being filled into the L2 cache107at block3408ofFIGS.34/37and into the L1 data cache103at block3418ofFIGS.34/37. Operation proceeds to block6116. At block6116, in the example, PAP Q is already in use at block3412ofFIGS.34/37. That is, at the time of the LSQ snoop3193, the younger load instruction had completed execution and therefore the LSQ snoop3193with PAP Q at block3406matched the load PAP2904of the LQ entry2901allocated to the younger load instruction. 
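As an aside, the composition of a PAP such as PAP Q from the dPAP held in the L1 entry and the untranslated virtual address bits can be sketched as follows; the field widths follow the example above (L2 way[1:0], PA[16:12], VA[11:6]), while the packing order, values, and helper names are assumptions made only for illustration:

```c
#include <stdint.h>
#include <stdio.h>

/* Compose a PAP from the dPAP stored in the L1 data cache entry and the
 * untranslated virtual address bits.  With the example widths above, the
 * dPAP is 7 bits ({L2 way[1:0], PA[16:12]}) and VA[11:6] supplies the low
 * 6 bits, giving a 13-bit PAP that serves as a proxy for PA[51:6].        */
static inline uint32_t make_dpap(uint32_t l2_way, uint64_t pa)
{
    return ((l2_way & 0x3u) << 5) | (uint32_t)((pa >> 12) & 0x1Fu);
}

static inline uint32_t make_pap(uint32_t dpap, uint64_t va)
{
    return (dpap << 6) | (uint32_t)((va >> 6) & 0x3Fu);
}

int main(void)
{
    uint64_t pa = 0x0123456789C0ULL;  /* physical address of the line          */
    uint64_t va = 0x007F00ABC9C0ULL;  /* virtual address; same page offset     */
    uint32_t dpap = make_dpap(/* L2 way */ 2, pa);
    printf("PAP = 0x%x\n", make_pap(dpap, va));
    return 0;
}
```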
Additionally, the older load instruction had not yet completed execution and therefore the older load instruction had not hit in L1 data cache103and had not received PAP Q and had not written PAP Q to load PAP2904of the LQ entry2901allocated to the older load instruction. As a result of operation ofFIGS.34/37(e.g., blocks3414,3416and3422), a flush boundary is determined. In the embodiment in which the flush boundary is determined at block3416to be before the oldest load/store instruction that has not completed execution, both the younger and older load instructions are flushed and will subsequently be re-fetched and re-executed during which they will both receive their load data from the new data filled into the L1 data cache103at block6114. In the embodiment in which the flush boundary is determined at block3416to be before the oldest matching load/store instruction that has completed execution: (1) in the case that the younger load instruction is the oldest matching completed load, then the younger load instruction will be flushed and the older load instruction will eventually execute and receive new data, and the younger load will subsequently be re-fetched and re-executed during which it will receive new data; (2) in the case that the oldest matching completed load is older than the older load instruction, then both the younger and the older load instructions will be flushed and will subsequently be re-fetched and re-executed during which they will both receive new data. FIG.62is an example flowchart illustrating operation of the processor100ofFIG.1to accomplish SALLOV prevention in accordance with embodiments of the present disclosure. Operation according toFIG.62is similar in many respects to operation according toFIG.61. However, whereasFIG.61describes operation in which the copy of the line of memory is removed by replacement and the SALLOV is prevented via operation ofFIG.34or37,FIG.62describes operation in which the copy of the line of memory is removed by invalidation by an external snoop request, and the SALLOV is prevented via operation ofFIG.38B. Operation begins at block6102and proceeds to block6104then to block6106then to block6108as described with respect toFIG.61. From block6108, operation proceeds to block6212. At block6212, the first copy of the line of memory at PMLA Y is removed from the L2 cache107. More specifically, the entry401holding the first copy of the cache line is invalidated in response to an external snoop request that specifies PMLA Y, which triggers operation of the processor100according toFIG.38B. Additionally, consistent with the policy that the L2 cache107is inclusive of the L1 data cache103, the first copy of the line of memory at PMLA Y is also removed (evicted) from the L1 data cache103(e.g., at block3405). The removal of the first copy of the line of memory at PMLA Y creates the possibility of a SALLOV, e.g., in the event that the line of memory at PMLA Y were to be updated by the other processor, and then a copy of the updated line of memory at PMLA Y (“new data”) were subsequently filled into the L1 data cache103, and then the older load instruction was to execute and receive the new data from the filled entry201. Operation proceeds to block6216. At block6216, in the example, PAP Q is already in use at block3412ofFIG.38B. 
That is, at the time of the LSQ snoop3193, the younger load instruction had completed execution and therefore the LSQ snoop3193with PAP Q at block3406matched the load PAP2904of the LQ entry2901allocated to the younger load instruction. Additionally, the older load instruction had not yet completed execution and therefore the older load instruction had not hit in L1 data cache103and had not received PAP Q and had not written PAP Q to load PAP2904of the LQ entry2901allocated to the older load instruction. As a result of operation ofFIG.38B(e.g., blocks3414,3416and3422), a flush boundary is determined. In the embodiment in which the flush boundary is determined at block3416to be before the oldest load/store instruction that has not completed execution, both the younger and older load instructions are flushed and will subsequently be re-fetched and re-executed during which they will both receive their load data from new data filled into the L1 data cache103. In the embodiment in which the flush boundary is determined at block3416to be before the oldest matching load/store instruction that has completed execution: (1) in the case that the younger load instruction is the oldest matching completed load, then the younger load instruction will be flushed and the older load instruction will eventually execute and receive new data, and the younger load will subsequently be re-fetched and re-executed during which it will receive new data; (2) in the case that the oldest matching completed load is older than the older load instruction, then both the younger and the older load instructions will be flushed and will subsequently be re-fetched and re-executed during which they will both receive new data. FIG.63is an example flowchart illustrating operation of the processor100ofFIG.1to accomplish SALLOV prevention in accordance with embodiments of the present disclosure. Operation according toFIG.63is similar in many respects to operation according toFIG.62. However, whereasFIG.62describes operation in which the copy of the line of memory is removed by invalidation by an external snoop request and the SALLOV is prevented via operation ofFIG.38B,FIG.63describes operation in which the copy of the line of memory is removed by replacement precipitated by a prefetch request to the L2 cache107, and the SALLOV is prevented via operation ofFIG.38A. Operation begins at block6102and proceeds to block6104then to block6106then to block6108as described with respect toFIG.61. From block6108, operation proceeds to block6312. At block6312, the first copy of the line of memory at PMLA Y is removed from the L2 cache107. More specifically, the first copy of the cache line is replaced by another cache line, i.e., the entry401holding the first copy of the cache line is filled with a copy of another line of memory at a fill physical memory line address different from PMLA Y specified by a prefetch request to the L2 cache107, which triggers operation of the processor100according toFIG.38A. Additionally, consistent with the policy that the L2 cache107is inclusive of the L1 data cache103, the first copy of the line of memory at PMLA Y is also removed (evicted) from the L1 data cache103(e.g., at block3405). 
The removal of the first copy of the line of memory at PMLA Y creates the possibility of a SALLOV, e.g., in the event that the line of memory at PMLA Y were to be updated by the other processor, and then a copy of the updated line of memory at PMLA Y (“new data”) were subsequently filled into the L1 data cache103, and then the older load instruction was to execute and receive the new data from the filled entry201. Operation proceeds to block6316. At block6316, in the example, PAP Q is already in use at block3412ofFIG.38A. That is, at the time of the LSQ snoop3193, the younger load instruction had completed execution and therefore the LSQ snoop3193with PAP Q at block3406matched the load PAP2904of the LQ entry2901allocated to the younger load instruction. Additionally, the older load instruction had not yet completed execution and therefore the older load instruction had not hit in L1 data cache103and had not received PAP Q and had not written PAP Q to load PAP2904of the LQ entry2901allocated to the older load instruction. As a result of operation ofFIG.38A(e.g., blocks3414,3416and3422), a flush boundary is determined. In the embodiment in which the flush boundary is determined at block3416to be before the oldest load/store instruction that has not completed execution, both the younger and older load instructions are flushed and will subsequently be re-fetched and re-executed during which they will both receive their load data from new data filled into the L1 data cache103. In the embodiment in which the flush boundary is determined at block3416to be before the oldest matching load/store instruction that has completed execution: (1) in the case that the younger load instruction is the oldest matching completed load, then the younger load instruction will be flushed and the older load instruction will eventually execute and receive new data, and the younger load will subsequently be re-fetched and re-executed during which it will receive new data; (2) in the case that the oldest matching completed load is older than the older load instruction, then both the younger and the older load instructions will be flushed and will subsequently be re-fetched and re-executed during which they will both receive new data. As illustrated by the examples ofFIGS.61through63, a SALLOV was prevented according to the operation ofFIGS.34,37,38A and38B, i.e., the younger load instruction does not commit to architectural state load data that is older than load data committed to architectural state by the older load instruction. Advantageously, the SALLOV was prevented by the embodiments described without logic to check for SALLOVs during execution of load instructions. Instead, the check for SALLOV prevention is performed at the time of the fill return into the L1 data cache103. In one embodiment, a single fill return3297ofFIG.32may be performed per clock cycle, which requires a single CAM port in the load queue125to perform the corresponding LSQ snoop3193. More specifically, advantageously the SALLOV was prevented by the embodiments described without the need for N CAM ports in the load queue125to enable N different load instructions, during their concurrent execution, to CAM concurrently against the load queue125to prevent a SALLOV as in a conventional processor, as described above. 
Given that load executions tend to occur much more frequently than cache line fills, the absence of logic to check for SALLOVs during execution of load instructions may result in reduced power consumption relative to a conventional processor. Additionally, the absence of logic to check for SALLOVs during execution of load instructions may also result in reduced size relative to a conventional processor. Finally, the embodiments described, unlike a conventional processor, accomplish SALLOV prevention without the need to mark load queue entries that match an external snoop request physical address, as described above, which may further reduce size and power consumption relative to a conventional processor. Although embodiments of prevention of a SALLOV have been described with respect to PAP use, other embodiments are contemplated in which the prevention of a SALLOV is similarly performed with respect to generational PAP (GPAP) use. It should be understood—especially by those having ordinary skill in the art with the benefit of this disclosure—that the various operations described herein, particularly in connection with the figures, may be implemented by other circuitry or other hardware components. The order in which each operation of a given method is performed may be changed, unless otherwise indicated, and various elements of the systems illustrated herein may be added, reordered, combined, omitted, modified, etc. It is intended that this disclosure embrace all such modifications and changes and, accordingly, the above description should be regarded in an illustrative rather than a restrictive sense. Similarly, although this disclosure refers to specific embodiments, certain modifications and changes can be made to those embodiments without departing from the scope and coverage of this disclosure. Moreover, any benefits, advantages, or solutions to problems that are described herein with regard to specific embodiments are not intended to be construed as a critical, required, or essential feature or element. Further embodiments, likewise, with the benefit of this disclosure, will be apparent to those having ordinary skill in the art, and such embodiments should be deemed as being encompassed herein. All examples and conditional language recited herein are intended for pedagogical objects to aid the reader in understanding the disclosure and the concepts contributed by the inventor to furthering the art and are construed as being without limitation to such specifically recited examples and conditions. This disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments herein that a person having ordinary skill in the art would comprehend. Similarly, where appropriate, the appended claims encompass all changes, substitutions, variations, alterations, and modifications to the example embodiments herein that a person having ordinary skill in the art would comprehend. Moreover, reference in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, or component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative. 
Finally, software can cause or configure the function, fabrication and/or description of the apparatus and methods described herein. This can be accomplished using general programming languages (e.g., C, C++), hardware description languages (HDL) including Verilog HDL, VHDL, and so on, or other available programs. Such software can be disposed in any known non-transitory computer-readable medium, such as magnetic tape, semiconductor, magnetic disk, or optical disc (e.g., CD-ROM, DVD-ROM, etc.), a network, wire line or another communications medium, having instructions stored thereon that are capable of causing or configuring the apparatus and methods described herein.
11860795
DETAILED DESCRIPTION The Applicants have realized that in some development environments, a developer may utilize a development platform which includes a Random Access Memory (RAM) memory unit having a first size, to develop firmware or code or other program. The program may run properly on the development platform; however, the same program may later fail to run, or may run improperly or may cause runtime errors or faults, when being embedded or integrated into a runtime platform which includes a RAM memory unit having a second size which is smaller than the first size. For example, a developer may utilize a development platform having a processor and a RAM memory unit of 1,000 kilobytes. The developer produces a firmware code that runs properly on the development platform. Later, the same firmware code is installed in, or is integrated or embedded in, a runtime platform which includes a RAM memory unit of only 300 kilobytes. The firmware code may fail to run on the runtime platform, or may run improperly or may generate runtime errors or faults when running on the runtime platform. For example, the firmware code may attempt to access a memory cell or a memory region in the RAM memory of the runtime unit that is beyond the 300 kilobyte capacity of the RAM memory of the runtime platform. Such attempt may occur via one or more memory access ways in which a program or a code accesses a memory region or a memory address; such as, directly by a pointer to the target area of the memory being accessed (e.g., directly to a particular memory address), or using a suitable function for memory allocation (e.g., malloc) of the Operating System (OS). For example, an attempt by the developed code to read a value from a RAM memory address that is located at the 400th kilobyte of the RAM memory, would be successful on the development platform (which includes RAM memory of 1,000 kilobytes), but would fail on the runtime platform (which includes RAM memory of only 300 kilobytes). The Applicants have realized that conventional solutions that attempted to mitigate this problem are partial or inadequate. In a first conventional solution, the runtime platform is equipped to include a RAM memory unit having a large size, or having a RAM memory size that is at least as large as the size of the RAM memory of the development platform. However, realized the Applicants, such conventional solution may be inadequate. For example, in some situations, it may not be possible to equip the runtime platform with a larger-size RAM memory, due to form factor constraints of the runtime platform, or due to power consumption constraints, or design constraints, or physical size or physical arrangements constraints, or other limitations. This is particularly true when the runtime platform is not a full-size desktop computer or laptop computer, but rather, when the runtime platform is a small-sized or even a minuscule Internet-of-Things (IoT) sensor or device, or is part of a vehicular component or vehicle, or is part of a medical device or medical equipment, or is part of another electronic device having a limited form factor that makes it prohibitive to increase the size of the RAM memory of the runtime device. 
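To make the failure mode concrete, a small, hypothetical firmware fragment of the kind described above might look as follows; the base address, offsets, and sizes mirror the example (1,000 kilobytes on the development board, 300 kilobytes on the runtime board) and are not taken from any particular device:

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical base of on-chip RAM; the value is illustrative only. */
#define RAM_BASE 0x20000000UL

void firmware_task(void)
{
    /* Direct pointer access: reads a status word at the 400th kilobyte of
     * RAM.  This succeeds on a development board with 1,000 KB of RAM but
     * touches nonexistent memory on a runtime board with only 300 KB.     */
    volatile uint32_t *status = (volatile uint32_t *)(RAM_BASE + 400UL * 1024UL);
    uint32_t value = *status;
    (void)value;

    /* Heap allocation: asking for a 600 KB working buffer succeeds when
     * the heap is backed by 1,000 KB of RAM but fails (or returns NULL)
     * when only 300 KB is available on the runtime platform.              */
    uint8_t *buffer = malloc(600UL * 1024UL);
    if (buffer != NULL) {
        /* ... process data ... */
        free(buffer);
    }
}
```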
Additionally, realized the Applicants, in some situations, the entity that constructs or manufactures the runtime platform does not know the size of the RAM memory that was utilized (often by a different entity) on the development platform; and/or does not know the minimum size of RAM memory that should be included in the runtime platform in order to ensure that the code runs properly thereon. Furthermore, in some situations, the knowledge that the development platform had included 10 megabytes of RAM memory does not help the entity which constructs or manufactures the runtime platform, which is structured to have RAM memory in the size range of 300 to 600 kilobytes. In another conventional and cumbersome solution that attempts to mitigate the problem, a code that was developed on a development platform having RAM memory of (for example) 10 megabytes can be tested on various runtime platforms having different sizes of RAM memory units. However, this requires manually constructing dozens, or even hundreds, of such runtime platforms having different sizes of RAM memory units, and manually installing and testing the numerous hardware configurations, in a cumbersome process that is time consuming and effort consuming and does not scale well. Furthermore, realized the Applicants, even such conventional testing may not always detect or reach the correct answer; particularly if the developed code attempts to read a value from a "forbidden" region of the RAM memory, rather than attempting to write a value into such RAM region. For example, the developed code is run on a first runtime platform having only 300 kilobytes of RAM; attempts to write a value into a memory cell located at the 420th kilobyte of RAM; and such write operation causes a fault that the runtime platform may capture and report during a debugging stage. However, realized the Applicants, an attempt of the developed code, intentionally or accidentally, to read a value from the 420th kilobyte of RAM, may go unnoticed and may not necessarily trigger an error or a fault or an exception on the runtime platform; since, for example, in some situations, such Read attempt may return a "zero" or a "null" result instead of an error indication, thereby letting such "forbidden" Read memory access go unnoticed on the runtime device. In a similar conventional attempt to mitigate the problem, a conventional system may "paint" the RAM memory that is utilized on the development platform with a particular pattern or repeated fixed string of values or bits, and later check whether that pattern was overwritten (during runtime, or during test runs) at memory regions that are "forbidden" to be accessed. For example, the development platform includes a RAM memory of 5,000 kilobytes; and the developer estimates that the developed code requires only 300 kilobytes of RAM, and also estimates that the developed code would never access the upper 4,700 kilobytes of RAM that actually exist on the development platform. Accordingly, the developer modifies the developed code, such that: before the intended code is run, a Memory Painting code is executed, and it fills the upper 4,700 kilobytes of RAM memory with a repeated string (e.g., the string "DEADBEEF", repeated continuously as in "DEADBEEFDEADBEEF" and so forth).
Then, after the intended code is run, the developed code executes a Painting Checker code, which checks whether the upper 4,700 kilobytes of RAM memory still contain, exclusively, only the repeated string "DEADBEEF" without any modifications. This conventional method may, in some situations, detect that the developed code has indeed written a value into the "forbidden" RAM area of the upper 4,700 kilobytes of RAM which had been "painted" in advance with the repeated string. However, the Applicants have realized that such a solution is cumbersome, time consuming and effort consuming; it requires developing and executing a prior code and a subsequent code, before and after (respectively) the execution of the actual code that would ship to third parties; and the memory "painting" process, as well as the later verification that the "painted" memory regions remained unmodified, may be slow. Furthermore, this conventional method might catch some improper Write operations towards a RAM memory address in the forbidden RAM region; but it may not catch an improper Read operation from such RAM memory address in the forbidden RAM region. Moreover, this conventional method may not catch an improper Write operation that wrote into the forbidden RAM memory region a value that happens to be identical to the value that had been "painted" there in advance; such as, a Write operation that writes the value "D" into the exact place that already stored a value of "D" in one of the repeating "DEADBEEF" strings that had been painted. The Applicants have also realized that there does not exist an adequate solution or system which enables a developer utilizing a development platform to determine and to ascertain in advance the minimum RAM memory requirement that must be met by the runtime platform in order for the developed code to properly run on such runtime platform. Indeed, many software applications that are sold to the general public as a boxed software product are often marketed with a caption indicating "minimum requirements" for Operating System version, RAM memory size, hard disk storage size, and/or processing power; however, such requirements are often inaccurate and do not reflect the actual size of mandatory RAM memory that is absolutely necessary in order for the code to run; and such general software requirements do not apply to a firmware code that is developed on a development platform in order to later be embedded as firmware in a particular electronic device. The Applicants have thus realized that there is a need for a system that would enable a developer, who utilizes a development platform to develop code (particularly firmware code, although this is only a non-limiting example), to test and to check in advance, on his development platform, whether the developed code is expected to run properly on a different platform (runtime platform) that would have a particular RAM memory size that is smaller than the RAM memory size of the development platform. The Applicants have further realized that there is also a need for a system that would enable a developer, who utilizes a development platform to develop code, to determine and to ascertain in advance the minimum RAM memory size that a runtime platform must include, in order for the developed code to run properly on such runtime platform.
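For illustration, the conventional painting approach discussed above might look roughly like the following; the region boundaries, pattern, and function names are hypothetical, and the closing comment notes why a Read from the painted region, or a Write of an identical value, slips through the check:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical layout: the developer assumes the code needs only the first
 * 300 KB and paints the remaining RAM of the development board.            */
#define PAINT_START  (300UL * 1024UL)
#define PAINT_END    (5000UL * 1024UL)
static const char PATTERN[] = "DEADBEEF";

/* Executed before the intended code: fill the "forbidden" region with the
 * repeated pattern.                                                         */
void paint_memory(uint8_t *ram)
{
    for (size_t i = PAINT_START; i < PAINT_END; i++)
        ram[i] = (uint8_t)PATTERN[(i - PAINT_START) % (sizeof(PATTERN) - 1)];
}

/* Executed after the intended code: verify the painted region is untouched.
 * Returns true if no modification was detected.                            */
bool check_paint(const uint8_t *ram)
{
    for (size_t i = PAINT_START; i < PAINT_END; i++)
        if (ram[i] != (uint8_t)PATTERN[(i - PAINT_START) % (sizeof(PATTERN) - 1)])
            return false;
    return true;
}

/* Limitations noted above:
 *  - a read such as `x = ram[420 * 1024];` leaves the pattern intact and is
 *    never detected;
 *  - a write of a value identical to the painted byte, e.g.
 *    `ram[PAINT_START] = 'D';`, also passes the check.                      */
```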
The Applicants have also realized that such information regarding minimum RAM requirements would also be useful for entities that construct and produce such runtime platforms, which may often be constrained with regard to the storage capacity or the size of the memory unit that they can include in the runtime platform, due to form factor constraints, power consumption constraints, design constraints, or other constraints. Accordingly, some implementations enable testing and checking in advance whether a firmware code that is being developed on a development platform having a first circuit board with a RAM memory unit having a first RAM memory size, would run properly on a runtime platform having a second circuit board with a RAM memory unit having a second, smaller, RAM memory size. Some implementations may enable a developer or a development group or entity to ensure that a developed code that indeed runs properly on a development platform having a development board and a development RAM memory, would also run properly on runtime platforms having other sizes of RAM memory; and to ensure that such developed code would not improperly attempt, accidentally or intentionally, to read values from or to write values into a memory address that exists on the development platform but does not exist on the runtime platform. Some implementations may enable developers of firmware, and other types of code, to confidently ship such firmware or code to an entity that produces runtime platforms, without the need to later debug and fix the already-shipped code due to its failure to properly run on a runtime platform. Reference is made toFIG.1A, which is a block diagram illustration of a system100A, in accordance with some demonstrative implementations. System100A may be utilized to develop and test a code or a program (e.g., particularly firmware code, as a non-limiting example), which may be referred to as “the target code” or “the executable code”; to determine the minimum amount or size of RAM memory that is absolutely required to exist on a runtime platform in order to properly run the target code; and to test (e.g., iteratively) whether or not the target code would properly run on a runtime platform having a particular amount or size of RAM memory. In a demonstrative implementation, System100A may comprise a Development Platform105for developing a code, and a Testing Platform107for testing the code and determining its minimum RAM requirement at runtime (e.g., in order to determine what is the absolute minimum size of RAM memory that an IoT device or other target device should have in order to properly run the Target Code122therein). Development Platform105may include an Integrated Development Environment (IDE)110or other tool for efficient development of a code or program. For example, IDE110may include a Source-Code Editor111, enabling a developer to write and edit source-code. IDE110may further comprise a Compiler112, able to translate the source-code, typically from a high-level programming language to a lower-level language (e.g., machine code, object code, assembly language) and/or able to create from the source-code a machine-executable program. IDE110may further include one or more Build Tool(s)113or build automation tools, to facilitate the generation or the packaging of a “software build”. IDE110may include a Debugger114, enabling the target code to be run under controlled conditions that permit monitoring and tracking of the execution progress as well as the utilization and status of system resources. 
IDE110may include a User Interface (UI)115(e.g., graphical UI and/or text-based UI), which enables a developer or a development team to activate and to configure the relevant modules of IDE110. Optionally, IDE110may comprise, or may be associated with, other suitable modules or units; such as, for example, a version control system, a class browser, an object browser, or the like. A developer or a development team may utilize the Development Platform105and the IDE110in order to write and edit a Source Code121, and to generate from it an executable code which is Target Code122. Development Platform105may be implemented via, or on, a computerized device or a computer, for example, a laptop computer or a desktop computer or a workstation. Development Platform105may include suitable hardware components and software components, such as: a processor131able to execute code; a storage unit132able to store data long-term (e.g., a hard disk drive, a solid state drive, a non-volatile storage unit); a memory unit133able to store data short-term (e.g., a volatile memory unit, a RAM memory unit); one or more input units134(e.g., keyboard, mouse, touch-pad, touch-screen, audio microphone); one or more output units135(e.g., screen, touch-screen, audio speakers); one or more wired and/or wireless transceivers136(e.g., Wi-Fi transceiver, Bluetooth transceiver); or the like. System100A may include, or may be implemented using, other suitable hardware components and/or software components; for example, an Operating System (OS)137, one or more drivers and applications, a power source, or the like. In the implementation shown inFIG.1A, the target code122is generated on the Development Platform105; and is then copied or transferred to the Testing Platform107, for the purpose of testing the target code122to determine its minimum RAM requirement when it runs on a target IoT device or other target device. In such an implementation, the Development Platform105and the Testing Platform107are depicted as two separate devices or units or sub-systems. However, this is only a non-limiting example, and other implementations may be used, such that components of System100A may be distributed across a smaller or a larger number of devices or sub-systems. In a demonstrative example, a developer utilizes the Development Platform105and the IDE110to write Source Code121. The Source Code121is stored in storage unit132, and occupies (for example) 500 kilobytes there. The Source Code121is compiled into the executable code, which is Target Code122, which is also stored in storage unit132, and occupies (for example) 400 kilobytes there. In the implementation shown inFIG.1A, target code122may be copied or transferred from the development platform105, to the testing platform107that will perform the testing for determining the minimum RAM requirement of that target code122. In some implementations, the two size values that were mentioned above (e.g., the size of the source code, and the size of the executable target code) are still not indicative of the actual RAM memory that is required, as a minimum, in order to properly run the target code122on the target IoT device; since, for example, a target code122that occupies by itself 400 kilobytes may still require access to an additional memory region of 600 kilobytes for purposes of data processing at runtime. In a demonstrative example, a RAM memory unit140is shown, having a size of (for example) 2,000 kilobytes. 
A dedicated component or unit, such as a Memory Protection Unit/Memory Management Unit (MPU/MMU)139, may be included in system100A (e.g., as part of Testing Platform107), and may operate to manage, map and/or protect the RAM memory unit140and/or particular memory portions or memory regions or memory addresses therein. For example, the MPU/MMU139may receive from processor131a request to access a particular virtual memory address; and may translate it to the respective physical memory address. The MPU/MMU139may perform operations other than virtual/physical address translation; for example, configuration and management of virtual memory, management of page tables, memory protection (e.g., preventing a process from accessing a memory address that was not allocated to it), bus arbitration, cache control, or the like. System100A may comprise a Runtime Execution Unit141, which enables a developer to execute the Target Code122in a controlled manner, on the Testing Platform107. In accordance with some implementations, an MPU/MMU Configuration Unit142may be utilized, in order to configure or set or modify the registers of the MPU/MMU139, such that during runtime of the Target Code122by the Runtime Execution Unit141, (i) the MPU/MMU139would allow usage of, or access to, a first particular region or portion of RAM memory unit140(denoted Authorized Portion140A), and (ii) the MPU/MMU139would disallow or prevent or block usage of, or access to, a second particular region or portion of RAM memory unit140(denoted Blocked Portion140B). In a demonstrative example, the developer may utilize the MPU/MMU Configuration Unit142to configure the MPU/MMU139, such that: in the next execution of the Target Code122(e.g., on the Testing Platform107), only the first 300 kilobytes of RAM memory unit140would be defined and utilized as Authorized Portion140A and would be accessible by the Target Code122(e.g., would be accessible by the Runtime Execution Unit141that runs the Target Code122); and the remaining 1,700 kilobytes of RAM memory unit140would be defined and utilized as Blocked Portion140B and would not be accessible by the Target Code122(e.g., would not be accessible by the Runtime Execution Unit141that runs the Target Code122) even though those additional 1,700 kilobytes of RAM memory actually exist in the physical component of RAM memory unit140. During runtime of the Target Code122on the Runtime Execution Unit141, the MPU/MMU139enforces the memory access constraints as configured via the MPU/MMU Configuration Unit142; such that only the Authorized Portion140A of RAM memory unit140is accessible by the running code, and such that the Blocked Portion140B of RAM memory unit140is blocked and is not accessible by the running code. During runtime of Target Code122at the Runtime Execution Unit141, if the Target Code122attempts to access a RAM memory address (or memory region, or memory portion) that was defined as “forbidden” (e.g., located in Blocked Portion140B; located outside of Authorized Portion140A; located outside the runtime boundaries as defined via the MPU/MMU Configuration Unit142), then a Fault (or error, or exception) would occur, and a Fault Detector143may detect the fault and log it, and may notify the developer (in real time, or subsequently) that a memory access fault has occurred. 
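A highly simplified C++ model of the Authorized/Blocked partition, and of the fault that a blocked access triggers, is sketched below; in the system described above this enforcement is performed by the hardware MPU/MMU139rather than by software, so the class and its names are illustrative assumptions only:

#include <cstddef>
#include <cstdint>
#include <stdexcept>
#include <vector>

// Software model of a RAM unit that is partitioned into an Authorized Portion and a Blocked
// Portion; any access beyond the authorized boundary raises the equivalent of a memory
// access fault that a fault detector could log and report.
class ModeledRam {
public:
    ModeledRam(std::size_t total_bytes, std::size_t authorized_bytes)
        : cells_(total_bytes, 0), authorized_bytes_(authorized_bytes) {}

    uint8_t read(std::size_t address) const {
        check(address);              // a read beyond the boundary faults, exactly like a write
        return cells_[address];
    }

    void write(std::size_t address, uint8_t value) {
        check(address);
        cells_[address] = value;
    }

private:
    void check(std::size_t address) const {
        if (address >= authorized_bytes_)
            throw std::out_of_range("memory access fault: address is in the blocked portion");
    }

    std::vector<uint8_t> cells_;
    std::size_t authorized_bytes_;
};

// Example with the figures used above: 2,000 KB of physical RAM, only the first 300 KB authorized.
// ModeledRam ram(2000 * 1024, 300 * 1024);
// ram.write(200 * 1024, 0x5A);   // allowed: inside the authorized portion
// ram.read(420 * 1024);          // throws: read access into the blocked portion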
In some implementations, optionally, Fault Detector143may be configured or constructed to track, log, or report one or more types of faults or errors, for example: a memory read error or fault; a memory write error or fault; a memory access error or fault; an error or fault in (or related to) memory mapping or memory allocation, such as a failure of a malloc function or other memory allocation operation; a “crash” or an unexpected stopping of the executed code; non-responsiveness of the code, or the code being stuck in an infinite loop; or other faults or errors. Optionally, an Iterative Memory-Boundaries Configurator144may be utilized in order to automate and implement a process, in which the memory boundaries for the runtime of the Target Code122(e.g., the size or the relative size of Authorized Portion140A and/or Blocked Portion140B, or their ratio) are defined and then re-defined iteratively. Based on such runtime iterations, a Minimum Required RAM Determination Unit147may operate to determine the minimum size of RAM memory that is required in order for the Target Code122to properly run (e.g., to run without causing any memory access fault or memory access error). In a demonstrative example, the Iterative Memory-Boundaries Configurator144may implement a binary search algorithm (or a “binary chop” search algorithm, or a half-interval search algorithm) to find the minimum RAM memory that would be required at runtime of Target Code122. It is noted that the binary search algorithm is only a non-limiting example, which may be implemented via a Required RAM Searcher Unit151; which, in other implementations, may utilize other suitable types of search algorithms in order to achieve the same goal. For example, the full size of RAM memory unit140is 2,000 kilobytes; the Iterative Memory-Boundaries Configurator144divides this by two, and configures the MPU/MMU139to allow runtime access to only the first 1,000 kilobytes of RAM memory unit140. Such runtime of Target Code122is monitored, and no memory access faults are detected by the Fault Detector143(e.g., within a complete run; or within a run that performs a particular set of operations; or within a run that spans a pre-defined number of operations or time). Then, the Iterative Memory-Boundaries Configurator144proceeds automatically to divide by half the last used boundary (which was 1,000 kilobytes), and it configures the MPU/MMU139to allow runtime access to only the first 500 kilobytes of RAM memory unit140. Then, another run of Target Code122is executed (with enforcement of the new memory boundary of 500 kilobytes), and memory access faults are monitored but are not detected. Then, the Iterative Memory-Boundaries Configurator144again halves the previous boundary (which was 500 kilobytes), and configures the MPU/MMU139to allow access only to 250 kilobytes of RAM memory unit140. In such run of Target Code122, a memory access fault is detected; thereby indicating that while 500 kilobytes of RAM memory are sufficient, 250 kilobytes of RAM memory are not sufficient. In the next iteration, the Iterative Memory-Boundaries Configurator144halves the interval between 250 and 500 kilobytes, thereby setting a new RAM memory boundary of 375 kilobytes, and triggering an additional run of Target Code122with such boundary. 
If this run (at a 375 kilobyte memory boundary) lacks any memory access fault, then the next run of Target Code122would be performed with a memory boundary of 313 kilobytes (e.g., approximately the average of 250 and 375 kilobytes); whereas, if that run (at a 375 kilobyte memory boundary) caused a memory access fault, then the next run of Target Code122would be performed with a memory boundary of 438 kilobytes (e.g., approximately the average of 375 and 500 kilobytes). The iterative process may be repeated for a pre-defined number of iterations (e.g., for 10 iterations); or, until the interval of memory range that needs to be checked is smaller than a pre-defined threshold value (e.g., the system determined that the minimum required RAM memory is “between 250 and 283 kilobytes”, and that interval of 33 kilobytes is smaller than a pre-defined value of, for example, 50 kilobytes). Accordingly, system100A and its Iterative Memory-Boundaries Configurator144may enable efficient, rapid, iterative testing of the minimum RAM memory size that is required by Target Code122in order to run properly; without the need to re-compile the Source Code121or to re-build the Target Code122or to generate multiple different images of the Target Code122for this purpose. Rather, a single version of the executable Target Code122, or a single image of the executable Target Code122, or a single compiled Target Code122, may be used and re-used, repeatedly and iteratively, without re-building of images or re-compilation into multiple versions of executables; by causing dynamic configuration and re-configuration of the MPU/MMU139during runtime (or, immediately prior to each run of Target Code122; or, immediately at the commencement of each run of Target Code122). Additionally or alternatively, system100A may include a RAM Sufficiency Determination Unit146, which may enable a developer to efficiently and rapidly test and verify whether a particular Target Code122(e.g., firmware, executable code) is expected to run properly on a runtime platform that would have a particular RAM memory size. In a demonstrative implementation, a developer receives a request to develop a firmware code that would require only 320 kilobytes of RAM memory at runtime. The developer utilizes the Development Platform105to write an initial source code121, and to generate (compile, build) from it an initial Target Code122, denoted as Build1. The developer utilizes the Testing Platform107and the MPU/MMU Configuration Unit142to configure the MPU/MMU139to allow runtime access of the Target Code122to only 320 kilobytes of RAM memory (even though the Development Platform105itself or the Testing Platform107itself has RAM memory of (for example) 2,000 kilobytes). The run of Target Code122(Build1) is thus executed via the Testing Platform107with an enforced constraint of access to only 320 kilobytes of RAM memory. In a demonstrative example, this run causes a memory access fault, which is detected by the Fault Detector143and is reported or notified to the developer. Accordingly, the developer reviews and modifies his source code121, and generates from it a modified Target Code122(Build2), which he again tests with a memory access boundary of 320 kilobytes. This version (Build2) of the Target Code122did not generate any memory access faults; and the developer now knows that this Target Code122(Build2) is expected to run properly on a runtime platform or a runtime device that is equipped with only 320 kilobytes of RAM memory. 
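A compact sketch of the half-interval search that the Iterative Memory-Boundaries Configurator144and the Minimum Required RAM Determination Unit147could perform is shown below; run_without_fault is a hypothetical harness (not part of the description above) that would configure the MPU/MMU139to the given boundary, execute the unchanged Target Code122image, and report whether the Fault Detector143observed a memory access fault:

#include <cstddef>
#include <functional>

// Hypothetical harness: configure the MPU to 'ram_limit' bytes, run the unchanged target
// image, and return true only if the run completed without any memory access fault.
using RunWithoutFault = std::function<bool(std::size_t ram_limit)>;

// Half-interval ("binary chop") search for the minimum RAM boundary, in bytes.
std::size_t find_minimum_ram(std::size_t total_ram,
                             std::size_t resolution,   // stop once the interval is this small
                             const RunWithoutFault& run_without_fault) {
    std::size_t low = 0;             // largest boundary known (or assumed) to be insufficient
    std::size_t high = total_ram;    // smallest boundary known to be sufficient (full RAM assumed to work)
    while (high - low > resolution) {
        std::size_t mid = low + (high - low) / 2;
        if (run_without_fault(mid))
            high = mid;              // mid bytes were sufficient; try a smaller boundary
        else
            low = mid;               // mid bytes were insufficient; a larger boundary is needed
    }
    return high;                     // smallest boundary that ran cleanly, to within 'resolution'
}

// With the figures used above (2,000 KB of RAM, a 50 KB stopping interval), the boundaries
// tried would be 1,000 KB, 500 KB, 250 KB, 375 KB, and so on:
// std::size_t min_ram = find_minimum_ram(2000 * 1024, 50 * 1024, run_target_code_with_boundary);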
In some implementations, system100A may further be utilized for other purposes; such as, for debugging code, and for detecting code that attempts to access a memory address or a memory region that is “forbidden” for such code to access, or that was not allocated to a particular code or process or thread or program, or for discovering unexpected attempts by a code to access a memory address or a memory region that the developer did not intend such code to access. Some implementations may be utilized for other purposes that may involve tracking of memory usage, or that may involve tracking of memory address space access. The Applicants have realized that system100A may provide benefits or advantages that are not available from conventional systems, and that are not available from (for example) a conventional emulator or simulator. For example, a conventional emulator program or simulator program may run on a laptop computer having a RAM memory of 8,000 kilobytes, to emulate or to simulate a different computing device that has a RAM memory of only 1,000 kilobytes. However, such conventional emulators or simulators only provide an emulated environment or a simulated environment, and by definition they cannot ensure that the emulated or simulated program does not access memory regions or memory space that is beyond the intended memory space that was prescribed for the emulation or simulation purposes. In that scenario, an emulated program may run properly on the emulator program which emulates a 1,000 kilobyte RAM memory; yet in fact, the emulated program might perform a read access from a memory address that is beyond those 1,000 kilobytes of RAM memory, without necessarily triggering a fault or a crash. Similarly, in that scenario, an emulator or a simulator program may fail to properly detect or to properly report a forbidden memory access, or may handle it in a manner that conceals it rather than reporting it. Furthermore, even an observation that a memory access fault has occurred in a simulated or emulated environment may still be a result of other issues and may not necessarily be indicative of the RAM memory size that is required; for example, such a fault may occur due to a failure of the emulator or simulator program itself, or due to incorrect programming or configuring of such emulator or simulator program, or due to failure of such emulator or simulator to provide perfect or seamless emulation or simulation, or the like. In contrast, system100A and its implementations may be utilized to run the actual Target Code122, directly on an actual processor and with access to a RAM memory, without the need for intermediate emulators or simulators that affect the process and that provide only emulated or simulated versions of the runtime environment. Reference is made toFIG.1B, which is a block diagram illustration of a system100B, in accordance with some demonstrative implementations. System100B may be generally similar to system100A ofFIG.1A; however, all the components may be implemented as a single, unified, apparatus that enables both development and testing of the Target Code122, rather than separating the system into a Development Platform and a Testing Platform. This is only a non-limiting example; and other implementations may utilize a different number of devices or units, or may distribute the components of the system across multiple devices in other suitable ways. 
Reference is made toFIG.2, which is a schematic illustration of a set of components200, demonstrating RAM memory usage in accordance with some implementations. For example, a processor201may execute the instructions of an executable code202. Access of processor201to a RAM memory unit222(e.g., of 1,000 kilobytes) is performed via an MPU/MMU205, which defines two portions or two regions in RAM memory unit222: an Authorized Region222A (e.g., having the first 300 kilobytes), and a Blocked Region222B (e.g., having the remaining 700 kilobytes). As the processor201executes the executable code202, an instruction in executable code202attempts to cause the processor to read data from a memory address located in the 200th kilobyte of RAM memory, which is within the Authorized Region222A. Accordingly, the MPU/MMU205authorizes and allows such read operation, and the read value is returned to the processor201. This sequence of operations is indicated by the two double-sided arrows211and212, indicating the authorized access. In contrast, another instruction in executable code202later attempts to read data from a memory address located in the 400th kilobyte of RAM memory, which is beyond or is external to the Authorized Region222A, and is within the Blocked Region222B. Accordingly, the MPU/MMU205blocks and prevents such read operation, and returns a Fault or an Error or an Exception signal or message or flag to processor201. This sequence of operations is indicated by the single double-sided arrow213, and by a lack of an additional arrow or line between the MPU/MMU205and the Blocked Region222B, since the access to that memory region is blocked by the MPU/MMU205and is not performed. In accordance with some implementations, a system comprises: a Memory Protection Unit (MPU) associated with a Random Access Memory (RAM) memory unit; and an MPU configuration unit, to indicate to said MPU a limited size of RAM that would be available to a particular executable code during its runtime. The MPU is (i) to allow access of said particular executable code, during its runtime, to said limited size of RAM in said RAM memory unit, and (ii) to block any access of said particular executable code, during its runtime, to a forbidden memory region which includes any other region of said RAM memory unit that is beyond said limited size of RAM. In some implementations, the system further comprises: a fault detector to detect that a run of said particular executable code causes a memory access fault due to access to said forbidden memory region; and to generate a notification that said limited size of RAM is insufficient for proper execution of said particular executable code. In some implementations, the system further comprises: a fault detector to check whether or not a run of said particular executable code causes a memory access fault due to access of said particular executable code to said forbidden memory region; a RAM sufficiency determination unit, to determine that a particular size of RAM memory is sufficient for proper execution of said particular executable code, based on lack of detection of memory access faults. In some implementations, said limited size of RAM memory, that would be accessible by said executable code, (i) is not defined in a source code of said particular executable code, and (ii) is not defined by said particular executable code. 
In some implementations, said limited size of RAM memory, that would be accessible by said executable code, (i) is not defined in said source code, and (ii) is not defined by said particular executable code, and (iii) is set by the MPU configuration unit prior to runtime of said particular executable code, and (iv) is enforced by the MPU during runtime of said particular executable code. In some implementations, an apparatus comprises: a Memory Protection Unit (MPU) associated with a Random Access Memory (RAM) memory unit; wherein the MPU is to enable access of an executable code during its runtime, only to an allowed portion of said RAM memory unit; wherein the MPU is to prevent access of said executable code during its runtime, to a forbidden portion of said RAM memory unit, said forbidden portion comprising all memory space that is not in said allowed portion; an Iterative Memory-Boundaries Configurator unit, to iteratively modify a configuration of said MPU, by modifying in each iteration the size of the allowed portion of RAM that would be available to said executable code during runtime. In some implementations, the apparatus comprises: a fault detector to check, in each iteration, whether or not a run of said executable code caused a memory access fault due to access of said executable code to said forbidden portion of said RAM memory unit. In some implementations, the apparatus comprises or is operatively associated with: a Minimum Required RAM Determination Unit, to determine a minimum size of RAM that is required for proper operation of said executable code, based on output received from said fault detector indicating (i) which one or more iterations caused a memory access fault, and indicating (ii) which one or more iterations did not cause a memory access fault. In some implementations, the apparatus comprises or is operatively associated with: a required RAM searcher unit, to implement a search algorithm that searches for the minimum size of RAM that is required for proper operation of said executable code, based on a plurality of iterations of running said executable code with a different limit of available RAM that is enforced by said MPU. In some implementations, the required RAM searcher unit implements a binary search algorithm to determine the minimum size of RAM that is required for proper operation of said executable code. In some implementations, in each iteration, the same image of said executable code is executed on the same processor and on the same RAM memory unit, wherein the size of RAM memory that is accessible by said executable code at runtime is modified at the beginning of each iteration. In some implementations, in each iteration, the same executable code is executed without being re-compiled and without a need to edit or modify any source code of said executable code. 
In some implementations, a method comprises: (a) dynamically modifying, in an iterative process comprising two or more iterations, a maximum size of Random Access Memory (RAM) that a Memory Protection Unit (MPU) authorizes an executable program code to access; (b) in each of said iterations, running said executable program code while said MPU enforces a different maximum size of RAM that the executable program code is authorized to access, and monitoring whether said executable program code attempted to access a RAM memory address that is beyond said maximum size of RAM in said iteration; (c) based on said iterations, determining a minimum size of RAM that is required for said executable program code to run without causing a memory access fault. In some implementations of the method, said executable program code is an executable firmware code; wherein step (b) comprises running said executable firmware code while said MPU enforces said maximum size of RAM; wherein step (c) comprises determining the minimum size of RAM that is required for said executable firmware code to run without causing a memory access fault. In some implementations, the maximum size of RAM, that the MPU enforces in each iteration of running said executable program code, is determined based on a pre-defined search algorithm. In some implementations, the maximum size of RAM, that the MPU enforces in each iteration of running said executable program code, is determined based on a pre-defined Binary Search algorithm. In some implementations, each of said iterations comprises running an image of said executable program code that is identical across all of said iterations. In some implementations, each of said iterations comprises running a same single executable program code, without modifying in each iteration any memory limit that is set in a source code of said program code. In some implementations, each of said iterations comprises running a same single executable program code, without modifying in each iteration any memory limit that is set in a source code of said program code, and without re-compiling said source code for each of said iterations, and without creating a new image of said executable program code for each of said iterations. In some implementations, the method is performed automatically based on a user command to an Integrated Development Environment (IDE) to determine the minimum RAM requirements for said executable program code, and without requiring said user to modify a source code of said executable program code. Some implementations include devices, systems, and methods of determining memory requirements and tracking memory usage. For example, a method includes: dynamically modifying, in an iterative process including two or more iterations, a maximum size of Random Access Memory (RAM) that a Memory Protection Unit (MPU) authorizes an executable program code to access. In each iteration, the method includes running that executable program code while the MPU enforces a different maximum size of RAM, and monitoring whether the executable program code attempted to access a RAM memory address that is beyond that maximum size of RAM in that iteration. Based on such iterations, the method determines a minimum size of RAM that is required for that executable program code to run without causing a memory access fault. 
In some implementations, calculations, operations and/or determinations may be performed locally within a single device, or may be performed by or across multiple devices, or may be performed partially locally and partially remotely (e.g., at a remote server) by optionally utilizing a communication channel to exchange raw data and/or processed data and/or processing results. Although portions of the discussion herein relate, for demonstrative purposes, to wired links and/or wired communications, some implementations are not limited in this regard, but rather, may utilize wired communication and/or wireless communication; may include one or more wired and/or wireless links; may utilize one or more components of wired communication and/or wireless communication; and/or may utilize one or more methods or protocols or standards of wireless communication. Some implementations may utilize a special-purpose machine or a specific-purpose device that is not a generic computer, or may use a non-generic computer or a non-general computer or machine. Such system or device may utilize or may comprise one or more components or units or modules that are not part of a “generic computer” and that are not part of a “general purpose computer”, for example, cellular transceiver, cellular transmitter, cellular receiver, GPS unit, location-determining unit, accelerometer(s), gyroscope(s), device-orientation detectors or sensors, device-positioning detectors or sensors, or the like. Some implementations may utilize an automated method or automated process, or a machine-implemented method or process, or as a semi-automated or partially-automated method or process, or as a set of steps or operations which may be executed or performed by a computer or machine or system or other device. Some implementations may utilize code or program code or machine-readable instructions or machine-readable code, which may be stored on a non-transitory storage medium or non-transitory storage article (e.g., a CD-ROM, a DVD-ROM, a physical memory unit, a physical storage unit), such that the program or code or instructions, when executed by a processor or a machine or a computer, cause such processor or machine or computer to perform a method or process as described herein. Such code or instructions may be or may comprise, for example, one or more of: software, a software module, an application, a program, a subroutine, instructions, an instruction set, computing code, words, values, symbols, strings, variables, source code, compiled code, interpreted code, executable code, static code, dynamic code; including (but not limited to) code or instructions in high-level programming language, low-level programming language, object-oriented programming language, visual programming language, compiled programming language, interpreted programming language, C, C++, C#, Java, JavaScript, SQL, Ruby on Rails, Go, Cobol, Fortran, ActionScript, AJAX, XML, JSON, Lisp, Eiffel, Verilog, Hardware Description Language (HDL), Register-Transfer Level (RTL), BASIC, Visual BASIC, Matlab, Pascal, HTML, HTML5, CSS, Perl, Python, PHP, machine language, machine code, assembly language, or the like. 
Discussions herein utilizing terms such as, for example, “processing”, “computing”, “calculating”, “determining”, “establishing”, “analyzing”, “checking”, “detecting”, “measuring”, or the like, may refer to operation(s) and/or process(es) of a processor, a computer, a computing platform, a computing system, or other electronic device or computing device, that may automatically and/or autonomously manipulate and/or transform data represented as physical (e.g., electronic) quantities within registers and/or accumulators and/or memory units and/or storage units into other data or that may perform other suitable operations. The terms “plurality” and “a plurality”, as used herein, include, for example, “multiple” or “two or more”. For example, “a plurality of items” includes two or more items. References to “one embodiment”, “an embodiment”, “demonstrative embodiment”, “various embodiments”, “some embodiments”, and/or terms such as “implementation” or “implementations”, may indicate that the embodiment(s) or implementation(s) so described may optionally include a particular feature, structure, functionality, or characteristic, but not every embodiment or implementation necessarily includes the particular feature, structure, functionality or characteristic. Furthermore, repeated use of the phrase “in one embodiment” or “in one implementation”, does not necessarily refer to the same embodiment, although it may. Similarly, repeated use of the phrase “in some embodiments” or “in some implementations”, does not necessarily refer to the same set or group of embodiments or implementations, although it may. As used herein, and unless otherwise specified, the utilization of ordinal adjectives such as “first”, “second”, “third”, “fourth”, and so forth, to describe an item or an object, merely indicates that different instances of such like items or objects are being referred to; and is not intended to imply that the items or objects so described must be in a particular given sequence, either temporally, spatially, in ranking, or in any other ordering manner. Some implementations may be used in, or in conjunction with, various devices and systems, for example, a Personal Computer (PC), a desktop computer, a mobile computer, a laptop computer, a notebook computer, a tablet computer, a server computer, a handheld computer, a handheld device, a Personal Digital Assistant (PDA) device, a handheld PDA device, a tablet, an on-board device, an off-board device, a hybrid device, a vehicular device, a non-vehicular device, a mobile or portable device, a set-top box, a cable television box or receiver or decoder, a satellite-based television box or receiver or decoder, a consumer device, a non-mobile or non-portable device, an appliance, a wireless communication station, a wireless communication device, a wireless Access Point (AP), a wired or wireless router or gateway or switch or hub, a wired or wireless modem, a video device, an audio device, an audio-video (A/V) device, a wired or wireless network, a wireless area network, a Wireless Video Area Network (WVAN), a Local Area Network (LAN), a Wireless LAN (WLAN), a Personal Area Network (PAN), a Wireless PAN (WPAN), or the like. 
Some implementations may be used in conjunction with one way and/or two-way radio communication systems, cellular radio-telephone communication systems, a mobile phone, a cellular telephone, a wireless telephone, a Personal Communication Systems (PCS) device, a PDA or handheld device which incorporates wireless communication capabilities, a mobile or portable Global Positioning System (GPS) device, a device which incorporates a GPS receiver or transceiver or chip, a device which incorporates an RFID element or chip, a Multiple Input Multiple Output (MIMO) transceiver or device, a Single Input Multiple Output (SIMO) transceiver or device, a Multiple Input Single Output (MISO) transceiver or device, a device having one or more internal antennas and/or external antennas, Digital Video Broadcast (DVB) devices or systems, multi-standard radio devices or systems, a wired or wireless handheld device, e.g., a Smartphone, a Wireless Application Protocol (WAP) device, or the like. Some implementations may comprise, or may be implemented by using, an “app” or application which may be downloaded or obtained from an “app store” or “applications store”, for free or for a fee, or which may be pre-installed on a computing device or electronic device, or which may be otherwise transported to and/or installed on such computing device or electronic device. Functions, operations, components and/or features described herein with reference to one or more implementations, may be combined with, or may be utilized in combination with, one or more other functions, operations, components and/or features described herein with reference to one or more other implementations. Some implementations may comprise any possible or suitable combinations, re-arrangements, assembly, re-assembly, or other utilization of some or all of the modules or functions or components or units that are described herein, even if they are discussed in different locations or different chapters of the above discussion, or even if they are shown across different drawings or multiple drawings. While certain features of some demonstrative implementations have been illustrated and described herein, various modifications, substitutions, changes, and equivalents may occur to those skilled in the art. Accordingly, the claims are intended to cover all such modifications, substitutions, changes, and equivalents.
49,038
11860796
DETAILED DESCRIPTION Various embodiments and aspects will be described with reference to details discussed below, and the accompanying drawings will illustrate the various embodiments. The following description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding of various embodiments. However, in certain instances, well-known or conventional details are not described in order to provide a concise discussion of embodiments. Reference in the specification to “one embodiment” or “an embodiment” or “some embodiments” means that a particular feature, structure, or characteristic described in conjunction with the embodiment can be included in at least one embodiment. The appearances of the phrase “embodiment” in various places in the specification do not necessarily all refer to the same embodiment. It should be noted that there can be variations to the flow diagrams or the steps (or operations) described therein without departing from the embodiments described herein. For instance, the steps can be performed in parallel, simultaneously, or in a differing order, or steps can be added, deleted, or modified. Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments. It will also be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact could be termed a second contact, and, similarly, a second contact could be termed a first contact, without departing from the scope of the present invention. The first contact and the second contact are both contacts, but they are not the same contact. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the description of the invention and the appended claims, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context. 
Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” may be construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context. Embodiments of data processing systems, user interfaces for such devices, and associated processes for using such devices are described. In some embodiments, the data processing system may comprise a portable communication device such as a mobile telephone that also contains other functions, such as PDA and/or music player functions. Exemplary embodiments of portable multifunction devices include, without limitation, the iPhone®, iPad®, and iPod Touch® devices from Apple Computer, Inc. of Cupertino, Calif. As described above, operating systems typically include a kernel space, which has access to substantially all resources and memory on the data processing system, and a user space in which user operations are performed and which has limited access to system resources and memory. Some existing operating systems manage drivers in the kernel space of the operating system, thereby affording the drivers direct access to the hardware that the device drivers control. However, this arrangement can present security risks in certain situations. In some instances it may be beneficial for the primary logic of device drivers to be moved to user space, e.g., to enhance security and to isolate driver capabilities. Subject matter described herein addresses these and other issues by providing systems and methods to enable execution space agnostic device drivers by allowing arbitrary device management logic to reside in user space of an operating system, rather than in kernel space of the operating system. In some examples described herein, a toolkit is provided to enable execution space agnostic device driver functionality. The toolkit enables a significant portion of the code of a device driver to reside in the user space of an operating system, rather than in the kernel space of the operating system. The user space driver can access kernel level functionality via a kernel proxy object that resides in the kernel space. The kernel proxy object can receive messages from the user space driver and perform direct access to the hardware on behalf of the user space driver. Access functionality that was previously linked with the main driver logic resident in the kernel space may be managed in the kernel space of the operating system. Further, the driver logic that executes in the user space communicates with the access functionality in the kernel space via an IPC interface. The IPC interface is dynamically generated based on class descriptions provided in the user space driver and the IPC relay is transparent to the driver logic, thereby enabling existing drivers to be migrated into user space with very little code modification. In some device drivers, functionality can be written that is agnostic to the execution space of the actual driver logic. Driver logic that executes in user space that performs an action that requires kernel execution privileges can execute that functionality via a proxy into kernel space logic. User mode logic can access resources (e.g., memory) that are allocated in kernel mode via the use of access rights that are vended to logic with legitimate access to those resources. 
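A schematic C++ sketch of the proxy idea described above, with the IPC transport reduced to a direct call so that the example is self-contained, might look as follows; the class names and the message format are illustrative assumptions rather than the actual toolkit API:

#include <cstdint>
#include <cstdio>
#include <string>

// Kernel-side handler standing in for the kernel proxy object: it receives a message and
// performs the privileged hardware access on behalf of the user space driver.
std::string kernel_proxy_handle(const std::string& request) {
    std::printf("kernel proxy: performing privileged access for '%s'\n", request.c_str());
    return "ok";
}

// Placeholder for the IPC channel; a real system would send a message (e.g., over an IPC
// link), but here the request is handed to the kernel-side handler directly so the sketch
// remains self-contained.
struct IpcChannel {
    std::string send(const std::string& request) { return kernel_proxy_handle(request); }
};

// User-space side: the driver logic calls an ordinary method and never sees the IPC crossing.
class UserSpaceDeviceProxy {
public:
    explicit UserSpaceDeviceProxy(IpcChannel channel) : channel_(channel) {}

    bool write_register(uint32_t offset, uint32_t value) {
        return channel_.send("write reg " + std::to_string(offset) + "=" + std::to_string(value)) == "ok";
    }

private:
    IpcChannel channel_;
};

// Usage: UserSpaceDeviceProxy dev{IpcChannel{}}; dev.write_register(0x10, 0x1);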
In one embodiment, both user mode logic and kernel mode logic make use of explicit access rights to a resource when accessing the resource, for example, via a capability model. In one embodiment, access rights are implemented transparently for user space objects by the IPC interface, for example, by exchanging the port send and receive rights that are used to implement the underlying IPC channels. FIG.1is a schematic illustration of an operating system environment100in which managing drivers in user space may be implemented, according to embodiments. In one embodiment the operating system environment100includes a kernel space110and a user space150. Driver representations122can be stored in an I/O framework catalog120, with kernel mode drivers defined as kernel extensions and user mode drivers defined as user space drivers. User space drivers can have functionality that largely resides in user space150. User space drivers can also include logic that is execution space agnostic and can reside in either kernel space110or user space150. Some user space drivers can be configured with operational logic that is mixed between kernel space110and user space150. The driver representations122can include key-value pairs that are retrieved from the information property list of each driver. The key-value pairs can define a matching dictionary that is used to determine which driver is most suitable for each discovered device. The list of potential drivers can be narrowed by eliminating drivers of the wrong device type. The list of potential drivers can be further narrowed by eliminating drivers that do not match the specified vendor for the device. The remaining drivers may be allowed to actively probe the hardware and determine a probe score that indicates the assessment of the driver as to its suitability to drive the detected device. In one embodiment, drivers can also be assigned a priority that may favor one driver within a family over other drivers in the same family. Based on the probe score and/or priority, a driver can be selected to attempt to start the device and verify the driver's ability to drive the device. Within this matching process, user space drivers can directly compete against kernel space drivers to own a given device. In one embodiment each driver has an I/O framework object130that represents the driver in the kernel space110. In some examples the I/O framework object130maintains a communication connection180to the device driver160running in a process in user space150. When an I/O framework process associated with the I/O framework object130is to create a driver object, the I/O framework process can synchronously spawn a driver daemon launcher132to launch the driver object or asynchronously trigger the creation of the driver object for the user space device driver160. In some examples user space device drivers160specify a server name162and a user class164. The server name162is a rendezvous name for the process that hosts the driver environment in user space150and indicates to the system that a user space process needs to be launched and to wait for the process to check in with the kernel space110. The user class164specifies the name of the user space driver160. In some examples user space device drivers also specify a generic kernel class166, which is a generic kernel object that translates application programming interfaces (APIs) to the actual implementation of a class which lives in user space150. 
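The narrowing and probe-score selection of the matching process described above can be pictured with a small sketch; the structure fields, the probe callback, and the tie-breaking rule are assumptions made for illustration only:

#include <functional>
#include <optional>
#include <string>
#include <vector>

// Simplified stand-in for a driver representation built from a driver's matching dictionary.
struct DriverCandidate {
    std::string name;
    std::string device_type;                 // family of device the driver claims to support
    std::optional<std::string> vendor;       // empty means "matches any vendor"
    std::function<int()> probe;              // actively probes the hardware and returns a score
    int priority = 0;                        // may favor one driver within a family over another
};

// Narrow the catalogue by device type and vendor, let the remaining drivers probe, and pick
// the best probe score (priority as a tie-breaker). User space and kernel space drivers
// would compete on equal terms in this selection.
std::optional<DriverCandidate> match_driver(const std::vector<DriverCandidate>& catalogue,
                                            const std::string& device_type,
                                            const std::string& vendor) {
    std::optional<DriverCandidate> best;
    int best_score = -1;
    for (const auto& d : catalogue) {
        if (d.device_type != device_type) continue;        // wrong device type
        if (d.vendor && *d.vendor != vendor) continue;      // wrong vendor
        int score = d.probe ? d.probe() : 0;                // let the driver assess the device
        if (score > best_score ||
            (score == best_score && best && d.priority > best->priority)) {
            best = d;
            best_score = score;
        }
    }
    return best;   // the selected driver would then be started to verify it can drive the device
}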
The kernel class may be a family-implemented class, or a kernel-implemented generic class. This class is responsible for bridging kernel-originated calls through to the user-level driver via automatically generated API code. The user space device driver160further comprises a core driver logic168which resides in user space150. Calls between the device driver160in user space150and the kernel space110may be handled via proxy objects. A proxy object170in user space150can be used to pass API messages to the kernel space110. A proxy object134in kernel space may be used to pass API messages to the proxy object170in user space150. The proxying is performed transparently to the driver logic implementation. In some examples the proxy messages between the kernel space110and the user space150are relayed via an inter-process communication (IPC) link such as, for example, the Mach IPC process, which enables a full-featured capability model to be implemented for drivers. Other inter-process communication techniques may also be used. Before core driver logic168in user space150is permitted to access any object on the other side of a proxy object170, an explicit capability to access that object must be granted. Drivers and clients of those drivers can be mutually untrusted. Mach IPC is implemented using messages exchanged over Mach ports, and messages can contain rights to access a port. Each Mach port can be associated with one receive right and one or more send rights and send-once rights, thereby creating a unidirectional communication channel that allows one receiver and one or more senders. A receive right names a queue associated with a port and authorizes the holder to dequeue messages from the queue (i.e., read from the port). The receive right also allows the holder to create send and send-once rights. Send and send-once rights designate a queue and authorize the holder of the right to enqueue messages (e.g., send to the port). Receive rights to a port can be sent in a message that is transmitted via another port. Messages can be exchanged over proxy endpoints only if the appropriate rights have been vended by the kernel. In some examples an OSAction object may encapsulate a callback between processes. Internally, the OSAction object holds the target object instance and method with a reference constant. The driver160that is sent the action only sees an object by ID, and does not have access to the calling process's data. It can only invoke the action by calling a DoAsyncReadWriteCompletion( ) method with API-specific arguments. Kernel logic in kernel space110owns the actual OSAction object. The kernel can be configured such that only drivers which have been issued an I/O can complete it and that clients only ever complete I/O operations that they have been issued. Integrity may be secured by the kernel for the contents of the OSAction object via Mach IPC semantics, although embodiments are not limited to the use of Mach IPC. Wrapping callbacks in OSAction with an opaque ID to which the kernel controls access prevents tampering and allows clients and drivers to be mutually untrusted. In some examples a build tool generates the boiler plate headers and code needed to marshal arguments, generating code for both kernel space110and user space150versions of a defined API. In the kernel, methods may be defined as a single dispatch function and non-virtual methods that can be added to existing classes. 
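A schematic sketch of the callback encapsulation described above might look like the following; the registry class, the completion signature, and the opaque-ID scheme are simplified assumptions, and in the real system the action object lives in, and is protected by, the kernel:

#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <functional>
#include <map>

// Kernel-side registry of action objects. The driver that is handed an action only ever sees
// the opaque identifier; the target object, method, and reference constant stay in the kernel.
class ActionRegistry {
public:
    uint64_t create_action(std::function<void(int status, std::size_t bytes)> completion) {
        uint64_t id = next_id_++;
        actions_[id] = std::move(completion);
        return id;                             // opaque ID handed to the driver with the I/O
    }

    // Invoked when the driver calls its completion entry point with API-specific arguments.
    void invoke(uint64_t id, int status, std::size_t bytes) {
        auto it = actions_.find(id);
        if (it == actions_.end()) {
            std::printf("rejected completion for unknown action id\n");
            return;                            // only I/Os that were actually issued can be completed
        }
        auto completion = std::move(it->second);
        actions_.erase(it);                    // a completion can be delivered only once
        completion(status, bytes);
    }

private:
    uint64_t next_id_ = 1;
    std::map<uint64_t, std::function<void(int, std::size_t)>> actions_;
};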
The kernel can consume one pad slot in the OSObjectBase class (i.e., the very top) to be used for the Dispatch( ) method. In user space150an OSObject is defined to try to avoid fragile base class problems and avoid the custom linker currently used for kernel C++ binary compatibility. The only virtual method is the Dispatch( ) method and it is implemented in all classes. Each class defines a single instance variable (ivar), which is a pointer to the specific instance variables for that version of the class implementation. Protected access to superclass ivars isn't allowed other than by accessor methods. The size of the object therefore never changes. This allows superclasses to change without needing to recompile subclasses. Class and method names may be hashed to arrive at a globally unique ID for each method in each class. This avoids the fragile base class problem by relying on the strength of the chosen hash algorithm to ensure there are no ID collisions. Furthermore, IDs only need to be unique within a particular class hierarchy because the RPC to an object will first arrive there based on a Mach send right to the C++ object and, once there, look at the payload to find the destination method's ID. FIG.2is a schematic illustration of an operating environment200in which managing drivers in user space may be implemented, according to embodiments. In one embodiment the environment includes a kernel210, a first user space driver host230, and a second user space driver host250. The kernel210can include an IOUserServer212which receives interrupts214, timer events216, and kernel client remote procedure calls219, and which represents a process hosting any number of I/O framework objects. The IOUserServer212is an IOUserClient subclass and is not associated with any particular device. Instead, the IOUserServer212can provide a bridge between functionality provided by the kernel210and functionality performed by processes in user space. The kernel210also includes a concrete IOMemory Descriptor218, which can be accessed via proxy IO memory descriptors238,258. The first user space driver host230may comprise logic to implement a thread pool232, a work loop234, a provider IOService236, and a proxy IO memory descriptor238. The second user space driver host250comprises a proxy provider252, a client IOService256, and a proxy IO memory descriptor258. In one embodiment, the first user space driver host230and second user space driver host250can operate cooperatively, using a service provider/service client relationship. In some examples the IOUserServer212receives interrupts214, timer events216, and kernel client remote procedure calls219and sends messages associated with those events to driver logic that resides in host processes in user space. Access to memory in the kernel210is gated through proxies. Before logic in the user space is allowed to access memory, a right to access such memory must be provided by the kernel210. Callbacks between processes can be encapsulated within an action object that facilitates communication between the processes. Source code to handle the proxy message relayed over IPC is automatically generated based on class descriptions. The generated source code can be used to dispatch from a virtual method to a corresponding implementation of the method defined explicitly for user space access. Kernel mode classes can define implementations for the virtual methods that bridge the user space API with the existing Kernel space API. 
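Returning to the single virtual Dispatch( ) method and the hashed method identifiers described above, a minimal sketch might look as follows; the FNV-1a hash and the message layout are assumptions chosen for illustration, not the actual kernel implementation:

#include <cstdint>
#include <cstdio>
#include <string_view>

// Illustrative compile-time FNV-1a hash of "ClassName::MethodName"; any hash whose collision
// probability within one class hierarchy is negligible would serve the same purpose.
constexpr uint64_t method_id(std::string_view name) {
    uint64_t h = 1469598103934665603ull;
    for (char c : name) { h ^= static_cast<uint8_t>(c); h *= 1099511628211ull; }
    return h;
}

struct Message { uint64_t method; uint32_t arg; };   // simplified marshalled RPC payload

// The base class exposes a single virtual entry point; subclasses never add virtual slots,
// so the object layout stays stable and superclasses can change without recompiling subclasses.
class ObjectBase {
public:
    virtual ~ObjectBase() = default;
    virtual bool Dispatch(const Message& m) = 0;
};

class MyDriver : public ObjectBase {
public:
    bool Dispatch(const Message& m) override {
        switch (m.method) {
        case method_id("MyDriver::Start"): Start(m.arg); return true;
        case method_id("MyDriver::Stop"):  Stop();       return true;
        default: return false;   // unknown method ID within this class hierarchy
        }
    }

private:
    void Start(uint32_t options) { std::printf("Start(%u)\n", options); }
    void Stop()                  { std::printf("Stop()\n"); }
};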
A process that hosts one or more drivers can open a connection to the kernel210using a rendezvous name. The process can then handle remote procedure calls (RPCs) to the objects hosted by the process. Objects can be hosted in separate processes, the same processes, or a combination thereof. The decision as to which objects are hosted in the same or separate processes can be made without impact to the code of the affected drivers, as all communication is performed over well-defined interfaces. The IPC mechanism can automatically determine if communication is to occur between drivers within the same process and can perform direct function calls instead of sending IPC messages. The host process may optionally be completely externally controlled and used to host drivers that do not have access to standard system APIs, allowing a driver to be completely isolated from the underlying system, with communication occurring entirely through proxies over IPC to other objects, which can monitor and secure the operations performed by such drivers. In one embodiment the user space driver host process can include a thread pool comprised of one or more worker threads responsible for RPC. A thread in the pool232calls into the kernel210with the result data from the last RPC that the driver has executed and receives a new message if any are waiting. Exemplary messages can include interrupt and timer sources. If there are no messages waiting, the thread will block and go idle, or exit if there is >1 thread already waiting for this process and a policy to minimize wasted threads is in place. By contrast, if a thread picks up a message and there are no remaining threads to become the designated waiter, a new thread is launched so there is always at least one thread available to receive messages. If an I/O framework object (e.g., I/O framework object130) receives an RPC, either in response to a method call to a classic I/O framework kernel object, or a call from another process, the I/O framework object will place the data in the waiting thread. If there are no waiting threads, the I/O framework object will block waiting for a thread to trap into the kernel. Interrupts and timers can be registered with the IOUserServer212via an IODispatch object that represents a serial queue. These sources fire in the kernel, and wake any thread currently waiting for messages. In some examples the increased efficiency of running drivers in user space may allow threads for those drivers to run at a lower runtime priority than previous implementations. In normal operation, user space runtime priorities are lower than kernel space runtime priorities. In some examples it is possible to offer kernel runtime priorities to threads in privileged driver processes if the need arises. However, priority deflation is generally a desired outcome, and may be required to some degree to compensate for the expected decrease in overall efficiency of a userland solution. It is possible to run low-priority I/O at much lower runtime priorities, for more of the total I/O stack, than is possible with previous driver solutions, which enables the broad use of low-power driver states. Having described various structures of a data processing system which may be adapted to implement managing drivers in user space, operating aspects will be explained with reference toFIG.3. FIG.3is a flowchart that illustrates a method300to manage user space components of a device driver, according to embodiments. 
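The designated-waiter behavior of the worker threads described above can be sketched roughly as follows, using standard C++ threads in place of the actual trap into the kernel; the queue, the message type, and the policy details are illustrative assumptions, and the pool is assumed to live for the lifetime of the process since its threads are detached:

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Simplified host-process worker pool: there is always at least one thread waiting for the
// next kernel-originated message, and surplus idle waiters exit to avoid wasted threads.
class WorkerPool {
public:
    WorkerPool() { spawn(); }

    void post(std::function<void()> message) {    // stands in for an interrupt, timer, or RPC
        std::lock_guard<std::mutex> lock(mu_);
        queue_.push(std::move(message));
        cv_.notify_one();
    }

private:
    void spawn() {
        ++waiters_;                               // the new thread starts out as a waiter
        std::thread([this] { run(); }).detach();
    }

    void run() {
        std::unique_lock<std::mutex> lock(mu_);
        for (;;) {
            while (queue_.empty()) {
                if (waiters_ > 1) { --waiters_; return; }   // another thread is already waiting
                cv_.wait(lock);                             // block and go idle
            }
            std::function<void()> msg = std::move(queue_.front());
            queue_.pop();
            --waiters_;
            if (waiters_ == 0) spawn();           // keep one thread available to receive messages
            lock.unlock();
            msg();                                // run the driver work outside the lock
            lock.lock();
            ++waiters_;                           // re-register as a waiter before checking again
        }
    }

    std::mutex mu_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> queue_;
    int waiters_ = 0;                             // threads currently counted as waiting
};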
Method300includes operation305, in which a hardware device is discovered on a system bus. For example, the hardware device may be discovered when it is communicatively connected to the data processing system which includes the operating system. Alternatively, the hardware device may be discovered during a boot process. At operation310, an appropriate user space driver is matched to the hardware device using a driver matching mechanism. At operation315a user space daemon associated with the selected user space driver is launched. At operation320an IPC communication link is established between proxies. The software logic to enable the IPC link may be dynamically generated based at least in part on one or more class descriptions associated with the user space driver daemon. The IPC link can be established and used transparently to the logic of the user space driver daemon. Functions, methods, and resources that would be dynamically linked if the driver were compiled as a kernel space driver can be linked via dynamically generated IPC interfaces when the driver is compiled as a user space driver. User space functionality can then be linked over the IPC communication link via programmatically invokable proxy interfaces. At operation325access rights to access a memory buffer in the kernel space are received, for example, from a process executing in the kernel. In some examples obtaining an access right may involve initiating a call from the first daemon to the kernel space and processing one or more messages received in a callback from the kernel space. The messages may be encapsulated in the callback from the kernel space in an object, and the access rights may be granted in the kernel space. At operation330a memory access request is relayed from a user space daemon to access the memory buffer. Thus, the operations depicted inFIG.3enable device drivers with operational logic that resides in user space to access memory resources that reside in kernel space. FIG.4is a schematic illustration of an operating environment400in which operational logic of a device driver may be implemented within user space, according to embodiments. The operating environment400includes kernel420and user space440execution environments. The operating environment400shown is for an exemplary PCI-based network device (e.g., Ethernet) with driver logic that resides in user space440, and is exemplary as to one embodiment. Other embodiments are not limited as to the type of device that can be driven. Kernel420comprises an I/O framework catalog422, an IOPCI device object424, an IONetwork object426, an IOUserNetwork object428, an IOUserServer object430, and an IO memory descriptor (IOMD)432which references memory buffer434. The I/O framework catalog422can be a variant of the I/O framework catalog120ofFIG.1. The I/O framework catalog422maintains entries for all available drivers on a system. When a device is discovered in the system, the kernel can request a list of all drivers of the device's family from the I/O framework catalog422and perform a matching operation to select a device driver. The IOPCI device object424defines an access point and communication channel for devices connected to a PCI bus. In one embodiment, the IOPCI device object424is a nub object that facilitates functionality such as arbitration and power management. In one embodiment, the IOPCI device object424facilitates driver matching for any device that is attached to a PCI bus within a system. 
The IOPCI device object424can facilitate access to device memory. For example, the IOPCI device object424can access a device address resolution table (DART)410, which may be a component of a memory controller of a system or device. The DART410can enable memory buffers allocated in the kernel420to be mapped into the address space of a connected device. Different nub objects can be used for different interfaces. For example, an IOUSB device object may be used to facilitate communication with USB devices. The IOMD432describes how a stream of data should be laid into memory or extracted from memory. The IOMD432can represent a segment of memory that holds the data involved in an I/O transfer and can be specified as one or more physical or virtual address ranges. The IOMD432allows objects within a driver stack to map a memory buffer into various user mode or kernel mode address spaces to facilitate the transfer of information out of or into that memory. The IONetwork object426can facilitate communication between the networking stack436and the device driver that is to drive an attached networking device. An application444that is to access the networking device can communicate via the networking stack436. The underlying implementation of the driver (e.g., user space, kernel space) is abstracted from the application444. The IONetwork object426may communicate with or be a superclass of the IOUserNetwork object428. The IOUserNetwork object428is a kernel space object that bridges user space driver logic with the IONetwork object426. The IOUserNetwork object428connects with the user space logic via the IOUserServer object430. In operation, the kernel420, facilitated by the IOPCI device object424, can perform a matching process to match drivers to devices. To launch the selected driver, the kernel420can create the IOUserNetwork object428and the IOUserServer object430. The IOUserServer object430can asynchronously launch the user space device driver daemon442in the user space440. The user space device driver daemon442can receive details about the matched device from the IONetwork object426and access the matched device via a proxy to the IOPCI device object424(e.g., S:IOPCI). The user space device driver daemon442can then probe and start the attached device. To send or receive data via the attached device, the IOUserNetwork object428can allocate a memory buffer434, which can be defined by the IOMD432. The user space device driver daemon442is provided access to the memory buffer434by sending an access capability for the memory to the user space device driver daemon442via the IOUserServer object430. The user space device driver daemon442can create a proxy to the IOMD432(e.g., S:IOMD). An access right to the S:IOMD proxy can then be sent to the IOPCI device object424via the S:IOPCI proxy, which can cause the memory buffer434to be mapped into the address space of the device. Using the techniques described above, the user space device driver daemon442can communicate with attached devices (e.g., via the IOPCI device object424) and facilitate communication between the attached device and kernel allocated memory (e.g., memory buffer434) via the use of proxy connections that transparently translate programmatic invocations into inter-process communication (IPC) calls. Access to kernel allocated resources is gated via access rights that can be exchanged between processes. In one embodiment, the access rights are the send and receive rights that enable the underlying functionality of the IPC channel. 
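The buffer-sharing sequence just described (allocate a kernel buffer described by an IOMD, deliver an access capability to the daemon, create the S:IOMD proxy, and forward the right through the S:IOPCI proxy so the buffer is mapped for the device) can be sketched as follows. The types and calls below are assumptions standing in for the generated proxy interfaces, not actual framework APIs.

    #include <cstddef>
    #include <cstdint>

    // Hypothetical capability handle; in practice this would wrap a Mach send right
    // or a similar kernel-granted right exchanged over the IPC link.
    struct AccessRight { uint64_t capability; };

    // S:IOPCI: user space proxy to the IOPCI device object 424. Forwarding the right
    // is what causes the kernel to map the buffer into the device's address space
    // (via the DART).
    struct IOPCIProxy {
        void MapBufferForDevice(const AccessRight& right) { (void)right; /* relayed over IPC */ }
    };

    // S:IOMD: user space proxy to the IOMD 432 describing the kernel buffer.
    struct IOMDProxy { AccessRight right; };

    // Stand-in for the kernel allocating memory buffer 434, describing it with an
    // IOMD, and delivering an access capability to the daemon via IOUserServer 430.
    AccessRight AllocateAndGrantBuffer(size_t length) { return AccessRight{length}; }

    // Daemon side: receive the capability, build the S:IOMD proxy, and hand the
    // right to the S:IOPCI proxy so the device can transfer data into the buffer.
    void SetUpNetworkBuffer(IOPCIProxy& pci, size_t length) {
        AccessRight right = AllocateAndGrantBuffer(length);
        IOMDProxy iomd{right};
        pci.MapBufferForDevice(iomd.right);
    }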
IPC Communication Link FIGS.5-6illustrate elements of the IPC communication link that can be used to bridge user mode driver logic with kernel mode driver logic. The IPC communication link can also be used to enable cross-process communication between user mode drivers in different processes. The program logic to construct the IPC communication link can be dynamically generated when a driver is compiled as a user space driver. Instead of dynamically linking the driver to various kernel mode libraries and objects, program code is generated to enable remote invocation of functionality over IPC via proxies. FIG.5illustrates elements of an IPC communication link that can facilitate execution of user space driver logic, according to one embodiment. In one embodiment, remote invocation messages550can be sent between an integrated IPC runtime interface514and the integrated IPC runtime service524using an inter-process communication channel as a transport medium. The remote invocation messages550enable the remote invocation of methods and objects across processes using proxy execution state. In one embodiment the IPC connection over which the remote invocation messages550are transmitted is managed by session managers530,540on each side of the connection. The session managers530,540can create an IPC connection, configure event queues537,547and event handlers548, and establish proxy connections for each plugin (e.g., remote proxy536, forwarding proxy546). The session managers530,540can send and receive remote invocation messages550and directly perform operations based on those messages or insert workloads into dispatch queues532,542. Remote proxies of objects can be invoked programmatically as though the remote proxy were local. In one embodiment, dispatch queue532and dispatch queue542can represent proxy views of the same logical queue, where the logical queue has state information that is reflected via state information534,544at each endpoint. In one embodiment, state information534,544includes local execution state and proxy state for remote invocations. In one embodiment, cross-process functionality can be performed via the use of dispatch queues that can be used to exchange self-contained blocks of functionality between processes. The IPC connection can be used to enable a dispatch queue532within the integrated IPC runtime interface514to exchange workload requests with a dispatch queue542within the integrated IPC runtime service524. State information534,544associated with the dispatch queues532,542can also be relayed over the IPC connection. In one embodiment, the dispatch queues532,542, state information534,544, and other data associated with each end of the IPC connection are synchronized via proxies on each end of the IPC connection. The integrated IPC runtime interface514and integrated IPC runtime service524can be used to implement specific programming language functionality. In one embodiment, functionality provided by the Objective-C programming language or the Swift programming language can be enabled. For example, Objective-C blocks and Swift closures can be exchanged over the IPC connection as remote invocation messages550. Additionally, Swift completion handlers can also be used to provide program logic that can automatically run after the completion of other requested functionality. Objective-C and Swift are provided as example languages supported by one embodiment and the calling semantics of any programming language can be supported. For example, the C and C++ programming languages can also be supported. 
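One way to picture the remote invocation messages and session managers described above is the following simplified sketch; the message fields and the SessionManager shape are assumptions made for illustration rather than the actual runtime's data structures.

    #include <cstdint>
    #include <functional>
    #include <queue>
    #include <string>

    // Illustrative shape of a remote invocation message exchanged between the IPC
    // runtime endpoints; the field names are assumptions, not the wire format.
    struct RemoteInvocationMessage {
        uint64_t    objectId;   // impersonated object on the far side of the connection
        uint64_t    methodId;   // method to invoke on that object
        std::string payload;    // serialized arguments (e.g., a block or closure)
    };

    // A session manager owns the connection, a dispatch queue of pending workloads,
    // and the handlers that service incoming invocations.
    class SessionManager {
        std::queue<std::function<void()>> dispatchQueue;
    public:
        using Handler = std::function<void(const RemoteInvocationMessage&)>;

        // Incoming message: either perform the operation directly or enqueue it as a
        // self-contained block of work on this endpoint's queue.
        void onMessageReceived(const RemoteInvocationMessage& msg, Handler handler) {
            dispatchQueue.push([msg, handler] { handler(msg); });
        }

        // Drain queued workloads on this endpoint's serial queue.
        void drain() {
            while (!dispatchQueue.empty()) { dispatchQueue.front()(); dispatchQueue.pop(); }
        }
    };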
FIG.6is a block diagram of program logic to enable remote invocation of service functionality at a client application, according to an embodiment. In one embodiment, a client process610can remotely invoke functionality provided by a service620over a protocol652. The protocol652can be used to define functionality that will be remotely accessible to the client process610over the IPC connection650. The forwarding proxy546links the protocol652with the service implementation622. The service implementation622implements the underlying functionality that is accessed by the client process610. The remote proxy536in the client process610links the protocol652with the remote instance612of the service functionality that is accessed by the client process610. The program logic of the client process610can create a remote instance612of the service functionality by converting the remote proxy536to the data type of the service interface exposed via the protocol652and the forwarding proxy546within the service620. The remote instance can then be invoked in the same manner as the service implementation622within the service620. In one embodiment, the remote implementation models for objects can vary, and objects can be implemented remotely across processes using one of multiple models. In one embodiment an object can be snapshotted, such that a persistent view of the object can be sent across the IPC connection650when the object is remotely accessed. In one embodiment an object can be impersonated, such that an opaque identifier (e.g., UUID, integer) can be used to represent a conceptual object on each side of the IPC connection650. Programming interfaces can reference the opaque identifier to explicitly select an object to access. Impersonation can be used to remotely present an entire class interface. If a class implements a protocol and the protocol is implemented over the interface, remote function calls can be made that automatically traverse the IPC connection650. In one embodiment, objects can be configured to perform custom serialization and deserialization of data that traverses the IPC connection650, which allows the streaming of data over the IPC connection650. In one embodiment, objects can be configured to interpose program logic before data is transmitted over the IPC connection650, which can be used to enable special handling of data or function calls. For example, if a group of asynchronous messages are to be transmitted over the IPC connection, interposing program logic can be configured that bundles the asynchronous messages before transmission. In one embodiment, interposing logic can be configured as part of the protocol652which relays data over the IPC connection650. Interposing logic can be of particular use to the client process610, which can use an interposer to intercept and adapt any function call made by the client process610to the service implementation622. 
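A minimal sketch of the protocol, forwarding proxy, and remote proxy relationship described with reference toFIG.6follows; the FileServiceProtocol interface and its method are invented for illustration, and the IPC transport is reduced to a placeholder string.

    #include <string>

    // The protocol defines what is remotely accessible; the name and method here are
    // illustrative only and do not correspond to an actual framework interface.
    struct FileServiceProtocol {
        virtual std::string readFile(const std::string& path) = 0;
        virtual ~FileServiceProtocol() = default;
    };

    // Service side (620): the implementation reached through the forwarding proxy.
    struct FileServiceImplementation : FileServiceProtocol {
        std::string readFile(const std::string& path) override {
            return "<contents of " + path + ">";   // stand-in for the real work
        }
    };

    // Client side (610): the remote proxy satisfies the same protocol but forwards
    // each call over the IPC connection instead of performing it locally.
    struct FileServiceRemoteProxy : FileServiceProtocol {
        std::string readFile(const std::string& path) override {
            return sendOverIpc("readFile", path);  // serialize the call and await the reply
        }
        std::string sendOverIpc(const std::string& method, const std::string& arg) {
            return "<remote " + method + "(" + arg + ")>";   // placeholder transport
        }
    };

    // The client treats the proxy as the protocol type (the "remote instance") and
    // invokes it exactly as it would invoke the local implementation.
    std::string clientRead(FileServiceProtocol& remoteInstance) {
        return remoteInstance.readFile("/tmp/example.txt");
    }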
FIG.7illustrates a method700for remotely invoking programmatic functionality between processes, according to an embodiment. Method700can be implemented by a processor of a computing system based on instructions provided by the dynamically generated IPC logic described herein. Method700enables an object provided by a service module to be remotely invoked by a client application as though the object is a local object. Method700can be used to enable the programmatic interaction between user mode driver logic and kernel mode functions, methods, and resources. Method700can also be used to enable the programmatic interaction between user mode driver logic in different user mode processes. In one embodiment, method700includes operation702, which establishes an IPC session from a client application to a service process on a computing device. At operation704, method700can access a protocol API from the client application that enables access to a service provided by the service process, the protocol API associated with a remote proxy of an implementation of the service. At operation706, method700can programmatically create a remote instance of the implementation of the service via the remote proxy to the implementation of the service. At operation708, method700can invoke functionality provided by the service at the client application. Programmatic remote access to a variety of services can be enabled, including any data, function, or method within the service process. Exemplary API Structure Embodiments described herein include one or more application programming interfaces (APIs) in an environment in which calling program code interacts with other program code that is called through one or more programming interfaces. Various function calls, messages, or other types of invocations, which further may include various kinds of parameters, can be transferred via the APIs between the calling program and the code being called. In addition, an API may provide the calling program code the ability to use data types or classes defined in the API and implemented in the called program code. An API allows a developer of an API-calling component (which may be a third-party developer) to leverage specified features provided by an API-implementing component. There may be one API-calling component or there may be more than one such component. An API can be a source code interface that a computer system or program library provides to support requests for services from an application. An operating system (OS) can have multiple APIs to allow applications running on the OS to call one or more of those APIs, and a service (such as a program library) can have multiple APIs to allow an application that uses the service to call one or more of those APIs. An API can be specified in terms of a programming language that can be interpreted or compiled when an application is built. In some embodiments, the API-implementing component may provide more than one API, each providing a different view of, or with different aspects of, the functionality implemented by the API-implementing component. For example, one API of an API-implementing component can provide a first set of functions and can be exposed to third party developers, and another API of the API-implementing component can be hidden (not exposed) and provide a subset of the first set of functions and also provide another set of functions, such as testing or debugging functions which are not in the first set of functions. In other embodiments, the API-implementing component may itself call one or more other components via an underlying API and thus be both an API-calling component and an API-implementing component. An API defines the language and parameters that API-calling components use when accessing and using specified features of the API-implementing component. 
For example, an API-calling component accesses the specified features of the API-implementing component through one or more API calls or invocations (embodied for example by function or method calls) exposed by the API and passes data and control information using parameters via the API calls or invocations. The API-implementing component may return a value through the API in response to an API call from an API-calling component. While the API defines the syntax and result of an API call (e.g., how to invoke the API call and what the API call does), the API may not reveal how the API call accomplishes the function specified by the API call. Various API calls are transferred via the one or more application programming interfaces between the calling (API-calling component) and an API-implementing component. Transferring the API calls may include issuing, initiating, invoking, calling, receiving, returning, or responding to the function calls or messages; in other words, transferring can describe actions by either of the API-calling component or the API-implementing component. The function calls or other invocations of the API may send or receive one or more parameters through a parameter list or other structure. A parameter can be a constant, key, data structure, object, object class, variable, data type, pointer, array, list or a pointer to a function or method or another way to reference a data or other item to be passed via the API. Furthermore, data types or classes may be provided by the API and implemented by the API-implementing component. Thus, the API-calling component may declare variables, use pointers to, use or instantiate constant values of such types or classes by using definitions provided in the API. Generally, an API can be used to access a service or data provided by the API-implementing component or to initiate performance of an operation or computation provided by the API-implementing component. By way of example, the API-implementing component and the API-calling component may each be any one of an operating system, a library, a device driver, an API, an application program, or other module (it should be understood that the API-implementing component and the API-calling component may be the same or different type of module from each other). API-implementing components may in some cases be embodied at least in part in firmware, microcode, or other hardware logic. In some embodiments, an API may allow a client program to use the services provided by a Software Development Kit (SDK) library. In other embodiments, an application or other client program may use an API provided by an Application Framework. In these embodiments, the application or client program may incorporate calls to functions or methods provided by the SDK and provided by the API or use data types or objects defined in the SDK and provided by the API. An Application Framework may in these embodiments provide a main event loop for a program that responds to various events defined by the Framework. The API allows the application to specify the events and the responses to the events using the Application Framework. 
In some implementations, an API call can report to an application the capabilities or state of a hardware device, including those related to aspects such as input capabilities and state, output capabilities and state, processing capability, power state, storage capacity and state, communications capability, etc., and the API may be implemented in part by firmware, microcode, or other low-level logic that executes in part on the hardware component. The API-calling component may be a local component (i.e., on the same data processing system as the API-implementing component) or a remote component (i.e., on a different data processing system from the API-implementing component) that communicates with the API-implementing component through the API over a network. It should be understood that an API-implementing component may also act as an API-calling component (i.e., it may make API calls to an API exposed by a different API-implementing component) and an API-calling component may also act as an API-implementing component by implementing an API that is exposed to a different API-calling component. The API may allow multiple API-calling components written in different programming languages to communicate with the API-implementing component (thus the API may include features for translating calls and returns between the API-implementing component and the API-calling component); however, the API may be implemented in terms of a specific programming language. An API-calling component can, in one embodiment, call APIs from different providers such as a set of APIs from an OS provider and another set of APIs from a plug-in provider and another set of APIs from another provider (e.g., the provider of a software library) or creator of the other set of APIs. FIG.8is a block diagram illustrating an exemplary API architecture, which may be used in some embodiments of the invention. As shown inFIG.8, the API architecture800includes the API-implementing component810(e.g., an operating system, a library, a device driver, an API, an application program, software, or other module) that implements the API820. The API820specifies one or more functions, methods, classes, objects, protocols, data structures, formats and/or other features of the API-implementing component that may be used by the API-calling component830. The API820can specify at least one calling convention that specifies how a function in the API-implementing component receives parameters from the API-calling component and how the function returns a result to the API-calling component. The API-calling component830(e.g., an operating system, a library, a device driver, an API, an application program, software or other module), makes API calls through the API820to access and use the features of the API-implementing component810that are specified by the API820. The API-implementing component810may return a value through the API820to the API-calling component830in response to an API call. It will be appreciated that the API-implementing component810may include additional functions, methods, classes, data structures, and/or other features that are not specified through the API820and are not available to the API-calling component830. It should be understood that the API-calling component830may be on the same system as the API-implementing component810or may be located remotely and accesses the API-implementing component810using the API820over a network. 
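A small example can make the relationship shown inFIG.8concrete: an API-implementing component provides a function behind a declared API, and an API-calling component uses only what the API specifies. The names and types below are hypothetical and chosen only for illustration.

    // Hypothetical API exposed by an API-implementing component (e.g., a library).
    // The declaration specifies the calling convention: parameters in, result returned.
    struct ImageInfo { int width; int height; };
    ImageInfo GetImageInfo(const char* path);   // part of the API (API 820 in FIG. 8)

    // API-implementing component 810: provides the implementation behind the API.
    ImageInfo GetImageInfo(const char* path) {
        (void)path;                 // a real library would inspect the file
        return ImageInfo{640, 480};
    }

    // API-calling component 830: uses only what the API specifies; it cannot see
    // internals of the implementing component that are not exposed through the API.
    int callerExample() {
        ImageInfo info = GetImageInfo("/tmp/picture.png");
        return info.width * info.height;
    }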
WhileFIG.8illustrates an API-calling component830interacting with the API820, it should be understood that other API-calling components, which may be written in different languages (or the same language) than the API-calling component830, may use the API820. The API-implementing component810, the API820, and the API-calling component830may be stored in a machine-readable medium, which includes any mechanism for storing information in a form readable by a machine (e.g., a computer or other data processing system). For example, a machine-readable medium includes magnetic disks, optical disks, random-access memory, read only memory, flash memory devices, etc. FIGS.9A-9Bare block diagrams of exemplary API software stacks900,910, according to embodiments.FIG.9Ashows an exemplary API software stack900in which processes902can make calls to Service A or Service B using a Service API and to Operating System904using an OS API. Additionally, Service A and Service B can make calls to Operating System904using several OS APIs. The processes902, in one embodiment, are multiple processes operating in concert to enable a multi-process application as described herein. FIG.9Bshows an exemplary API software stack910including Process1(902A), Process2(902B), Service1(905), Service2(906), and Operating System904. As illustrated, Service2has two APIs, one of which (Service2API1) receives calls from and returns values to Process1and the other (Service2API2) receives calls from and returns values to Process2. Service1(which can be, for example, a software library) makes calls to and receives returned values from OS API1, and Service2(which can be, for example, a software library) makes calls to and receives returned values from both OS API1and OS API2. Process2makes calls to and receives returned values from OS API2. Service1can be, for example, a mobile UI framework as described herein, while Service2can be a host UI framework as described herein. Service1API and Service2API can be APIs implemented via a variant of the integrated IPC runtime interface described herein, which can enable interoperability between Process1and Process2. Additional Computing Device Architectures FIG.10is a block diagram of a device architecture1000for a mobile or embedded device, according to an embodiment. The device architecture1000includes a memory interface1002, a processing system1004including one or more data processors, image processors and/or graphics processing units, and a peripherals interface1006. The various components can be coupled by one or more communication buses or signal lines. The various components can be separate logical components or devices or can be integrated in one or more integrated circuits, such as in a system on a chip integrated circuit. The memory interface1002can be coupled to memory1050, which can include high-speed random-access memory such as static random-access memory (SRAM) or dynamic random-access memory (DRAM) and/or non-volatile memory, such as but not limited to flash memory (e.g., NAND flash, NOR flash, etc.). Sensors, devices, and subsystems can be coupled to the peripherals interface1006to facilitate multiple functionalities. For example, a motion sensor1010, a light sensor1012, and a proximity sensor1014can be coupled to the peripherals interface1006to facilitate the mobile device functionality. One or more biometric sensor(s)1015may also be present, such as a fingerprint scanner for fingerprint recognition or an image sensor for facial recognition. 
Other sensors1016can also be connected to the peripherals interface1006, such as a positioning system (e.g., GPS receiver), a temperature sensor, or other sensing device, to facilitate related functionalities. A camera subsystem1020and an optical sensor1022, e.g., a charge-coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) optical sensor, can be utilized to facilitate camera functions, such as recording photographs and video clips. Communication functions can be facilitated through one or more wireless communication subsystems1024, which can include radio frequency receivers and transmitters and/or optical (e.g., infrared) receivers and transmitters. The specific design and implementation of the wireless communication subsystems1024can depend on the communication network(s) over which a mobile device is intended to operate. For example, a mobile device including the illustrated device architecture1000can include wireless communication subsystems1024designed to operate over a GSM network, a CDMA network, an LTE network, a Wi-Fi network, a Bluetooth network, or any other wireless network. In particular, the wireless communication subsystems1024can provide a communications mechanism over which a media playback application can retrieve resources from a remote media server or scheduled events from a remote calendar or event server. An audio subsystem1026can be coupled to a speaker1028and a microphone1030to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and telephony functions. In smart media devices described herein, the audio subsystem1026can be a high-quality audio system including support for virtual surround sound. The I/O subsystem1040can include a touch screen controller1042and/or other input controller(s)1045. For computing devices including a display device, the touch screen controller1042can be coupled to a touch sensitive display system1046(e.g., touchscreen). The touch sensitive display system1046and touch screen controller1042can, for example, detect contact and movement and/or pressure using any of a plurality of touch and pressure sensing technologies, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with a touch sensitive display system1046. Display output for the touch sensitive display system1046can be generated by a display controller1043. In one embodiment, the display controller1043can provide frame data to the touch sensitive display system1046at a variable frame rate. In one embodiment, a sensor controller1044is included to monitor, control, and/or process data received from one or more of the motion sensor1010, light sensor1012, proximity sensor1014, or other sensors1016. The sensor controller1044can include logic to interpret sensor data to determine the occurrence of one or more motion events or activities by analysis of the sensor data from the sensors. In one embodiment, the I/O subsystem1040includes other input controller(s)1045that can be coupled to other input/control devices1048, such as one or more buttons, rocker switches, thumb-wheel, infrared port, USB port, and/or a pointer device such as a stylus, or control devices such as an up/down button for volume control of the speaker1028and/or the microphone1030. 
In one embodiment, the memory1050coupled to the memory interface1002can store instructions for an operating system1052, including portable operating system interface (POSIX) compliant and non-compliant operating system or an embedded operating system. The operating system1052may include instructions for handling basic system services and for performing hardware dependent tasks. In some implementations, the operating system1052can be a kernel. The memory1050can also store communication instructions1054to facilitate communicating with one or more additional devices, one or more computers and/or one or more servers, for example, to retrieve web resources from remote web servers. The memory1050can also include user interface instructions1056, including graphical user interface instructions to facilitate graphic user interface processing. Additionally, the memory1050can store sensor processing instructions1058to facilitate sensor-related processing and functions; telephony instructions1060to facilitate telephone-related processes and functions; messaging instructions1062to facilitate electronic-messaging related processes and functions; web browser instructions1064to facilitate web browsing-related processes and functions; media processing instructions1066to facilitate media processing-related processes and functions; location services instructions including GPS and/or navigation instructions1068and Wi-Fi based location instructions to facilitate location based functionality; camera instructions1070to facilitate camera-related processes and functions; and/or other software instructions1072to facilitate other processes and functions, e.g., security processes and functions, and processes and functions related to the systems. The memory1050may also store other software instructions such as web video instructions to facilitate web video-related processes and functions; and/or web shopping instructions to facilitate web shopping-related processes and functions. In some implementations, the media processing instructions1066are divided into audio processing instructions and video processing instructions to facilitate audio processing-related processes and functions and video processing-related processes and functions, respectively. A mobile equipment identifier, such as an International Mobile Equipment Identity (IMEI)1074or a similar hardware identifier can also be stored in memory1050. Each of the above identified instructions and applications can correspond to a set of instructions for performing one or more functions described above. These instructions need not be implemented as separate software programs, procedures, or modules. The memory1050can include additional instructions or fewer instructions. Furthermore, various functions may be implemented in hardware and/or in software, including in one or more signal processing and/or application specific integrated circuits. FIG.11is a block diagram of a computing system1100, according to an embodiment. The illustrated computing system1100is intended to represent a range of computing systems (either wired or wireless) including, for example, desktop computer systems, laptop computer systems, tablet computer systems, cellular telephones, personal digital assistants (PDAs) including cellular-enabled PDAs, set top boxes, entertainment systems or other consumer electronic devices, smart appliance devices, or one or more implementations of a smart media playback device. Alternative computing systems may include more, fewer and/or different components. 
The computing system1100can be used to provide the computing device and/or a server device to which the computing device may connect. The computing system1100includes bus1135or other communication device to communicate information, and processor(s)1110coupled to bus1135that may process information. While the computing system1100is illustrated with a single processor, the computing system1100may include multiple processors and/or co-processors. The computing system1100further may include memory1120, which may be random access memory (RAM) or another dynamic storage device coupled to the bus1135. The memory1120may store information and instructions that may be executed by processor(s)1110. The memory1120may also be main memory that is used to store temporary variables or other intermediate information during execution of instructions by the processor(s)1110. The computing system1100may also include read only memory (ROM)1130and/or another data storage device1140coupled to the bus1135that may store information and instructions for the processor(s)1110. The data storage device1140can be or include a variety of storage devices, such as a flash memory device, a magnetic disk, or an optical disc and may be coupled to computing system1100via the bus1135or via a remote peripheral interface. The computing system1100may also be coupled, via the bus1135, to a display device1150to display information to a user. The computing system1100can also include an alphanumeric input device1160, including alphanumeric and other keys, which may be coupled to bus1135to communicate information and command selections to processor(s)1110. Another type of user input device includes a cursor control1170device, such as a touchpad, a mouse, a trackball, or cursor direction keys to communicate direction information and command selections to processor(s)1110and to control cursor movement on the display device1150. The computing system1100may also receive user input from a remote device that is communicatively coupled via one or more network interface(s)1180. The computing system1100further may include one or more network interface(s)1180to provide access to a network, such as a local area network. The network interface(s)1180may include, for example, a wireless network interface having antenna1185, which may represent one or more antenna(e). The computing system1100can include multiple wireless network interfaces such as a combination of Wi-Fi, Bluetooth®, near field communication (NFC), and/or cellular telephony interfaces. The network interface(s)1180may also include, for example, a wired network interface to communicate with remote devices via network cable1187, which may be, for example, an Ethernet cable, a coaxial cable, a fiber optic cable, a serial cable, or a parallel cable. In one embodiment, the network interface(s)1180may provide access to a local area network, for example, by conforming to IEEE 802.11 b and/or IEEE 802.11 g standards, and/or the wireless network interface may provide access to a personal area network, for example, by conforming to Bluetooth standards. Other wireless network interfaces and/or protocols can also be supported. 
In addition to, or instead of, communication via wireless LAN standards, network interface(s)1180may provide wireless communications using, for example, Time Division Multiple Access (TDMA) protocols, Global System for Mobile Communications (GSM) protocols, Code Division Multiple Access (CDMA) protocols, Long Term Evolution (LTE) protocols, and/or any other type of wireless communications protocol. The computing system1100can further include one or more energy sources1105and one or more energy measurement systems1145. Energy sources1105can include an AC/DC adapter coupled to an external power source, one or more batteries, one or more charge storage devices, a USB charger, or other energy source. Energy measurement systems include at least one voltage or amperage measuring device that can measure energy consumed by the computing system1100during a predetermined period of time. Additionally, one or more energy measurement systems can be included that measure, e.g., energy consumed by a display device, cooling subsystem, Wi-Fi subsystem, or other frequently used or high-energy consumption subsystem. In some embodiments, the hash functions described herein can utilize specialized hardware circuitry (or firmware) of the system (client device or server). For example, the function can be a hardware-accelerated function. In addition, in some embodiments, the system can use a function that is part of a specialized instruction set. For example, the processor can use an instruction set which may be an extension to an instruction set architecture for a particular type of microprocessor. Accordingly, in an embodiment, the system can provide a hardware-accelerated mechanism for performing cryptographic operations to improve the speed of performing the functions described herein using these instruction sets. In addition, the hardware-accelerated engines/functions are contemplated to include any implementations in hardware, firmware, or combination thereof, including various configurations which can include hardware/firmware integrated into the SoC as a separate processor, or included as special purpose CPU (or core), or integrated in a coprocessor on the circuit board, or contained on a chip of an extension circuit board, etc. It should be noted that the term “approximately” or “substantially” may be used herein and may be interpreted as “as nearly as practicable,” “within technical limitations,” and the like. In addition, the use of the term “or” indicates an inclusive or (e.g., and/or) unless otherwise specified. In the foregoing description, example embodiments of the disclosure have been described. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of the disclosure. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. The specifics in the descriptions and examples provided may be used anywhere in one or more embodiments. The various features of the different embodiments or examples may be variously combined with some features included and others excluded to suit a variety of different applications. Examples may include subject matter such as a method, means for performing acts of the method, at least one machine-readable medium including instructions that, when performed by a machine, cause the machine to perform acts of the method, or of an apparatus or system according to embodiments and examples described herein. 
Additionally, various components described herein can be a means for performing the operations or functions described herein. Embodiments described herein provide a predictive failure analysis method and service that enables design-time error and exception handling techniques to be supplemented or assisted by a predictive failure analysis system. In some embodiments, the predictive failure analysis system enables the dynamic injection of software routines into error and event handlers to enable the error and event handlers to respond to potential software failures without requiring software developers to have anticipated such errors at design time. One embodiment provides an electronic device, comprising a non-transitory machine-readable medium to store instructions; one or more processors to execute the instructions; and a memory coupled to the one or more processors, the memory to store the instructions which, when executed by the one or more processors, cause the one or more processors to receive injection of dynamic error detection logic into the instructions, the dynamic error detection logic including a failure predictor to publish a failure prediction based on a stream of observed events on the electronic device. One embodiment provides for a non-transitory machine-readable medium storing instructions which, when executed by one or more processors of an electronic device, cause the one or more processors to perform operations comprising storing an input event as a candidate for failure event analysis; detecting a predictive failure trend in stored input event log data; generating a plurality of predicted tables of failure knowledge data; mapping a predictive failure trend to the plurality of predicted tables; and issuing a failure event to an observer in response to detection of a match between the predictive failure trend and the failure knowledge data. One embodiment provides for a data processing system comprising one or more processors; and a memory coupled to the one or more processors, the memory storing instructions which, when executed by the one or more processors, cause the data processing system to perform operations to receive injection of dynamic error detection logic into the instructions, the dynamic error handling logic including an error handling update to indicate a response to a predicted failure; receive a set of events indicative of the predicted failure; and respond to the set of events according to the error handling update. Other features of the present embodiments will be apparent from the accompanying drawings and from the detailed description above. Accordingly, the true scope of the embodiments will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims.
DETAILED DESCRIPTION Peripheral device protocols or standards such as Compute Express Link (CXL) allow for cache coherency between the peripheral device and a processor cache. To do so, the peripheral device issues transaction requests directed to system physical addresses of memory. Some systems use a confidential compute architecture that restricts access to particular portions of memory to assigned entities. For example, particular portions of memory are owned by particular entities (e.g., virtual machines or a hypervisor) and those portions of memory are inaccessible to other entities. Typically these checks are performed by a memory management unit (MMU) during address translation to a system physical address. As peripheral devices are unable to perform these translations, peripheral devices are unable to perform transactions targeting system physical addresses in a confidential compute environment. To that end, the present specification sets forth various implementations for peripheral device protocols in confidential compute architectures. In some implementations, a method of peripheral device protocols in confidential compute architectures includes: receiving a first address translation request from a peripheral device supporting a first protocol, wherein the first protocol supports cache coherency between the peripheral device and a processor cache. The method also includes determining that a confidential compute architecture is enabled; and providing, in response to the first address translation request, a response including an indication to the peripheral device to not use the first protocol. In some implementations, providing the indication to the peripheral device causes the peripheral device to use a second protocol that does not support cache coherency. In some implementations, the first protocol includes a first Compute Express Link (CXL) protocol. In some implementations, the second protocol includes a second Compute Express Link (CXL) protocol. In some implementations, the method further includes receiving a second address translation request from the peripheral device, determining that the confidential compute architecture is disabled, and providing a response to the second address translation request allowing use of the first protocol. In some implementations, the method further includes translating an address associated with the first transaction request to a guest physical address, wherein the response further includes the guest physical address. In some implementations, the first address translation request is received via the second protocol. The present specification also describes various implementations of a system for peripheral device protocols in confidential compute architectures. Such a system includes a peripheral device and an apparatus operatively coupled to the peripheral device. The apparatus performs steps including receiving a first address translation request from a peripheral device supporting a first protocol, wherein the first protocol supports cache coherency between the peripheral device and a processor cache. The steps also include determining that a confidential compute architecture is enabled; and providing, in response to the first address translation request, a response including an indication to the peripheral device to not use the first protocol. In some implementations, providing the indication to the peripheral device causes the peripheral device to use a second protocol that does not support cache coherency. 
In some implementations, the first protocol includes a first Compute Express Link (CXL) protocol. In some implementations, the second protocol includes a second Compute Express Link (CXL) protocol. In some implementations, the steps further include receiving a second address translation request from the peripheral device, determining that the confidential compute architecture is disabled, and providing a response to the second address translation request allowing use of the first protocol. In some implementations, the steps further include translating an address associated with the first transaction request to a guest physical address, wherein the response further includes the guest physical address. In some implementations, the first address translation request is received via the second protocol. Also described in this specification are various implementations of a computer program product for peripheral device protocols in confidential compute architectures. Such a computer program product is disposed upon a non-transitory computer readable medium and includes computer program instructions that, when executed, cause a computer system to perform steps including receiving a first address translation request from a peripheral device supporting a first protocol, wherein the first protocol supports cache coherency between the peripheral device and a processor cache. The steps also include determining that a confidential compute architecture is enabled; and providing, in response to the first address translation request, a response including an indication to the peripheral device to not use the first protocol. In some implementations, providing the indication to the peripheral device causes the peripheral device to use a second protocol that does not support cache coherency. In some implementations, the first protocol includes a first Compute Express Link (CXL) protocol. In some implementations, the second protocol includes a second Compute Express Link (CXL) protocol. In some implementations, the steps further include receiving a second address translation request from the peripheral device, determining that the confidential compute architecture is disabled, and providing a response to the second address translation request allowing use of the first protocol. In some implementations, the steps further include translating an address associated with the first transaction request to a guest physical address, wherein the response further includes the guest physical address. The following disclosure provides many different implementations, or examples, for implementing different features of the provided subject matter. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. For example, the formation of a first feature over or on a second feature in the description that follows includes implementations in which the first and second features are formed in direct contact, and also includes implementations in which additional features may be formed between the first and second features, such that the first and second features are not in direct contact. Further, spatially relative terms, such as “beneath,” “below,” “lower,” “above,” “upper,” “back,” “front,” “top,” “bottom,” and the like, are used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. 
Similarly, terms such as “front surface” and “back surface” or “top surface” and “bottom surface” are used herein to more easily identify various components, and identify that those components are, for example, on opposing sides of another component. The spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. FIG.1is a block diagram of a non-limiting example device100. The example device100can be implemented as a variety of computing devices, including mobile devices, personal computers, peripheral hardware components, gaming devices, set-top boxes, and the like. One skilled in the art will appreciate that, in some implementations, the device100is implemented using multiple computing devices in combination. In addition to the description set forth below, the functionality of the device100is described in further detail in the methods described inFIGS.3-5. The device100includes one or more processors102. The processors102each include one or more cores of functional circuitry, such as functional circuitry fabricated on semiconductive materials. In some implementations, the processors102include a central processing unit (CPU). In some implementations, the processors102include a platform security processor102that performs various checks, operations, and the like associated with implementing a confidential compute architecture as will be described in further detail below. Each processor102includes a cache116(e.g., a processor cache116). In some implementations, the cache116includes one or more levels of cache memory. The device100executes one or more virtual machines104a-n. Each virtual machine104a-nis an emulated or simulated instance of a physical computing device executed within the device100. Also executed in the device100is a hypervisor106. The hypervisor106manages the creation, execution, and termination of virtual machines104a-n. As an example, the hypervisor106manages the allocation and freeing of computational resources in the creation and termination of virtual machines104a-n, such as memory resources, disk space, processing resources, and the like. The device100also includes one or more peripheral interfaces108. The peripheral interface108is a port or socket into which a peripheral device110is coupled in order to create an operative and communicative connection between the device100and the peripheral device110. The peripheral device110includes any peripheral component as can be appreciated, such as network interfaces, parallel accelerators such as graphics processing units or machine learning accelerators, and the like. As an example, the peripheral interface108includes a peripheral component interconnect express (PCIe) socket. In this example, the peripheral device110is capable of using a device communication protocol that allows cache coherency between the peripheral device110and the device100(e.g., between a cache of the peripheral device110and a memory112or cache116of the device100). As an example, the peripheral device110maintains a cache corresponding to some portion of memory112. The peripheral device110is able to write to the cache, and the device100is able to write to the cache of the peripheral device110. Changes to the peripheral device110cache are able to be written back to the memory112, written to corresponding entries in the device100, and the like. 
As another example of a cache coherency operation, the peripheral device110performs a read-for-ownership operation which obtains the latest copy of a portion of memory from memory112or cache116, while simultaneously invalidating all other cached copies of the portion of memory and making the obtained copy an exclusive copy able to be written in local cache for the peripheral device110. In some implementations, the peripheral device110is able to use Compute Express Link (CXL), a standard that includes a protocol for peripheral device110communication that allows for such cache coherency (e.g., the CXL.cache protocol). The peripheral device110is also capable of using other device communication protocols that do not provide for cache coherency. As an example, the peripheral device110is capable of communicating with the device100using PCIe. As another example, the peripheral device110is capable of communicating with the device using CXL.io, a CXL protocol that does not support cache coherency. In other words, the peripheral device110is capable of using both cache coherent and non-cache coherent protocols through a same PCIe peripheral interface108. The device100implements a confidential compute architecture. A confidential compute architecture is a technology to keep the data and memory112used by virtual machines104a-nisolated from other virtual machines104a-n, and also isolated from the hypervisor106. Thus, data stored in a particular portion of memory112used by a given virtual machine104a-nis isolated from access by other virtual machines104a-nand the hypervisor106. In a multitenant system where multiple tenants share the use of a same device100or system of devices100to execute their respective virtual machines104a-n, this prevents one tenant from compromising the data of other tenants. One approach for a confidential compute architecture is the use of secure nested paging. Particular portions of memory112are assigned for use by a particular virtual machine104a-n(e.g., on a per-page basis or according to another degree of granularity). When a command or operation from a particular virtual machine104a-nattempts to access a particular portion of memory112, a check is performed to determine if that virtual machine104a-nhas been assigned access to that portion of memory112. For example, a table or data structure will maintain (e.g., on a per-page basis) which system physical addresses are owned by which virtual machines104a-n. When a virtual machine104a-nissues a command to access a particular guest virtual address of memory112, after translation to a system physical address, a memory management unit (MMU)114or other portion of logic will access the data structure to determine if the virtual machine104a-nhas access to that system physical address. For example, a guest virtual address is first translated to a guest physical address (e.g., by the MMU114or by a virtualized MMU114implemented in the virtual machine104a-n). The guest physical address is then translated by the MMU114to a system physical address. As used herein, a guest virtual address is a memory address in a virtual address space of a virtual machine104a-n. A guest physical address is an address in the physical address space of a virtual machine104a-n. A system virtual address is an address of a native virtual address space of the device100. A system physical address is an address in the native physical address space of the device100. 
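The ownership check described above can be sketched as follows. The table layout and function names are assumptions chosen for illustration (conceptually similar to a reverse map table keyed by system physical page) and do not represent an actual hardware structure.

    #include <cstdint>
    #include <optional>
    #include <unordered_map>

    using GuestPhysicalAddress  = uint64_t;
    using SystemPhysicalAddress = uint64_t;
    using VmId = uint32_t;
    constexpr uint64_t kPageShift = 12;   // 4 KiB pages, per-page ownership granularity

    // Hypothetical ownership table: which VM owns each system physical page.
    struct OwnershipTable {
        std::unordered_map<uint64_t, VmId> ownerOfPage;   // SPA page frame -> owner VM
        bool isOwnedBy(SystemPhysicalAddress spa, VmId vm) const {
            auto it = ownerOfPage.find(spa >> kPageShift);
            return it != ownerOfPage.end() && it->second == vm;
        }
    };

    // Sketch of the MMU-side check: translate the guest physical address and allow
    // the access only if the requesting VM owns the target page.
    std::optional<SystemPhysicalAddress> TranslateWithSnpCheck(
            GuestPhysicalAddress gpa, VmId requester,
            const std::unordered_map<uint64_t, uint64_t>& nestedPageTable,
            const OwnershipTable& owners) {
        auto it = nestedPageTable.find(gpa >> kPageShift);
        if (it == nestedPageTable.end()) return std::nullopt;        // no mapping
        SystemPhysicalAddress spa =
            (it->second << kPageShift) | (gpa & ((1ull << kPageShift) - 1));
        if (!owners.isOwnedBy(spa, requester)) return std::nullopt;  // ownership check fails
        return spa;
    }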
In order to maintain cache coherency (e.g., using CXL), a peripheral device110must use physical addresses (e.g., system physical addresses) to access the physically addressed caches116of the processors102. This introduces complications when a confidential compute architecture using secure nested paging is implemented. For example, the peripheral device110is unable to perform the translations from a virtual address (e.g., a guest virtual address) to a system physical address in order to access the physically addressed caches116of the processors102. Were the peripheral device110to provide a virtual address to the device100for translation by the MMU114to a system physical address, and a system physical address provided back to the peripheral device110, it would be difficult to later invalidate the translation if necessary. As another concern, if the peripheral device110is malicious or compromised, the peripheral device110will ignore the results of address translation and emit a request for an arbitrary system physical address violating the confidential compute properties. Accordingly, there is a conflict between the use of cache coherent peripheral device protocols provided by CXL in a confidential compute architecture using secure nested paging. Accordingly, assume that an address translation request is received from a peripheral device110supporting a protocol capable of cache coherency between the peripheral device110and processor102cache116. For example, in some implementations, the address translation request is a request to translate a guest virtual address to a physical address for later access (e.g., a system physical address or guest physical address). In some implementations, the transaction request is generated by a particular virtual machine104a-naccessing the peripheral device110. As an example, the peripheral device110is visible or otherwise accessible to the particular virtual machine104a-n. In some implementations, the address translation request includes a flag or bit that is set indicating that the peripheral device110is capable of supporting the cache coherent protocol. For example, in some implementations, the address translation request is provided via a non-cache coherent protocol but includes a flag or identifier indicating that the peripheral device110can support the cache coherent protocol. In response to the address translation request, the device100determines whether a confidential compute architecture is enabled. For example, a processor102such as a dedicated platform security processor102accesses particular configuration data or other indicators as to whether the confidential compute architecture is enabled. Continuing with this example, assume that a parameter or configuration attribute is set during boot or startup of the device100that indicates whether the confidential compute architecture is enabled. Where enabled, the processor102sends a command or indication to the MMU114that will cause the MMU114to perform secure nested paging checks during address translation (e.g., from a guest physical address or guest virtual address to a system physical address). Where the confidential compute architecture is enabled, a response to the address translation request will include an indication to the peripheral device110to use a second protocol that does not implement or support cache coherency instead of the first protocol that does support cache coherency. 
As an example, the device100provides, to the peripheral device110, an indication to use PCIe or CXL.io instead of CXL.cache. In some implementations, the response includes a guest physical address corresponding to an address included in the address translation request (e.g., a guest physical address translated from a guest virtual address in the translation request). Thus, subsequent transaction requests generated by the peripheral device110will target the guest physical address provided in the response to the address translation request. Such subsequent transaction requests will be provided according to the second protocol (e.g., PCIe or CXL.io). Requests directed to the guest physical address will be translated by the MMU114into a system physical address, thereby allowing the MMU114to perform the secure nested paging checks against the system physical addresses. In some implementations, in response to receiving the indication, the peripheral device110stores some data or other indication to use a second protocol that does not implement or support cache coherency instead of the first protocol that does support cache coherency. In some implementations, this indication is stored or maintained in volatile memory. Thus, the indication is stored so long as the peripheral device110maintains a connection and draws power from the device. In other implementations, this indication is stored in non-volatile memory. Accordingly, in such implementations, this indication persists across connection or disconnection events with the device100. For example, if the peripheral device110is disconnected and reconnected to the device100, the peripheral device110will continue to use the second protocol after reconnection to the device100. As another example, in some implementations, the peripheral device110will continue to use the second protocol after connection to a different device100until receiving a command or instruction to use the first protocol. One skilled in the art will appreciate that the approaches described herein allow for peripheral devices110able to use cache coherent protocols to still be used in a confidential compute architecture by switching to another non-cache coherent protocol. Moreover, one skilled in the art will appreciate that the approaches described herein will allow for such peripheral devices110to use their cache coherent protocols should the confidential compute architecture be disabled. Although the above discussion describes restricting the use of a protocol that allows for cache coherency between the peripheral device110and the processor102cache116, one skilled in the art will appreciate that the approaches described herein are applicable to preventing the use of any standard or protocol whereby a peripheral device110accesses system addressed memory using system physical addresses. Moreover, although the above discussion describes restricting the use of a protocol that allows for cache coherency between the peripheral device110and the processor102cache116, one skilled in the art will appreciate that in some implementations the peripheral device110determines whether the confidential compute architecture is enabled on the device100. For example, in some implementations, the peripheral device110queries the device100to determine if the confidential compute architecture is enabled.
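A minimal sketch of the translation response and protocol selection described above follows. The function, class, and constant names here are placeholders invented for the example and are not identifiers from the CXL or PCIe specifications; the sketch simply shows a device returning a guest physical address together with an indication to stop using the cache coherent protocol, and a peripheral recording that indication either volatilely or persistently.

    # Illustrative sketch only; names and address arithmetic are invented.
    CACHE_COHERENT = "coherent-protocol"          # stands in for, e.g., CXL.cache
    NON_CACHE_COHERENT = "non-coherent-protocol"  # stands in for, e.g., CXL.io or PCIe

    def handle_translation_request(guest_virtual, confidential_compute_enabled, translate):
        # Device-side handling of an address translation request.
        if confidential_compute_enabled:
            # Return a guest physical address and direct the peripheral away from
            # the cache coherent protocol.
            return {"address": translate(guest_virtual, "guest_physical"),
                    "use_protocol": NON_CACHE_COHERENT}
        return {"address": translate(guest_virtual, "system_physical"),
                "use_protocol": CACHE_COHERENT}

    class Peripheral:
        def __init__(self, persist_choice=False):
            self.persist_choice = persist_choice   # True models the non-volatile variant
            self.protocol = CACHE_COHERENT

        def apply(self, response):
            # Record the indicated protocol for subsequent transactions.
            self.protocol = response["use_protocol"]

    def toy_translate(address, kind):
        return address + (0x1000 if kind == "guest_physical" else 0x4000)

    response = handle_translation_request(0x200, True, toy_translate)
    device = Peripheral(persist_choice=True)
    device.apply(response)
    print(device.protocol)    # prints non-coherent-protocol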
As another example, in some implementations, the peripheral device110causes the confidential compute architecture to be enabled or disabled (e.g., by the peripheral device110or in response to connection by the peripheral device110). Accordingly, in some implementations, where the confidential compute architecture is determined to be enabled, the peripheral device110restricts the use of the cache coherent protocol. In some implementations, the device100ofFIG.1is implemented as computer200. The computer200includes least one processor202. In addition to at least one processor202, the computer200ofFIG.2includes random access memory (RAM)204which is connected through a high-speed memory bus206and bus adapter208to processor202and to other components of the computer200. Stored in RAM204is an operating system210, virtual machines104a-nand a hypervisor106. The operating system210in the example ofFIG.2is shown in RAM204, but many components of such software typically are stored in non-volatile memory also, such as, for example, on data storage212, such as a disk drive. The computer200ofFIG.2includes disk drive adapter216coupled through expansion bus218and bus adapter208to processor202and other components of the computer200. Disk drive adapter216connects non-volatile data storage to the computer200in the form of data storage212. Such disk drive adapters include Integrated Drive Electronics (‘IDE’) adapters, Small Computer System Interface (‘SCSI’) adapters, and others as will occur to those of skill in the art. In some implementations, non-volatile computer memory is implemented as an optical disk drive, electrically erasable programmable read-only memory (so-called ‘EEPROM’ or ‘Flash’ memory), RAM drives, and so on, as will occur to those of skill in the art. The example computer200ofFIG.2includes one or more input/output (‘I/O’) adapters220. I/O adapters implement user-oriented input/output through, for example, software drivers and computer hardware for controlling output to display devices such as computer display screens, as well as user input from user input devices222such as keyboards and mice. The example computer200ofFIG.2includes a video adapter224, which is an example of an I/O adapter specially designed for graphic output to a display device226such as a display screen or computer monitor. Video adapter224is connected to processor202through a high-speed video bus228, bus adapter208, and the front side bus230, which is also a high speed bus. The exemplary computer200ofFIG.2includes a communications adapter232for data communications with other computers and for data communications with a data communications network. Such data communications are carried out serially through RS-232connections, through external buses such as a Universal Serial Bus (‘USB’), through data communications networks such as IP data communications networks, and/or in other ways as will occur to those of skill in the art. Communications adapters232implement the hardware level of data communications through which one computer sends data communications to another computer, directly or through a data communications network. Such communication adapters232include modems for wired dial-up communications, Ethernet (IEEE 802.3) adapters for wired data communications, and 802.11 adapters for wireless data communications. For further explanation,FIG.3sets forth a flow chart illustrating an example method for peripheral device protocols in confidential compute architectures. 
The method ofFIG.3is implemented, for example, in a device100. The method ofFIG.3includes receiving302a first address translation request (e.g., a request304) from a peripheral device110supporting a first protocol. The first protocol supports cache coherency between the peripheral device110and a processor102cache116. The first protocol supports cache coherency between the peripheral device110and the processor102cache116in that the device100has access to read and write from cache on the peripheral device110, and portions of both the peripheral device110cache and the processor102cache116each correspond to portions of the device100memory112. As an example, the first protocol includes a Compute Express Link (CXL) protocol such as CXL.cache. In some implementations, the request304is associated with a particular virtual machine104a-n. For example, the request304is generated by the peripheral device110in response to a command or operation from a virtual machine104a-nhaving access to the peripheral device110. In some implementations, the request304includes a request to translate a virtual address (e.g., a guest virtual address) to a physical address (e.g., a system physical address). For example, as the peripheral device110uses system physical addresses to target physically addressed portions of cache116to maintain cache coherency, the peripheral device110will request a translation of the virtual address to the system physical address to perform subsequent transactions. The method ofFIG.3also includes determining306that a confidential compute architecture is enabled. The confidential compute architecture is an operative technique that isolates portions of memory112used by particular virtual machines104a-nfrom other virtual machines104a-nand from the hypervisor. As an example, the confidential compute architecture includes using secure nested paging whereby particular portions (e.g., pages) of system physically addressed memory112are assigned to or owned by particular virtual machines104a-n. A data structure maintaining associations between virtual machines104a-nand pages of memory112is accessed (e.g., by an MMU114) during address translations to system physical addresses. For example, a guest virtual address provided for translation is translated to a guest physical address and then to a system physical address. As another example, a guest physical address provided for translation (e.g., translated from a guest virtual address by a virtualized MMU114in a virtual machine104a-n) is provided to the MMU114for translation to a system physical address. The system physical address is then used to query a data structure or table to identify the entity, if any, that owns or has access to the system physically addressed portion of memory112. In some implementations, determining306that the confidential compute architecture is enabled includes determining whether a particular configuration parameter or setting (e.g., in the MMU114, in the hypervisor106, in an operating system, and the like) is set to indicate that the confidential compute architecture is enabled. The method ofFIG.3also includes providing308, in response to the first address translation request (e.g., the request304), a response310including an indication to the peripheral device110to not use the first protocol. In some implementations, the response310indicates another protocol to use instead of the first protocol.
For example, the other protocol includes a protocol that does not support cache coherency between the peripheral device110and the processor102cache116such as PCIe or CXL.io. In some implementations, the response310does not indicate another protocol. Accordingly, the peripheral device110will determine another supported protocol to use instead of the first protocol. In some implementations, the request304is received via the second protocol. In some implementations, the response310is provided via the second protocol. In some implementations, the response310includes a guest physical address corresponding to a guest virtual address included in the request304. For example, where the request304includes a translation request for a guest virtual address, the response310includes a guest physical address corresponding to the guest virtual address. In other words, instead of providing a system physical address in response to the request304, a guest physical address is provided to the peripheral device110. Thus, the peripheral device110is able to use the guest physical address as a target for subsequent transactions or memory operations. Such guest physical addresses are then translated by the MMU114into system physical addresses for subsequent secure nested paging checks. The approaches described for peripheral device protocols in confidential compute architectures with respect toFIG.1are also described as methods in the flowcharts ofFIGS.3-5. Accordingly, for further explanation,FIG.4sets forth a flow chart illustrating an example method for peripheral device protocols in confidential compute architectures. The method ofFIG.4is similar toFIG.3in that the method ofFIG.4also includes receiving302a first address translation request (e.g., a request304) from a peripheral device110supporting a first protocol, determining306that a confidential compute architecture is enabled, and providing308, in response to the first address translation request (e.g., the request304), a response310including an indication to the peripheral device110to not use the first protocol. In some implementations, the device100enables the confidential compute architecture consistently by default. In some implementations, the device100transitions between implementing and not implementing the confidential compute architecture. For example, in some implementations, the confidential compute architecture is enabled by a user or other entity after being disabled. As another example, the peripheral device110causes the device100to enable or disable the confidential compute architecture (e.g., in response to connection to the device100, in response to a command or signal from the peripheral device110to the device, and the like). Accordingly, the method ofFIG.4is different fromFIG.3in that the method ofFIG.4also includes receiving402a second address translation request (e.g., a request404) from the peripheral device110and determining406that the confidential compute architecture is disabled. For example, assume that the second request404is received before the first request304. Further assume that, at the time the second request404was received, the confidential compute architecture had not been enabled. The method ofFIG.4also includes providing408a response410to the second address translation request (e.g., the request404) allowing first protocol (e.g., the cache coherent protocol). In response to the second request404, an address translation operation is performed. 
For example, in some implementations, a translation from a virtual address to a system physical address is performed. The response410then includes the system physical address. As the confidential compute architecture is disabled, the peripheral device110is able to use the system physical address to perform operations to maintain cache coherency between the processor102cache116and the peripheral device110. In some implementations, the response410will include a flag, bit, or other operand indicating that the first protocol (e.g., the cache coherent protocol) is able to be used for subsequent transactions targeting the system physical address included in the response410. AlthoughFIG.4depicts steps402-408being performed before step302, whereby a confidential compute architecture is enabled after being disabled, in some implementations the steps402-408are performed after step308, whereby a confidential compute architecture is disabled after being enabled. Moreover, althoughFIG.4describes a same peripheral device110as was described inFIG.3, in some implementations the peripheral devices110shown inFIG.4are different peripheral devices110. As an example, the confidential compute architecture of the device100is enabled or disabled depending on which peripheral device110is attached. One skilled in the art will appreciate that the method ofFIG.4illustrates that a peripheral device110is able to use a cache coherent protocol while the confidential compute architecture is disabled. On enablement of the confidential compute architecture, the peripheral device110is still usable via another protocol and does not need to be removed from the device100. One skilled in the art will also appreciate that if the confidential compute architecture is disabled after being enabled, in some implementations, the peripheral device110is able to instead use the first protocol to process subsequent requests. For further explanation,FIG.5sets forth a flow chart illustrating an example method for peripheral device protocols in confidential compute architectures. The method ofFIG.5is similar toFIG.3in that the method ofFIG.5also includes receiving302a first address translation request (e.g., a request304) from a peripheral device110supporting a first protocol, determining306that a confidential compute architecture is enabled, and providing308, in response to the first address translation request (e.g., the request304), a response including an indication to the peripheral device110to not use the first protocol. The method ofFIG.5is different fromFIG.3in that the method ofFIG.5also includes translating502an address associated with the first address translation request (e.g., the request304) to a guest physical address. In some implementations, the guest physical address is translated from a guest virtual address included in the request304. As an example, the guest virtual address is translated to the guest physical address by the MMU114or a virtualized MMU114in a virtual machine104a-n. The translated guest physical address is then included in the response310to the request304. As an example, in some implementations, the first address translation request is a request to translate a guest virtual address to a system physical address. Due to the confidential compute architecture being enabled, the peripheral device110does not have access to certain areas of memory. In order to control access to these areas, the device100requires that memory accesses being used by peripheral device110to be directed to guest physical addresses. 
These guest physical addresses are then used by the device100to determine if the peripheral device110has access under the confidential compute architecture. Accordingly, the response to the first address translation request includes a guest physical address instead of the requested system physical address. In view of the explanations set forth above, readers will recognize that the benefits of peripheral device protocols in confidential compute architectures include improved performance of a computing system by allowing the use of peripheral devices capable of cache coherent device protocols in a system implementing a confidential compute architecture. Exemplary implementations of the present disclosure are described largely in the context of a fully functional computer system for peripheral device protocols in confidential compute architectures. Readers of skill in the art will recognize, however, that the present disclosure also can be embodied in a computer program product disposed upon computer readable storage media for use with any suitable data processing system. Such computer readable storage media can be any storage medium for machine-readable information, including magnetic media, optical media, or other suitable media. Examples of such media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art. Persons skilled in the art will immediately recognize that any computer system having suitable programming means will be capable of executing the steps of the method of the disclosure as embodied in a computer program product. Persons skilled in the art will recognize also that, although some of the exemplary implementations described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative implementations implemented as firmware or as hardware are well within the scope of the present disclosure. The present disclosure can be a system, a method, and/or a computer program product. The computer program product can include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium can be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network can include copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present disclosure can be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions can execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer can be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection can be made to an external computer (for example, through the Internet using an Internet Service Provider). In some implementations, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) can execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure. Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to implementations of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. 
These computer readable program instructions can be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions can also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein includes an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions can also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various implementations of the present disclosure. In this regard, each block in the flowchart or block diagrams can represent a module, segment, or portion of instructions, which includes one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block can occur out of the order noted in the figures. For example, two blocks shown in succession can, in fact, be executed substantially concurrently, or the blocks can sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. It will be understood from the foregoing description that modifications and changes can be made in various implementations of the present disclosure. The descriptions in this specification are for purposes of illustration only and are not to be construed in a limiting sense. The scope of the present disclosure is limited only by the language of the following claims.
44,139
11860798
DETAILED DESCRIPTION An approach for a storage controller is described with reference toFIG.1which allows the storage controller to handle requests to read and write data on a persistent storage device. The persistent storage device is a system for storing data in a persistent manner and may comprise one or more drives in different configurations. The storage controller110is in communication with a persistent storage device120. The persistent storage device120is a system for storing data in a persistent manner. Although the persistent storage device is shown as a single component, in practice it may comprise multiple drives (such as hard disk drives or solid state drives) arranged into groups and may be distributed over a number of storage nodes. Data is stored across the drives and may have error correcting encoding applied to reduce the risk of data loss if a component of the persistent storage device120fails. Data is stored on the persistent storage devices120in blocks. A block is a unit of data of predetermined size, such as 4 KiB (4096 bytes). The storage controller110is configured to use logical block addressing (LBA) when receiving requests to interact with the persistent storage device120. LBA provides that each block of data stored on the persistent storage device120is identified by a unique integer LBA value. The storage controller110may translate an LBA value to determine which physical location on which persistent storage device the data is actually stored on. The storage controller110is further in communication with a cache130. The cache comprises non-volatile memory chips configured to operate as a non-volatile cache. This may involve the use of flash memory alone or in combination with conventional volatile memory. The non-volatile memory chips may be configured as non-volatile dual in-line memory modules (NVDIMM). While the storage controller is described as a single component, in practice the different functions of the storage controller may be split across different entities. For example, interactions with the cache130may occur through a cache controller independent of the storage controller110. NUMA NUMA (Non-Uniform Memory Access) is a multiprocessing computing system where at least memory access time depends on the memory location relative to each processor. In some implementations, memory access time depends on the memory location relative to particular cores of a processor. Further, access time to particular I/O controllers and the peripherals connected to the I/O controllers can depend on the location of those controllers relative to a particular processor or particular cores of a processor. A NUMA node may be a single CPU (central processing unit), a group of CPUs, a CPU core, or a group of CPU cores. FIG.2shows an example NUMA system. The system200comprises at least two CPUs210,220, memory associated with each CPU (local memory)211,221, an I/O Controller230and one or more peripherals connected to the I/O controller. The peripherals may include at least one GPU (graphics processing unit)250and persistent storage device260. The CPUs210,220are connected by an interconnect270. The interconnect270allows CPUs210,220to access the local memory associated with the other CPU. For example, CPU210uses the interconnect270to access memory221associated with CPU220. Various implementations of NUMA systems exist.FIG.3shows an alternative implementation.
The system300comprises at least two CPUs310,320, memory associated with each CPU311,321(local memory), two I/O controllers331and332and one or more peripherals connected to each I/O controller331,332. The peripherals may include at least one GPU (graphics processing unit)351,352and persistent storage devices361,362associated with each respective I/O controller331,332. The CPUs310,320are connected by an interconnect370. The interconnect370allows CPUs310,320to access the local memory of the other CPU. For example, CPU310uses the interconnect370to access memory321associated with CPU320. FIG.4shows another alternative implementation. The system400comprises at least two CPUs410,420, memory associated with each CPU411,421(local memory), I/O controller412and422associated with each CPU (local I/O controller) and one or more peripherals connected to each I/O controller412,422(local peripherals). For simplicity, the only peripherals shown inFIG.4are persistent storage devices416,426(local persistent storage devices). However, other peripherals may be used in the implementation shown inFIG.4. The CPUs410,420are connected by an interconnect470. The interconnect470allows CPUs410,420to access the local memory and local peripherals associated with the other CPU. For example, CPU410uses the interconnect470to access memory421associated with CPU420. CPU410also uses the interconnect470to access local persistent storage device426associated with CPU420. FIG.5shows yet another alternative implementation. The system500comprises at least one CPU505having a plurality of CPU cores510,520,530and540, memory associated with each CPU core511,521,531and541(local memory), I/O controllers512,522,532and542(local I/O controllers) and one or more peripherals connected to each I/O controller (local peripherals). For simplicity, the only peripherals shown inFIG.5are persistent storage devices516,526,536and546(local persistent storage devices). However, other peripherals may be used in the implementation shown inFIG.5. The CPU cores510,520,530and540are connected by an interconnect570. The interconnect570allows CPU cores510,520,530and540to access local memory and local peripherals associated with the other CPU cores510,520,530and540. For example, CPU core510uses the interconnect570to access local memory521associated with CPU core520. In a further example, CPU core520uses the interconnect570to access local persistent storage device546associated with CPU core540. WhileFIG.5has been illustrated with respect to CPU cores, it can equally be applicable to multiple CPUs. Further, each CPU core510,520,530, and540has been illustrated as having a local I/O controller512,522,532and542associated with the respective core510,520,530, and540; alternatively, a group of CPU cores may share an I/O controller. In another alternative, a group of CPU cores510,520,530, and540may share local memory. In a similar manner, a group of CPU cores may share access to a shared local persistent storage device. As can be seen in the illustrated implementations of NUMA systems, there can be a number of paths from the respective CPUs or CPU cores to persistent storage devices. Data Access The storage controller is configured to administer access to data (reading and writing data) on the persistent storage devices.FIG.6shows an example approach through which the storage controller can access data.
At step601the storage controller obtains a list of data paths to one or more persistent storage devices through a plurality of NUMA nodes having associated CPU or CPU cores, memory and I/O controllers. The storage controller obtains the list of data paths by requesting from the system the data paths to the persistent storage devices. At step602the storage controller associates access performance information with each of the data paths in the list. Access performance information may include the last update time of the access performance information, the latency of the data path, an average bandwidth indicator of the data path, the current load on the data path, and a reliability indicator of the data path. At step603the storage controller receives a request to access one of the persistent storage devices. The access request may be a request to read data or to write data or to both read and write data. The request comprises an indication of the data to be read or written and an LBA value to indicate the location on one persistent storage device of the data to be read or the location on one persistent storage device that the data should be stored. At step604the storage controller calculates the preferred data path to the persistent storage device to be accessed using the access performance information. The storage controller calculates the preferred path using the available access performance information including one or more of the latency of the data path, an average bandwidth indicator of the data path, the current load on the data path, and a reliability indicator of the data path. At step605the storage controller accesses the persistent storage device to be accessed using the preferred data path. At step606the storage controller updates the access performance information of the used data path. Access performance information that is updated may include the last update time of the access performance information, the latency of the data path, an average bandwidth indicator of the data path, the current load on the data path, and a reliability indicator of the data path. In some embodiments the access performance information is not updated for the used data path every time an access path is used. The access performance information may only be updated periodically. In the event a data path has an error the storage controller updates the access performance information to indicate the error and returns610to step604. The storage controller then recalculates the preferred data path to the persistent storage device to be accessed using the updated access performance information, including the error information. Access Path Performance Information The storage controller may be configured to periodically test access performance and update access performance information. The frequency that the storage controller periodically tests access performance and update access performance information may be a set period and may depend on the number of access requests or depend on the system load. Further any other method for selecting the period that the storage controller periodically tests access performance and updates access performance information may be selected, including a combination of the above methods for setting the period. On system start up the storage controller may test all data paths and set or update access performance information for each data path. 
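Returning to the data-access flow ofFIG.6, the following minimal sketch shows one way the access performance information could be stored, used to calculate a preferred data path, and used to flag stale paths for retesting. The field names, scoring formula, and age check are assumptions made for illustration only; the description above does not prescribe a particular weighting of latency, bandwidth, load, and reliability.

    # Illustrative sketch only; the scoring formula is an invented example.
    import time

    class DataPath:
        def __init__(self, name):
            self.name = name
            self.latency_us = 10.0         # lower is better
            self.bandwidth_mb_s = 1000.0   # higher is better
            self.load = 0.0                # current load on the path
            self.reliability = 1.0         # 0.0 marks a path with an error
            self.last_update = time.time() # last update time of this information

        def score(self):
            if self.reliability == 0.0:
                return float("-inf")       # paths with errors are avoided
            throughput = self.bandwidth_mb_s / (1.0 + self.latency_us)
            return throughput * self.reliability / (1.0 + self.load)

    def preferred_path(paths):
        # One possible calculation of the preferred data path.
        return max(paths, key=lambda path: path.score())

    def paths_needing_test(paths, max_age_seconds):
        # Paths whose performance information has aged past a threshold.
        now = time.time()
        return [path for path in paths if now - path.last_update > max_age_seconds]

    paths = [DataPath("cpu0-local"), DataPath("cpu0-via-interconnect")]
    paths[1].latency_us = 25.0                 # the remote path is slower here
    print(preferred_path(paths).name)          # prints cpu0-local
    print(paths_needing_test(paths, 60.0))     # prints [] right after creation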
The access performance information may be stored on system shutdown and the storage controller may obtain the stored access performance information on system startup. FIG.7shows an example approach by which the storage controller can test access. At step701the storage controller identifies data paths for testing. At step702the storage controller tests the identified data paths. At step703the storage controller updates the access performance information of the tested data path. Data paths may be identified for testing by calculating for each access path the age of the access performance information using the last update time. The last update time will typically be stored as the system access time but may be stored in any other suitable format. The storage controller, using the age of the access performance information, can identify paths for testing as those data paths having access performance information exceeding an age threshold. The age threshold may be a set age or alternatively the age threshold may be a calculated age such that the system identifies a percentage of all data paths as aged. The percentage may be any percentage but will typically be less than 50% of the data paths. Alternatively, the storage controller may identify data paths for testing by selecting a percentage of all data paths for testing. Further, the storage controller may identify data paths for testing by selecting all the data paths that have an error for testing. Interpretation A number of methods have been described above. Any of these methods may be embodied in a series of instructions, which may form a computer program. These instructions, or this computer program, may be stored on a computer readable medium, which may be non-transitory. When executed, these instructions or this program cause a processor to perform the described methods. Where an approach has been described as being implemented by a processor, this may comprise a plurality of processors. That is, at least in the case of processors, the singular should be interpreted as including the plural. Where methods comprise multiple steps, different steps or different parts of a step may be performed by different processors. The steps of the methods have been described in a particular order for ease of understanding. However, the steps can be performed in a different order from that specified, or with steps being performed in parallel. This is the case in all methods except where one step is dependent on another having been performed. The term “comprises” and other grammatical forms are intended to have an inclusive meaning unless otherwise noted. That is, they should be taken to mean an inclusion of the listed components, and possibly of other non-specified components or elements. While the present invention has been explained by the description of certain embodiments, the invention is not restricted to these embodiments. It is possible to modify these embodiments without departing from the spirit or scope of the invention.
13,233
11860799
DETAILED DESCRIPTION Overview Processors and memory work in tandem to provide features to users of computers and other electronic devices. Generally, an electronic device can provide enhanced features, such as high-resolution graphics or artificial intelligence, as a processor and memory operate more quickly together in a complementary manner. Some applications, like those for AI analysis and virtual-reality graphics, can also demand increasing amounts of memory. Such applications use increasing amounts of memory to more accurately model and mimic human thinking and the physical world. Processors and memories can be secured to a printed-circuit board (PCB), such as a rigid or flexible motherboard. The PCB can include sockets for accepting at least one processor and one or more memories. Wiring infrastructure that enables communication between two or more components can also be disposed on at least one layer of the PCB. This PCB, however, provides a finite area for the sockets and the wiring infrastructure. Some PCBs include multiple sockets that are each shaped as a linear slot and designed to accept a double-inline memory module (DIMM). These sockets can be fully occupied by DIMMs while a processor is still able to utilize more memory. In such situations, the system is capable of performing better if additional memory were available to the processor. Printed circuit boards may also include at least one peripheral component interconnect (PCI) express (PCI Express®) (PCIe or PCI-E) slot. A PCIe slot is designed to provide a common interface for various types of components that may be coupled to a PCB. Compared to some older standards, PCIe can provider higher rates of data transfer or a smaller footprint on the PCB, including both greater speed and smaller size. Accordingly, certain PCBs enable a processor to access a memory device that is connected to the PCB via a PCIe slot. In some cases, accessing a memory solely using a PCIe protocol may not offer as much functionality, flexibility, or reliability as is desired. In such cases, another protocol may be layered on top of the PCIe protocol. An example of another, higher-level protocol is the Compute Express Link™ (CXL) protocol. The CXL protocol can be implemented over a physical layer that is governed by the PCIe protocol. The CXL protocol can provide a memory-coherent interface that offers high-bandwidth or low-latency data transfers, including data transfers having both higher bandwidth and lower latency. Various electronic devices, such as a mobile phone having a processor that is part of a system-on-chip (SoC) or a cloud-computing server having dozens of discrete processing units, may employ memory that is coupled to a processor via a CXL-based interconnect. For clarity, consider an apparatus with a host device that is coupled to a memory device via a CXL-based interconnect. The host device can include a processor and a controller (e.g., a host-side controller) that is coupled to the interconnect. The memory device can include another controller (e.g., a memory-side controller) that is coupled to the interconnect and one or more memory arrays to store information in SRAM, DRAM, flash memory, and so forth. During operation, the host-side controller issues memory requests to the memory-side controller over the interconnect. The memory request may be or may include a read request or a write request. 
The memory-side controller receives the memory request via the interconnect and directly or indirectly uses the memory arrays to fulfill the memory request with a memory response. Thus, the memory-side controller sends the memory response to the host-side controller over the interconnect. To fulfill a read request, the memory-side controller returns the requested data with the memory response. As part of fulfilling a write request, the memory-side controller can provide notice that the write operation was successfully completed by transmitting an acknowledgement as the memory response (e.g., with a message such as a subordinate-to-master no-data response completion (S2M NDR Cmp) message). To increase bandwidth and reduce latency, the memory-side controller can include at least one request queue that may accumulate multiple memory requests (e.g., multiple read requests or multiple write requests) received from the host-side controller. In other words, the host-side controller can send a “subsequent” memory request before receiving a memory response corresponding to a “previous” memory request. This can ensure that the memory device is not waiting idly for another memory request that the host-side controller has already prepared. This technique can also better utilize the interconnect by transmitting the subsequent memory request before the memory response for the previous memory request is ready. The request queue at the memory-side controller may, however, have space for a finite quantity of entries. If the host-side controller overflows the request queue at the memory-side controller, memory accessing can be slowed, and the overflow may even cause data faults. In other words, without a mechanism to control the flow of memory access requests from the host-side controller to the memory-side controller, memory bandwidth or latency can be degraded. Further, an overwhelmed request queue may even cause errors to occur. One approach to modulating (e.g., moderating) the flow of memory requests involves using credits. The host-side controller can be granted a particular quantity of credits. A maximum credit quantity may be based, for instance, on a size of the request queue of the memory-side controller. If the host-side controller currently has, or “possesses,” at least one credit, then the host-side controller can issue a memory request to the memory-side controller over the interconnect. On the other hand, if the host-side controller has depleted the granted supply of credits, the host-side controller waits until at least one credit has been replenished before issuing another memory request. The memory-side controller can be responsible for replenishing credits. The memory-side controller can indicate to the host-side controller that one or more credits have been replenished, or “returned,” using a communication across the interconnect. For example, a memory response that includes read data or a write acknowledgement can also include a credit return indication. In some cases, the memory-side controller returns a credit responsive to a memory request being removed from the request queue at the memory-side controller. This approach to a credit system can prevent the request queue at the memory-side controller from overflowing and causing an error condition. This approach may not, however, prevent memory bandwidth from being reduced or latency from increasing due to an oversupply of the total memory requests present at the memory device. 
This is because a memory device includes “downstream” or “backend” memory components as well as the memory-side controller. In addition to the memory-side controller, the memory device includes one or more memory arrays and may include other components to facilitate memory request processing. For example, the memory device may include at least one “internal” interconnect and one or more memory controllers, which are coupled to the memory arrays to control access thereto. Any of these components may include at least one respective queue, such as an additional memory request queue. For instance, each memory controller of two memory controllers may include a respective memory request queue of two memory request queues. Responsive to the memory-side controller removing a memory request from its request queue, the memory-side controller forwards the memory request to a downstream or backend component, such as one of the memory controllers. The receiving memory controller may be accumulating memory requests in its respective request queue. This accumulation may occur, for instance, due to a relatively slower memory array that is unable to process requests at the rate at which the requests are being received from the memory-side controller. Thus, the memory-side controller may return a credit to the host-side controller even though un-serviced memory requests are “piling up” within the memory device. Thus, request queues throughout the memory device may become saturated with memory requests. Allowing the request queues of the memory controllers, or of other backend components, to become saturated can lower the bandwidth throughput of the memory system. The saturation can also increase a length of the latency period between when the memory device accepts a memory request from the host device and when the memory device provides the corresponding memory response. Consequently, returning a credit to the host-side controller each time a memory request is removed from the request queue at the memory-side controller may adversely impact memory system performance. Further, the memory device may include one or more memory response queues. A memory response queue can be present at the memory-side controller or any of the backend components of the memory device, like a memory controller or a memory array. Oversaturating the response queues can also decrease bandwidth and increase latency. A response queue can become “backed up” if, for instance, an internal interconnect or an external interconnect is too busy or is oversubscribed. For example, the “external” interconnect extending between the host device and the memory device may be oversubscribed by the host device or by other devices (e.g., another PCIe device) that are coupled to the external interconnect. Additionally or alternatively, a relatively fast memory array may be providing memory responses faster than the memory device can empty them from one or more response queues thereof. In such cases, an unbridled credit-return system can cause at least one response queue of the memory device to become filled. A full response queue can further slow the memory device sufficiently to adversely impact bandwidth and latency. Decreased processing bandwidth and increased latency for a memory device may be categorized as poor performance. Slow or otherwise poor memory performance can cause system-wide problems and create user dissatisfaction. 
Especially if the poor performance conflicts with advertised performance capabilities or a published technical specification, such as a quality-of-service (QoS) indication, the user may blame the manufacturer of the memory device. This can happen even if the host device or another device that is coupled to the interconnect is contributing to the bandwidth and latency issues by overusing the shared external interconnect. Further, this misplaced blame can occur if the host device is sending too many memory requests to the memory device due to an inadequate credit-based communications scheme. To address this situation, and at least partly ameliorate it, this document describes example approaches to managing the flow of memory requests using a credit-based system. In some implementations, a memory-side controller of a memory device can monitor a quantity of memory requests that are present in the memory device, including in a memory array, a memory controller, or another backend memory component. The memory-side controller can modulate one or more credits being returned to a host-side controller based on the quantity of memory requests that are present in the memory device. Thus, credit returns may be conditioned on how many memory requests are outstanding in the memory device and not only on those memory requests that are within a memory request queue of the memory-side controller. In other implementations, a memory-side controller of a memory device can include a request queue to store received memory requests and to output multiple memory requests to backend memory components. The memory-side controller can also include a response queue to receive from the backend memory components multiple memory responses corresponding to the multiple memory requests. The memory-side controller can additionally include credit logic having a counter to store a value. The credit logic can adjust the value of the counter to track memory requests. In some cases, the credit logic increments the value responsive to receiving a request for a memory operation (e.g., from a host device) and decrements the value responsive to receipt of a memory response of the multiple memory responses from the backend memory components. The credit logic can manage credit returns to a host-side controller based on the value stored in the counter and at least one threshold. To do so, the credit logic may block credit returns responsive to the value stored in the counter being greater than the at least one threshold. The credit logic may further permit credit returns responsive to the value stored in the counter being less than the at least one threshold. Multiple thresholds, such as a first threshold and a second threshold, may be used for finer control and/or to address a potential hysteresis effect. In these manners, the value of the counter can indicate a quantity of memory requests that are outstanding at the memory device. By employing one or more of these implementations, a memory device can obtain greater control over the flow of memory requests received from a host device. The memory device can modulate a quantity of outstanding memory requests and/or a rate at which the memory requests are received from the host device over time. By throttling the arrival of the memory requests, a memory device can avoid becoming so saturated with memory requests that bandwidth or latency is adversely impacted. 
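As a minimal illustration of the credit modulation described above, the following sketch pairs a host-side credit count with memory-side credit logic that tracks outstanding requests against two thresholds. The class names, threshold values, and request counts are assumptions made for the example rather than details of any particular implementation.

    # Illustrative sketch only; thresholds and counts are invented.
    class CreditLogic:
        """Memory-side logic that gates credit returns on outstanding requests."""
        def __init__(self, high_threshold=8, low_threshold=4):
            self.outstanding = 0               # counter value tracking memory requests
            self.high = high_threshold
            self.low = low_threshold
            self.returns_blocked = False

        def on_request_received(self):
            self.outstanding += 1              # increment on a request for a memory operation
            if self.outstanding > self.high:
                self.returns_blocked = True    # block credit returns above the threshold

        def on_backend_response(self):
            self.outstanding -= 1              # decrement on a response from backend components
            if self.outstanding < self.low:
                self.returns_blocked = False   # permit credit returns below the threshold

        def credits_to_return(self):
            return 0 if self.returns_blocked else 1

    class HostCredits:
        """Host-side credit supply consumed as requests are issued."""
        def __init__(self, granted):
            self.available = granted

        def issue_request(self):
            if self.available == 0:
                return False                   # wait for replenishment before issuing
            self.available -= 1
            return True

    host = HostCredits(granted=10)
    memory_side = CreditLogic()
    for _ in range(10):                        # the host issues a burst of requests
        if host.issue_request():
            memory_side.on_request_received()
    print(memory_side.returns_blocked)         # prints True: too many requests outstanding
    for _ in range(7):                         # backend components drain requests over time
        memory_side.on_backend_response()
        host.available += memory_side.credits_to_return()
    print(memory_side.returns_blocked)         # prints False once below the lower threshold

Using two thresholds in this way lets the logic stop returning credits while the device is saturated and resume only after the backlog has drained, which is one way to address the hysteresis effect mentioned above.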
Thus, using the techniques described herein, manufacturers can produce memory devices that are better able to provide some specified quality of service in terms of bandwidth or latency. Although some implementations are described above in terms of a memory request and a memory device performing certain techniques, other device types may alternatively perform the techniques with requests generally. Examples of non-memory implementations are described further herein.
Example Operating Environments
FIG.1illustrates, at100generally, example apparatuses102that can implement memory request modulation. The apparatus102can be realized as, for example, at least one electronic device. Example electronic-device implementations include an internet-of-things (IoTs) device102-1, a tablet device102-2, a smartphone102-3, a notebook computer102-4(or a desktop computer), a passenger vehicle102-5(or other vehicle), a server computer102-6, a server cluster102-7that may be part of cloud computing infrastructure or a data center, and any portion thereof (e.g., a printed circuit board (PCB) or module component of a device). Other examples of the apparatus102include a wearable device, such as a smartwatch or intelligent glasses; an entertainment device, such as a set-top box or streaming dongle, a smart television, a gaming device, or virtual reality (VR) goggles; a motherboard or blade of a server; a consumer appliance; a vehicle or drone, or the electronic components thereof; industrial equipment; a security or other sensor device; and so forth. Each type of electronic device or other apparatus can include one or more components to provide some computing functionality or feature that is enabled or enhanced by the hardware or techniques that are described herein. In example implementations, the apparatus102can include at least one host device104, at least one interconnect106, and at least one memory device108. The host device104can include at least one processor114, at least one cache memory116, and at least one controller118. The memory device108may include at least one controller110and at least one memory112. The memory112may be realized with one or more memory types. The memory112may be realized, for example, with a dynamic random-access memory (DRAM) die or module, including with a three-dimensional (3D) stacked DRAM device, such as a high bandwidth memory (HBM) device or a hybrid memory cube (HMC) device. DRAM may include, for instance, synchronous DRAM (SDRAM) or double data rate (DDR) DRAM (DDR DRAM). The memory112may also be realized using static random-access memory (SRAM). Thus, the memory device108may operate as a main memory or a cache memory, including as both. Additionally or alternatively, the memory device108may operate as storage memory. In such cases, the memory112may be realized, for example, with a storage-class memory type, such as one employing 3D XPoint™ or phase-change memory (PCM), flash memory, a magnetic hard disk, or a solid-state drive (e.g., a Non-Volatile Memory Express® (NVMe®) device). Regarding the host device104, the processor114can be coupled to the cache memory116, and the cache memory116can be coupled to the controller118. The processor114can also be coupled to the controller118directly or indirectly (e.g., via the cache memory116as depicted). The host device104may include other components to form, for instance, a system-on-a-chip or a system-on-chip (SoC). 
The processor114may include or comprise a general-purpose processor, a central processing unit (CPU), a graphics processing unit (GPU), a neural network engine or accelerator, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) integrated circuit (IC), a communication processor (e.g., a modem or baseband processor), an SoC, and so forth. In operation, the controller118can provide a high-level or logical interface between the processor114and at least one memory device, such as a memory that is external to the host device104. The controller118can, for example, receive memory requests from the processor114and provide the memory requests to an external memory with appropriate formatting, packaging, timing, reordering, and so forth. The controller118can forward to the processor114responses to the memory requests that the controller118receives from the external memory. The controller118may communicate with multiple memory devices, or other types of devices—some of which may include one or more memory components, over one or more interconnects, such as the interconnect106. Regarding connections that are external to the host device104, the host device104can be coupled to the memory device108via the interconnect106. The memory device108may be coupled to, or may include, a main memory or a storage memory, including both in some cases. Another device, such as a cache memory or a switch, may be coupled between the host device104and the memory device108and may be part of or separate from the interconnect106. The depicted interconnect106, as well as other interconnects (not shown) that communicatively couple together various components, enables data to be transferred between two or more components of the various components. Interconnect examples include a bus, a switching fabric, a crossbar, one or more wires that carry voltage or current signals, and so forth. Each interconnect may be implemented as a unidirectional interconnect or a bidirectional interconnect. The interconnect106can be implemented as a parallel propagation pathway. For example, the interconnect106can include at least one command and address bus and at least one data bus, each of which carries multiple bits of a particular item of information (e.g., a data byte) simultaneously each clock period. Alternatively, the interconnect106can be implemented as a serial propagation pathway that carries one bit of a particular item of information each clock cycle. For instance, the interconnect106can comport with a PCIe standard, such as version 4, 5, 6, or a future version. The interconnect106may include multiple serial propagation pathways, such as multiple lanes in a PCIe implementation. The components of the apparatus102that are depicted inFIG.1represent an example computing architecture that may include a hierarchical memory system. A hierarchical memory system can include memories at different levels, with each level having a memory with a different speed, capacity, or volatile/nonvolatile characteristic. Thus, the memory device108may be described in terms of forming at least part of a main memory of the apparatus102. The memory device108may, however, form at least part of a cache memory, a storage memory, an SoC, and so forth of an apparatus102. Although various implementations of the apparatus102are depicted inFIG.1and described herein, an apparatus102can be implemented in alternative manners. 
For example, the host device104may include multiple cache memories, including multiple levels of cache memory, or may omit a cache memory. A memory, such as the memory device108, may have a respective “internal” or “local” cache memory (not shown). In some cases, the host device104may omit the processor114and/or include other logic. Generally, the illustrated and described components may be implemented in alternative ways, including in distributed or shared memory systems. A given apparatus102may also include more, fewer, or different components than those depicted inFIG.1or described herein. The host device104and any of the various memories may be realized in multiple manners. In some cases, the host device104and the memory device108may be located on separate blades or racks in a server computing environment. In other cases, the host device104and the memory device108can both be disposed on, or physically supported by, a same printed circuit board (PCB) (e.g., a rigid or flexible motherboard or PCB assembly). The host device104and the memory device108may also be integrated on a same IC or fabricated on separate ICs but packaged together. A memory device108may also be coupled to multiple host devices104via one or more interconnects106and may be able to respond to memory requests from two or more of the multiple host devices104. Each host device104may include a respective controller118, or the multiple host devices104may share a common controller118. An example computing system architecture with at least one host device104that is coupled to a memory device108is described below with reference toFIG.2. With continuing reference toFIG.1, however, the interconnect106may propagate one or more communications. The host device104and the memory device108may exchange at least one memory request/memory response120. For example, the controller118may transmit a memory request to the controller110over the interconnect106. Thus, the controller110may transmit a corresponding memory response to the controller118over the interconnect106. In some cases, the interconnect106is operated in accordance with a credit-based protocol. Accordingly, credit-related information122may be exchanged between the host device104and the memory device108. For instance, the controller110may transmit a credit return to the controller118to enable the controller118to transmit another memory request. Thus, the host device104and the memory device108can communicate using a credit-based protocol. The controller110of the memory device108can include credit logic124, and the controller118of the host device104can include credit logic126. In example implementations, the credit logic124and/or the credit logic126can facilitate communication over the interconnect106using at least one protocol that operates based on credits. A credit-based protocol can use tokens or another quantity-based permissions scheme to authorize an initiator and/or a target to communicate with the target and/or the initiator, respectively. For example, the controller118may transmit a communication (e.g., a memory request) over the interconnect106to the controller110responsive to possessing at least one credit that “authorizes” the transmission. This transmission, however, “consumes” the at least one credit. Examples of credit-based protocols are described below with reference toFIGS.4and5. In example implementations, the credit logic124of the controller110can moderate the flow of communications from the controller118. 
To do so, the credit logic124can modulate the frequency or rate at which the credit logic124returns credits to the credit logic126of the controller118. Withholding credit returns can slow, or even stop, the transmission of memory requests from the host device104to the memory device108if the memory device108becomes oversaturated with in-progress memory requests. Example techniques for modulating such credit returns are described herein to increase the bandwidth for memory-request processing or to decrease memory response latency, including to achieve both. Example implementations are described further with reference toFIGS.6-9. In some implementations, the apparatus102operates with one or more protocols over the interconnect106. The apparatus102can operate, for example, a Compute Express Link™ (CXL) protocol across the interconnect106. In at least some of these cases, the apparatus102can overlay the CXL protocol on top of a PCIe protocol for the physical layer. Thus, the controller118can comport with a CXL standard or a PCIe standard, including comporting with both. Similarly, the controller110can comport with a CXL standard or a PCIe standard, including with both. Examples of credit-based aspects of at least one version of a CXL standard are described below with reference toFIGS.4and5. Other circuitry, techniques, and mechanisms are also described below. Next, however, example computing architectures with one or more processors and a memory device are described. FIG.2illustrates an example computing system200that can implement aspects of memory request modulation with a memory device. In some implementations, the computing system200includes at least one memory device108, at least one interconnect106, and at least one processor202. The memory device108can include, or be associated with, at least one memory array206, at least one interface204, and at least one controller110. The at least one controller110can be communicatively coupled to the memory array206via at least one interconnect208(e.g., an "internal" interconnect). The memory array206and the controller110may be components that are integrated on a single semiconductor die or that are located on separate semiconductor dies (e.g., but still coupled to or disposed on a same PCB). Each of the memory array206or the controller110may also be distributed across multiple dies. The memory device108can correspond, for example, to one or more of a cache memory, main memory, or storage memory of the apparatus102ofFIG.1. Thus, the memory array206can include an array of memory cells. These memory cells can include, but are not limited to, memory cells of Static Random-Access Memory (SRAM), Dynamic Random-Access Memory (DRAM), Synchronous DRAM (SDRAM), three-dimensional (3D) stacked DRAM, Double Data Rate (DDR) memory, low-power Dynamic Random-Access Memory (DRAM), Low-Power Double Data Rate (LPDDR) Synchronous Dynamic Random-Access Memory (SDRAM), phase-change memory (PCM), or flash memory. The controller110can include any one or more of a number of components that can be used by the memory device108to perform various operations. These operations can include communicating with other devices, managing performance, modulating memory access rates, and performing memory read or write operations. For example, the controller110can include at least one register212, at least one instance of request logic214, at least one instance of response logic216, and at least one instance of credit logic124. 
The register212may be implemented, for example, as one or more registers that can store information to be used by the controller110, by another part of the memory device108, or by a part of a host device104, such as a controller118as depicted inFIG.1. A register212may store, for instance, a maximum credit level, a parameter controlling a communication-flow modulation process using credits (e.g., a threshold, like threshold618ofFIG.6, or a flag controlling a state of a gate or switch, like gate612ofFIG.6or switch716ofFIG.7), and so forth. The request logic214can process one or more memory requests, such as by formulating a request, directing a request to a next or final destination, or performing a memory access operation (e.g., a read or a write operation). The response logic216can prepare at least one memory response, such as by obtaining requested data or generating a write acknowledgement. The credit logic124can modulate the flow of memory requests across the interconnect106using credits, which are described further below, including with reference toFIG.4. Although depicted separately, the components of the controller110may be nested with respect to each other, may be at least partially overlapping with another component, and so forth. The interface204can couple the controller110or the memory array206directly or indirectly to the interconnect106. As shown inFIG.2, the register212, the request logic214, the response logic216, and the credit logic124can be part of a single component (e.g., the controller110). In other implementations, one or more of the register212, the request logic214, the response logic216, or the credit logic124may be implemented as separate components, which can be provided on a single semiconductor die or disposed across multiple semiconductor dies. These components of the controller110may be individually or jointly coupled to the interconnect106via the interface204. The interconnect106may be implemented with any one or more of a variety of interconnects that communicatively couple together various components and enable commands, addresses, messages, packets, and/or other information and data to be transferred between two or more of the various components (e.g., between the memory device108and any of the one or more processors202). The information and data may be propagated over the interconnect106“directly” or using some form of encapsulation or packaging, such as with packets, frames, or flits. Although the interconnect106is represented with a single line or arrow inFIG.2, the interconnect106may include at least one bus, at least one switching fabric, at least one crossbar, one or more wires or traces that carry voltage or current signals, at least one switch, one or more buffers, at least one lane, and so forth. In some aspects, the memory device108may be realized as a “separate” physical component relative to the host device104(ofFIG.1) or any of the processors202. Examples of physical components that may be separate include, but are not limited to, a printed circuit board (PCB), which can be rigid or flexible; a memory card; a memory stick; and a memory module, including a single in-line memory module (SIMM), a dual in-line memory module (DIMM), or a non-volatile memory express (NVMe) module. Thus, separate physical components may be located together within a same housing of an electronic device or a memory product, or such physical components may be distributed over a server rack, a data center, and so forth. 
Alternatively, the memory device108may be packaged or integrated with other physical components, including a host device104or a processor202, such as by being disposed on a common PCB, combined together in a single device package, or integrated into an SoC of an apparatus. As shown inFIG.2, the one or more processors202may include a computer processor202-1, a baseband processor202-2, and an application processor202-3, which are coupled to the memory device108through the interconnect106. The processors202may each be, or may form a part of, a CPU, a GPU, an SoC, an ASIC, an FPGA, or the like. In some cases, a single “processor” can comprise multiple processing cores or resources, each dedicated to different functions, such as modem management, applications, graphics, central processing, neural network acceleration, or the like. In some implementations, the baseband processor202-2may include or be coupled to a modem (not shown inFIG.2) and may be referred to as a modem processor. The modem and/or the baseband processor202-2may be coupled wirelessly to a network via, for example, cellular, Wi-Fi®, Bluetooth®, ultra-wideband (UWB), near field, or another technology or protocol for wireless communication. In various implementations, the processors202may be connected to different memories in different manners. For example, the processors202may be connected directly to the memory device108(e.g., via the interconnect106as shown). Alternatively, one or more of the processors202may be indirectly connected to the memory device108, such as over a network connection, through one or more other devices or components, and/or using at least one other interconnect. Each processor202may be realized similarly to the processor114ofFIG.1. Accordingly, a respective processor202can include or be associated with a respective controller, like the controller118depicted inFIG.1. Alternatively, two or more processors202may access the memory device108using a shared or system controller118. In any of such cases, the controller118may include credit logic126(e.g., ofFIG.1). Each processor202may also be separately connected to a respective memory. As shown, the computer processor202-1may be coupled to at least one DIMM210that is inserted into a DIMM slot of a motherboard. The DIMM210can be coupled to a memory controller (not shown), which may be part of the computer processor202-1. The apparatuses and methods that are described herein may be appropriate for memory that is designed for use with a PCIe bus. Thus, the described principles may be incorporated into a memory device with a PCIe interface. Further, the memory device can communicate over the interconnect106by overlaying a CXL protocol on the physical PCIe interface. An example of a memory standard that relates to CXL is promulgated by the Compute Express Link consortium and may include versions 1.0, 1.1, 2.0, and future versions. Thus, the host device104or the memory device108, including both in some cases, may comport with at least one CXL standard. Accordingly, some terminology in this document may draw from one or more of these standards or versions thereof for clarity. The described principles, however, are also applicable to memories that comport with other standards, including earlier versions or future versions of such standards, and to memories that do not adhere to a public standard. Examples of systems that may include a PCIe interface and a CXL protocol overlay are described next with reference toFIG.3. 
FIG.3illustrates examples of a system300that can include a host device104and a memory device108that are coupled together via an interconnect106. The system300may form at least part of an apparatus102as shown inFIG.1. As illustrated inFIG.3, the host device104includes a processor114and a controller118, which can be realized with at least one initiator302. Thus, the initiator302can be coupled to the processor114or to the interconnect106(including to both), and the initiator302can be coupled between the processor114and the interconnect106. Examples of initiators302may include a leader, a primary, a master, a requester or requesting component, a main component, and so forth. In the illustrated example system300, the memory device108includes a controller110, which can be realized with at least one target304. The target304can be coupled to the interconnect106. Thus, the target304and the initiator302can be coupled to each other via the interconnect106. Examples of targets304may include a follower, a secondary, a slave, a subordinate, a responder or responding component, a subsidiary component, and so forth. The memory device108also includes a memory112. The memory112can be realized with at least one memory module or chip or with a memory array206(ofFIG.2) or another component, such as a DRAM310, as is described below. In example implementations, the initiator302includes at least one link controller312, and the target304includes at least one link controller314. The link controller312or the link controller314can instigate, coordinate, cause, or otherwise participate in or control signaling across a physical or logical link realized by the interconnect106in accordance with one or more protocols. The link controller312may be coupled to the interconnect106. The link controller314may also be coupled to the interconnect106. Thus, the link controller312can be coupled to the link controller314via the interconnect106. Each link controller312or314may, for instance, control communications over the interconnect106at a link layer or at one or more other layers of a given protocol. Communication signaling may include, for example, a request316, a response318, and so forth. The memory device108may further include at least one interconnect306and at least one memory controller308(MC308). Within the memory device108, and relative to the target304, the interconnect306, the memory controller308, and/or the DRAM310(or other memory component) may be referred to as a “backend” or “downstream” component of the memory device108. In some cases, the interconnect306is internal to the memory device108and may operate the same as or differently from the interconnect106. Thus, the memory device108can include at least one memory component. As shown, the memory device108may include multiple memory controllers308-1and308-2and/or multiple DRAMs310-1and310-2. Although two of each are shown, the memory device108may include one or more than two memory controllers and/or one or more than two DRAMs. For example, a memory device108may include 4 memory controllers and 16 DRAMs, such as 4 DRAMs per memory controller. The memory112or memory components of the memory device108are depicted as DRAM as an example only, for one or more of the memory components may be implemented as another type of memory. For instance, the memory components may include nonvolatile memory like flash or PCM. Alternatively, the memory components may include other types of volatile memory like SRAM. 
Thus, the memory device108may include a dynamic random-access memory (DRAM) array, a static random-access memory (SRAM) array, or a nonvolatile memory array. A memory device108may also include any combination of memory types. In some cases, the memory device108may include the target304, the interconnect306, the at least one memory controller308, and the at least one DRAM310within a single housing or other enclosure. The enclosure, however, may be omitted or may be merged with one for the host device104, the system300, or an apparatus102(ofFIG.1). The interconnect306can be disposed on a PCB. Each of the target304, the memory controller308, and the DRAM310may be fabricated on at least one IC and packaged together or separately. The packaged IC(s) may be secured to or otherwise supported by the PCB (or PCB assembly) and may be directly or indirectly coupled to the interconnect306. The components of the memory device108may, however, be fabricated, packaged, combined, and/or housed in other manners. As illustrated inFIG.3, the target304, including the link controller314thereof, can be coupled to the interconnect306. Each memory controller308of the multiple memory controllers308-1and308-2can also be coupled to the interconnect306. Accordingly, the target304and each memory controller308of the multiple memory controllers308-1and308-2can communicate with each other via the interconnect306. Each memory controller308is coupled to at least one DRAM310. As shown, each respective memory controller308of the multiple memory controllers308-1and308-2is coupled to at least one respective DRAM310of the multiple DRAMs310-1and310-2. Each memory controller308of the multiple memory controllers308-1and308-2may, however, be coupled to a respective set of multiple DRAMs or other memory components. Each memory controller308can access at least one DRAM310by implementing one or more memory access protocols to facilitate reading or writing data based on at least one memory address. The memory controller308can increase bandwidth or reduce latency for the memory accessing based on a type of the memory or an organization of the memory components, such as the multiple DRAMs. The multiple memory controllers308-1and308-2and the multiple DRAMs310-1and310-2can be organized in many different manners. For example, each memory controller308can realize one or more memory channels for accessing the DRAMs. Further, the DRAMs can be manufactured to include one or more ranks, such as a single-rank or a dual-rank memory module. Each DRAM310(e.g., at least one DRAM IC chip) may also include multiple banks, such as 8 or 16 banks. This document now describes examples of the host device104accessing the memory device108. The examples are described in terms of a general memory access (e.g., a memory request) which may include a memory read access (e.g., a memory read request for a data retrieval operation) or a memory write access (e.g., a memory write request for a data storage operation). The processor114can provide a memory access request352to the initiator302. The memory access request352may be propagated over a bus or other interconnect that is internal to the host device104. This memory access request352may be or may include a read request or a write request. The initiator302, such as the link controller312thereof, can reformulate the memory access request352into a format that is suitable for the interconnect106. 
This reformulation may be performed based on a physical protocol or a logical protocol (including both) applicable to the interconnect106. Examples of such protocols are described below. The initiator302can thus prepare a request316and transmit the request316over the interconnect106to the target304. The target304receives the request316from the initiator302via the interconnect106. The target304, including the link controller314thereof, can process the request316to determine (e.g., extract, decode, or interpret) the memory access request. Based on the determined memory access request, the target304can forward a memory request354over the interconnect306to a memory controller308, which is the first memory controller308-1in this example. For other memory accesses, the targeted data may be accessed with the second DRAM310-2through the second memory controller308-2. Thus, the first memory controller308-1receives the memory request354via the internal interconnect306. The first memory controller308-1can prepare a memory command356based on the memory request354. The first memory controller308-1can provide the memory command356to the first DRAM310-1over an interface or interconnect appropriate for the type of DRAM or other memory component. The first DRAM310-1receives the memory command356from the first memory controller308-1and can perform the corresponding memory operation. Based on the results of the memory operation, the first DRAM310-1can generate a memory response362. If the memory request316is for a read operation, the memory response362can include the requested data. If the memory request316is for a write operation, the memory response362can include an acknowledgement that the write operation was performed successfully. The first DRAM310-1can provide the memory response362to the first memory controller308-1. The first memory controller308-1receives the memory response362from the first DRAM310-1. Based on the memory response362, the first memory controller308-1can prepare a memory response364and transmit the memory response364to the target304via the interconnect306. The target304receives the memory response364from the first memory controller308-1via the interconnect306. Based on this memory response364, and responsive to the corresponding request316, the target304can formulate a response318for the requested memory operation. The response318can include read data or a write acknowledgement and be formulated in accordance with one or more protocols of the interconnect106. To respond to the memory request316from the host device104, the target304of the memory device108can transmit the response318to the initiator302over the interconnect106. Thus, the initiator302receives the response318from the target304via the interconnect106. The initiator302can therefore respond to the “originating” memory access request352, which is from the processor114in this example. To do so, the initiator302prepares a memory access response366using the information from the response318and provides the memory access response366to the processor114. In these manners, the host device104can obtain memory access services from the memory device108using the interconnect106. Example aspects of an interconnect106are described next. The interconnect106can be implemented in a myriad of manners to enable memory-related communications to be exchanged between the initiator302and the target304. Generally, the interconnect106can carry memory-related information, such as data or a memory address, between the initiator302and the target304. 
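Before turning to how the interconnect carries and encapsulates this information, the end-to-end access flow just described can be summarized with a brief Python sketch. The stages stand in loosely for the target, a backend memory controller, and a DRAM; every class name, address, and value here is invented for illustration and implies no real interface.

    # Simplified model of the access flow described above: a request enters the
    # target, is forwarded to a backend memory controller, becomes a device
    # command, and the resulting response propagates back. Illustrative only.

    class Dram:
        def __init__(self) -> None:
            self.cells = {0x1000: 0xDEADBEEF}   # pretend storage

        def execute(self, op: str, addr: int, data=None):
            if op == "read":
                return ("data", self.cells.get(addr, 0))
            self.cells[addr] = data
            return ("ack", addr)

    class MemoryController:
        def __init__(self, dram: Dram) -> None:
            self.dram = dram

        def handle(self, request: dict):
            # Translate the memory request into a device-level command.
            return self.dram.execute(request["op"], request["addr"],
                                     request.get("data"))

    class Target:
        def __init__(self, controllers: list) -> None:
            self.controllers = controllers

        def handle(self, request: dict):
            # Route to one backend memory controller (address-based here).
            mc = self.controllers[request["addr"] % len(self.controllers)]
            return mc.handle(request)

    target = Target([MemoryController(Dram()), MemoryController(Dram())])
    print(target.handle({"op": "read", "addr": 0x1000}))              # ('data', 3735928559)
    print(target.handle({"op": "write", "addr": 0x2000, "data": 7}))  # ('ack', 8192)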
In some cases, the initiator302or the target304(including both) can prepare memory-related information for communication across the interconnect106by encapsulating such information. The memory-related information can be encapsulated or incorporated into, for example, at least one packet (e.g., at least one flit). One or more packets may include at least one header with information indicating or describing the content of each packet. In example implementations, the interconnect106can support, enforce, or enable memory coherency for a shared memory system, for a cache memory, for combinations thereof, and so forth. Thus, the memory device108can operate in a cache coherent memory domain in some cases. Additionally or alternatively, the interconnect106can be operated based on a credit allocation system. Thus, the initiator302and the target304can communicate using, for example, a credit-based flow control mechanism320. Possession of a credit can enable an entity, such as the initiator302, to transmit another memory request316to the target304. The target304may return credits to "refill" a credit balance at the initiator302. The credit logic124of the target304or the credit logic126of the initiator302(including both instances of credit logic working in tandem) can implement a credit-based communication scheme across the interconnect106. Example aspects of credit-based communication protocols are described below with reference toFIGS.4and5. The system300, the initiator302of the host device104, or the target304of the memory device108may operate or interface with the interconnect106in accordance with one or more physical or logical protocols. For example, the interconnect106may be built in accordance with a Peripheral Component Interconnect Express® (PCIe or PCI-E) standard. Applicable versions of the PCIe standard may include 1.x, 2.x, 3.x, 4.0, 5.0, 6.0, and future or alternative versions of the standard. In some cases, at least one other standard is layered over the physical-oriented PCIe standard. For example, the initiator302or the target304can communicate over the interconnect106in accordance with a Compute Express Link™ (CXL) standard. Applicable versions of the CXL standard may include 1.x, 2.0, and future or alternative versions of the standard. Thus, the initiator302and/or the target304may operate so as to comport with a PCIe standard and/or a CXL standard. A device or component may comprise or operate in accordance with a CXL Type 1, 2, or 3 device. A CXL standard may operate based on credits, such as request credits, response credits, and data credits. Example aspects of credit types, credit allocation, credit usage, and flow control via credits are described next with reference toFIGS.4and5.
Example Techniques and Hardware
FIG.4illustrates, at400generally, examples of controllers for an initiator302and a target304that can communicate across an interconnect106that employs a credit-based protocol. The initiator302can include a link controller312, and the target304can include a link controller314. As also shown inFIGS.1and3, the link controller312can include credit logic126, and the link controller314can include credit logic124. The credit logic124and the credit logic126can support implementations of a credit-based flow control mechanism320that authorizes or permits one or more communications based on possession of at least one credit. 
In example implementations, the link controller312or the link controller314, including both link controllers, can communicate across the interconnect106in accordance with a credit-based protocol. A credit-based protocol can be realized using, for instance, the credit-based flow control mechanism320. To do so, the credit logic126can monitor a quantity of one or more credits412, and the credit logic124can monitor one or more credits414. Generally, the credit logic126permits the link controller312to transmit a communication, such as a request316, to the link controller314based on the one or more credits412. Transmitting the request316may use or "consume" one credit of the one or more credits412. Based on the one or more credits414that are to be returned, the credit logic124at the link controller314can modulate the rate of transmission from the link controller312by managing the transmission of at least one credit return420. The credit return420can replenish an indicated quantity of the one or more credits412at the credit logic126. This credit usage is described further below. As illustrated inFIG.4, the link controller312can include at least one request queue402, at least one arbiter404, at least one response queue406, and at least one instance of the credit logic126. The link controller314can include at least one request queue452, at least one arbiter454, at least one response queue456, and at least one instance of the credit logic124. In some cases, a request queue402or452may be split into a read path and a write path. Thus, the request queue402may include at least one read queue408and at least one write queue410. Similarly, the request queue452may include at least one read queue458and at least one write queue460. In example operations for an initiator302, the link controller312can receive a memory access request352at the request queue402. The request queue402routes the request into the read queue408or the write queue410based on whether the memory access request is for a read operation or a write operation, respectively. The arbiter404controls access to the interconnect106based on instructions or commands from the credit logic126. The credit logic126authorizes the arbiter404to transmit a request316over the interconnect106based on possession of the one or more credits412. For example, the credit logic126may permit the arbiter404to transmit one request316per one available credit412(e.g., a one-to-one ratio of transmissions and credits). If the credit logic126does not currently possess any credits412, the arbiter404can be prevented from transmitting a request316(e.g., by the credit logic126blocking or not authorizing such a transmission). The response queue406can buffer multiple responses318received from the link controller314via the interconnect106. Each response318may include at least one memory response (e.g., with read data or a write acknowledgment) or at least one credit return420. Thus, a response318may include a memory response and a credit return420. For a memory response, the response queue406buffers the response until the response queue406can provide the memory access response366to the processor114(ofFIGS.1and3). For a credit return420, the response queue406, including associated logic, can forward the quantity of credits from the credit return420to the credit logic126. Alternatively, separate circuitry may provide the credit return420to the credit logic126. Based on the credit return420, the credit logic126can replenish at least a portion of the credits412. 
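The initiator-side bookkeeping described above (transmit only while a credit is held, consume one credit per request, replenish the balance when a credit return arrives) can be sketched as follows. A one-to-one ratio of requests to credits is assumed, and all names are hypothetical.

    # Sketch of host-side (initiator) credit accounting. Illustrative only.
    from collections import deque

    class Link:
        def transmit(self, request: str) -> None:
            print("sent:", request)

    class InitiatorCreditLogic:
        def __init__(self, initial_credits: int) -> None:
            self.credits = initial_credits

        def try_send(self, pending: deque, link: Link) -> None:
            # The arbiter may transmit only while at least one credit is held.
            while self.credits > 0 and pending:
                link.transmit(pending.popleft())
                self.credits -= 1          # each transmission consumes a credit

        def on_credit_return(self, quantity: int) -> None:
            # A response carried a credit return; replenish the balance.
            self.credits += quantity

    logic = InitiatorCreditLogic(initial_credits=2)
    pending = deque(["read A", "read B", "read C"])
    logic.try_send(pending, Link())   # sends "read A" and "read B", then stalls
    logic.on_credit_return(1)         # a credit return arrives in a response
    logic.try_send(pending, Link())   # now "read C" can be sent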
Continuing with example operations, but for a target304, the link controller314can receive the request316at the request queue452. The request queue452can then route the request316into the read queue458or the write queue460depending on whether it is a read request or a write request, respectively. The arbiter454can select a read request from the read queue458or a write request from the write queue460for transmission as the memory request354to a downstream component, such as a memory controller. Responsive to transmission of a memory request354, which corresponds to a request316that was stored in the request queue452, the arbiter454notifies the credit logic124that the request316has been transmitted to a downstream component of the memory device108(e.g., ofFIGS.1-3). Accordingly, the credit logic124can add a credit414to the collection of one or more credits414that are earmarked to be returned to the link controller312. Thus, the credit logic124can track (e.g., maintain a record of) how many credits414can be returned to the credit logic126because the link controller314has forwarded a corresponding request316from the request queue452to a downstream component of the memory device108. The credit logic124can communicate with the response queue456responsive to the presence of credits414that are to be returned to the credit logic126. When a memory response364is received at the response queue456, the response queue456can store the memory response364. In conjunction with transmitting the memory response364as a response318to the link controller312, the response queue456can include at least one credit return420(e.g., in a same FLIT or other packet). The credit return420can indicate a quantity of one or more credits414that are being returned to the credit logic126to increase the quantity of credits412. In these manners, the link controller314can use the credit-based protocol to control (e.g., block, gate, modulate, or moderate) the flow of requests316from the link controller312. This can enable the link controller314to prevent the request queue452from overflowing from receiving too many requests316(e.g., from receiving requests316faster than the requests can be forwarded to downstream memory components). Additionally or alternatively, a credit-based protocol can also be used to control the flow of responses318from the link controller314to the link controller312. The response queue456of the link controller314may be blocked from transmitting a response318unless the credit logic124has a “response” credit (not separately shown inFIG.4) to authorize such a response transmission. These response credits may be different from the one or more “request” credits414relating to the requests316. In such scenarios, the credit logic126of the link controller312may return the response credits to the credit logic124responsive to issuances of memory access responses366from the response queue406. Hence, the initiator302and the target304can implement the credit-based flow control mechanism320bidirectionally. Various approaches can be employed for a credit-based communication protocol. For example, a credit may correspond to a transmission across an interconnect, to a packet, to a flit, or to a request or response. A single credit may correspond to a single instance of any of the preceding examples or to multiple instances of such examples. In some cases a transmission may include multiple requests and/or multiple responses, such as by encapsulating them into a packet or flit. 
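Returning to the split request path at the target described above (a request queue feeding separate read and write queues, with an arbiter choosing what to forward next), a brief sketch follows. The alternating selection policy is an assumption, since the document does not specify how the arbiter chooses.

    # Illustrative model of the target-side request path: route by operation
    # type, then arbitrate between the read and write queues. Hypothetical names.
    from collections import deque

    class TargetRequestPath:
        def __init__(self) -> None:
            self.read_queue = deque()
            self.write_queue = deque()
            self._serve_reads_next = True

        def enqueue(self, request: dict) -> None:
            # Route into the read path or the write path.
            if request["op"] == "read":
                self.read_queue.append(request)
            else:
                self.write_queue.append(request)

        def arbitrate(self):
            # Alternate between the queues when both hold requests; otherwise
            # serve whichever queue is non-empty.
            order = [self.read_queue, self.write_queue]
            if not self._serve_reads_next:
                order.reverse()
            for queue in order:
                if queue:
                    self._serve_reads_next = queue is self.write_queue
                    return queue.popleft()
            return None   # nothing to forward downstream

Each request returned by arbitrate would be forwarded downstream as a memory request, at which point a credit becomes eligible to be returned, as described above.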
In some systems, a credit may correspond generally to any type of request or response so that, e.g., an initiator can transmit any kind of request or response if the initiator possesses a credit. Additionally or alternatively, a credit may be specific to one or more types of requests or responses or other communications. Examples of communication types include read-related requests and write-related requests. Credits may also be particular to whether or not data is allowed to be included in the corresponding transmission. These and other communication traits may be further combined to create still-more specific types of credits. By way of example, but not limitation, this document describes some implementations in terms of a credit protocol employed by certain CXL systems. Generally, the credit-based flow control mechanism for CXL can employ "backpressure" against a host device if one or more buffers of the memory device are full and therefore cannot receive any more requests (or any more responses on the return path). In some example systems, there can be three types of credits on an initiator device or a target device to control the flow of traffic between them. These three credit types can be represented by ReqCrd, DataCrd, and RspCrd. More specifically, these three examples are a request credit (ReqCrd), a data credit (DataCrd), and a response credit (RspCrd). This document now describes example traffic classifications. For communications from the initiator to the target (e.g., from a host device to a memory device), two traffic classifications are:
REQ: Request without Data (generally Read Requests). These can be controlled using ReqCrd.
RwD: Request with Data (generally Write Requests). These can be controlled using DataCrd.
For communications from the target to the initiator (e.g., from the memory device to the host device), two traffic classifications are:
DRS: Response with Data (generally Read Responses). These can be controlled using DataCrd.
NDR: Response without Data (generally Write Acknowledgements). These can be controlled using RspCrd.
These example CXL terms can be applied to the general system ofFIG.4. At a host device, which can be represented by the initiator302, the credit logic126decrements the ReqCrd value (e.g., a quantity for the one or more credits412) responsive to forwarding a FLIT (e.g., a flit with one read request) across the interconnect106to the target304. If the ReqCrd value reaches zero, the credit logic126causes the arbiter404to cease sending FLITs (e.g., the credit logic126blocks transmission of further read requests). At a memory device, which can be represented by the target304, the link controller314processes the received FLIT. The arbiter454forwards a request316that was included in the FLIT to backend memory as a read or write memory request354(e.g., a read request for a ReqCrd example). The credit logic124increments the ReqCrd value (e.g., the quantity of the collection of credits414that are to be returned) responsive to the forwarding of the memory request354. The link controller314returns the request credits (ReqCrd) accumulated at the credit logic124to the credit logic126with at least one response318. This credit return420may be associated with a decrement of the ReqCrd at the credit logic124and an increment of the ReqCrd at the credit logic126. In some locations of this document, "credits" and credit-related communications may be described with reference to a CXL standard. 
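The traffic classifications and credit types listed above can be summarized as a small mapping. This simply mirrors the description in this document and is not a normative table from any standard.

    # Which credit type governs each traffic class, per the description above.
    CREDIT_FOR_TRAFFIC_CLASS = {
        "REQ": "ReqCrd",   # initiator -> target, request without data (reads)
        "RwD": "DataCrd",  # initiator -> target, request with data (writes)
        "DRS": "DataCrd",  # target -> initiator, response with data (read data)
        "NDR": "RspCrd",   # target -> initiator, response without data (write acks)
    }

    def credit_type(traffic_class: str) -> str:
        return CREDIT_FOR_TRAFFIC_CLASS[traffic_class]

    assert credit_type("REQ") == "ReqCrd"
    assert credit_type("NDR") == "RspCrd"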
Nonetheless, implementations of memory request modulation, as described in this document, can apply to and benefit other credit-based systems that operate in a similar or analogous manner. FIG.5illustrates, at500generally, examples of credit-based feedback loops to control communication flows between two or more devices. In example implementations, the two or more devices can include a host device104and a memory device108. The two or more devices may comport with at least one CXL standard that includes memory requests and memory responses. Four example credit-based feedback loops510,520,530, and540are shown. Each credit-based feedback loop includes an active or affirmative communication, such as a memory request or a memory response, and an associated credit return. The affirmative communication can include a read or write request or a read or write response. The credit return can correspond to a request credit (ReqCrd), a response credit (RspCrd), or a data credit (DataCrd). In a first example, the credit-based feedback loop510includes a read request512and a request credit514. In operation, the host device104transmits the read request512to the memory device108. In response to the link controller314forwarding the read request512to one or more downstream components of the memory device108, the credit logic124can return the request credit514to the initiator302. Responsive to return of the request credit514, the credit logic126adds another request credit to the request credit repository or count516. While the request credit count516is greater than zero (or the request credit repository516is nonempty), the credit logic126can permit the link controller312to transmit another read request512. In this manner, the link controller314of the target304can provide feedback or backpressure to the link controller312of the initiator302to control (e.g., block, slow, increase/decrease, or otherwise modulate) a flow of the read requests512. In a second example, the credit-based feedback loop520includes a write request522and a data credit524. In operation, the host device104transmits the write request522to the memory device108. In response to the link controller314forwarding the write request522to one or more downstream components of the memory device108, the credit logic124can return the data credit524to the initiator302. Responsive to return of the data credit524, the credit logic126adds another data credit to the data credit repository or count526. While the data credit count526is greater than zero (or the data credit repository526is nonempty), the credit logic126can permit the link controller312to transmit another write request522. In this manner, the link controller314of the target304can provide feedback or backpressure to the link controller312to control (e.g., block, slow, increase/decrease, or otherwise modulate) a flow of the write requests522. The first and second examples above relate to the target304controlling a communication flow (e.g., of memory requests) from the initiator302. The credit-based feedback loops can, however, operate in the opposite direction. The third and fourth examples below relate to the initiator302controlling a communication flow (e.g., of memory responses) from the target304. In a third example, the credit-based feedback loop530includes a read response532and a data credit534. In operation, the memory device108transmits the read response532to the host device104. 
In response to the link controller312forwarding the read response532to one or more upstream components of the host device104(e.g., to a processor114ofFIG.3), the credit logic126returns the data credit534to the credit logic124. Responsive to return of the data credit534, the credit logic124adds another data credit to the data credit repository or count536. While the data credit count536is greater than zero, the credit logic124can permit the link controller314to transmit another read response532. In this manner, the link controller312of the initiator302can provide feedback or backpressure to the link controller314of the target304to control (e.g., block, slow, increase/decrease, or otherwise modulate) a flow of the read responses532. In a fourth example, the credit-based feedback loop540includes a write response542and a response credit544. In operation, the memory device108transmits the write response542to the host device104. In response to the link controller312forwarding the write response542to one or more upstream components of the host device104(e.g., to a processor114ofFIG.3), the credit logic126returns the response credit544to the credit logic124. Responsive to return of the response credit544, the credit logic124adds another response credit to the response credit repository or count546. While the response credit count546is greater than zero, the credit logic124can permit the link controller314to transmit another write response542. In this manner, the link controller312of the initiator302can provide feedback or backpressure to the link controller314of the target304to control (e.g., block, slow, increase/decrease, or otherwise modulate) a flow of the write responses542. The credit-based feedback loops described above enable an initiator302or a target304to control a quantity or rate of received memory responses or memory requests, respectively. For the memory device108, this control may relate to ensuring that a queue at the target304(e.g., the request queue452) does not overflow. If the decision to return a credit to the initiator302is based solely on a memory request being forwarded out of the request queue452, memory requests may become too prevalent in backend components, such as an interconnect306, a memory controller308, or a DRAM310(e.g., each ofFIG.3). For the host device104, this control may relate to ensuring that a queue at the initiator302(e.g., the response queue406) does not overflow. If the decision to return a credit to the target304is based solely on a memory response being forwarded out of the response queue406, memory responses may become too prevalent in upstream components, such as the processor114, a memory controller thereof, or an interconnect of the host device. To at least alleviate the potential overcrowding of communications beyond the queues identified above, such as overcrowding in the backend components of the memory device108, the techniques described herein can be implemented. Certain ones of these techniques monitor memory requests that are present at the memory device. For example, a counter can include a value indicative of a quantity of memory requests that are outstanding at the memory device, including those pending in the downstream components. The return of credits can be delayed based on the monitoring and/or the value of the counter to slow or possibly stop the transmission of additional requests. Example implementations for memory request modulation are described below with reference toFIGS.6-9. 
FIG.6illustrates example architectures600to control a communication flow between two or more devices in accordance with certain implementations for memory request modulation. As shown on the left (as depicted inFIG.6), a controller118includes at least one request queue402, at least one response queue406, and at least one instance of credit logic126. The credit logic126can include at least one gate602and a credit repository or credit counter604. As shown on the right, a controller110includes at least one request queue452, at least one response queue456, and at least one instance of credit logic124. The credit logic124can include at least one gate612, at least one instance of credit return logic614, and at least one outstanding memory request counter616. In example implementations, the controller118can transmit a request316from the request queue402based on a state of the gate602. If the gate602is open (e.g., if a corresponding switch is closed), the controller118can transmit a request316to the controller110. On the other hand, if the gate602is closed (e.g., if a corresponding switch is opened), the controller118is blocked or prevented from transmitting a request316. The state of the gate602can be controlled by the condition or value of the credit repository or credit counter604. If the credit repository604is empty or if the credit counter604has a value that is less than one, the gate602is closed by a control signal652to prevent transmission of requests. In contrast, if the credit repository604has at least one credit or if the credit counter604has a value greater than zero, the gate602is opened by the control signal652to permit or allow the transmission of requests316. As described above with reference toFIG.4, responsive to receipt of a credit return420, a credit is added to the credit repository604, or the value of the credit counter604is incremented. If multiple credits are returned in a single response318, the credit logic126may add multiple credits "at once" to the credit repository604, or the credit logic126may increment the value of the credit counter604by an amount greater than one. At a target304(e.g., ofFIGS.3-5), the controller110adds a received request316to the request queue452. To process a memory request, the controller110transmits a request316to a downstream or backend memory component as a memory request354. The credit logic124notifies the credit return logic614of this transmission. The credit return logic614can include a counter (e.g., a second counter (not shown) of the credit logic124) that has a value indicative of a quantity of one or more credit returns that are ready to be transmitted to the controller118. Responsive to the controller110removing a request316from the request queue452, the credit return logic614can allocate another credit to be returned as a credit return420. To do so, the credit return logic614can increment the second counter. By establishing an appropriate quantity of credits for the system, this aspect of a credit-based flow control protocol can ensure that a maximum capacity of the request queue452is not exceeded. This aspect may not, however, adequately protect the backend memory components of the memory device from becoming oversubscribed. For example, the "internal" interconnect306(ofFIG.3) may become too busy, one or more queues of the memory controllers308-1and308-2may become overfilled, and/or the multiple DRAMs310-1and310-2may be unable to fulfill the memory requests as fast as they are delivered to them. 
To protect the backend memory components of the memory device108from becoming oversaturated, the credit logic124can operate the illustrated components to manage how quickly and/or how frequently credits are returned at420to the controller118. The credit logic124can condition the return of credits at least partially on receipt of a memory response364from a backend memory component. For example, the credit logic124can permit a credit return420responsive to receipt of a memory response364. To reduce the likelihood that a component, such as the interconnect106(e.g., ofFIGS.1-4) or a downstream memory controller, is rendered idle unnecessarily, the credit logic124can flexibly condition the return of credits on the receipt of memory responses364. In some cases, the credit logic124can use the outstanding memory request counter616and at least one threshold618, such as a first threshold618-1and a second threshold618-2. The outstanding memory request counter616can track a quantity of memory requests that are present or extant on the memory device108, that are currently being processed by the memory device108, and/or that are pending within the memory device108. The credit logic124can condition the issuance of credit returns420on the quantity stored by the outstanding memory request counter616and the at least one threshold618. In some implementations, the credit logic124permits at least one credit return420to be sent to the controller118based on the outstanding memory request counter616and the threshold618. In example operations, the credit logic124increments a value620stored by the outstanding memory request counter616at654responsive to the receipt of each request316. The credit return logic614can enable a credit to be returned at656responsive to the memory request354being issued from the request queue452. However, the gate612can block the delivery of the credit return420. A state of the gate612may be open or closed, and the state can be established by a control signal660. If the gate612is open (e.g., a corresponding switch is closed), the credit logic124can permit at least one credit return420to pass for transmission to the controller118. On the other hand, if the gate612is closed (e.g., a corresponding switch is opened), the controller110is blocked or prevented from transmitting a credit return420. The state of the gate612can be controlled responsive to the value620of the outstanding memory request counter616and based on the threshold618. As described above, the value620can be increased at654responsive to receipt of a request316. To enable the value620to represent a quantity of outstanding memory requests of the memory device, the credit logic124can decrease (e.g., decrement) the value620of the counter at658responsive to receipt of a memory response364. Thus, the value620of the outstanding memory request counter616can track the quantity of memory requests that are pending within the memory device108. The credit logic124can compare the value620to the at least one threshold618to provide a control signal at660to the gate612to establish a closed state or an open state thereof. For example, if the value620is below the threshold618, the credit logic124can keep the gate612open to permit credit returns420to flow from the credit return logic614to the credit logic126of the controller118. If, however, the value620is above the threshold618, the credit logic124can close the gate612to prevent credit returns420from flowing from the credit return logic614to the credit logic126. 
Over some time period, as memory responses364are received from backend memory components, the value620decreases due to the decrement signal658. Responsive to the value620, which is indicative of the quantity of outstanding memory requests, falling below the threshold618, the credit logic124can reopen the gate612. In some implementations, the credit logic124operates using multiple thresholds, such as a first threshold618-1and a second threshold618-2. Consider an example in which the second threshold618-2is greater than the first threshold618-1. In operation, the credit logic124can block the release of the credit return420based on the value620of the counter616being greater than the second threshold618-2. The credit logic124can permit the release of the credit return420based on the value620of the counter616being less than the first threshold618-1. To avoid cycling between blocking and permitting transmissions of credit returns, the transmission can be conditional on a recent trend of the value620while the value is between the first and second thresholds618-1and618-2(e.g., while the value is in a “middle” zone between two thresholds). For instance, the transmission can be conditioned on if the value620more recently crossed the first threshold618-1and is increasing or more recently crossed the second threshold618-2and is decreasing. In operation, the credit logic124can block the release of the credit return420based on the value620of the counter616being less than the second threshold618-2but greater than the first threshold618-1and responsive to the value620falling below the second threshold618-2from being above the second threshold618-2(e.g., responsive to the value620crossing the second threshold618-2while decreasing). The credit logic124can permit the release of the credit return420based on the value620of the counter616being less than the second threshold618-2but greater than the first threshold618-1and responsive to the value620climbing above the first threshold618-1from being below the first threshold618-1(e.g., responsive to the value620crossing the first threshold618-1while increasing). This multiple threshold approach also provides the memory device additional time to “catch up” on producing memory responses after the value620falls below the second threshold618-2before the credit logic124begins releasing credit returns again. Using these techniques, the credit logic124can modulate how quickly or how frequently requests316are received from the controller118based on how “busy” the backend memory components of the memory device108are. These techniques can enable the memory device108to avoid becoming overwhelmed and/or oversubscribed and, therefore, enable the memory device108to provide some specified quality of service. To provide finer control over the memory request modulation and/or to avoid an undesirable hysteresis effect (e.g., oscillation of the state of the gate612) due to using a single threshold, the at least one threshold618can be implemented with multiple thresholds. Example approaches with multiple thresholds are described next with reference toFIG.7. FIG.7illustrates other example architectures700to control a communication flow between two or more devices in accordance with certain implementations for memory request modulation. As illustrated, the architectures700include multiple flit handlers752,754,756, and758that process FLITs, such as by creating or interpreting a FLIT. 
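Before detailing the FIG.7architectures, the two-threshold behavior described above can be summarized in a short C sketch, assuming illustrative names and state. Keeping the prior decision while the value sits between the two thresholds is equivalent to conditioning on which threshold the value most recently crossed and in which direction.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative two-threshold (hysteresis) gate for credit returns. */
    typedef struct {
        uint32_t low;        /* first threshold (low watermark)   */
        uint32_t high;       /* second threshold (high watermark) */
        bool     blocking;   /* true: credit returns are withheld */
    } hysteresis_gate;

    /* Above the high threshold: block.  Below the low threshold: permit.
     * Between the two thresholds: keep the prior state, i.e., keep blocking if
     * the value last crossed the high threshold while decreasing, and keep
     * permitting if it last crossed the low threshold while increasing. */
    static bool credit_returns_blocked(hysteresis_gate *g, uint32_t outstanding)
    {
        if (outstanding > g->high)
            g->blocking = true;    /* backend is oversubscribed: withhold credits */
        else if (outstanding < g->low)
            g->blocking = false;   /* backend has caught up: resume credit returns */
        /* else: middle zone, leave g->blocking unchanged */
        return g->blocking;
    }

The sticky middle zone is what gives the backend memory components time to catch up before credit returns resume.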
At the controller118, the credit logic126can include at least one credit counter702, at least one comparator704, and at least one switch706. At the controller110, the credit logic124can include the counter616(e.g., the outstanding memory request counter616), at least one credit return counter712, at least one comparator714, at least one switch716, at least one register718(e.g., a first register718-1and a second register718-2), and at least one flag720. In example implementations, the flit handlers752and756produce FLITs, and the flit handlers754and758unpack FLITs. Thus, the flit handler752of the controller118can transmit a request FLIT760, and the flit handler754of the controller110can receive the request FLIT760. Analogously, the flit handler756of the controller110can transmit a response FLIT762, and the flit handler758of the controller118can receive the response FLIT762. Responsive to receipt of a response FLIT762, the flit handler758can forward a response to the response queue406and provide one or more credit returns420to the credit counter702of the credit logic126. The credit logic126can maintain a count of available credits (e.g., request credits516or data credits526ofFIG.5) using the credit counter702. The comparator704can compare a current count from the credit counter702to a set value, such as zero. If the count is greater than zero, the credit logic126can close the switch706to permit requests to flow. If the count is not greater than zero, the credit logic126can open the switch706to block requests from flowing. If requests are flowing, the flit handler752can prepare a request FLIT760and indicate to the credit logic126that the count of the credit counter702is to be decremented responsive to transmission of a request. Accordingly, while the controller118possesses at least one credit, the flit handler752can transmit a request FLIT760to the flit handler754of the controller110. The flit handler754can unpack the request FLIT760, forward a memory request to the request queue452, and notify the credit logic124that the value620of the counter616is to be incremented. Alternatively, the request queue452, or associated logic, can notify the credit logic124that the value620of the counter616is to be incremented. Responsive to a request being forwarded from the request queue452, the credit logic124increments a count of the credit return counter712. Each register718can store a corresponding or respective threshold618. The first register718-1may store the first threshold618-1, and the second register718-2may store the second threshold618-2. The flag720can include at least one bit indicative of whether the switch716is to be in a closed state, which permits credit returns, or an open state, which blocks credit returns. In example operations at the controller110, the comparator714can compare the value620to the first threshold618-1and the second threshold618-2as described herein. In some cases, the first and second thresholds618-1and618-2can operate as low and high watermarks, respectively, as described below. In response to one or more comparisons performed by the comparator714, the credit logic124clears or sets the flag720. The credit logic124may control the state of the switch716based on the flag720. If the switch716is open, no credits are being returned. If the switch716is closed, one or more credit returns420can be transmitted in at least one response FLIT762by the flit handler756based on the count of the credit return counter712. 
The credit logic124can reduce the count of the credit return counter712as credits are returned while the switch716is closed. In example implementations using two thresholds, when a quantity of outstanding memory requests (e.g., read requests) on the memory device crosses a high-level watermark value (e.g., indicating that the memory device has become oversubscribed), the credit logic124withholds the returning of credits (e.g., at least one REQCRD return for a read request) back to the host device. The credit logic124again starts sending credits (e.g., the REQCRD returns) back to the host device when the quantity of outstanding requests (e.g., read requests) on the memory device drops below the low-level watermark value. These watermarks can be adjusted based on slower or faster backend memory subsystems or, in some cases, contemporaneously based on a current latency. By modulating (e.g., limiting or moderating) the rate at which credits at the host device get replenished, described techniques can throttle the host's ability to send traffic to the memory device. This throttling can reduce queuing latency on the memory device request buffer or the memory device response buffer, including both. The one or more registers718may be realized as read-only or as read/write registers. In some implementations, the first register718-1may correspond to a low watermark threshold (e.g., be a CXL_Device_Request_Outstanding_Low_Watermark_REG). The second register718-2may correspond to a high watermark threshold (e.g., be a CXL_Device_Request_Outstanding_High_Watermark_REG). The counter616can keep track of a quantity of outstanding requests at the device (e.g., function as a CXL_Device_Request_Outstanding_COUNTER). The flag720may correspond to a Boolean value indicative of whether credit returns are being withheld (e.g., function as a Withhold_Returning_Credit_Flag). Thus, the Boolean flag can track if a credit can be returned or not as part of a response FLIT762that is leaving the memory device. For instance, the flag values can be set as follows: a one (1)=enable withholding, and a zero (0)=disable withholding. If CXL_Device_Request_Outstanding_COUNTER>CXL_Device_Request_Outstanding_High_Watermark_REG, then the flag is set to one (1). If CXL_Device_Request_Outstanding_COUNTER<CXL_Device_Request_Outstanding_LowWatermark_REG, then the flag is set to zero (0). In some implementations, to initialize the credit logic124, the credit logic124can read the CXL_Device_Request_Outstanding_High_Watermark_REG to obtain a value for the high watermark beyond which the memory device is to start back-pressuring the host device. The credit logic124can also read the CXL_Device_Request_Outstanding_LowWatermark_REG to obtain the low watermark below which the memory device is to return to “normal” operation with respect to returning credits to the host device. If a new request is received by the memory device, the credit logic124can increment the CXL_Device_Request_Outstanding_COUNTER. If a response leaves the memory device for the host device, the credit logic124can decrement the CXL_Device_Request_Outstanding_COUNTER. Generally, if a quantity of outstanding requests is greater than the value specified in the high watermark register, the credit logic124can withhold the returning of credits back to the host device. Because the credit logic124can delay the return of credits to the host device, the rate at which the host can send new requests to the memory device can be reduced. 
The withholding of credit returns may be effectuated by setting the Withhold_Returning_Credit_Flag to one (1). Thus, the credit return420is gated in this condition by the switch716, and the flit handler756does not attach a credit return to the response FLIT762. On the other hand, if the quantity of outstanding requests is less than the value specified in the low watermark register, the credit logic124can send credit returns420back to the host device at a normal rate. This allows the host device to again send traffic to the memory device at normal rate. The return to a normal credit return rate may be effectuated by removing the gating performed by the switch716by setting the Withhold_Returning_Credit_Flag to zero (0). Thus, the high-level and low-level watermark registers reflect the higher and lower cut-off levels, respectively, beyond which (e.g., above which or below which, respectively) the memory device backpressures the host device, or the memory device allows the host device to resume normal operation, respectively. Note, however, that the high register value specifies a starting point at which the memory device begins back-pressuring the host. The host device may still possess one or more credits and may, therefore, continue sending more requests until the host exhausts its currently remaining credits. The watermarks can be adjusted to account for slower backend memory subsystems (e.g., by keeping the low watermark to a low value) or faster backend memory subsystems (e.g., by keeping the high watermark high and the low watermark high) to allow more traffic to come from the host device. The credit logic124may also adjust these values during operation based on a measured latency or bandwidth/throughput. The credits for the example architectures600and700can correspond at least to any of the credits described above, such as request credits, data credits, or response credits. The requests316may correspond, for instance, to read requests or write requests in a memory context. The responses318may correspond, for instance, to read responses or write responses in a memory context. Nonetheless, the principles described with reference toFIG.6are applicable to other types of credits, communications, and/or environments. Also, although certain concepts are described herein in the context of CXL Type 3 devices (“Memory Expanders”), the described techniques can be applied to other CXL device types and/or to non-CXL devices. Further, the described principles are applicable to environments generally having credit-based communications. For example, a transmitter or initiator component may transmit requests besides memory requests. Similarly, a receiver or target component may receive “general” requests instead of or in addition to memory requests. Accordingly, the credit logic124may monitor the presence of pending requests at the target for non-memory requests, such as a computational request (e.g., for a cryptographic, AI accelerator, or graphics computation), a communications request (e.g., for transmitting or receiving a packet over some interface or network), and so forth. The described techniques can ensure that other types of targets— besides memory devices—do not become oversubscribed if the corresponding requests are pending “too long” in the backend of the other targets while a request queue at a controller is at least partially empty. 
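The register, counter, and flag behavior described in this example can be gathered into one hedged C sketch. The identifiers echo the CXL_Device_Request_Outstanding_* registers and counter and the Withhold_Returning_Credit_Flag mentioned above, but the structure layout, helper names, and the per-FLIT credit limit are assumptions for the sketch.

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint32_t request_outstanding_counter;    /* CXL_Device_Request_Outstanding_COUNTER            */
        uint32_t low_watermark_reg;              /* CXL_Device_Request_Outstanding_Low_Watermark_REG  */
        uint32_t high_watermark_reg;             /* CXL_Device_Request_Outstanding_High_Watermark_REG */
        bool     withhold_returning_credit_flag; /* Withhold_Returning_Credit_Flag (1 = withhold)     */
        uint32_t credit_return_counter;          /* credits waiting to be attached to responses       */
    } device_credit_logic;

    /* A new request was received by the memory device. */
    static void on_new_request(device_credit_logic *d)
    {
        d->request_outstanding_counter += 1;
        if (d->request_outstanding_counter > d->high_watermark_reg)
            d->withhold_returning_credit_flag = true;    /* enable withholding */
    }

    /* A response leaves the memory device for the host device. */
    static void on_response_leaving_device(device_credit_logic *d)
    {
        if (d->request_outstanding_counter > 0)
            d->request_outstanding_counter -= 1;
        if (d->request_outstanding_counter < d->low_watermark_reg)
            d->withhold_returning_credit_flag = false;   /* disable withholding */
    }

    /* How many credit returns to attach to the next response FLIT.
     * The max_per_flit limit is an assumption for the sketch. */
    static uint32_t credits_for_next_flit(device_credit_logic *d, uint32_t max_per_flit)
    {
        if (d->withhold_returning_credit_flag)
            return 0;                                    /* gated: nothing attached */
        uint32_t n = d->credit_return_counter < max_per_flit
                       ? d->credit_return_counter : max_per_flit;
        d->credit_return_counter -= n;
        return n;
    }

In this arrangement, the flag is the only state the FLIT handler needs to consult when deciding whether to attach pending credit returns to an outgoing response.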
Example Methods This section describes example methods with reference to the flow chart(s) and flow diagram(s) ofFIGS.8and9for implementing memory request modulation. These descriptions may also refer to components, entities, and other aspects depicted inFIGS.1-7, which reference is made only by way of example. FIG.8illustrates a flow chart for an example process800that implements request modulation. The process800can include blocks802-816. The process800may be performed by, for instance, a target device that is in communication with an initiator device. In a memory environment, for example, the target device may be realized with a memory device, and an initiator device may be realized with a host device. The target device can additionally or alternatively be realized with a communication device (e.g., a modem), an accelerator device (e.g., for AI operations), a graphics device (e.g., a graphics card), and so forth. The initiator can be any device or component that is requesting a service or operation from the target. At block802, a target initializes circuitry to implement request modulation. For example, the target can clear at least one request or response queue, load at least one register, access (e.g., read or write) at least one nonvolatile register, set at least one counter value (e.g., to zero), and the like. In a memory environment, for instance, a controller110of a memory device108may empty a request queue452and a response queue456, may access or load a register718-1or718-2, or may set the value620of the counter616to zero (0). At block804, the target monitors an interconnect for receipt of a request from an initiator and/or determines if a request is received. If a request is received from the initiator, the target increments a counter at806. The counter can include a value indicative of a quantity of requests that are outstanding at the target, including a device thereof. The value may indicate how many requests are pending at a device corresponding to the target, how many requests are received but not yet processed, how many requests are received but not responded to yet, how many requests are in-progress in downstream or backend components of the target device, combinations thereof, and so forth. In some cases, a request may be considered outstanding until a response is transmitted to the initiator. Alternatively, a request may cease to be considered outstanding once an “internal” response is received at the controller of the initiator from a backend component. Outstanding requests, however, may be characterized differently depending on the environment, and/or the thresholds may be determined variously depending on how the outstanding requests are characterized. For example, a threshold may be set higher in a system that includes the responses still present in a response queue at the target's controller as compared to a threshold in a system that excludes such responses from an outstanding request count. Responsive to incrementing the counter at block806, or if a request is not received from the initiator at block804, the flow of the process800can continue at block808. At block808, the target can compare a value of the counter to at least one threshold. If the counter is less than the threshold, then at block810the target can permit transmission of at least one credit return to the initiator. Thus, if the target has any available or pending credits to be returned to the initiator, the target can send those one or more credits “back” to the initiator. 
In some cases, the target can transmit at least one credit return to the initiator as part of a response for the initiator. After permission is provided (e.g., a gate is opened or allowed to remain open) at block810, the process800can continue at block814. If, however, at block808a value of the counter is not determined to be less than the at least one threshold (or the counter is determined to be greater than the at least one threshold), then at block812the target can block transmission of one or more pending credit returns to the initiator. For example, logic can close a gate or cause a gate to continue to be closed if the value of the counter exceeds a threshold value relating to a number of requests that can be pending in the target device without unreasonably impacting latency. Additionally or alternatively, logic that authorizes or controls the return of credits may be paused or stopped while the counter is greater than the threshold. After the target activates the blocking of credit return transmissions, the process800can continue at block814. At block814, the target determines if a response has been or is being received from a backend component of the target device. Examples of backend components include memory components (e.g., a memory controller or a memory array), a modem or communication interface, a processor or accelerator (e.g., for graphics, neural networks, or cryptographic operations), and so forth. If another response has been received from a backend component, the target can decrement the counter at block816. Thus, a value indicative of an outstanding request is increased responsive to an arrival of a request from an initiator (at block806) and is decreased responsive to an arrival of a response from a backend component (at block816). This can entail decrementing the counter responsive to receipt of a response from the backend component. The decrementing can further (additionally or alternatively) be responsive to transmitting the response from the target device to the initiator device. After or responsive to the counter being decremented at block816, the process800can continue at block804. Likewise, if no response is received from a backend component at block814, the process800can also continue at block804. Thus, the target can continually monitor the status of requests and responses at blocks804and814, respectively. In response to the determinations and/or comparisons of blocks804and814, the target can update the value of the counter at blocks806and816, respectively. Further, the target can compare the counter to at least one threshold at block808and then permit or block the return of credits at blocks810and812, respectively, based on the comparison at block808. If multiple thresholds are employed (e.g., as described above with reference toFIG.6or7or below with reference toFIG.9), the blocking or permitting can be based on comparisons to the multiple thresholds and responsive to the directional passing (e.g., increasing or decreasing of the counter) across a given threshold. The acts shown inFIG.8may be performed in other orders and/or in partially or fully overlapping manners or in conjunction with the acts ofFIG.9. FIG.9illustrates a flow diagram for an example process900that implements memory request modulation. At block902, a controller of a memory device adjusts a value stored in a counter, with the value indicative of a quantity of memory requests that are outstanding at the memory device. 
For example, a controller110of a memory device108can adjust a value620stored in a counter616, with the value620indicative of a quantity of memory requests that are outstanding at the memory device108. The controller110may, for instance, increment (e.g., increase a value of) an outstanding memory request counter616responsive to arriving memory requests316and decrement (e.g., decrease the value of) the outstanding memory request counter616responsive to arriving memory responses364and/or decrement the outstanding memory request counter616responsive to transmitted memory responses318. In some cases, the value620of the counter616can be indicative of a quantity of memory requests316that are pending in the memory device108. For example, the value620of the counter616can be indicative of at least a quantity of requests316for a memory operation that have been added to a request queue452and not yet had a corresponding response364added to a response queue456. A request316may cease to be outstanding relative to the memory device108responsive to receipt of the memory response364at the controller110of a target304, responsive to adding the memory response364to the response queue456, responsive to removing the memory response364from the response queue456, and/or responsive to transmitting the corresponding memory response318from the controller110to a controller118of an initiator302. At block904, the controller of the memory device transmits a credit return based on the value of the counter and at least one threshold. For example, the controller110of the memory device108can transmit a credit return420based on the value620of the counter616and at least one threshold618. The transmission may be based on a comparison including the value620and the at least one threshold618. The controller110may permit transmission of the credit return420responsive to the value620of the counter616being less than the at least one threshold618. On the other hand, the controller110may block transmission of the credit return420responsive to the value620of the counter616being greater than the at least one threshold618. In some cases, the controller110can transmit the credit return420based on the value620of the counter616, a first threshold618-1, and a second threshold618-2. For example, the controller110can compare the value620of the counter616to the first threshold618-1and to the second threshold618-2. Based on the value620of the counter616being less than the first threshold618-1, the controller110can permit transmission of the credit return420. Based on the value620of the counter616being greater than the second threshold618-2, the controller110can block transmission of the credit return420. If employing multiple thresholds, the controller110can manage credit returns in a middle zone between two thresholds in the following example manners. The controller110may permit transmission of the credit return420based on the value620of the counter616being less than the second threshold618-2but greater than the first threshold618-1and responsive to the value620climbing above the first threshold618-1from being below the first threshold618-1(e.g., the value620crosses the first threshold618-1while increasing). 
Further, the controller110may block transmission of the credit return420based on the value620of the counter616being less than the second threshold618-2but greater than the first threshold618-1and responsive to the value620falling below the second threshold618-2from being above the second threshold618-2(e.g., the value620crosses the second threshold618-2while decreasing). For the flow chart(s) and flow diagram(s) described above, the orders in which operations are shown and/or described are not intended to be construed as a limitation. Any number or combination of the described process operations can be combined or rearranged in any order to implement a given method or an alternative method. Operations may also be omitted from or added to the described methods. Further, described operations can be implemented in fully or partially overlapping manners. Aspects of these methods may be implemented in, for example, hardware (e.g., fixed-logic circuitry or a processor in conjunction with a memory), firmware, software, or some combination thereof. The methods may be realized using one or more of the apparatuses or components shown inFIGS.1to7, the components of which may be further divided, combined, rearranged, and so on. The devices and components of these figures generally represent hardware, such as electronic devices, packaged modules, IC chips, or circuits; firmware or the actions thereof; software; or a combination thereof. Thus, these figures illustrate some of the many possible systems or apparatuses capable of implementing the described methods. Unless context dictates otherwise, use herein of the word “or” may be considered use of an “inclusive or,” or a term that permits inclusion or application of one or more items that are linked by the word “or” (e.g., a phrase “A or B” may be interpreted as permitting just “A,” as permitting just “B,” or as permitting both “A” and “B”). Also, as used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. For instance, “at least one of a, b, or c” can cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c, or any other ordering of a, b, and c). Further, items represented in the accompanying figures and terms discussed herein may be indicative of one or more items or terms, and thus reference may be made interchangeably to single or plural forms of the items and terms in this written description. CONCLUSION Although implementations for memory request modulation have been described in language specific to certain features and/or methods, the subject of the appended claims is not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as example implementations for memory request modulation.
11860800
DETAILED DESCRIPTION Recent advances in materials, devices, and integration technology, can be leveraged to provide memory-centric compute topologies. Such topologies can realize advances in compute efficiency and workload throughput, for example, for applications constrained by size, weight, or power requirements. The topologies can be used to facilitate low-latency compute near, or inside of, memory or other data storage elements. The approaches can be particularly well-suited for various compute-intensive operations with sparse lookups, such as in transform computations (e.g., fast Fourier transform computations (FFT)), or in applications such as neural networks or artificial intelligence (AI), financial analytics, or simulations or modeling such as for computational fluid dynamics (CFD), Enhanced Acoustic Simulator for Engineers (EASE), Simulation Program with Integrated Circuit Emphasis (SPICE), and others. Systems, devices, and methods discussed herein can include or use memory-compute systems with processors, or processing capabilities, that are provided in, near, or integrated with memory or data storage components. Such systems are referred to generally herein as compute-near-memory (CNM) systems. A CNM system can be a node-based system with individual nodes in the systems coupled using a system scale fabric. Each node can include or use specialized or general purpose processors, and user-accessible accelerators, with a custom compute fabric to facilitate intensive operations, particularly in environments where high cache miss rates are expected. In an example, each node in a CNM system can have a host processor or processors. Within each node, a dedicated hybrid threading processor can occupy a discrete endpoint of an on-chip network. The hybrid threading processor can have access to some or all of the memory in a particular node of the system, or a hybrid threading processor can have access to memories across a network of multiple nodes via the system scale fabric. The custom compute fabric, or hybrid threading fabric (HTF), at each node can have its own processor(s) or accelerator(s) or memory(ies) and can operate at higher bandwidth than the hybrid threading processor. Different nodes in a compute-near-memory system can be differently configured, such as having different compute capabilities, different types of memories, different interfaces, or other differences. However, the nodes can be commonly coupled to share data and compute resources within a defined address space. In an example, a compute-near-memory system, or a node within the system, can be user-configured for custom operations. A user can provide instructions using a high-level programming language, such as C/C++, that can be compiled and mapped directly into a dataflow architecture of the system, or of one or more nodes in the CNM system. That is, the nodes in the system can include hardware blocks (e.g., memory controllers, atomic units, other customer accelerators, etc.) that can be configured to directly implement or support user instructions to thereby enhance system performance and reduce latency. In an example, a compute-near-memory system can be particularly suited for implementing a hierarchy of instructions and nested loops (e.g., two, three, or more, loops deep, or multiple-dimensional loops) or other parallel or concurrent instructions. A standard compiler can be used to accept high-level language instructions and, in turn, compile directly into the dataflow architecture of one or more of the nodes. 
For example, a node in the system can include the HTF. The HTF can execute in a user space of the CNM system and can initiate its own threads or sub-threads, which can operate in parallel. Each thread can map to a different loop iteration to thereby support multi-dimensional loops. With the capability to initiate such nested loops, among other capabilities, the CNM system can realize significant time savings and latency improvements for compute-intensive operations. The present inventors have recognized, among other things, that a problem to be solved can include compiling compute kernels for execution using a reconfigurable compute architecture. The inventors have further recognized that the problem can include optimizing resource usage and reducing latency throughout the compute architecture during execution of one or multiple threads. In an example, a solution to these and other problems can include or use the HTF of the CNM system to execute various compute kernels. The solution can further include mapping kernel operations to HTF resources based on a cost function that can be evaluated for different operation and resource pairs. In an example, user instructions or programs can define compute kernels for execution using the CNM system. The compute kernels can be parsed and reconstructed in the form of a graph, such as a directed acyclic graph (DAG). The compute resources of the CNM system, such as comprising portions of the HTF, can be represented as a resource graph. Techniques discussed herein can include or use a search algorithm to map the kernel graph or DAG to the resource graph. The search algorithm can include, for example, a branch-and-bound algorithm that evaluates and selects resources based on a cost function that is evaluated for each different pair of particular kernel operations and resources. The cost function can consider, for example, capability, capacity, utilization, latency, power consumption, or other factors, for each resource under consideration for performing a particular portion of the kernel. In an example, a hardware structure of the HTF discussed herein can facilitate compilation of complicated compute kernels, such as including kernels with nested loops. For example, tiles comprising the HTF can be connected using a reconfigurable compute fabric that helps reduce or remove constraints as to which tile-based compute resource, among the multiple available resources, is selected for particular operations in a data flow. Accordingly, the HTF structure itself can help provide flexibility to the compiler. For example, each tile includes synchronous fabric inputs and outputs, asynchronous fabric inputs and outputs, and one or more passthrough channels for low latency communication among tiles in a node. Data flows can thus be mapped to resources on neighboring or non-neighboring tiles using a combination of the different inputs, outputs, and passthrough channels of multiple tiles. In other words, due at least in part to the different communication paths between and among tiles in the HTF, a greater number of low latency paths can be established to realize flows than would be available using other arrays of compute resources without such paths. Since a relatively greater number of flows can be available, such flows can be further optimized by balancing other system-level interests such as resource utilization, performance, and power consumption. 
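As a rough illustration of the mapping step, the following C sketch scores candidate operation-to-resource placements with a weighted cost function of the kind described above, as such a score might be evaluated inside a branch-and-bound search. The fields, weights, and helper names are assumptions for the sketch rather than the compiler's actual interface.

    #include <float.h>
    #include <stddef.h>

    /* Illustrative per-resource estimate for placing one kernel operation. */
    typedef struct {
        double latency;       /* estimated cycles added by placing the op here */
        double utilization;   /* fraction of the resource already claimed      */
        double power;         /* relative power cost of using this resource    */
        int    capable;       /* nonzero if the resource can execute the op    */
    } placement_estimate;

    /* Lower is better; incapable resources are excluded with an infinite cost.
     * The weights are arbitrary placeholders. */
    static double placement_cost(const placement_estimate *e)
    {
        if (!e->capable)
            return DBL_MAX;
        return 1.0 * e->latency + 0.5 * e->utilization + 0.25 * e->power;
    }

    /* Pick the cheapest resource for one kernel operation given per-resource
     * estimates, as a bounding step inside a branch-and-bound search might. */
    static size_t best_resource(const placement_estimate *est, size_t n_resources)
    {
        size_t best = 0;
        double best_cost = DBL_MAX;
        for (size_t r = 0; r < n_resources; r++) {
            double c = placement_cost(&est[r]);
            if (c < best_cost) {
                best_cost = c;
                best = r;
            }
        }
        return best;
    }

A fuller mapper might also account for remaining capacity as placements accumulate and prune branches whose partial cost already exceeds the best complete mapping found, in keeping with the branch-and-bound approach described above.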
A compute-near-memory system, or nodes or tiles of a compute-near-memory system, can include or use various memory devices, controllers, and interconnects, among other things. In an example, the system can comprise various interconnected nodes and the nodes, or groups of nodes, can be implemented using chiplets. Chiplets are an emerging technique for integrating various processing functionality. Generally, a chiplet system is made up of discrete chips (e.g., integrated circuits (ICs) on different substrate or die) that are integrated on an interposer and packaged together. This arrangement is distinct from single chips (e.g., ICs) that contain distinct device blocks (e.g., intellectual property (IP) blocks) on one substrate (e.g., single die), such as a system-on-a-chip (SoC), or discretely packaged devices integrated on a board. In general, chiplets provide production benefits over single die chips, including higher yields or reduced development costs.FIG.6AandFIG.6B, discussed below, illustrate generally an example of a chiplet system such as can comprise a compute-near-memory system. In some examples, a compute-near-memory system is programmed to arrange components of a reconfigurable compute fabric, such as the hybrid threading fabric (HTF) described herein, into one or more synchronous flows. The reconfigurable compute fabric comprises one or more hardware flow controllers and one or more hardware compute elements that can be arranged to form one or more synchronous flows, as described herein. A compute element comprises a compute element memory and a processor or other suitable logic circuitry forming a compute pipeline for processing received data. In some examples, a compute element comprises multiple parallel processing lanes, such as single instruction multiple data (SIMD) processing lanes. A compute element can further comprise circuitry for sending and receiving synchronous and asynchronous messages to flow controllers, other compute elements, and other system components, as described herein. Example compute elements are described herein with respect to HTF tiles, such as the tiles504,510,512ofFIG.5, among others. A flow controller can include a processor or other logic circuitry for managing a synchronous flow, as described herein. The flow controller comprises circuitry for sending synchronous and asynchronous messages to compute elements, other flow controllers, and other system components, as described herein. In some examples, a flow controller is implemented using a tile base of one or more of the HTF tiles504,510,512described herein. A synchronous flow can include hardware arranged in a reconfigurable compute fabric that comprises a hardware flow controller and an ordered synchronous data path comprising one or more hardware compute elements. A synchronous flow can execute one or more threads of work. To execute a thread, the hardware components of the synchronous flow pass synchronous messages and execute a predetermined set of operations in the order of the synchronous flow. The present inventors have recognized, among other things, that a problem to be solved can include avoiding latency in threads or compiled HTF compute kernels, such as can result from occupied or blocked routing paths in a flow. A solution to the problem can include or use one or more delay registers at each tile in the fabric. In an example, the solution includes pipelined delay and output registers wherein the delay registers are physical replicas of each corresponding output register.
The delay registers can provide a buffering location for compute results, such as can be stored for one or more clock cycles before further processing or before output to another tile or fabric location. Such buffering by the delay register or registers can help free up the corresponding connected output register, through which new or other results can be routed, thereby enhancing throughput for each tile in the HTF. In an example, a thread is completed when all of the compute elements of the synchronous flow complete their programmed operations in the predetermined order of the synchronous flow. When a thread is completed, a pipeline of synchronous messages will have propagated between the various compute elements in the predetermined order of the synchronous flow, beginning at the flow controller. Because the arrangement is synchronous, the completion of a thread may occur in a fixed amount of time (e.g., a predictable number of clock cycles from when the flow controller initiates the synchronous flow). Arrangements of HTF to include synchronous flows may facilitate parallel processing. For example, a flow controller for a synchronous flow need not wait for one thread to complete before initiating an additional thread. Consider an example synchronous flow including a flow controller and multiple compute elements. The flow controller initiates a first thread by providing a synchronous message to the first compute element of the synchronous flow. The first compute element can perform its processing and direct a second synchronous message to the next compute element, and so on. After the first compute element completes its processing and directs the synchronous message to the next compute element, the flow controller can initiate an additional thread at the synchronous flow, for example, by providing an additional synchronous message to the first compute element. The dedicated delay and output registers can help coordinate timing of one or more components of the same flow or of parallel flows. Parallelization of synchronous flows at a reconfigurable compute fabric can use compute elements that operate at a predefined cadence or Spoke Count, such as using the various tiles described herein. For example, a compute element may use a predetermined number of clock cycles to perform various operations, such as receiving synchronous messages, performing processing operations, sending synchronous messages, etc. The compute element can be configured to receive a new synchronous message and begin operations for a thread while operations from a previous thread are still propagating through a different compute element. The new thread can be a different thread of the same synchronous flow of the previous thread or can be a thread of a different synchronous flow. A synchronous flow can use an asynchronous fabric of the reconfigurable compute fabric to communicate with other synchronous flows and/or other components of the reconfigurable compute fabric using asynchronous messages. For example, a flow controller may receive an asynchronous message from a dispatch interface and/or from another flow controller instructing the flow controller to begin a thread at or using a synchronous flow. The dispatch interface may interface between the reconfigurable compute fabric and other system components. Also, in some examples, a synchronous flow may send an asynchronous message to the dispatch interface to indicate completion of a thread. Asynchronous messages can be used by synchronous flows to access memory. 
For example, the reconfigurable compute fabric can include one or more memory interfaces. Memory interfaces are hardware components that can be used by a synchronous flow or components thereof to access an external memory that is not part of the synchronous flow. A thread executed at a synchronous flow can include sending a read and/or write request to a memory interface. Because reads and writes are asynchronous, the thread that initiates a read or write request to the memory interface may not receive the results of the request. Instead, the results of a read or write request can be provided to a different thread executed at a different synchronous flow. Delay and output registers in one or more of the tiles can help coordinate and maximize efficiency of a first flow, for example, by precisely timing engagement of particular compute resources of one tile with arrival of data relevant to the first flow. The registers can help enable the particular compute resources of the same tile to be repurposed for flows other than the first flow, for example while the first flow dwells or waits for other data or operations to complete. Such other data or operations can depend on one or more other resources on the fabric. In an example, a reconfigurable compute fabric can use a first synchronous flow for initiating a read request and a second synchronous flow for receiving the results of the read request. A first thread at the first synchronous flow can send an asynchronous read request message to a memory interface. The first thread can also send an asynchronous continue-type message to the flow controller of the second synchronous flow, where the continue message indicates the read request. The memory interface acquires the requested data from the memory and directs the read data to an appropriate compute element of the second synchronous flow. The compute element then directs an asynchronous message to the second flow controller indicating that the data has been received. In some examples, the memory interface provides the read data directly to the second flow controller. After receiving an indication that the read data has been received, the second flow controller initiates a thread at the second synchronous flow to further process the result of the read request. In some examples, a reconfigurable compute fabric, such as the HTF described herein, is used to execute one or more loops, such as a set of nested loops. To execute a loop, the reconfigurable compute fabric can use flow controllers and compute elements arranged into one or more synchronous flows, as described herein. For example, the flow controller for a synchronous flow can initiate a thread at the synchronous flow for each iteration of a loop. Consider the simple example loop given by code segment [1] below:

    for i = 1, 10 {
        x[i] = x[i - 1] * 2;
        saveMem = x[i];
    }   [1]

A flow controller may begin the example loop by initiating a first thread at the synchronous flow for an i=1 loop iteration. In this example, an initial value for x[i−1] is passed by the flow controller to the first compute element with the payload data of the initial synchronous message. The compute element or elements of the synchronous flow determines a value for x[1] and returns the value for x[1] to the flow controller as a synchronous or asynchronous message. The flow controller then initiates a second thread at the synchronous flow for the i=2 loop iteration, passing the returned value of x[1] as x[i−1] in a synchronous message.
This process continues until all iterations of the loop are completed and a value for x[10] is returned. The example loop above uses a single synchronous flow for each iteration of the loop. In some examples, however, multiple synchronous flows can be used for each loop iteration. Consider the example loop given by code segment [2] below:

    for i = 1, 10 {
        x[i] = i * y[i];
        saveMem = x[i];
    }   [2]

In this example, each loop iteration involves multiplying i by a value y[i] read from memory, and then writing the result to memory. Accordingly, each loop iteration includes an asynchronous memory read and an asynchronous memory write. As described herein, the memory read involves sending an asynchronous message to a memory interface and then waiting for the memory interface to reply with another asynchronous message including the requested data. Because the memory read is asynchronous, each loop iteration may use synchronous flow threads executing at two different synchronous flows. For the i=1 loop iteration, a thread at a first synchronous flow sends an asynchronous message to the memory interface including a read request for the value of y[1]. The thread at the first synchronous flow may also send an asynchronous message to a second flow controller of the second synchronous flow instructing the second flow controller to expect the result of the read request (either directly from the memory interface or from a compute element of the second synchronous flow that has received the read data). The memory interface initiates a read of the value of y[1] and provides the value of y[1] to the second synchronous flow via an asynchronous message. Upon receiving an asynchronous message indicating that the read data is received, the second flow controller initiates a thread at the second synchronous flow. (The returned value of y[1] can be provided to the compute elements, for example, via synchronous communications of the thread and/or directly from the memory interface prior to initiation of the thread.) The second thread determines the value of x[1] and sends an asynchronous message to the memory interface including a write request for x[1]. In some examples, the number of threads that a synchronous flow controller can initiate at a synchronous flow is limited by the resources of the components of the synchronous flow. For example, threads of a synchronous flow may write data to the various local compute element memories at the synchronous flow compute elements. If too many synchronous flow threads are initiated at the same time, some synchronous flow threads may lack sufficient local memory or other resources. This may prevent a synchronous flow thread from writing its data and/or cause it to overwrite the locally-stored data of other synchronous flow threads. To prevent this, a reconfigurable compute fabric may limit the number of synchronous flow threads that can be initiated at a given time. For example, the reconfigurable compute fabric may implement a pool of thread identifiers (IDs). A flow controller may determine that a thread ID is available before implementing a synchronous flow thread. In some examples, the synchronous messages of a synchronous flow thread may include an indication of the thread ID for a given thread. When a synchronous flow thread is complete, it may send an asynchronous free message, for example, to the flow controller that initiated the synchronous flow thread.
This indicates to the flow controller that the thread ID (and associated resources) for the completed synchronous flow thread are now available for use by a new synchronous flow thread. When a synchronous flow is used to execute a loop, the synchronous flow threads executing different iterations of the loop may have need to read data from and/or write data to memory. For example, during execution of a synchronous flow thread, one or more compute elements may read operand data specific to the current loop iteration from compute element memory. Similarly, one or more compute elements may write result data specific to the current loop iteration to compute element, or tile-specific, memory. Further, in some examples, during execution of a synchronous flow thread, a compute element may make loop-iteration specific reads from or writes to external memory via a memory interface. These read, write, compute, or other operations can create timing issues that can reduce system efficiency and resource usage. For example, compute resources may be unused or underused during various clock cycles when data is moved through the same or other compute element in a system. In an example, the system can include or use compute element delay registers with a loop-back or feedback path to help coordinate flow threads and increase resource usage, such as by avoiding output register blocking or avoiding extraneous read or write operations to temporary storage locations. FIG.1illustrates generally a first example of a compute-near-memory system, or CNM system102. The example of the CNM system102includes multiple different memory-compute nodes, such as can each include various compute-near-memory devices. Each node in the system can operate in its own operating system (OS) domain (e.g., Linux, among others). In an example, the nodes can exist collectively in a common OS domain of the CNM system102. The example ofFIG.1includes an example of a first memory-compute node104of the CNM system102. The CNM system102can have multiple nodes, such as including different instances of the first memory-compute node104, that are coupled using a scale fabric106. In an example, the architecture of the CNM system102can support scaling with up to n different memory-compute nodes (e.g., n=4096) using the scale fabric106. As further discussed below, each node in the CNM system102can be an assembly of multiple devices. The CNM system102can include a global controller for the various nodes in the system, or a particular memory-compute node in the system can optionally serve as a host or controller to one or multiple other memory-compute nodes in the same system. The various nodes in the CNM system102can thus be similarly or differently configured. In an example, each node in the CNM system102can comprise a host system that uses a specified operating system. The operating system can be common or different among the various nodes in the CNM system102. In the example ofFIG.1, the first memory-compute node104comprises a host system108, a first switch110, and a first memory-compute device112. The host system108can comprise a processor, such as can include an X86, ARM, RISC-V, or other type of processor. The first switch110can be configured to facilitate communication between or among devices of the first memory-compute node104or of the CNM system102, such as using a specialized or other communication protocol, generally referred to herein as a chip-to-chip protocol interface (CTCPI). 
That is, the CTCPI can include a specialized interface that is unique to the CNM system102, or can include or use other interfaces such as the compute express link (CXL) interface, the peripheral component interconnect express (PCIe) interface, or the chiplet protocol interface (CPI), among others. The first switch110can include a switch configured to use the CTCPI. For example, the first switch110can include a CXL switch, a PCIe switch, a CPI switch, or other type of switch. In an example, the first switch110can be configured to couple differently configured endpoints. For example, the first switch110can be configured to convert packet formats, such as between PCIe and CPI formats, among others. The CNM system102is described herein in various example configurations, such as comprising a system of nodes, and each node can comprise various chips (e.g., a processor, a switch, a memory device, etc.). In an example, the first memory-compute node104in the CNM system102can include various chips implemented using chiplets. In the below-discussed chiplet-based configuration of the CNM system102, inter-chiplet communications, as well as additional communications within the system, can use a CPI network. The CPI network described herein is an example of the CTCPI, that is, as a chiplet-specific implementation of the CTCPI. As a result, the below-described structure, operations, and functionality of CPI can apply equally to structures, operations, and functions as may be otherwise implemented using non-chiplet-based CTCPI implementations. Unless expressly indicated otherwise, any discussion herein of CPI applies equally to CTCPI. A CPI interface includes a packet-based network that supports virtual channels to enable a flexible and high-speed interaction between chiplets, such as can comprise portions of the first memory-compute node104or the CNM system102. The CPI can enable bridging from intra-chiplet networks to a broader chiplet network. For example, the Advanced eXtensible Interface (AXI) is a specification for intra-chip communications. AXI specifications, however, cover a variety of physical design options, such as the number of physical channels, signal timing, power, etc. Within a single chip, these options are generally selected to meet design goals, such as power consumption, speed, etc. However, to achieve the flexibility of a chiplet-based memory-compute system, an adapter, such as using CPI, can interface between the various AXI design options that can be implemented in the various chiplets. By enabling a physical channel-to-virtual channel mapping and encapsulating time-based signaling with a packetized protocol, CPI can be used to bridge intra-chiplet networks, such as within a particular memory-compute node, across a broader chiplet network, such as across the first memory-compute node104or across the CNM system102. The CNM system102is scalable to include multiple-node configurations. That is, multiple different instances of the first memory-compute node104, or of other differently configured memory-compute nodes, can be coupled using the scale fabric106, to provide a scaled system. Each of the memory-compute nodes can run its own operating system and can be configured to jointly coordinate system-wide resource usage. In the example ofFIG.1, the first switch110of the first memory-compute node104is coupled to the scale fabric106. 
The scale fabric106can provide a switch (e.g., a CTCPI switch, a PCIe switch, a CPI switch, or other switch) that can facilitate communication among and between different memory-compute nodes. In an example, the scale fabric106can help various nodes communicate in a partitioned global address space (PGAS). In an example, the first switch110from the first memory-compute node104is coupled to one or multiple different memory-compute devices, such as including the first memory-compute device112. The first memory-compute device112can comprise a chiplet-based architecture referred to herein as a compute-near-memory (CNM) chiplet. A packaged version of the first memory-compute device112can include, for example, one or multiple CNM chiplets. The chiplets can be communicatively coupled using CTCPI for high bandwidth and low latency. In the example ofFIG.1, the first memory-compute device112can include a network on chip (NOC) or first NOC118. Generally, a NOC is an interconnection network within a device, connecting a particular set of endpoints. InFIG.1, the first NOC118can provide communications and connectivity between the various memory, compute resources, and ports of the first memory-compute device112. In an example, the first NOC118can comprise a folded Clos topology, such as within each instance of a memory-compute device, or as a mesh that couples multiple memory-compute devices in a node. The Clos topology, such as can use multiple, smaller radix crossbars to provide functionality associated with a higher radix crossbar topology, offers various benefits. For example, the Clos topology can exhibit consistent latency and bisection bandwidth across the NOC. The first NOC118can include various distinct switch types including hub switches, edge switches, and endpoint switches. Each of the switches can be constructed as crossbars that provide substantially uniform latency and bandwidth between input and output nodes. In an example, the endpoint switches and the edge switches can include two separate crossbars, one for traffic headed to the hub switches, and the other for traffic headed away from the hub switches. The hub switches can be constructed as a single crossbar that switches all inputs to all outputs. In an example, the hub switches can have multiple ports each (e.g., four or six ports each), such as depending on whether the particular hub switch participates in inter-chip communications. A number of hub switches that participates in inter-chip communications can be set by an inter-chip bandwidth requirement. The first NOC118can support various payloads (e.g., from 8 to 64-byte payloads; other payload sizes can similarly be used) between compute elements and memory. In an example, the first NOC118can be optimized for relatively smaller payloads (e.g., 8-16 bytes) to efficiently handle access to sparse data structures. In an example, the first NOC118can be coupled to an external host via a first physical-layer interface114, a PCIe subordinate module116or endpoint, and a PCIe principal module126or root port. That is, the first physical-layer interface114can include an interface to allow an external host processor to be coupled to the first memory-compute device112. An external host processor can optionally be coupled to one or multiple different memory-compute devices, such as using a PCIe switch or other, native protocol switch. Communication with the external host processor through a PCIe-based switch can limit device-to-device communication to that supported by the switch. 
Communication through a memory-compute device-native protocol switch such as using CTCPI, in contrast, can allow for more full communication between or among different memory-compute devices, including support for a partitioned global address space, such as for creating threads of work and sending events. In an example, the CTCPI protocol can be used by the first NOC118in the first memory-compute device112, and the first switch110can include a CTCPI switch. The CTCPI switch can allow CTCPI packets to be transferred from a source memory-compute device, such as the first memory-compute device112, to a different, destination memory-compute device (e.g., on the same or other node), such as without being converted to another packet format. In an example, the first memory-compute device112can include an internal host processor122. The internal host processor122can be configured to communicate with the first NOC118or other components or modules of the first memory-compute device112, for example, using the internal PCIe principal module126, which can help eliminate a physical layer that would consume time and energy. In an example, the internal host processor122can be based on a RISC-V ISA processor, and can use the first physical-layer interface114to communicate outside of the first memory-compute device112, such as to other storage, networking, or other peripherals to the first memory-compute device112. The internal host processor122can control the first memory-compute device112and can act as a proxy for operating system-related functionality. The internal host processor122can include a relatively small number of processing cores (e.g., 2-4 cores) and a host memory device124(e.g., comprising a DRAM module). In an example, the internal host processor122can include PCI root ports. When the internal host processor122is in use, then one of its root ports can be connected to the PCIe subordinate module116. Another of the root ports of the internal host processor122can be connected to the first physical-layer interface114, such as to provide communication with external PCI peripherals. When the internal host processor122is disabled, then the PCIe subordinate module116can be coupled to the first physical-layer interface114to allow an external host processor to communicate with the first NOC118. In an example of a system with multiple memory-compute devices, the first memory-compute device112can be configured to act as a system host or controller. In this example, the internal host processor122can be in use, and other instances of internal host processors in the respective other memory-compute devices can be disabled. The internal host processor122can be configured at power-up of the first memory-compute device112, such as to allow the host to initialize. In an example, the internal host processor122and its associated data paths (e.g., including the first physical-layer interface114, the PCIe subordinate module116, etc.) can be configured from input pins to the first memory-compute device112. One or more of the pins can be used to enable or disable the internal host processor122and configure the PCI (or other) data paths accordingly. In an example, the first NOC118can be coupled to the scale fabric106via a scale fabric interface module136and a second physical-layer interface138. The scale fabric interface module136, or SIF, can facilitate communication between the first memory-compute device112and a device space, such as a partitioned global address space (PGAS). 
The PGAS can be configured such that a particular memory-compute device, such as the first memory-compute device112, can access memory or other resources on a different memory-compute device (e.g., on the same or different node), such as using a load/store paradigm. Various scalable fabric technologies can be used, including CTCPI, CPI, Gen-Z, PCI, or Ethernet bridged over CXL. The scale fabric106can be configured to support various packet formats. In an example, the scale fabric106supports orderless packet communications, or supports ordered packets such as can use a path identifier to spread bandwidth across multiple equivalent paths. The scale fabric106can generally support remote operations such as remote memory read, write, and other built-in atomics, remote memory atomics, remote memory-compute device send events, and remote memory-compute device call and return operations. In an example, the first NOC118can be coupled to one or multiple different memory modules, such as including a first memory device128. The first memory device128can include various kinds of memory devices, for example, LPDDR5 or GDDR6, among others. In the example ofFIG.1, the first NOC118can coordinate communications with the first memory device128via a memory controller130that can be dedicated to the particular memory module. In an example, the memory controller130can include a memory module cache and an atomic operations module. The atomic operations module can be configured to provide relatively high-throughput atomic operators, such as including integer and floating-point operators. The atomic operations module can be configured to apply its operators to data within the memory module cache (e.g., comprising SRAM memory side cache), thereby allowing back-to-back atomic operations using the same memory location, with minimal throughput degradation. The memory module cache can provide storage for frequently accessed memory locations, such as without having to re-access the first memory device128. In an example, the memory module cache can be configured to cache data only for a particular instance of the memory controller130. In an example, the memory controller130includes a DRAM controller configured to interface with the first memory device128, such as including DRAM devices. The memory controller130can provide access scheduling and bit error management, among other functions. In an example, the first NOC118can be coupled to a hybrid threading processor (HTP140), a hybrid threading fabric (HTF142) and a host interface and dispatch module (HIF120). The HIF120can be configured to facilitate access to host-based command request queues and response queues. In an example, the HIF120can dispatch new threads of execution on processor or compute elements of the HTP140or the HTF142. In an example, the HIF120can be configured to maintain workload balance across the HTP140module and the HTF142module. The hybrid threading processor, or HTP140, can include an accelerator, such as can be based on a RISC-V instruction set. The HTP140can include a highly threaded, event-driven processor in which threads can be executed in single instruction rotation, such as to maintain high instruction throughput. The HTP140comprises relatively few custom instructions to support low-overhead threading capabilities, event send/receive, and shared memory atomic operators. The hybrid threading fabric, or HTF142, can include an accelerator, such as can include a non-von Neumann, coarse-grained, reconfigurable processor. 
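Returning to the atomic operations module described above for the memory controller130, the following C++ sketch models how back-to-back atomic operations on the same memory location could be serviced from a memory-side cache without re-accessing the memory device. The class, method names, and cache organization are hypothetical simplifications, not the actual controller design.

#include <cstdint>
#include <unordered_map>

class AtomicOpsModule {
public:
    // Atomically add 'operand' to the value at 'addr' and return the previous
    // value, operating directly on the cached copy so that back-to-back
    // atomics on the same location avoid re-accessing the memory device.
    uint64_t fetchAndAdd(uint64_t addr, uint64_t operand) {
        auto it = cache_.find(addr);
        if (it == cache_.end())                       // first touch: fill from DRAM
            it = cache_.emplace(addr, loadFromDram(addr)).first;
        uint64_t previous = it->second;
        it->second += operand;                        // update the cached copy
        dirty_[addr] = true;                          // write back lazily
        return previous;
    }

private:
    uint64_t loadFromDram(uint64_t addr) { (void)addr; return 0; }  // stub backing read
    std::unordered_map<uint64_t, uint64_t> cache_;    // memory-side cache model
    std::unordered_map<uint64_t, bool> dirty_;        // lines pending write-back
};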
The HTF142can be optimized for high-level language operations and data types (e.g., integer or floating point). In an example, the HTF142can support data flow computing. The HTF142can be configured to use substantially all of the memory bandwidth available on the first memory-compute device112, such as when executing memory-bound compute kernels. The HTP and HTF accelerators of the CNM system102can be programmed using various high-level, structured programming languages. For example, the HTP and HTF accelerators can be programmed using C/C++, such as using the LLVM compiler framework. The HTP accelerator can leverage an open source compiler environment, such as with various added custom instruction sets configured to improve memory access efficiency, provide a message passing mechanism, and manage events, among other things. In an example, the HTF accelerator can be designed to enable programming of the HTF142using a high-level programming language, and the compiler can generate a simulator configuration file or a binary file that runs on the HTF142hardware. The HTF142can provide a mid-level language for expressing algorithms precisely and concisely, while hiding configuration details of the HTF accelerator itself. In an example, the HTF accelerator tool chain can use an LLVM front-end compiler and the LLVM intermediate representation (IR) to interface with an HTF accelerator back end. In an example, the kernel mapping systems and methods discussed herein can leverage LLVM instruction selection (ISel) passes to realize HTF implementations. The solution can include using a virtualized instruction set architecture (ISA) of the HTF that can be described to ISel with standard LLVM mechanisms. For example, the solution can provide a subset of HTF instructions with virtual register operands, each of which looks like a standard RISC instruction. The ISA can further provide various virtual operations that can be lowered to HTF instructions in subsequent passes. For example, an HTF memory load can include operations such as a memory request followed by a memory response. Instead of attempting to describe this to ISel, a virtual RISC-like htf.load instruction can be lowered into a pair of operations before scheduling. In an example, the solution can further include or use a virtualized global register file for ISel. With such a virtualized HTF description, results from ISel can look like an HTF instruction stream, which allows for reuse of machine optimization passes of LLVM. In an example, compute kernels can be parsed and reconstructed to form a DAG within the custom LLVM backend. The optimized ISA representation of the kernel can then be taken as an input and rewritten from virtual HTF instructions into actual HTF instructions. The custom backend can further analyze data flow of the kernel, such as based on the input ISA representation, and then group any asynchronous instructions (e.g., htf.load and htf.store) based on data dependencies, and then partition the instructions into synchronous dataflows (SDFs), with each SDF hosting one group of asynchronous operations. A startup instruction (e.g., htf.startsync) can be inserted at the beginning of each SDF to initiate it. Various other instructions can be used such as to continue to a next or subsequent SDF (e.g., htf.cont), to start a new SDF loop (e.g., htf.loop), or to read from tile memory (e.g., htf.rdtm). Other instructions can be similarly provided to help translate basic kernel instructions into HTF instructions.
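As an illustration of the partitioning step described above, the following C++ sketch groups asynchronous operations by data dependence, places each group in its own synchronous dataflow (SDF), and inserts a startup instruction at the head of each SDF. The Instr type, the grouping key, and the emitted opcode strings are stand-ins, not the actual compiler data structures.

#include <string>
#include <vector>

// Simplified instruction record; dependencyGroup identifies which group of
// asynchronous operations this instruction belongs to (assumed to satisfy
// 0 <= dependencyGroup < numGroups).
struct Instr {
    std::string opcode;   // e.g., "htf.load", "htf.store", "add"
    int dependencyGroup;
};

std::vector<std::vector<Instr>> partitionIntoSdfs(const std::vector<Instr>& kernel,
                                                  int numGroups) {
    std::vector<std::vector<Instr>> sdfs(numGroups);
    for (int g = 0; g < numGroups; ++g)
        sdfs[g].push_back({"htf.startsync", g});      // initiate each SDF
    for (const Instr& in : kernel)
        sdfs[in.dependencyGroup].push_back(in);        // one async group per SDF
    for (int g = 0; g + 1 < numGroups; ++g)
        sdfs[g].push_back({"htf.cont", g + 1});        // continue to the next SDF
    return sdfs;
}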
FIG.2illustrates generally an example of a memory subsystem200of a memory-compute device, according to an embodiment. The example of the memory subsystem200includes a controller202, a programmable atomic unit208, and a second NOC206. The controller202can include or use the programmable atomic unit208to carry out operations using information in a memory device204. In an example, the memory subsystem200comprises a portion of the first memory-compute device112from the example ofFIG.1, such as including portions of the first NOC118or of the memory controller130. In the example ofFIG.2, the second NOC206is coupled to the controller202and the controller202can include a memory control module210, a local cache module212, and a built-in atomics module214. In an example, the built-in atomics module214can be configured to handle relatively simple, single-cycle, integer atomics. The built-in atomics module214can perform atomics at the same throughput as, for example, normal memory read or write operations. In an example, an atomic memory operation can include a combination of storing data to the memory, performing an atomic memory operation, and then responding with load data from the memory. The local cache module212, such as can include an SRAM cache, can be provided to help reduce latency for repetitively-accessed memory locations. In an example, the local cache module212can provide a read buffer for sub-memory line accesses. The local cache module212can be particularly beneficial for compute elements that have relatively small or no data caches. The memory control module210, such as can include a DRAM controller, can provide low-level request buffering and scheduling, such as to provide efficient access to the memory device204, such as can include a DRAM device. In an example, the memory device204can include or use a GDDR6 DRAM device, such as having 16 Gb density and 64 Gb/sec peak bandwidth. Other devices can similarly be used. In an example, the programmable atomic unit208can comprise single-cycle or multiple-cycle operator such as can be configured to perform integer addition or more complicated multiple-instruction operations such as bloom filter insert. In an example, the programmable atomic unit208can be configured to perform load and store-to-memory operations. The programmable atomic unit208can be configured to leverage the RISC-V ISA with a set of specialized instructions to facilitate interactions with the controller202to atomically perform user-defined operations. Programmable atomic requests, such as received from an on-node or off-node host, can be routed to the programmable atomic unit208via the second NOC206and the controller202. In an example, custom atomic operations (e.g., carried out by the programmable atomic unit208) can be identical to built-in atomic operations (e.g., carried out by the built-in atomics module214) except that a programmable atomic operation can be defined or programmed by the user rather than the system architect. In an example, programmable atomic request packets can be sent through the second NOC206to the controller202, and the controller202can identify the request as a custom atomic. The controller202can then forward the identified request to the programmable atomic unit208. FIG.3illustrates generally an example of a programmable atomic unit302for use with a memory controller, according to an embodiment. In an example, the programmable atomic unit302can comprise or correspond to the programmable atomic unit208from the example ofFIG.2. 
That is,FIG.3illustrates components in an example of a programmable atomic unit302(PAU), such as those noted above with respect toFIG.2(e.g., in the programmable atomic unit208), or toFIG.1(e.g., in an atomic operations module of the memory controller130). As illustrated inFIG.3, the programmable atomic unit302includes a PAU processor or PAU core306, a PAU thread control304, an instruction SRAM308, a data cache310, and a memory interface312to interface with the memory controller314. In an example, the memory controller314comprises an example of the controller202from the example ofFIG.2. In an example, the PAU core306is a pipelined processor such that multiple stages of different instructions are executed together per clock cycle. The PAU core306can include a barrel-multithreaded processor, with thread control304circuitry to switch between different register files (e.g., sets of registers containing current processing state) upon each clock cycle. This enables efficient context switching between currently executing threads. In an example, the PAU core306supports eight threads, resulting in eight register files. In an example, some or all of the register files are not integrated into the PAU core306, but rather reside in a local data cache310or the instruction SRAM308. This reduces circuit complexity in the PAU core306by eliminating the traditional flip-flops used for registers in such memories. The local PAU memory can include instruction SRAM308, such as can include instructions for various atomics. The instructions comprise sets of instructions to support various application-loaded atomic operators. When an atomic operator is requested, such as by an application chiplet, a set of instructions corresponding to the atomic operator are executed by the PAU core306. In an example, the instruction SRAM308can be partitioned to establish the sets of instructions. In this example, the specific programmable atomic operator being requested by a requesting process can identify the programmable atomic operator by the partition number. The partition number can be established when the programmable atomic operator is registered with (e.g., loaded onto) the programmable atomic unit302. Other metadata for the programmable instructions can be stored in memory (e.g., in partition tables) in memory local to the programmable atomic unit302. In an example, atomic operators manipulate the data cache310, which is generally synchronized (e.g., flushed) when a thread for an atomic operator completes. Thus, aside from initial loading from the external memory, such as from the memory controller314, latency can be reduced for most memory operations during execution of a programmable atomic operator thread. A pipelined processor, such as the PAU core306, can experience an issue when an executing thread attempts to issue a memory request if an underlying hazard condition would prevent such a request. Here, the memory request is to retrieve data from the memory controller314, whether it be from a cache on the memory controller314or off-die memory. To resolve this issue, the PAU core306is configured to deny the memory request for a thread. Generally, the PAU core306or the thread control304can include circuitry to enable one or more thread rescheduling points in the pipeline. Here, the denial occurs at a point in the pipeline that is beyond (e.g., after) these thread rescheduling points. In an example, the hazard occurred beyond the rescheduling point. 
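The barrel-multithreaded behavior described above for the PAU core306can be pictured with the following C++ sketch, in which a scheduler rotates to a different ready thread's register file on each clock cycle. The register count, the ready-tracking scheme, and the class names are assumptions made for illustration.

#include <array>
#include <cstdint>

// One register file per hardware thread (current processing state).
struct RegisterFile {
    std::array<uint64_t, 32> regs{};
};

class BarrelScheduler {
public:
    static constexpr int kThreads = 8;      // eight threads -> eight register files

    // Called once per clock cycle: rotate to the next thread that is ready to
    // issue, skipping threads parked on unresolved events or hazards.
    int selectNextThread() {
        for (int i = 1; i <= kThreads; ++i) {
            int candidate = (current_ + i) % kThreads;
            if (ready_[candidate]) { current_ = candidate; return candidate; }
        }
        return current_;                     // no other thread ready; stay put
    }

    RegisterFile& activeRegisters() { return files_[current_]; }
    void setReady(int thread, bool ready) { ready_[thread] = ready; }

private:
    std::array<RegisterFile, kThreads> files_{};
    std::array<bool, kThreads> ready_{};     // default: not ready
    int current_ = 0;
};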
Here, a preceding instruction in the thread created the hazard after the memory request instruction passed the last thread rescheduling point prior to the pipeline stage in which the memory request could be made. In an example, to deny the memory request, the PAU core306is configured to determine (e.g., detect) that there is a hazard on memory indicated in the memory request. Here, hazard denotes any condition such that allowing (e.g., performing) the memory request will result in an inconsistent state for the thread. In an example, the hazard is an in-flight memory request. Here, whether or not the data cache310includes data for the requested memory address, the presence of the in-flight memory request makes it uncertain what the data in the data cache310at that address should be. Thus, the thread must wait for the in-flight memory request to be completed to operate on current data. The hazard is cleared when the memory request completes. In an example, the hazard is a dirty cache line in the data cache310for the requested memory address. Although the dirty cache line generally indicates that the data in the cache is current and the memory controller version of this data is not, an issue can arise on thread instructions that do not operate from the cache. An example of such an instruction uses a built-in atomic operator, or other separate hardware block, of the memory controller314. In the context of a memory controller, the built-in atomic operators can be separate from the programmable atomic unit302and do not have access to the data cache310or instruction SRAM308inside the PAU. If the cache line is dirty, then the built-in atomic operator will not be operating on the most current data until the data cache310is flushed to synchronize the cache and the other or off-die memories. This same situation could occur with other hardware blocks of the memory controller, such as cryptography block, encoder, etc. FIG.4illustrates an example of a hybrid threading processor (HTP) accelerator, or HTP accelerator400. The HTP accelerator400can comprise a portion of a memory-compute device, according to an embodiment. In an example, the HTP accelerator400can include or comprise the HTP140from the example ofFIG.1. The HTP accelerator400includes, for example, a HTP core402, an instruction cache404, a data cache406, a translation block408, a memory interface410, and a thread controller412. The HTP accelerator400can further include a dispatch interface414and a NOC interface416, such as for interfacing with a NOC such as the first NOC118from the example ofFIG.1, the second NOC206from the example ofFIG.2, or other NOC. In an example, the HTP accelerator400includes a module that is based on a RISC-V instruction set, and can include a relatively small number of other or additional custom instructions to support a low-overhead, threading-capable Hybrid Threading (HT) language. The HTP accelerator400can include a highly-threaded processor core, the HTP core402, in which, or with which, threads can be executed in a single instruction rotation, such as to maintain high instruction throughput. In an example, a thread can be paused when it waits for other, pending events to complete. This can allow the compute resources to be efficiently used on relevant work instead of polling. In an example, multiple-thread barrier synchronization can use efficient HTP-to-HTP and HTP-to/from-Host messaging, such as can allow thousands of threads to initialize or wake in, for example, tens of clock cycles. 
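Returning to the hazard conditions described above for the programmable atomic unit, the following C++ sketch checks the two cases called out in the text: an in-flight memory request to the same address, and a dirty cache line when the requested operation is serviced by a block that cannot see the PAU data cache. The data structures are simplified stand-ins rather than the actual hardware state.

#include <cstdint>
#include <unordered_map>
#include <unordered_set>

struct CacheLineState { bool present = false; bool dirty = false; };

class HazardDetector {
public:
    bool hasHazard(uint64_t addr, bool bypassesDataCache) const {
        if (inFlight_.count(addr))                    // case 1: in-flight request
            return true;
        auto it = lines_.find(addr);
        if (bypassesDataCache && it != lines_.end() && it->second.dirty)
            return true;                              // case 2: dirty line, off-cache op
        return false;
    }

    void markInFlight(uint64_t addr)    { inFlight_.insert(addr); }
    void completeRequest(uint64_t addr) { inFlight_.erase(addr); }  // hazard cleared
    void markDirty(uint64_t addr)       { lines_[addr] = {true, true}; }

private:
    std::unordered_set<uint64_t> inFlight_;
    std::unordered_map<uint64_t, CacheLineState> lines_;
};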
In an example, the dispatch interface414can comprise a functional block of the HTP accelerator400for handling hardware-based thread management. That is, the dispatch interface414can manage dispatch of work to the HTP core402or other accelerators. Non-HTP accelerators, however, are generally not able to dispatch work. In an example, work dispatched from a host can use dispatch queues that reside in, e.g., host main memory (e.g., DRAM-based memory). Work dispatched from the HTP accelerator400, on the other hand, can use dispatch queues that reside in SRAM, such as within the dispatches for the target HTP accelerator400within a particular node. In an example, the HTP core402can comprise one or more cores that execute instructions on behalf of threads. That is, the HTP core402can include an instruction processing block. The HTP core402can further include, or can be coupled to, the thread controller412. The thread controller412can provide thread control and state for each active thread within the HTP core402. The data cache406can include cache for a host processor (e.g., for local and remote memory-compute devices, including for the HTP core402), and the instruction cache404can include cache for use by the HTP core402. In an example, the data cache406can be configured for read and write operations, and the instruction cache404can be configured for read only operations. In an example, the data cache406is a small cache provided per hardware thread. The data cache406can temporarily store data for use by the owning thread. The data cache406can be managed by hardware or software in the HTP accelerator400. For example, hardware can be configured to automatically allocate or evict lines as needed, as load and store operations are executed by the HTP core402. Software, such as using RISC-V instructions, can determine which memory accesses should be cached, and when lines should be invalidated or written back to other memory locations. Data caching on the HTP accelerator400has various benefits, including making larger accesses more efficient for the memory controller, allowing an executing thread to avoid stalling. However, there are situations when using the cache causes inefficiencies. An example includes accesses where data is accessed only once, and causes thrashing of the cache lines. To help address this problem, the HTP accelerator400can use a set of custom load instructions to force a load instruction to check for a cache hit, and on a cache miss to issue a memory request for the requested operand and not put the obtained data in the data cache406. The HTP accelerator400thus includes various different types of load instructions, including non-cached and cache line loads. The non-cached load instructions use the cached data if dirty data is present in the cache. The non-cached load instructions ignore clean data in the cache, and do not write accessed data to the data cache. For cache line load instructions, the complete data cache line (e.g., comprising 64 bytes) can be loaded from memory into the data cache406, and can load the addressed memory into a specified register. These loads can use the cached data if clean or dirty data is in the data cache406. If the referenced memory location is not in the data cache406, then the entire cache line can be accessed from memory. 
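The two load flavors described above can be summarized with the following C++ sketch, in which a non-cached load uses cached data only when it is dirty and does not allocate on a miss, while a cache line load allocates the full line and uses clean or dirty cached data on a hit. The cache model and function names are illustrative assumptions.

#include <cstdint>
#include <unordered_map>

struct Line { uint64_t data = 0; bool dirty = false; };

class HtpDataCache {
public:
    uint64_t loadNonCached(uint64_t addr) {
        auto it = lines_.find(addr);
        if (it != lines_.end() && it->second.dirty)
            return it->second.data;          // dirty data must be used
        return readMemory(addr);             // clean data ignored; no allocation
    }

    uint64_t loadCacheLine(uint64_t addr) {
        auto it = lines_.find(addr);
        if (it == lines_.end())              // miss: fetch and allocate whole line
            it = lines_.emplace(addr, Line{readMemory(addr), false}).first;
        return it->second.data;              // hit: clean or dirty data is usable
    }

private:
    uint64_t readMemory(uint64_t addr) { (void)addr; return 0; }  // stub backing read
    std::unordered_map<uint64_t, Line> lines_;
};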
Use of the cache line load instructions can reduce cache misses when sequential memory locations are being referenced (such as memory copy operations) but can also waste memory and bandwidth at the NOC interface416if the referenced memory data is not used. In an example, the HTP accelerator400includes a custom store instruction that is non-cached. The non-cached store instruction can help avoid thrashing the data cache406with write data that is not sequentially written to memory. In an example, the HTP accelerator400further includes a translation block408. The translation block408can include a virtual-to-physical translation block for local memory of a memory-compute device. For example, a host processor, such as in the HTP core402, can execute a load or store instruction, and the instruction can generate a virtual address. The virtual address can be translated to a physical address of the host processor, such as using a translation table from the translation block408. The memory interface410, for example, can include an interface between the HTP core402and the NOC interface416. FIG.5illustrates an example of a representation of a hybrid threading fabric (HTF), or HTF500, of a memory-compute device, according to an embodiment. In an example, the HTF500can include or comprise the HTF142from the example ofFIG.1. The HTF500is a coarse-grained, reconfigurable compute fabric that can be optimized for high-level language operand types and operators (e.g., using C/C++ or other high-level language). In an example, the HTF500can include configurable, n-bit wide (e.g., 512-bit wide) data paths that interconnect hardened SIMD arithmetic units. In an example, the HTF500comprises an HTF cluster502that includes multiple HTF tiles, including an example tile504, or Tile N. Each HTF tile can comprise one or more compute elements with local tile memory or compute element memory and arithmetic functions. For example, each tile can include a compute pipeline with support for integer and floating-point operations. In an example, the data path, compute elements, and other infrastructure can be implemented as hardened IP to provide maximum performance while minimizing power consumption and reconfiguration time. In the example ofFIG.5, the tiles in the HTF cluster502are coupled using a synchronous fabric (SF), or synchronous compute fabric, to perform synchronous dataflows. The synchronous fabric can provide communication between a particular tile and its neighboring tiles in the HTF cluster502. Each HTF cluster502can further include an asynchronous fabric (AF), or asynchronous compute fabric, that can provide communication among, e.g., the tiles in the cluster, the memory interfaces in the cluster, and a dispatch interface508in the cluster. In an example, the synchronous fabric can exchange messages that include data and control information. The control information can include, among other things, instruction RAM address information or a thread identifier. The control information can be used to set up a data path, and a data message field can be selected as a source for the path. Generally, the control fields can be provided or received earlier, such that they can be used to configure the data path. For example, to help minimize any delay through the synchronous flow pipeline in a tile, the control information can arrive at a tile a few clock cycles before the data field. Various registers can be provided to help coordinate dataflow timing in the pipeline. 
In an example, the control information can comprise a bitmask portion of data communicated throughout the HTF cluster502. In an example, compute elements or tiles in a coarse-grained, reconfigurable architecture or fabric can be arranged in a two-dimensional array, such as with data communication and connectivity between nearest neighbors. In a two-dimensional array of compute elements, each element can have up to four connections to its nearest neighbors. However, in such a two-dimensional array, corner compute elements connect to two neighbors and other edge compute elements connect to three neighbors. For arrays with small numbers of compute elements, a relatively large number of the elements are therefore treated as special cases when programming the array. For example, in a sixteen-element array, arranged as a two-dimensional 4×4 array, there are four corners (25%) with two-neighbor connections, eight edges (50%) with three-neighbor connections, and four fully-connected elements (25%) with four-neighbor connections. The present inventors have recognized, however, that compute elements arranged in a one-dimensional array with connections to nearest neighbors and to neighbors one hop away can offer better connectivity for applications, and more flexibility to kernel compilers, than is available using a two-dimensional array. The present inventors have further recognized that providing a passthrough channel in array elements can further enhance connectivity between non-adjacent tiles in the array. For example, a one-dimensional array can have more compute elements or tiles that are fully connected (e.g., with four connections per tile). In an example of a sixteen-element array, arranged as a one-dimensional 1×16 array, there are two corners (12.5%) with single-neighbor connections, two edges (12.5%) with two-neighbor connections, and twelve fully connected elements (75%). In other words, using a one-dimensional tile array provides a configuration where 75% of the tiles are fully connected while 25% are special cases (i.e., have reduced connectivity). Accordingly, the one-dimensional array can provide a compiler or scheduler more flexibility when assigning operations to each tile. A pictorial example of a 1×16 array is included atFIG.9. The present inventors have further recognized that a loop-type array can be used to further enhance connectivity and reduce special connectivity cases. In an example of a sixteen-element array, arranged as a 2×8 array, all edges can be removed while maintaining relatively short connections between tiles. In this example, ends of each column can be connected to the other column to thereby remove all edges or corners. A pictorial example of a 2×8 array is included atFIG.10. In the example ofFIG.5, the tiles comprising the HTF cluster502are linearly arranged, such as in a 1×16 array, and each tile in the cluster can be coupled to one or multiple other tiles in the HTF cluster502. In the example ofFIG.5, the example tile504, or Tile N, is coupled to four other tiles, including to a base tile510(e.g., Tile N−2) via the port labeled SF IN N−2, to an adjacent tile512(e.g., Tile N−1) via the port labeled SF IN N−1, and to a Tile N+1 via the port labeled SF IN N+1 and to a Tile N+2 via the port labeled SF IN N+2. A tile can include a base portion, such as can include hardware that is configured to initiate threads or otherwise act as a flow controller.
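The connectivity counts quoted above can be reproduced with the following short C++ program, which tallies how many of sixteen tiles reach the full four connections in a 4×4 mesh with nearest-neighbor links, in a 1×16 line with links at distance one and two, and in a 2×8 loop formed by closing the one-dimensional array into a ring. The helper functions are illustrative and are not part of any hardware description.

#include <cstdio>
#include <cstdlib>
#include <initializer_list>

// Number of nearest-neighbor links for tile (r, c) in a 4x4 mesh.
static int mesh4x4Links(int r, int c) {
    return (r > 0) + (r < 3) + (c > 0) + (c < 3);
}

// Number of links for tile i in a 1x16 line with neighbors at distance 1 and 2.
static int line16Links(int i) {
    int links = 0;
    for (int d : {-2, -1, 1, 2})
        if (i + d >= 0 && i + d <= 15) ++links;
    return links;
}

int main() {
    int full2d = 0, full1d = 0;
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            full2d += (mesh4x4Links(r, c) == 4);
    for (int i = 0; i < 16; ++i)
        full1d += (line16Links(i) == 4);
    // On a 16-tile ring, every tile sees two adjacent and two one-hop neighbors.
    int fullRing = 16;
    std::printf("4x4 mesh:  %d/16 fully connected\n", full2d);    // 4  (25%)
    std::printf("1x16 line: %d/16 fully connected\n", full1d);    // 12 (75%)
    std::printf("2x8 loop:  %d/16 fully connected\n", fullRing);  // 16 (100%)
    return EXIT_SUCCESS;
}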
The example tile504can be coupled to the same or other tiles via respective output ports, such as those labeled SF OUT N−1, SF OUT N−2, SF OUT N+1, and SF OUT N+2. In this example, the ordered list of names for the various tiles are notional indications of the positions of the tiles. In other examples, the tiles comprising the HTF cluster502can be arranged in a grid, loop, or other configuration, with each tile similarly coupled to one or several of its nearest neighbors in the grid. Tiles that are provided at an edge of a cluster can optionally have fewer connections to neighboring tiles. For example, Tile N−2, or the base tile510in the example ofFIG.5, can be coupled only to the adjacent tile512(Tile N−1) and to the example tile504(Tile N). Fewer or additional inter-tile connections can similarly be used. In the example ofFIG.5, the example tile504can include a passthrough channel that can provide a low-latency communication datapath, comprising a portion of the synchronous fabric, between non-adjacent tiles. That is, non-adjacent tiles in an array can be effectively directly coupled using a passthrough channel of one or more intervening tiles. In an illustrative example including Tile N−1, Tile N, and Tile N+1, a passthrough channel in Tile N can be used to couple synchronous fabric inputs and outputs of Tile N−1 and Tile N+1. The HTF cluster502can further include memory interface modules, including a first memory interface module506. The memory interface modules can couple the HTF cluster502to a NOC, such as the first NOC118. In an example, the memory interface modules can allow tiles within a cluster to make requests to other locations in a memory-compute system, such as in the same or different node in the system. That is, the representation of the HTF500can comprise a portion of a larger fabric that can be distributed across multiple nodes, such as with one or more HTF tiles or HTF clusters at each of the nodes. Requests can be made between tiles or nodes within the context of the larger fabric. In an example, each tile in the HTF cluster502can include one or more tile memories. Each tile memory can have the same width as the data path (e.g., 512 bits) and can have a specified depth, such as in a range of 512 to 1024 elements. The tile memories can be used to store data that supports data path operations. The stored data can include constants loaded as part of a kernel's cluster configuration, for example, or can include variables calculated as part of the data flow. In an example, the tile memories can be written from the asynchronous fabric as a data transfer from another synchronous flow, or can include a result of a load operation such as initiated by another synchronous flow. The tile memory can be read via synchronous data path instruction execution in the synchronous flow. In an example, each tile in an HTF cluster502can have a dedicated instruction RAM (INST RAM). In an example of an HTF cluster502with sixteen tiles, and respective instruction RAM instances with sixty-four entries, the cluster can allow algorithms to be mapped with up to 1024 multiply-shift and/or ALU operations. The various tiles can optionally be pipelined together, such as using the synchronous fabric, to allow data flow compute with minimal memory access, thus minimizing latency and reducing power consumption. In an example, the asynchronous fabric can allow memory references to proceed in parallel with computation, thereby providing more efficient streaming kernels. 
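As a minimal sketch of the passthrough channel described above, the following C++ fragment models an intervening tile forwarding a synchronous-fabric message between two non-adjacent tiles without consuming it in its own compute pipeline. The message fields and port handling are assumptions made for illustration.

#include <cstdint>
#include <optional>

struct SfMessage { uint64_t data; uint32_t control; };

class TilePassthrough {
public:
    // Accept a message on the input port from the upstream tile (e.g., Tile N-1).
    void acceptFromUpstream(const SfMessage& msg) { pending_ = msg; }

    // Present the message, unmodified, on the output port toward the
    // downstream tile (e.g., Tile N+1) on the next cycle.
    std::optional<SfMessage> presentToDownstream() {
        std::optional<SfMessage> out = pending_;
        pending_.reset();                  // low-latency, single-beat forwarding
        return out;
    }

private:
    std::optional<SfMessage> pending_;
};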
In an example, the various tiles can include built-in support for loop-based constructs, and can support nested looping kernels. The synchronous fabric can allow multiple tiles to be pipelined, such as without a need for data queuing. Tiles that participate in a synchronous domain or synchronous flow can, for example, act as a single pipelined data path. A first or base tile (e.g., Tile N−2, in the example ofFIG.5) of a synchronous flow can initiate a thread of work through the pipelined tiles. The base tile or flow controller can be responsible for starting work on a predefined cadence referred to herein as a Spoke Count. For example, if the Spoke Count is 3, then the base tile can initiate work, or a thread, every third clock cycle. In an example, a synchronous domain, or elements configured to perform a synchronous flow, comprises a set of connected tiles in the HTF cluster502. Execution of a thread can begin at the domain's base tile and can progress from the base or flow controller, via the synchronous fabric, to other tiles or compute elements that are a part of the same flow, or in the same domain. The flow controller can provide the instruction to be executed for the first tile. The first tile can, by default, provide the same instruction for the other connected tiles to execute. However, in some examples, the base tile, or a subsequent tile, can conditionally specify or use an alternative instruction. The alternative instruction can be chosen by having the tile's data path produce a Boolean conditional value, and then using the Boolean value to choose between an instruction set of the current tile and the alternate instruction. The asynchronous fabric can be used to perform operations that occur asynchronously relative to a synchronous flow. Each tile in the HTF cluster502can include an interface to the asynchronous fabric. The inbound interface can include, for example, a FIFO buffer or queue (e.g., AF IN QUEUE) to provide storage for messages that cannot be immediately processed. Similarly, the outbound interface of the asynchronous fabric can include a FIFO buffer or queue (e.g., AF OUT QUEUE) to provide storage for messages that cannot be immediately sent out. In an example, messages in the asynchronous fabric can be classified as data messages or control messages. Data messages can include a SIMD width data value that is written to either tile memory 0 (MEM_0) or memory 1 (MEM_1). Control messages can be configured to control thread creation, to free resources, or to issue external memory references. A tile in the HTF cluster502can perform various compute operations for the HTF. The compute operations can be performed by configuring the data path within the tile and/or compute elements thereof. In an example, a tile includes two functional blocks that perform the compute operations for the tile: a Multiply and Shift Operation block (MS OP) and an Arithmetic, Logical, and Bit Operation block (ALB OP). The two blocks can be configured to perform pipelined operations such as a Multiply and Add, or a Shift and Add, among others. Results from one or more of the functional blocks, or information from the asynchronous queue, can be stored or processed at an output register assembly514. In an example, each instance of a memory-compute device in a system can have a complete supported instruction set for its operator blocks (e.g., MS OP and ALB OP). In this case, binary compatibility can be realized across all devices in the system.
However, in some examples, it can be helpful to maintain a base set of functionality and optional instruction set classes, such as to meet various design tradeoffs, such as die size. The approach can be similar to how the RISC-V instruction set has a base set and multiple optional instruction subsets. In an example, the example tile504can include a Spoke RAM. The Spoke RAM can be used to specify which input (e.g., from among the four SF tile inputs and the base input) is the primary input for each clock cycle. The Spoke RAM read address input can originate at a counter that counts from zero to Spoke Count minus one. In an example, different Spoke Counts can be used on different tiles, such as within the same HTF cluster502, to allow a number of slices, or unique tile instances, used by an inner loop to determine the performance of a particular application or instruction set. In an example, the Spoke RAM can specify when a synchronous input is to be written to a tile memory, for instance when multiple inputs for a particular tile instruction are used and one of the inputs arrives before the others. The early-arriving input can be written to the tile memory and can be later read when all of the inputs are available. In this example, the tile memory can be accessed as a FIFO memory, and FIFO read and write pointers can be stored in a register-based memory region or structure in the tile memory. FIG.6AandFIG.6Billustrate generally an example of a chiplet system that can be used to implement one or more aspects of the CNM system102. As similarly mentioned above, a node in the CNM system102, or a device within a node in the CNM system102, can include a chiplet-based architecture or compute-near-memory (CNM) chiplet. A packaged memory-compute device can include, for example, one, two, or four CNM chiplets. The chiplets can be interconnected using high-bandwidth, low-latency interconnects such as using a CPI interface. Generally, a chiplet system is made up of discrete modules (each a “chiplet”) that are integrated on an interposer and, in many examples, are interconnected as desired through one or more established networks to provide a system with the desired functionality. The interposer and included chiplets can be packaged together to facilitate interconnection with other components of a larger system. Each chiplet can include one or more individual integrated circuits (ICs), or “chips,” potentially in combination with discrete circuit components, and can be coupled to a respective substrate to facilitate attachment to the interposer. Most or all chiplets in a system can be individually configured for communication through established networks. The configuration of chiplets as individual modules of a system is distinct from such a system being implemented on single chips that contain distinct device blocks (e.g., intellectual property (IP) blocks) on one substrate (e.g., single die), such as a system-on-a-chip (SoC), or multiple discrete packaged devices integrated on a printed circuit board (PCB). In general, chiplets provide better performance (e.g., lower power consumption, reduced latency, etc.) than discrete packaged devices, and chiplets provide greater production benefits than single die chips. These production benefits can include higher yields or reduced development costs and time. Chiplet systems can include, for example, one or more application (or processor) chiplets and one or more support chiplets. 
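The Spoke RAM selection described above can be sketched in C++ as a counter that wraps at the Spoke Count and indexes a small table naming the primary input for each clock cycle. The entry encoding and input names below are assumptions for illustration only.

#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Possible primary inputs for a tile: the base input or one of the four
// synchronous-fabric inputs (from tiles N-2, N-1, N+1, N+2).
enum class TileInput { Base, SfInNm2, SfInNm1, SfInNp1, SfInNp2 };

class SpokeRam {
public:
    explicit SpokeRam(std::vector<TileInput> entries) : entries_(std::move(entries)) {}

    // Advance one clock cycle and return the selected primary input.
    TileInput step() {
        TileInput sel = entries_[counter_];
        counter_ = (counter_ + 1) % entries_.size();   // counts 0 .. SpokeCount-1
        return sel;
    }

private:
    std::vector<TileInput> entries_;   // one entry per spoke
    std::size_t counter_ = 0;
};

// Usage sketch with a Spoke Count of 3: cycle 0 takes the base input, cycles 1
// and 2 take inputs from the two nearest upstream tiles, then the pattern repeats.
// SpokeRam spoke({TileInput::Base, TileInput::SfInNm1, TileInput::SfInNm2});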
Here, the distinction between application and support chiplets is simply a reference to the likely design scenarios for the chiplet system. Thus, for example, a synthetic vision chiplet system can include, by way of example only, an application chiplet to produce the synthetic vision output along with support chiplets, such as a memory controller chiplet, a sensor interface chiplet, or a communication chiplet. In a typical use case, the synthetic vision designer can design the application chiplet and source the support chiplets from other parties. Thus, the design expenditure (e.g., in terms of time or complexity) is reduced by avoiding the design and production of functionality embodied in the support chiplets. Chiplets also support the tight integration of IP blocks that can otherwise be difficult, such as those manufactured using different processing technologies or using different feature sizes (or utilizing different contact technologies or spacings). Thus, multiple ICs or IC assemblies, with different physical, electrical, or communication characteristics can be assembled in a modular manner to provide an assembly with various desired functionalities. Chiplet systems can also facilitate adaptation to suit needs of different larger systems into which the chiplet system will be incorporated. In an example, ICs or other assemblies that can be optimized for the power, speed, or heat generation of a specific function, as can happen with sensors, can be integrated with other devices more easily than attempting to do so on a single die. Additionally, by reducing the overall size of the die, the yield for chiplets tends to be higher than that of more complex, single die devices. FIG.6AandFIG.6Billustrate generally an example of a chiplet system, according to an embodiment.FIG.6Ais a representation of the chiplet system602mounted on a peripheral board604, which can be connected to a broader computer system by a peripheral component interconnect express (PCIe), for example. The chiplet system602includes a package substrate606, an interposer608, and four chiplets, an application chiplet610, a host interface chiplet612, a memory controller chiplet614, and a memory device chiplet616. Other systems can include many additional chiplets to provide additional functionalities as will be apparent from the following discussion. The package of the chiplet system602is illustrated with a lid or cover618, though other packaging techniques and structures for the chiplet system can be used.FIG.6Bis a block diagram labeling the components in the chiplet system for clarity. The application chiplet610is illustrated as including a chiplet system NOC620to support a chiplet network622for inter-chiplet communications. In example embodiments, the chiplet system NOC620can be included on the application chiplet610. In an example, the first NOC118from the example ofFIG.1can be defined in response to selected support chiplets (e.g., host interface chiplet612, memory controller chiplet614, and memory device chiplet616), thus enabling a designer to select an appropriate number of chiplet network connections or switches for the chiplet system NOC620. In an example, the chiplet system NOC620can be located on a separate chiplet, or within the interposer608. In examples as discussed herein, the chiplet system NOC620implements a chiplet protocol interface (CPI) network. In an example, the chiplet system602can include or comprise a portion of the first memory-compute node104or the first memory-compute device112.
That is, the various blocks or components of the first memory-compute device112can include chiplets that can be mounted on the peripheral board604, the package substrate606, and the interposer608. The interface components of the first memory-compute device112can comprise, generally, the host interface chiplet612, the memory and memory control-related components of the first memory-compute device112can comprise, generally, the memory controller chiplet614, the various accelerator and processor components of the first memory-compute device112can comprise, generally, the application chiplet610or instances thereof, and so on. The CPI interface, such as can be used for communication between or among chiplets in a system, is a packet-based network that supports virtual channels to enable a flexible and high-speed interaction between chiplets. CPI enables bridging from intra-chiplet networks to the chiplet network622. For example, the Advanced eXtensible Interface (AXI) is a widely used specification to design intra-chip communications. AXI specifications, however, cover a great variety of physical design options, such as the number of physical channels, signal timing, power, etc. Within a single chip, these options are generally selected to meet design goals, such as power consumption, speed, etc. However, to achieve the flexibility of the chiplet system, an adapter, such as CPI, is used to interface between the various AXI design options that can be implemented in the various chiplets. By enabling a physical channel to virtual channel mapping and encapsulating time-based signaling with a packetized protocol, CPI bridges intra-chiplet networks across the chiplet network622. CPI can use a variety of different physical layers to transmit packets. The physical layer can include simple conductive connections, or can include drivers to increase the voltage, or otherwise facilitate transmitting the signals over longer distances. An example of one such a physical layer can include the Advanced Interface Bus (AIB), which in various examples, can be implemented in the interposer608. AIB transmits and receives data using source synchronous data transfers with a forwarded clock. Packets are transferred across the AIB at single data rate (SDR) or dual data rate (DDR) with respect to the transmitted clock. Various channel widths are supported by AIB. The channel can be configured to have a symmetrical number of transmit (TX) and receive (RX) input/outputs (I/Os), or have a non-symmetrical number of transmitters and receivers (e.g., either all transmitters or all receivers). The channel can act as an AIB principal or subordinate depending on which chiplet provides the principal clock. AIB I/O cells support three clocking modes: asynchronous (i.e. non-clocked), SDR, and DDR. In various examples, the non-clocked mode is used for clocks and some control signals. The SDR mode can use dedicated SDR only I/O cells, or dual use SDR/DDR I/O cells. In an example, CPI packet protocols (e.g., point-to-point or routable) can use symmetrical receive and transmit I/O cells within an AIB channel. The CPI streaming protocol allows more flexible use of the AIB I/O cells. In an example, an AIB channel for streaming mode can configure the I/O cells as all TX, all RX, or half TX and half RX. CPI packet protocols can use an AIB channel in either SDR or DDR operation modes. In an example, the AIB channel is configured in increments of 80 I/O cells (i.e. 40 TX and 40 RX) for SDR mode and 40 I/O cells for DDR mode. 
The CPI streaming protocol can use an AIB channel in either SDR or DDR operation modes. Here, in an example, the AIB channel is in increments of 40 I/O cells for both SDR and DDR modes. In an example, each AIB channel is assigned a unique interface identifier. The identifier is used during CPI reset and initialization to determine paired AIB channels across adjacent chiplets. In an example, the interface identifier is a 20-bit value comprising a seven-bit chiplet identifier, a seven-bit column identifier, and a six-bit link identifier. The AIB physical layer transmits the interface identifier using an AIB out-of-band shift register. The 20-bit interface identifier is transferred in both directions across an AIB interface using bits 32-51 of the shift registers. AIB defines a stacked set of AIB channels as an AIB channel column. An AIB channel column has some number of AIB channels, plus an auxiliary channel. The auxiliary channel contains signals used for AIB initialization. All AIB channels (other than the auxiliary channel) within a column are of the same configuration (e.g., all TX, all RX, or half TX and half RX, as well as having the same number of data I/O signals). In an example, AIB channels are numbered in continuous increasing order starting with the AIB channel adjacent to the AUX channel. The AIB channel adjacent to the AUX is defined to be AIB channel zero. Generally, CPI interfaces on individual chiplets can include serialization-deserialization (SERDES) hardware. SERDES interconnects work well for scenarios in which high-speed signaling with low signal count is desirable. SERDES, however, can result in additional power consumption and longer latencies for multiplexing and demultiplexing, error detection or correction (e.g., using block level cyclic redundancy checking (CRC)), link-level retry, or forward error correction. However, when low latency or energy consumption is a primary concern for ultra-short reach, chiplet-to-chiplet interconnects, a parallel interface with clock rates that allow data transfer with minimal latency can be utilized. CPI includes elements to minimize both latency and energy consumption in these ultra-short reach chiplet interconnects. For flow control, CPI employs a credit-based technique. A recipient, such as the application chiplet610, provides a sender, such as the memory controller chiplet614, with credits that represent available buffers. In an example, a CPI recipient includes a buffer for each virtual channel for a given time-unit of transmission. Thus, if the CPI recipient supports five messages in time and a single virtual channel, the recipient has five buffers arranged in five rows (e.g., one row for each unit time). If four virtual channels are supported, then the recipient has twenty buffers arranged in five rows. Each buffer holds the payload of one CPI packet. When the sender transmits to the recipient, the sender decrements the available credits based on the transmission. Once all credits for the recipient are consumed, the sender stops sending packets to the recipient. This ensures that the recipient always has an available buffer to store the transmission. As the recipient processes received packets and frees buffers, the recipient communicates the available buffer space back to the sender. This credit return can then be used by the sender to allow transmitting of additional information.
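The credit-based flow control described above can be sketched in C++ as follows, with the recipient advertising one credit per buffer per virtual channel, the sender decrementing a credit on each packet and stopping at zero, and freed buffers returning credits. The buffer depth and channel count follow the five-buffer, four-virtual-channel example in the text; the class shown is a simplified illustration rather than the CPI mechanism itself.

#include <vector>

class CreditedSender {
public:
    CreditedSender(unsigned virtualChannels, unsigned buffersPerChannel)
        : credits_(virtualChannels, buffersPerChannel) {}

    // Returns true if the packet may be sent on the given virtual channel.
    bool trySend(unsigned vc) {
        if (credits_[vc] == 0) return false;   // all credits consumed: hold the packet
        --credits_[vc];                        // one buffer reserved at the recipient
        return true;
    }

    // Called when the recipient reports a freed buffer for this channel.
    void returnCredit(unsigned vc) { ++credits_[vc]; }

private:
    std::vector<unsigned> credits_;            // available recipient buffers per VC
};

// Usage sketch: CreditedSender sender(4, 5);  // 4 VCs x 5 buffers = 20 buffers
// if (sender.trySend(0)) { /* transmit packet */ }  ...  sender.returnCredit(0);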
The example ofFIG.6Aincludes a chiplet mesh network624that uses a direct, chiplet-to-chiplet technique without a need for the chiplet system NOC620. The chiplet mesh network624can be implemented in CPI, or another chiplet-to-chiplet protocol. The chiplet mesh network624generally enables a pipeline of chiplets where one chiplet serves as the interface to the pipeline while other chiplets in the pipeline interface only with themselves. Additionally, dedicated device interfaces, such as one or more industry standard memory interfaces (such as, for example, synchronous memory interfaces, such as DDR5, DDR6), can be used to connect a device to a chiplet. Connection of a chiplet system or individual chiplets to external devices (such as a larger system) can be through a desired interface (for example, a PCIe interface). Such an external interface can be implemented, in an example, through the host interface chiplet612, which, in the depicted example, provides a PCIe interface external to the chiplet system. Such dedicated chiplet interfaces626are generally employed when a convention or standard in the industry has converged on such an interface. The illustrated example of a Double Data Rate (DDR) interface connecting the memory controller chiplet614to a dynamic random access memory (DRAM) memory device chiplet616is just such an industry convention. Of the variety of possible support chiplets, the memory controller chiplet614is likely present in the chiplet system due to the near omnipresent use of storage for computer processing as well as the sophisticated state of the art for memory devices. Thus, using memory device chiplets616and memory controller chiplets614produced by others gives chiplet system designers access to robust products by sophisticated producers. Generally, the memory controller chiplet614provides a memory device-specific interface to read, write, or erase data. Often, the memory controller chiplet614can provide additional features, such as error detection, error correction, maintenance operations, or atomic operator execution. For some types of memory, maintenance operations tend to be specific to the memory device chiplet616, such as garbage collection in NAND flash or storage class memories, or temperature adjustments (e.g., cross temperature management) in NAND flash memories. In an example, the maintenance operations can include logical-to-physical (L2P) mapping or management to provide a level of indirection between the physical and logical representation of data. In other types of memory, for example DRAM, some memory operations, such as refresh, can be controlled by a host processor or by a memory controller at some times, and at other times controlled by the DRAM memory device, or by logic associated with one or more DRAM devices, such as an interface chip (in an example, a buffer). An atomic operator is a data manipulation that, for example, can be performed by the memory controller chiplet614. In other chiplet systems, the atomic operators can be performed by other chiplets. For example, an atomic operator of "increment" can be specified in a command by the application chiplet610, the command including a memory address and possibly an increment value. Upon receiving the command, the memory controller chiplet614retrieves a number from the specified memory address, increments the number by the amount specified in the command, and stores the result.
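The increment example described above can be sketched in C++ as follows, with the command carrying a memory address and an increment value, and the controller reading the current value, adding the increment, storing the result, and reporting success. The memory model and response type are simplified assumptions for illustration.

#include <cstdint>
#include <unordered_map>

struct AtomicIncrementCommand { uint64_t address; uint64_t increment; };
struct AtomicResponse { bool success; uint64_t newValue; };

class MemoryControllerModel {
public:
    AtomicResponse execute(const AtomicIncrementCommand& cmd) {
        uint64_t value = memory_[cmd.address];     // retrieve number at address
        value += cmd.increment;                    // increment by commanded amount
        memory_[cmd.address] = value;              // store the result
        return {true, value};                      // indicate command success
    }

private:
    std::unordered_map<uint64_t, uint64_t> memory_;   // stand-in for DRAM contents
};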
Upon a successful completion, the memory controller chiplet614provides an indication of the command success to the application chiplet610. Atomic operators avoid transmitting the data across the chiplet mesh network624, resulting in lower latency execution of such commands. Atomic operators can be classified as built-in atomics or programmable (e.g., custom) atomics. Built-in atomics are a finite set of operations that are immutably implemented in hardware. Programmable atomics are small programs that can execute on a programmable atomic unit (PAU) (e.g., a custom atomic unit (CAU)) of the memory controller chiplet614. The memory device chiplet616can be, or include any combination of, volatile memory devices or non-volatile memories. Examples of volatile memory devices include, but are not limited to, random access memory (RAM), such as DRAM, synchronous DRAM (SDRAM), or graphics double data rate type 6 SDRAM (GDDR6 SDRAM), among others. Examples of non-volatile memory devices include, but are not limited to, negative-and (NAND)-type flash memory, storage class memory (e.g., phase-change memory or memristor based technologies), ferroelectric RAM (FeRAM), among others. The illustrated example includes the memory device chiplet616as a chiplet; however, the device can reside elsewhere, such as in a different package on the peripheral board604. For many applications, multiple memory device chiplets can be provided. In an example, these memory device chiplets can each implement one or multiple storage technologies, and may include integrated compute hosts. In an example, a memory chiplet can include multiple stacked memory die of different technologies, for example one or more static random access memory (SRAM) devices stacked or otherwise in communication with one or more dynamic random access memory (DRAM) devices. In an example, the memory controller chiplet614can serve to coordinate operations between multiple memory chiplets in the chiplet system602, for example, to use one or more memory chiplets in one or more levels of cache storage, and to use one or more additional memory chiplets as main memory. The chiplet system602can include multiple memory controller chiplet614instances, as can be used to provide memory control functionality for separate hosts, processors, sensors, networks, etc. A chiplet architecture, such as in the illustrated system, offers advantages in allowing adaptation to different memory storage technologies and different memory interfaces, through updated chiplet configurations, such as without requiring redesign of the remainder of the system structure.
Each of the first through fourth chiplets can comprise instances of the same, or substantially the same, components or modules. For example, the chiplets can each include respective instances of an HTP accelerator, an HTF accelerator, and memory controllers for accessing internal or external memories. In the example ofFIG.7, the first chiplet702includes a first NOC hub edge714coupled to the CNM NOC hub710. The other chiplets in the first CNM package700similarly include NOC hub edges or endpoints. The switches in the NOC hub edges facilitate intra-chiplet, or intra-chiplet-system, communications via the CNM NOC hub710. The first chiplet702can further include one or multiple memory controllers716. The memory controllers716can correspond to respective different NOC endpoint switches interfaced with the first NOC hub edge714. In an example, the memory controller716comprises the memory controller chiplet614or comprises the memory controller130, or comprises the memory subsystem200, or other memory-compute implementation. The memory controllers716can be coupled to respective different memory devices, for example including a first external memory module712aor a second external memory module712b. The external memory modules can include, e.g., GDDR6 memories that can be selectively accessed by the respective different chiplets in the system. The first chiplet702can further include a first HTP chiplet718and second HTP chiplet720, such as coupled to the first NOC hub edge714via respective different NOC endpoint switches. The HTP chiplets can correspond to HTP accelerators, such as the HTP140from the example ofFIG.1, or the HTP accelerator400from the example ofFIG.4. The HTP chiplets can communicate with the HTF chiplet722. The HTF chiplet722can correspond to an HTF accelerator, such as the HTF142from the example ofFIG.1, or the HTF500from the example ofFIG.5. The CNM NOC hub710can be coupled to NOC hub instances in other chiplets or other CNM packages by way of various interfaces and switches. For example, the CNM NOC hub710can be coupled to a CPI interface by way of multiple different NOC endpoints on the first CNM package700. Each of the multiple different NOC endpoints can be coupled, for example, to a different node outside of the first CNM package700. In an example, the CNM NOC hub710can be coupled to other peripherals, nodes, or devices using CTCPI or other, non-CPI protocols. For example, the first CNM package700can include a PCIe scale fabric interface (PCIE/SFI) or a CXL interface (CXL) configured to interface the first CNM package700with other devices. In an example, devices to which the first CNM package700is coupled using the various CPI, PCIe, CXL, or other fabric, can make up a common global address space. In the example ofFIG.7, the first CNM package700includes a host interface724(HIF) and a host processor (R5). The host interface724can correspond to, for example, the HIF120from the example ofFIG.1. The host processor, or R5, can correspond to the internal host processor122from the example ofFIG.1. The host interface724can include a PCI interface for coupling the first CNM package700to other external devices or systems. In an example, work can be initiated on the first CNM package700, or a tile cluster within the first CNM package700, by the host interface724. For example, the host interface724can be configured to command individual HTF tile clusters, such as among the various chiplets in the first CNM package700, into and out of power/clock gate modes. 
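As a small illustration of the host-interface role just described, the following Python sketch shows a hypothetical host interface model that commands tile clusters into and out of a power/clock gate mode and initiates work on an ungated cluster. The class and method names are illustrative assumptions, not an API of the first CNM package700.

class TileClusterModel:
    def __init__(self, cluster_id: int):
        self.cluster_id = cluster_id
        self.gated = True  # assume clusters start power/clock gated

class HostInterfaceModel:
    def __init__(self, clusters):
        self.clusters = {c.cluster_id: c for c in clusters}

    def set_gate(self, cluster_id: int, gated: bool) -> None:
        # Command an individual tile cluster into or out of gate mode.
        self.clusters[cluster_id].gated = gated

    def initiate_work(self, cluster_id: int, kernel_name: str) -> str:
        cluster = self.clusters[cluster_id]
        if cluster.gated:
            self.set_gate(cluster_id, False)  # wake the cluster first
        return f"dispatched {kernel_name} to cluster {cluster_id}"

# Example use: dispatch work to cluster 0, then gate it again when done.
hif = HostInterfaceModel([TileClusterModel(0), TileClusterModel(1)])
print(hif.initiate_work(0, "example_kernel"))
hif.set_gate(0, True)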
FIG.8illustrates an example tiling of memory-compute devices, according to an embodiment. InFIG.8, a tiled chiplet example800includes four instances of different compute-near-memory clusters of chiplets, where the clusters are coupled together. Each instance of a compute-near-memory chiplet can itself include one or more constituent chiplets (e.g., host processor chiplets, memory device chiplets, interface chiplets, and so on). The tiled chiplet example800includes, as one or multiple of its compute-near-memory (CNM) clusters, instances of the first CNM package700from the example ofFIG.7. For example, the tiled chiplet example800can include a first CNM cluster802that includes a first chiplet810(e.g., corresponding to the first chiplet702), a second chiplet812(e.g., corresponding to the second chiplet704), a third chiplet814(e.g., corresponding to the third chiplet706), and a fourth chiplet816(e.g., corresponding to the fourth chiplet708). The chiplets in the first CNM cluster802can be coupled to a common NOC hub, which in turn can be coupled to a NOC hub in an adjacent cluster or clusters (e.g., in a second CNM cluster804or a fourth CNM cluster808). In the example ofFIG.8, the tiled chiplet example800includes the first CNM cluster802, the second CNM cluster804, a third CNM cluster806, and the fourth CNM cluster808. The various different CNM chiplets can be configured in a common address space such that the chiplets can allocate and share resources across the different tiles. In an example, the chiplets in the cluster can communicate with each other. For example, the first CNM cluster802can be communicatively coupled to the second CNM cluster804via an inter-chiplet CPI interface818, and the first CNM cluster802can be communicatively coupled to the fourth CNM cluster808via another or the same CPI interface. The second CNM cluster804can be communicatively coupled to the third CNM cluster806via the same or other CPI interface, and so on. In an example, one of the compute-near-memory chiplets in the tiled chiplet example800can include a host interface (e.g., corresponding to the host interface724from the example ofFIG.7) that is responsible for workload balancing across the tiled chiplet example800. The host interface can facilitate access to host-based command request queues and response queues, such as from outside of the tiled chiplet example800. The host interface can dispatch new threads of execution using hybrid threading processors and the hybrid threading fabric in one or more of the compute-near-memory chiplets in the tiled chiplet example800. FIG.9illustrates generally an example of a first tile system900comprising a portion of a hybrid threading fabric. In an example, the first tile system900comprises a portion of the HTF142, or comprises the HTF500from the example ofFIG.5. The example ofFIG.9includes a 1×16 array of tiles (labeled Tile 0 through Tile 15) that comprise a portion of a reconfigurable compute array. Each of the tiles can comprise an instance of the example tile504fromFIG.5. That is, each tile can include its own respective memory or memories, compute element or functional element, registers, and inputs and outputs providing connectivity to other tiles in the system, among other components. In the example ofFIG.9, the several tiles in the first tile system900are grouped into four groups of four tiles each, and each group is associated with a memory interface and/or a dispatch interface. 
For example, a first group of tiles includes a first tile902, a second tile904, a third tile906, and a fourth tile908coupled to a first memory interface934. The first memory interface934can be coupled to a NOC, such as the NOC interface416, to provide communication between tiles in the first group and other tiles, other tile systems, or other system resources. In other tile systems, the tiles can be differently grouped or configured, such as to include fewer or more than four tiles per group. Each tile in the first tile system900can have associated notional coordinates that indicates the particular tile and the memory interface with which it communicates. For example, the first tile902can have coordinates (0,0) to indicate that the first tile902is associated with the first memory interface934MI0 and to indicate that the first tile902is the first tile (i.e., Tile 0) in the first tile system900. The second tile904can have coordinates (0,1) to indicate that the second tile904is associated with the first memory interface934MI0 and to indicate that the second tile904is the second tile (i.e., Tile 1) in the first tile system900. A second group of tiles in the first tile system900includes a fifth tile910(Tile 4), sixth tile912(Tile 5), seventh tile914(Tile 6), and eighth tile916(Tile 7) coupled to a second memory interface936(MI1). A third group of tiles in the first tile system900includes a ninth tile918(Tile 8), tenth tile920(Tile 9), eleventh tile922(Tile 10), and twelfth tile924(Tile 11) coupled to a third memory interface938(MI2). A fourth group of tiles in the first tile system900includes a thirteenth tile926(Tile 12), fourteenth tile928(Tile 13), fifteenth tile930(Tile 14), and sixteenth tile932(Tile 15) coupled to a fourth memory interface940(MI3). The notional coordinates for the various tiles Tile 4 through Tile 15 are illustrated in the example ofFIG.9. Each of the different memory interfaces, such as the first memory interface934, the second memory interface936, third memory interface938, or fourth memory interface940can be coupled to the same or different NOC. In an example, each memory interface can be configured to receive instructions from the NOC and, in response, initiate or participate in one or more synchronous flows using tiles associated with the interface. In the example ofFIG.9, the first memory interface934comprises a dispatch interface that is configured to receive synchronous flow instructions and initiate threads using one or more of the groups of tiles in the first tile system900. The example ofFIG.9illustrates the synchronous and asynchronous fabrics that couple the various tiles and memory interfaces. In the example, solid lines that couple pairs of tiles indicate first portions of the synchronous fabric. Dashed bolded lines that couple other pairs of tiles indicate second portions of the synchronous fabric. Dashed unbolded lines that couple the various tiles to the memory interfaces indicate portions of the asynchronous fabric. Arrowheads in the illustration indicate a direction of data flow and can, where indicated, include bidirectional communication. For ease of illustration, each line may represent one or multiple different unidirectional or bidirectional data communication channels. In an example, the synchronous fabric can include a passthrough channel that extends through or across at least one tile to communicatively couple two other tiles. 
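The notional coordinates described above can be illustrated with a minimal sketch, assuming the FIG.9arrangement of four tiles per memory interface. The helper name is hypothetical, and the use of the global tile number as the second coordinate follows the (0,0) and (0,1) examples given above.

TILES_PER_MEMORY_INTERFACE = 4

def notional_coordinates(tile_number: int) -> tuple:
    # First coordinate: the memory interface (MI0..MI3) serving the tile.
    # Second coordinate: the tile number itself, per the examples above.
    memory_interface = tile_number // TILES_PER_MEMORY_INTERFACE
    return (memory_interface, tile_number)

# Tile 0 -> (0, 0) on MI0; Tile 1 -> (0, 1) on MI0; Tile 4 is served by MI1.
assert notional_coordinates(0) == (0, 0)
assert notional_coordinates(1) == (0, 1)
assert notional_coordinates(4)[0] == 1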
In the illustrated example, the synchronous fabric can include a bidirectional communication channel that directly couples the first tile902to the second tile904, as indicated by the solid line coupling the two tiles. The synchronous fabric can further include a bidirectional communication channel that directly couples the second tile904to the third tile906, as indicated by the solid line coupling the two tiles. The synchronous fabric can further include a bidirectional communication channel that couples the first tile902to the third tile906, such as using a bidirectional passthrough channel in the second tile904, as indicated by the dashed bolded line coupling the first tile902to the third tile906. In this manner, the first tile902is coupled to each of an adjacent tile (i.e., the second tile904) and to a tile that is one hop away (i.e., the third tile906, which is one “hop” or one tile position away from the first tile902) using the synchronous fabric, such as can be used for synchronous data flows executed by the first tile system900. In the example ofFIG.9, each of the third through fourteenth tiles (Tile 2 through Tile 13) has full or maximum connectivity to its neighbors via the synchronous fabric, each of the second and fifteenth tiles (Tile 1 and Tile 14) has intermediate connectivity to its neighbors, and each of the first and sixteenth tiles (Tile 0 and Tile 15) has minimal connectivity via the synchronous fabric. Tile 0 is considered to have minimal connectivity because it has one adjacent neighbor rather than two, and has one neighbor that is one hop away rather than two such neighbors. For example, Tile 0 is coupled to its one adjacent tile, Tile 1, via a direct bus in the synchronous fabric. Tile 0 is further coupled to one tile that is one hop away, Tile 2, via a passthrough channel or bus in Tile 1. Tile 1 is considered to have intermediate connectivity because it has two adjacent neighbor tiles (e.g., Tile 0 and Tile 2), and has one neighbor that is one hop away rather than two such neighbors. For example, Tile 1 is coupled to its adjacent tiles, Tile 0 and Tile 2, via respective busses in the synchronous fabric. Tile 1 is further coupled to one tile that is one hop away, Tile 3, via a passthrough channel or bus in Tile 2. Tile 2 is considered to have maximum or full connectivity because it has two adjacent neighbor tiles (e.g., Tile 1 and Tile 3), and has two neighbors that are one hop away. For example, Tile 2 is coupled to its adjacent tiles, Tile 1 and Tile 3, via respective busses in the synchronous fabric. Tile 2 is further coupled to tiles that are one hop away in respective different directions. For example, Tile 2 is coupled to Tile 0 via a passthrough channel or bus in Tile 1, and Tile 2 is further coupled to Tile 4 via a passthrough channel or bus in Tile 3. In the example ofFIG.9, each of the various tiles is coupled to a respective one of the memory interfaces using the asynchronous fabric, and each of the memory interfaces is further coupled with the other memory interfaces using the asynchronous fabric. 
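The three connectivity cases described above can be summarized in a short sketch. Assuming the 1×16 arrangement ofFIG.9, the hypothetical function below counts a tile's adjacent and one-hop neighbors within the synchronous fabric and classifies the tile accordingly; the asynchronous-fabric coupling to the memory interfaces, described next, is independent of this classification.

NUM_TILES = 16

def classify_connectivity(tile: int) -> str:
    adjacent = [t for t in (tile - 1, tile + 1) if 0 <= t < NUM_TILES]
    one_hop = [t for t in (tile - 2, tile + 2) if 0 <= t < NUM_TILES]
    if len(adjacent) == 2 and len(one_hop) == 2:
        return "full"          # e.g., Tile 2 through Tile 13
    if len(adjacent) == 2:
        return "intermediate"  # e.g., Tile 1 and Tile 14
    return "minimal"           # e.g., Tile 0 and Tile 15

assert classify_connectivity(0) == "minimal"
assert classify_connectivity(1) == "intermediate"
assert classify_connectivity(2) == "full"
assert classify_connectivity(15) == "minimal"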
For example, the first group of tiles Tile 0 through Tile 3 is coupled to the first memory interface934using respective asynchronous fabric channels, the second group of tiles Tile 4 through Tile 7 is coupled to the second memory interface936using respective asynchronous fabric channels, the third group of tiles Tile 8 through Tile 11 is coupled to the third memory interface938using respective asynchronous fabric channels, and the fourth group of tiles Tile 12 through Tile 15 is coupled to the fourth memory interface940using respective asynchronous fabric channels. In the example ofFIG.9, the memory interfaces are coupled using unidirectional communication channels that provide a communication loop among the interfaces. For example, using the asynchronous fabric, the first memory interface934is configured to communicate information to the third memory interface938, the third memory interface938is configured to communicate information to the fourth memory interface940, the fourth memory interface940is configured to communicate information to the second memory interface936, and the second memory interface936is configured to communicate information to the first memory interface934. The memory interfaces can optionally be differently coupled, such as using bidirectional channels of the asynchronous fabric, or can be coupled in other than a loop configuration. Each of the memory interfaces can be respectively coupled to the NOC using a NOC bus or portion of a NOC fabric. FIG.10illustrates generally an example of a second tile system1000comprising a portion of a hybrid threading fabric. In an example, the second tile system1000comprises sixteen tiles as similarly provided in the example of the first tile system900fromFIG.9, however, the tiles in the second tile system1000have different connectivity than the tiles in the first tile system900. In an example, the second tile system1000comprises a portion of the HTF142, or comprises the HTF500from the example ofFIG.5. The example ofFIG.10includes a 2×8 array of tiles (labeled Tile 0 through Tile 15) that comprise a portion of a reconfigurable compute array. Each of the tiles can comprise an instance of the example tile504fromFIG.5. That is, each tile can include its own respective memory or memories, compute element or functional element, registers, and inputs and outputs providing connectivity to other tiles in the system, among other components. The second tile system1000can have different tile connectivity characteristics than the first tile system900. For example, the tiles in the second tile system1000can be arranged in a toroidal or loop configuration such that all of the tiles have maximum connectivity, as further described below. The tiles in the example ofFIG.10can be grouped and associated with particular memory interfaces. For example, the second tile system1000can include a first group with a first memory interface1034coupled, via a portion of an asynchronous fabric, to a first tile1002(Tile 0), a second tile1004(Tile 1), a third tile1006(Tile 2), and a fourth tile1008(Tile 3). The second tile system1000can include a second group with a second memory interface1036coupled, via the asynchronous fabric, to a fifth tile1010(Tile 4), a sixth tile1012(Tile 5), a seventh tile1014(Tile 6), and an eighth tile1016(Tile 7). 
The second tile system1000can include a third group with a third memory interface1038coupled, via the asynchronous fabric, to a ninth tile1018(Tile 8), a tenth tile1020(Tile 9), an eleventh tile1022(Tile 10), and a twelfth tile1024(Tile 11). The second tile system1000can include a fourth group with a fourth memory interface1040coupled, via the asynchronous fabric, to a thirteenth tile1026(Tile 12), a fourteenth tile1028(Tile 13), a fifteenth tile1030(Tile 14), and a sixteenth tile1032(Tile 15). In the example ofFIG.10, the first memory interface1034comprises a dispatch interface that is configured to receive synchronous flow instructions and initiate threads using one or more of the groups of tiles in the second tile system1000. The memory interfaces1034-1040can be coupled to a NOC, such as the NOC interface416, to provide communication between or among tiles in the group, other tiles in the system, or other system resources. In other example tile systems, the tiles can be differently grouped or configured, such as to include fewer or more than four tiles per group. Each of the different memory interfaces, such as the first memory interface1034, the second memory interface1036, the third memory interface1038, or the fourth memory interface1040, can be coupled to the same or different NOC. In an example, each memory interface can be configured to receive instructions from the NOC and, in response, initiate or participate in one or more synchronous flows using tiles associated with the interface. The example ofFIG.10illustrates the synchronous and asynchronous fabrics that couple the various tiles and memory interfaces using the same conventions established inFIG.9. That is, the solid lines that couple pairs of tiles indicate portions of a synchronous fabric, dashed bolded lines that couple pairs of tiles indicate other portions of the synchronous fabric, and dashed unbolded lines that couple the various tiles to the memory interfaces indicate portions of an asynchronous fabric. Arrowheads in the illustration indicate a direction of data flow and can, where indicated, include bidirectional communication. As in the example ofFIG.9, the second tile system1000includes tiles that can include or use a passthrough channel that extends through or across at least one tile to communicatively couple two other tiles. In the example ofFIG.10, each of the first through sixteenth tiles (Tile 0 through Tile 15) can have full or maximum connectivity to its neighbors via the synchronous fabric. In other words, none of the tiles in the second tile system1000provides a special or unique connectivity case. In the example, each tile is considered to have maximum or full connectivity because it has two adjacent neighbor tiles and two neighbors that are one hop away. The full connectivity is provided because the top and bottom tiles of each column of the 2×8 array of tiles is coupled to the respective other column. For example, at one end of the first column of the 2×8 array comprising Tile 0 through Tile 7, the first tile1002or Tile 0 is coupled to its adjacent tiles, Tile 1 and Tile 15, via respective busses in the synchronous fabric. Tile 0 is further coupled to tiles that are one hop away in respective different directions. For example, Tile 0 is coupled to Tile 2 via a passthrough channel or bus in Tile 1, and Tile 0 is further coupled to Tile 14 via a passthrough channel or bus in Tile 15. 
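The wrap-around coupling described above, and continued for the opposite column ends in the next paragraph, makes the sixteen tiles behave like a ring for purposes of synchronous-fabric connectivity. The following hypothetical Python sketch computes each tile's adjacent and one-hop neighbors under that assumption.

NUM_TILES = 16

def ring_neighbors(tile: int) -> dict:
    # In the 2x8 arrangement of FIG. 10 every tile has two adjacent
    # neighbors and two one-hop neighbors, with indices wrapping around.
    return {
        "adjacent": sorted(((tile - 1) % NUM_TILES, (tile + 1) % NUM_TILES)),
        "one_hop": sorted(((tile - 2) % NUM_TILES, (tile + 2) % NUM_TILES)),
    }

# Matches the couplings described in the text for Tile 0 and Tile 7.
assert ring_neighbors(0) == {"adjacent": [1, 15], "one_hop": [2, 14]}
assert ring_neighbors(7) == {"adjacent": [6, 8], "one_hop": [5, 9]}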
At an opposite end of the first column of the 2×8 array comprising Tile 0 through Tile 7, the eighth tile1016or Tile 7 is coupled to its adjacent tiles, Tile 6 and Tile 8, via respective busses in the synchronous fabric. Tile 7 is further coupled to tiles that are one hop away in respective different directions. For example, Tile 7 is coupled to Tile 5 via a passthrough channel or bus in Tile 6, and Tile 7 is further coupled to Tile 9 via a passthrough channel or bus in tile8. Tile 8 and Tile 15, at the ends of the second column of the 2×8 array of tiles, are similarly configured to Tile 0 and Tile 7 to enhance connectivity. In the example ofFIG.10, each of the various tiles is coupled to a respective one of the memory interfaces using the asynchronous fabric, and each of the memory interfaces is further coupled with the other memory interfaces using the asynchronous fabric. For example, the first group of tiles Tile 0 through Tile 3 is coupled to the first memory interface1034using respective asynchronous fabric channels, the second group of tiles Tile 4 through Tile 7 is coupled to the second memory interface1036using respective asynchronous fabric channels, the third group of tiles Tile 8 through Tile 11 is coupled to the third memory interface1038using respective asynchronous fabric channels, and the fourth group of tiles Tile 12 through Tile 15 is coupled to the fourth memory interface1040using respective asynchronous fabric channels. In the example ofFIG.10, the memory interfaces are coupled using unidirectional communication channels that provide a communication loop among the interfaces. For example, using the asynchronous fabric, the first memory interface1034is configured to communicate information to the second memory interface1036, the second memory interface1036is configured to communicate information to the third memory interface1038, the third memory interface1038is configured to communicate information to the fourth memory interface1040, and the fourth memory interface1040is configured to communicate information to the first memory interface1034. The memory interfaces can optionally be differently coupled, such as using bidirectional channels of the asynchronous fabric, or can be coupled in other than a loop configuration. Any one or more of the memory interfaces in the example ofFIG.10can be further coupled with a NOC via a NOC bus or NOC fabric. Connections between the memory interfaces and the NOC are omitted from the illustration for clarity of the other illustrated features. FIG.11illustrates generally an example of mapping a kernel graph with various compute resources. The example ofFIG.11includes a kernel example1102represented as a dependency graph or directed graph. The graph includes vertices that represent compute operations or processing elements, and edges that represent connectivity and dependency between the different operations or elements. The example ofFIG.11further includes a resource group1114that represents various functional elements that can perform the operations defined by the kernel example1102. In an example, the kernel example1102represents a code segment that has been parsed and optionally reconstructed to form a directed acyclic graph. A scheduler can receive the kernel example1102and attempt to map the different operations defined by the graph to available compute resources, such as in the HTF142, such as can comprise a portion of the resource group1114. 
That is, the scheduler can perform a kernel graph to resource mapping1118to define HTF-level instructions for executing the code represented by the kernel example1102. The kernel example1102includes multiple operations represented by a first vertex1104, a second vertex1106, a third vertex1108, a fourth vertex1110, and a fifth vertex1112. The edges between the vertices represent data dependencies. In an example, an operation represented by the fifth vertex1112can depend on, and should be timed to coincide with, results from operations corresponding to the second vertex1106, the third vertex1108, and the first vertex1104. In an example, the kernel graph to resource mapping1118includes or uses a modulo scheduling technique whereby loop iterations can be repeated at regular intervals, or initiation intervals (II), with respect to various resource constraints. The kernel graph to resource mapping1118can include or use a modulo routing resource graph (MRRG) to represent the resource group1114, and the MRRG can include a directed graph G={V, E, II}. The graph G models the resources of the target architecture (e.g., corresponding to the HTF) with modulo time corresponding to the initiation interval II, and includes V as the set of nodes that models the ports, functional units, registers, and busses within each HTF tile, memory interface MI, and dispatch interface DI, and includes E as an edge set modeling the links between the nodes V, or vertices, of the graph. In an example, occupancy of each node in V can be maintained separately for each cycle in the II. In an example, II is measured in clock cycles and represents the initiation interval of pipelines or flows on the HTF. In an example, the resource group1114includes multiple tile groups or systems, such as a first tile system1116and a second tile system1122. The tile systems can each comprise N (or more, or fewer) tiles configured to execute synchronous flows. The different tile systems can have the same or a different number of tiles per system, and tiles within a particular system can be coupled using a synchronous fabric, such as described herein in the examples ofFIG.9orFIG.10. In an example, memory interfaces or dispatch interfaces associated with the respective tile systems can be coupled using an asynchronous compute fabric bus1120and used to initiate synchronous flows to perform the operations defined by the kernel example1102. In an example, the kernel graph to resource mapping1118can be configured to map each HTF instruction from the kernel example1102to a particular node or resource in the MRRG that represents the HTF. The kernel graph to resource mapping1118can be constrained in several ways, including that a candidate node for a particular operation should be capable of executing the particular HTF instruction, an occupancy of the candidate node should not exceed its capacity, and, if the particular operation has multiple input operands, then the mapped paths for the operands in the MRRG should have the same total latency so that arrival times can coincide. In an example, the kernel graph to resource mapping1118comprises an algorithm configured to evaluate different nodes from the MRRG based on a cost function. 
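Before the cost function is discussed, the following is a minimal Python sketch of how an MRRG node with modulo (per-cycle) occupancy, together with the three mapping constraints listed above, might be represented. The class layout, field names, and example values are illustrative assumptions rather than the scheduler's actual data structures.

from dataclasses import dataclass, field

@dataclass
class MrrgNode:
    name: str
    capabilities: set            # HTF instructions the node can execute
    capacity: int = 1            # slots available per modulo cycle
    ii: int = 1                  # initiation interval in clock cycles
    occupancy: dict = field(default_factory=dict)  # modulo cycle -> used slots

    def can_execute(self, instruction: str) -> bool:
        return instruction in self.capabilities     # constraint 1: capability

    def has_free_slot(self, cycle: int) -> bool:
        return self.occupancy.get(cycle % self.ii, 0) < self.capacity  # constraint 2

    def reserve(self, cycle: int) -> None:
        slot = cycle % self.ii
        self.occupancy[slot] = self.occupancy.get(slot, 0) + 1

def operand_latencies_match(path_latencies: list) -> bool:
    # Constraint 3: all mapped operand paths must have the same total latency
    # so the operands arrive at the consuming node in the same time slice.
    return len(set(path_latencies)) <= 1

node = MrrgNode(name="tile3_alu", capabilities={"add", "mul"}, capacity=1, ii=2)
assert node.can_execute("add")
assert node.has_free_slot(cycle=4)
node.reserve(cycle=4)
assert not node.has_free_slot(cycle=6)  # cycle 6 maps to the same modulo slot
assert operand_latencies_match([3, 3])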
The cost function can assign a value to different candidate nodes of the MRRG according to their relative impact on, for example, system performance, resource utilization (e.g., at the tile, tile system, or CNM system level), or power consumption, among other factors. FIG.12illustrates generally an example of a resource mapping routine1200. The resource mapping routine1200can be performed, for example, using a compiler or scheduler for the CNM system102, the HTF142, or the HTF500. In an example, the resource mapping routine1200begins at block1202with receiving a compute kernel for execution by or using a the CNM system. The compute kernel can be uncompiled or compiled code that can be implemented using one or more tiles in an HTF. In an example, the compute kernel received at block1202can include or comprise the kernel example1102from the example ofFIG.11. At block1204, the received kernel can be parsed to provide an acyclic graph that represents multiple operations that can be performed using one or multiple functional units of one or multiple tiles in the HTF. At block1206, the resource mapping routine1200can include performing a resource search to identify candidate resources, or tiles, in the CNM system to perform particular operations from the graph. In an example, block1206includes preparing a MRRG or other resource graph that represents the different resources or tiles that are available to perform the operations from the graph. In an example, block1206includes performing a search algorithm, such as a branch-and-bound search algorithm, to map the kernel graph to resources from the MRRG. The branch-and-bound algorithm can include determining a set of candidate solutions or mappings, such as with various dependencies between the possible solutions, and then evaluating different branches or subsets of the candidate solutions. At block1208, the resource mapping routine1200can include continuing the resource search by evaluating a cost function for each candidate tile or resource in the MRRG. The various inputs to the cost function are described herein in the discussion ofFIG.13. In an example, a result of the cost function evaluation at block1208can include a numerical value representative of one or multiple potential mappings for a particular operation or group of operations. At block1210, the operations can be assigned to respective different tiles based on the cost function results and timing information from the graph. Following block1210, the tiles or CNM system can be configured to execute the compute kernel. In an example, the resource mapping routine1200can leverage various HTF features to help find multiple efficient mappings for compute kernels. For example, the enhanced connectivity between or among tiles in the HTF, including the synchronous and asynchronous fabric of the HTF tiles structure, can help provide flexibility as there can be reduced or fewer constraints over which tiles are selected for particular synchronous data flows. That is, the multiple tile inputs and multiple tile outputs, in each of multiple directions, helps provide flexibility for assignment of data flows. In an example, each tile can include multiple tile memories, and each memory can be implemented with fast SRAM that supports single-cycle read and write operations. In an example, these tile memories can provide compiler buffer space for storage of intermediate results or for storing values over many clock cycles. 
In an example, portions of the resource mapping routine1200can be represented by the following pseudocode [3]:

[3]:
Input: Lowered kernel, K; MRRG; Mapping, M; Current operation, Op; Mapped MRRG node, MN;
Output: Mapping, M

GreedyMap(K, MRRG, M, Op, MN) {
  MapAndStoreResult(Op, MN, M);
  for each unmapped operand, Opr of Op {
    Route = GetRouteForOprInMRRG(Opr, MRRG);
    if !satisfyConstraint_3(Route, Op)
      return FAILED;
  }
  if isLastOp(Op, K) { print M; return SUCCESS; }
  I = getNextOpToMap(K, Op);
  MI = getClosestOpThatDefinesOprOfNextOp(K, I);
  SMN = getMappedMRRGNode(MI, MRRG);
  PriorityQueue.insert({M.Cost, SMN});
  while (!PriorityQueue.empty()) {
    C = PriorityQueue.top().first;
    MN = PriorityQueue.top().second;
    PriorityQueue.pop();
    for each fanout, FMN of MN {
      if satisfyConstraints_1_2(FMN, I)
        return GreedyMap(K, MRRG, M, I, FMN);
      C = C + evalCostFunc(FMN, I);
      PriorityQueue.insert({C, FMN});
    }
  }
}

In the pseudocode example [3], GreedyMap defines a greedy mapping routine that attempts to generate a mapping M from available resources defined in the MRRG, based on input kernel graph data, such as from a data dependency graph K. The current operation, Op, is a current node within the graph K that is input into the mapping routine. The mapped node, MN, represents a candidate node to evaluate. In other words, the mapping routine receives kernel and resource graph information and uses a search algorithm, such as a breadth-first search algorithm, to evaluate each operation or node in the graph K, assign it to Op, and then try to find a mapped node, MN, where the operation can be mapped or assigned. In one iteration of the greedy mapping, a pair of nodes can be selected from the Op and MN graphs and stored for further evaluation using the for loop defined in the pseudocode. In the loop, for each operand or input without a mapping, the algorithm searches for a resource in the MRRG that satisfies various requirements such as latency matching, occupancy, capability, or other constraints. If a potential match is found and the requirements are satisfied, then a subsequent or next Op in the graph K can be evaluated. If that next Op fails, then FAILED is returned, and the algorithm can roll back. If, on the other hand, the next Op succeeds, then the algorithm retrieves the next operand. Input arguments can be scanned to identify the closest one in terms of physical distance, and then that Op and argument pair can be used as an anchor to grow a search tree. In the example of the pseudocode, the function PriorityQueue.insert can be based on a cost function result. If the evaluation queue is not empty, then the function can obtain cost and solution (MN) information and evaluate each fanout, or potential location to which to map the next operation. A for loop can be used to evaluate the subsequent operations one by one and remove operations that do not satisfy constraints 1 and 2, and then recursively call GreedyMap to map the next instruction I on FMN, the fanout of the node. In this example, constraint 1 corresponds to whether a particular node MN is capable of performing the particular operation Op, constraint 2 corresponds to occupancy of the particular node MN, and constraint 3 corresponds to a latency characteristic of the particular node MN relative to its position or potential position in a synchronous flow. FIG.13illustrates generally an example of a cost function determination routine1300. 
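For concreteness, the following is a minimal Python sketch of the kind of cost-driven candidate ranking that evalCostFunc in pseudocode [3] and the cost function determination routine1300suggest: candidate nodes are scored by a weighted combination of occupancy, latency, and utilization, and candidates incapable of executing the operation are excluded. The weights, dictionary fields, and scoring scale are illustrative assumptions rather than the scheduler's actual cost model.

import heapq

def eval_cost(candidate: dict, op: str, weights=(1.0, 1.0, 1.0)) -> float:
    w_occ, w_lat, w_util = weights
    if op not in candidate["capabilities"]:       # capability (constraint 1)
        return float("inf")
    return (w_occ * candidate["occupancy"]        # occupancy input (constraint 2)
            + w_lat * candidate["latency"]        # latency input (constraint 3)
            + w_util * candidate["utilization"])  # load balancing across tiles

def pick_candidate(candidates: list, op: str) -> dict:
    # Lowest cost wins; a heap stands in for the priority queue of [3].
    heap = [(eval_cost(c, op), i) for i, c in enumerate(candidates)]
    heapq.heapify(heap)
    cost, index = heapq.heappop(heap)
    return candidates[index] if cost != float("inf") else None

candidates = [
    {"name": "tile2", "capabilities": {"add"}, "occupancy": 2, "latency": 1, "utilization": 3},
    {"name": "tile5", "capabilities": {"add", "mul"}, "occupancy": 0, "latency": 2, "utilization": 1},
]
assert pick_candidate(candidates, "mul")["name"] == "tile5"
assert pick_candidate(candidates, "add")["name"] == "tile5"  # lower total cost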
In an example, the cost function determination routine1300can comprise a portion of or correspond to the example of block1208from the resource mapping routine1200. In an example, the cost function determination routine1300can begin at block1302with a request to evaluate a candidate resource mapping, such as between a particular compute operation and a particular one of multiple available resources. At block1304, the cost function determination routine1300can include determining a resource capability to execute the particular compute operation. For example, block1304can include evaluating whether the resource or HTF tile has appropriate functional units to carry out the operation. At block1306, the cost function determination routine1300can include determining a resource occupancy characteristic for a particular resource. The occupancy characteristic can indicate whether the particular resource is available or unavailable at a particular time slice or Spoke Count within a synchronous flow. At block1308, the cost function determination routine1300can include determining a latency characteristic for a particular resource. The latency characteristic can indicate when a particular resource is or will be available to carry out a particular operation. At block1310, the cost function determination routine1300can include determining a resource utilization characteristic for a particular resource. In an example, block1310can include determining whether operations are balanced across a number of tiles within a tile group, within a tile system, or across a CNM system that comprises multiple tile systems. In an example, it can be beneficial to distribute resource utilization throughout available tiles to help reduce heat generation, reduce power consumption, and enhance longevity of the system. At block1312, the cost function determination routine1300can include evaluating a cost function that is based on one or more of the determined resource capability from block1304, the determined resource occupancy from block1306, the determined resource latency from block1308, or the determined utilization from block1310. In an example, block1312includes providing a numerical result indicative of a cost of assigning a particular operation or group of operations to particular resources in the MRRG so that the compiler or scheduler can make an informed decision about resource allocation. FIG.14illustrates a block diagram of an example machine1400with which, in which, or by which any one or more of the techniques (e.g., methodologies) discussed herein can be implemented. Examples, as described herein, can include, or can operate by, logic or a number of components, or mechanisms in the machine1400. Circuitry (e.g., processing circuitry) is a collection of circuits implemented in tangible entities of the machine1400that include hardware (e.g., simple circuits, gates, logic, etc.). Circuitry membership can be flexible over time. Circuitries include members that can, alone or in combination, perform specified operations when operating. In an example, hardware of the circuitry can be immutably designed to carry out a specific operation (e.g., hardwired). In an example, the hardware of the circuitry can include variably connected physical components (e.g., execution units, transistors, simple circuits, etc.) including a machine readable medium physically modified (e.g., magnetically, electrically, moveable placement of invariant massed particles, etc.) to encode instructions of the specific operation. 
In connecting the physical components, the underlying electrical properties of a hardware constituent are changed, for example, from an insulator to a conductor or vice versa. The instructions enable embedded hardware (e.g., the execution units or a loading mechanism) to create members of the circuitry in hardware via the variable connections to carry out portions of the specific operation when in operation. Accordingly, in an example, the machine-readable medium elements are part of the circuitry or are communicatively coupled to the other components of the circuitry when the device is operating. In an example, any of the physical components can be used in more than one member of more than one circuitry. For example, under operation, execution units can be used in a first circuit of a first circuitry at one point in time and reused by a second circuit in the first circuitry, or by a third circuit in a second circuitry at a different time. Additional examples of these components with respect to the machine1400. In alternative embodiments, the machine1400can operate as a standalone device or can be connected (e.g., networked) to other machines. In a networked deployment, the machine1400can operate in the capacity of a server machine, a client machine, or both in server-client network environments. In an example, the machine1400can act as a peer machine in peer-to-peer (P2P) (or other distributed) network environment. The machine1400can be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), other computer cluster configurations. The machine1400(e.g., computer system) can include a hardware processor1402(e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory1404, a static memory1406(e.g., memory or storage for firmware, microcode, a basic-input-output (BIOS), unified extensible firmware interface (UEFI), etc.), and mass storage device1408(e.g., hard drives, tape drives, flash storage, or other block devices) some or all of which can communicate with each other via an interlink1430(e.g., bus). The machine1400can further include a display device1410, an alphanumeric input device1412(e.g., a keyboard), and a user interface (UI) Navigation device1414(e.g., a mouse). In an example, the display device1410, the input device1412, and the UI navigation device1414can be a touch screen display. The machine1400can additionally include a mass storage device1408(e.g., a drive unit), a signal generation device1418(e.g., a speaker), a network interface device1420, and one or more sensor(s)1416, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor. The machine1400can include an output controller1428, such as a serial (e.g., universal serial bus (USB), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) 
connection to communicate or control one or more peripheral devices (e.g., a printer, card reader, etc.). Registers of the hardware processor1402, the main memory1404, the static memory1406, or the mass storage device1408can be, or include, a machine-readable media1422on which is stored one or more sets of data structures or instructions1424(e.g., software) embodying or used by any one or more of the techniques or functions described herein. The instructions1424can also reside, completely or at least partially, within any of registers of the hardware processor1402, the main memory1404, the static memory1406, or the mass storage device1408during execution thereof by the machine1400. In an example, one or any combination of the hardware processor1402, the main memory1404, the static memory1406, or the mass storage device1408can constitute the machine-readable media1422. While the machine-readable media1422is illustrated as a single medium, the term “machine-readable medium” can include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) configured to store the one or more instructions1424. The term “machine readable medium” can include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine1400and that cause the machine1400to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions. Non-limiting machine-readable medium examples can include solid-state memories, optical media, magnetic media, and signals (e.g., radio frequency signals, other photon-based signals, sound signals, etc.). In an example, a non-transitory machine-readable medium comprises a machine-readable medium with a plurality of particles having invariant (e.g., rest) mass, and thus are compositions of matter. Accordingly, non-transitory machine-readable media are machine readable media that do not include transitory propagating signals. Specific examples of non-transitory machine readable media can include: non-volatile memory, such as semiconductor memory devices (e.g., electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. In an example, information stored or otherwise provided on the machine-readable media1422can be representative of the instructions1424, such as instructions1424themselves or a format from which the instructions1424can be derived. This format from which the instructions1424can be derived can include source code, encoded instructions (e.g., in compressed or encrypted form), packaged instructions (e.g., split into multiple packages), or the like. The information representative of the instructions1424in the machine-readable media1422can be processed by processing circuitry into the instructions to implement any of the operations discussed herein. For example, deriving the instructions1424from the information (e.g., processing by the processing circuitry) can include: compiling (e.g., from source code, object code, etc.), interpreting, loading, organizing (e.g., dynamically or statically linking), encoding, decoding, encrypting, unencrypting, packaging, unpackaging, or otherwise manipulating the information into the instructions1424. 
In an example, the derivation of the instructions1424can include assembly, compilation, or interpretation of the information (e.g., by the processing circuitry) to create the instructions1424from some intermediate or preprocessed format provided by the machine-readable media1422. The information, when provided in multiple parts, can be combined, unpacked, and modified to create the instructions1424. For example, the information can be in multiple compressed source code packages (or object code, or binary executable code, etc.) on one or several remote servers. The source code packages can be encrypted when in transit over a network and decrypted, uncompressed, assembled (e.g., linked) if necessary, and compiled or interpreted (e.g., into a library, stand-alone executable etc.) at a local machine, and executed by the local machine. The instructions1424can be further transmitted or received over a communications network1426using a transmission medium via the network interface device1420utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communication networks can include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), plain old telephone (POTS) networks, and wireless data networks (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, IEEE 802.16 family of standards known as WiMax®), IEEE 802.15.4 family of standards, peer-to-peer (P2P) networks, among others. In an example, the network interface device1420can include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the network1426. In an example, the network interface device1420can include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine1400, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software. A transmission medium is a machine readable medium. To better illustrate the methods and apparatuses described herein, a non-limiting set of Example embodiments are set forth below as numerically identified Examples. Example 1 can include a method comprising receiving a compute kernel for execution using a compute-near-memory system that comprises memory-compute tiles having respective functional units and respective memory circuits, parsing the compute kernel to provide an acyclic graph that represents multiple operations, and performing a resource search to identify candidate tiles in the compute-near-memory system to perform respective ones of the multiple operations using their respective functional units. Example 1 can further include assigning the multiple operations to respective ones of the identified candidate tiles based on results of the resource search and timing information from the acyclic graph. 
In Example 1, each tile in the compute-near-memory system can be coupled to one or more adjacent tiles using a synchronous compute fabric and can be coupled to one or more non-adjacent tiles using an asynchronous compute fabric, and at least one of the multiple operations can depend on an intermediate compute result communicated between adjacent tiles using the synchronous compute fabric or the asynchronous compute fabric. Example 2 can include or use features of Example 1, and can further include executing the compute kernel to produce a final compute result, and the final compute result can be determined using functional units corresponding to multiple different tiles and using information exchanged between the multiple different tiles using the synchronous compute fabric. Example 3 can include or use features of any of the preceding examples, and can further include executing the compute kernel to produce a final compute result, and the final compute result can be determined using functional units corresponding to multiple different tiles and using information exchanged between the multiple different tiles using the asynchronous compute fabric. Example 4 can include or use features of any of the preceding examples, and can further include executing the compute kernel to produce a final compute result, and the final compute result can be determined using functional units corresponding to multiple different tiles and using information exchanged between the multiple different tiles using the asynchronous compute fabric and using the synchronous compute fabric. Example 5 can include or use the features of Example 4, and executing the compute kernel can include executing a kernel that includes a nested loop initiated in response to an instruction received via the asynchronous compute fabric. Example 6 can include or use features of any of the preceding examples, and can further include performing the resource search by performing a branch-and-bound search to map respective ones of the multiple operations to different tiles in a first node of the system. Example 7 can include or use features of any of the preceding examples, and can further include performing the resource search by evaluating a cost function associated with each of the candidate tiles. Example 8 can include or use the features of Example 7, and evaluating the cost function can include determining, for each candidate tile, a capability of the candidate tile to perform a particular operation, an occupancy characteristic of the candidate tile, and a latency characteristic of the candidate tile. Example 9 can include or use features of any of the preceding examples, and can further include, at a first tile of the multiple memory-compute tiles, storing information in a memory that supports single-cycle read and write operations, and the information can include a tile-local compute result or information received via one of the synchronous compute fabric and the asynchronous compute fabric. Example 10 can include or use features of any of the preceding examples, and can further include parsing the compute kernel by providing an acyclic graph that represents multiple different parallel operations, and assigning the multiple operations to respective ones of the identified candidate tiles can include assigning the parallel operations to respective different tiles for concurrent processing. 
Example 11 can include an apparatus comprising a plurality of memory-compute nodes in a compute-near-memory system, wherein the nodes comprise tiles that are coupled using an asynchronous compute fabric and a synchronous compute fabric, and wherein each tile comprises: first synchronous fabric inputs configured to receive information from one or more other tiles in a first node of the compute-near-memory system via the synchronous compute fabric, first synchronous fabric outputs configured to provide information to the same one or more other tiles in the first node, an asynchronous fabric input configured to receive information via the asynchronous compute fabric, an asynchronous fabric output configured to provide information to the asynchronous compute fabric, a functional unit, a first tile-local memory configured to store one of (1) information from one of the first synchronous fabric inputs, (2) result information from the functional unit, and (3) information from the asynchronous fabric input, and a second tile-local memory configured to store another one of the (1) information from one of the first synchronous fabric inputs, (2) result information from the functional unit, and (3) information from the asynchronous fabric input. In Example 10, a compute kernel for execution using the compute-near-memory system can comprise operations performed in parallel using respective ones of the plurality of memory-compute tiles, and intermediate compute results can be communicated between the plurality of memory-compute tiles using at least one of the synchronous compute fabric and the asynchronous compute fabric. Example 12 can include or use the features of Example 11, and the compute kernel can include latency-matched operations performed using at least respective first and second tiles in the first node. Example 13 can include or use features of any of the preceding examples, and can further include, a compiler, configured to parse the compute kernel, and configured to generate a synchronous dataflow that depends in part on information from the asynchronous compute fabric. Example 14 can include or use features of any of the preceding examples, and can further include, first and second tiles of the first node, wherein a functional unit of the first tile can be configured to provide a first compute result based on first information from the asynchronous compute fabric, and a functional unit of the second tile can be configured to provide a second compute result based on the first compute result. In an example, the second tile can be configured to receive the first compute result, from the first tile, via the synchronous compute fabric. Example 15 can include or use the features of Example 14, and the first and second tiles can be adjacent tiles in the first node, and a third tile, configured to provide the first compute result, can be non-adjacent to the first tile in the first node. 
Example 16 can include a non-transitory processor-readable storage medium, the processor-readable storage medium including instructions that when executed by a processor, cause the processor to: receive a compute kernel for execution using a compute-near-memory system that comprises memory-compute tiles having respective functional units and respective memory circuits; parse the compute kernel to provide an acyclic graph that represents multiple operations; perform a resource search to identify candidate tiles in the compute-near-memory system to perform respective ones of the multiple operations using their respective functional units; and assign the multiple operations to respective ones of the identified candidate tiles based on results of the resource search and timing information from the acyclic graph. In Example 16, each tile in the compute-near-memory system can be coupled to one or more adjacent tiles use a synchronous compute fabric and can be coupled to one or more non-adjacent tiles using an asynchronous compute fabric. In an example, at least one of the multiple operations depends on an intermediate compute result communicated between adjacent tiles using the synchronous compute fabric or the asynchronous compute fabric. Example 17 can include or use the features of Example 16, and the instructions can further configure the processor to execute the compute kernel to produce a final compute result, wherein the final compute result is determined using functional units corresponding to multiple different tiles and using information exchanged between the multiple different tiles using the synchronous compute fabric and using the asynchronous compute fabric. Example 18 can include or use the features of Example 17, and the instructions to execute the compute kernel can include instructions to configure the processor to execute a kernel that includes a nested loop. Example 19 can include or use features of any of the preceding examples, and can further include the instructions to perform the resource search as instructions to configure the processor to perform a branch-and-bound search to map respective ones of the multiple operations to different tiles in a first node of the system. Example 20 can include or use features of any of the preceding examples, and can further include, the instructions to perform the resource search including instructions to evaluate a cost function associated with each of the candidate tiles. Example 21 can include or use the features of Example 20, and the instructions to evaluate the cost function can include instructions to determine, for each candidate tile, a capability of the candidate tile to perform a particular operation, an occupancy characteristic of the candidate tile, and/or a latency characteristic of the candidate tile. Example 22 can include or use features of any of the preceding examples, and can further include, the instructions to parse the compute kernel including instructions to provide an acyclic graph that represents multiple different parallel operations, and instructions for assigning the multiple operations to respective ones of the identified candidate tiles can include assigning the parallel operations to respective different tiles for concurrent processing. The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments in which the invention can be practiced. 
These embodiments are also referred to herein as “examples.” Such examples can include elements in addition to those shown or described. However, the present inventors also contemplate examples in which only those elements shown or described are provided. Moreover, the present inventors also contemplate examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein. In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” can include “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein”. Also, in the following claims, the terms “including” and “comprising” are open-ended, that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to impose numerical requirements on their objects. The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) can be used in combination with each other. Other embodiments can be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is provided to allow the reader to quickly ascertain the nature of the technical disclosure; it is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features can be grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter can lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment, and it is contemplated that such embodiments can be combined with each other in various combinations or permutations. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
11860801
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS The making and using of the presently preferred embodiments are discussed in detail below. It should be appreciated, however, that the present invention provides many applicable inventive concepts that can be embodied in a wide variety of specific contexts. The specific embodiments discussed are merely illustrative of specific ways to implement and use the invention and do not limit the scope of the invention. Moreover, the same reference signs refer to the same technical features if not stated otherwise. As far as “may” is used in this application it means the possibility of doing so as well as the actual technical implementation. As far as “about” is used in this application, it means that also the exact given value is disclosed. The Figures are not drawn to scale, i.e. there may be other dimensions and proportions of the shown elements. The present invention will be described with respect to the preferred embodiment in a specific context namely an optical output device in the form of an output device with LEDs as output elements. The invention may also be applied, however, to other output arrangements, to input arrangements, for instance to keyboards, or to mixed output/input arrangements, see for instanceFIG.7. Furthermore, the embodiments relate to serial bus systems. Alternatively, it is possible to use parallel bus systems for parallel data transmission but without separate address lines and without using multiplexing of data and address bus between bus units SLC and/or bus control unit MIC. The bus systems may be replaced by wireless connections in other application examples. FIG.1shows a first bus topology of a bus system BS. In the first bus topology there is one bus control unit MIC that is connected with a chain4of resistors R0to Rn all having the same resistive value within the fabrication tolerance. This means that the MIC is able to perform an address allocation method in order to allocate addresses to the SLC after power on. However there may be a second bus topology where an MCU is connected to chain4of resistors R0to Rn. In this case the MCU controls the allocation of addresses to SLCs. It is possible to have a further tap that goes from the middle of chain4to a further input/output pin of the MCU when using the second topology. A third topology uses one master MIC and several subordinated MICs on bus system BS. This may allow longer bus wires or more SLCs on bus DHIB. The subordinated MICs are also part of chain4, i.e. their pins DET and DETB are connected to the left or right with resistors. A fourth topology uses a master MIC and several bridge MICs that are placed between adjacent bus segments of bus system BS and between segments of chain4. In this topology, line termination units are located at the ends of the wires of the bus of each bus segment. It is possible to have even longer bus systems using bridge MICs. It is, of course, possible to combine features of the four topologies to get further topologies. The first bus topology is described in more detail here. The bus system BS is part of an output device2that comprises more than 100 LEDs R or G or B or LED groups R, G, B one of them shown on bus unit SLC1. Optionally one switch may be connected to each SLC, see switch SW1on SLC1which is for instance the “ESC” (Escape) key, i.e. the output device is also a keyboard according to this option. 
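Because all resistors of chain 4 have the same value within fabrication tolerance, the taps between adjacent resistors form a voltage divider with evenly spaced potentials once one end of the chain is pulled low and the other high, which is what lets each SLC derive a distinct value from its tap. The short sketch below illustrates only this divider arithmetic; the function name, rail voltages, and tap count are hypothetical values chosen for illustration and are not taken from the embodiment.

```python
# Illustrative only: evenly spaced tap potentials of a chain of equal resistors.
def tap_potentials(n_taps: int, v_low: float = 0.0, v_high: float = 3.3) -> list:
    """Potential at tap k (between resistor Rk and Rk+1) when one chain end is
    pulled to v_low and the other end to v_high; all resistors are equal."""
    n_resistors = n_taps + 1
    return [v_low + (v_high - v_low) * (k + 1) / n_resistors for k in range(n_taps)]

# e.g. three SLCs on the chain: 0.825 V, 1.65 V and 2.475 V at 3.3 V supply
print(tap_potentials(3))
```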
Although the bus DHIB (Differential Host Interface Bus) of bus system BS is shown along a straight line inFIG.1the bus DHIB may change its direction several times in another output device2so that there are several parallel sections of bus DHIB, for instance 5 to 7 parallel sections, seeFIG.6. The resistors R0to Rn of chain4of resistors are connected in a serial connection beginning with R0, then R1and so on, see further resistors11, to the last but one resistor R(n−1) and to the last resistor Rn. The free end of resistor R0is connected to a DET output of bus control unit MIC. The free end of resistor Rn is connected to a DETB output of bus control unit MIC. Between two adjacent resistors there are respective taps. The tap between R0and R1is connected to bus unit SLC1input/output pin DET (DETermine). The tap between R1and R2is connected to a bus unit SLC2(not shown, see further bus units10) and so on. The final tap between resistor R(n−1) and Rn is connected to the last bus unit SLCn on the bus DHIB. The ends of chain4may be connected to pins DET, DETB on a bus control unit MIC or on the MCU. LED groups of three LEDs red R, green G and blue B may be coupled to each bus unit SLC respectively. It is possible to control the LED groups and the LEDs within one group independently from the LEDs of other groups or of other LEDs within the same group. The bus system BS includes: one bus control unit MIC (MIC—Master Interface Controller) in short MIC, bus units SLC1to SLCn (SLC subordinated or slave controller) in short SLC, for instance between 100 and 200 SLCs or 250 SLCs, and the bus DHIB (Differential Host Interface Bus) in short DHIB. The bus DHIB comprises two bus wires D+, D−. Bus wire D+ is for the transmission of the logical positive signal, i.e. it signals a logical 1 with positive potential. Bus wire D− is for the transmission of the negative (logical inverse) signals of the differential signal. The bus units SLC1,10to SLCn are conductively connected to the bus wires D+ and D− in parallel connection. This means that all other bus units SLC will still work even if one bus unit SLC does not work properly or does not work at all. Furthermore, output device2comprises a processor unit MCU (Microprocessor Control Unit) or in short MCU. Between the MCU and the bus control unit MIC there is an SPI (Serial Peripheral Interface) bus20, seeFIG.4for more details. Furthermore, there are control lines22between the MCU and the bus control unit MIC. Control lines22are also explained in more detail with regard toFIG.4below. There is an interface24, for instance USB (Universal Serial Bus), Bluetooth etc., between the MCU and a further MCU and/or a main processor of a computer. Interface24is used to transmit data that sets output states of output elements of output device2from the main processing unit. Optionally, input data from input elements on bus units SLC may be read and sent to the main processor. There are two bus termination units12,14at the ends of bus DHIB for line termination, i.e. in order to prevent reflection of signals at the end of the wires D+ and D−. Such reflection would interfere with the transmitted signals. A power unit16generates the power, i.e. the power potential Utt, for bus termination units12,14. The relevant voltage is derived from ground GND potential and positive potential Vdd. There is an enable line26from MCU to power unit16that enables or disables power generation for potential Utt, i.e. 
for the potential that is relevant for the powering of the line termination units12and14. This may be used for energy savings. Due to biasing, termination may always use two potentials. While usually the negative one is GND and the positive one is Utt, there may be applications where it is necessary to move the potentials either further apart (for instance for a very large DHIB) or closer together (for instance for low-power tweaking), which in both cases will result in two distinct termination voltages Utt+ and Utt−. FIG.2shows sub units of the bus control unit (MIC):a state engine200of bus control unit MIC that controls the functions of the MIC,a receiving unit M6for receiving data and commands from bus DHIB,a sending/transmitting unit M7for sending data and commands to the bus DHIB,a match and general control unit M8that is used for implicit addressing and for general control,an interface unit M9that comprises an interface to and from the processor unit MCU, seeFIG.4for more details,a tristate differential driver TDD0with special state driving (OOB out of band) signaling. The two outputs of TDD0are connected to bus wires D+ and D−.a differential receiver DR0with special state detect. The two inputs of DR0are connected to bus wires D+ and D−.a DET control unit204having a first output pin DET that is connected to R0of chain4and a second output pin DETB that is connected to the last resistor Rn of chain4enabling the MIC to set the ends of chain4to low and high during allocation of addresses to SLCs as described in more detail at the end of the description.an address and match unit206that is used for addressing and that comprises an address register LBAR0(Local Bus Address Register, address register relevant for bus DHIB) and a counter register IAAR0(Imminent (upcoming) Address Access Register) as well as a match/compare unit800. Only the [IAAR] counting may be specific to implicit addressing. The addressing unit as a whole may not be optional, but may be necessary to implement a means of distinguishing the bus stations. Only the IAAR may be definitely optional and LBAR may also be optional, if some sort of “hard wiring” (preprogramming) of the address is used. There are the following connections between the units of MIC:data output line210for data transmitted to bus DHIB arranged between sending/transmitting unit M7and input of driver TDD0,a control line212that is between sending/transmitting unit M7and the control input of driver TDD0,a data input line214for data received from bus DHIB arranged between the output of receiver DR0and receiving unit M6,a control line216from receiving unit to a control input of receiver DR0,SPI interface lines20between processor unit MCU and interface unit M9, seeFIG.4for more details,a local addressed data bus240that may comprise a data bus and an address bus separated from each other or multiplexed. Bus240is between state engine200, sending/transmitting unit M7and match and general control unit M8.control lines244between receiving unit M6and unit M8,a match control line246between unit M8and state engine200for the signaling of a match of addresses LBAR0, IAAR0in match unit800. 
Furthermore, bus control unit MIC comprises:an exception signaling unit300having two inputs connected to bus DHIB and being able to detect or to initiate out of band signaling (OOB),a data buffer register302for intermediate storing of data tokens received via bus DHIB,a bus gate unit310for enabling data transfer from receiving unit M6, received token bus326b, to command token and address bus326abetween either receiving unit M6or command token generator (CTG) unit and internal arbitration unit900which is part of interface unit M9and state engine200, i.e. for preventing transmission conflicts. This is an enable gate unit310. The other source of command tokens is the Command Token Generator (CTG,900) under control of the SPI engine902. The state engine200is a pure sink for the commands, i.e. a mere execution unit. Nevertheless state engine200selects the source to obtain the next command queued in from: If a command from SPI engine902is pending state engine200selects CTG (900) as source and on demand even can actively terminate the present command to execute the one from the SPI engine902. In most modern FPGA (Field Programmable Gate Array) and ASIC (Application Specific Integrated Circuit) implementations “busses” may not be implemented by separate transceivers for each source, but by a multiplexer which may intrinsically prevent conflicts.a bidirectional signaling line320between exception signaling unit300and state engine200. For easier implementation this may be a three line point to point bus, not just one line:Enable (exception out) signal to the OOB (out of band—signaling) driver, i.e. exception signaling unit300,OOB signal state indicator (exception in) to the state engine200, andOOB data line (bidir).a comma or separator signaling line322from receiving unit M6to state engine200,the command token and address bus326afor the transmission of command tokens from receiving unit M6or the command token generator (CTG) unit and internal arbitration unit900to the state engine902.the received token bus326bfor the transmission of received tokens from interface unit M9to state engine200and of data and address tokens from receiving unit M6via data buffer register302to local addressed data bus. Any token may consist of 8 bit and may be flagged by a ninth one either as data or as command. An address token thereby may be a data token that due to the preceding command is going to be interpreted as an address or as extension of a command (flags, etc.) by “addressing” a sub-command. Thereby addresses may mainly be handled by the data paths. They may just be interpreted differently due to the control exerted by the state engine. Therefore most address tokens may just be transferred to the [IAAR] or another address related register.a data token bus328for the transmission of data tokens from receiving unit M6via data buffer register302to local addressed data bus240. This data token bus328may also be a branch of received token bus326brather than command token and address bus326asince on this bus data tokens which are not being interpreted as command extension only can originate in receiving unit M6. 
This may be determined by implementation needs.status and control line(s)330between state engine200and data buffer register302,a dummy clock enable line332from state engine200to sending/transmitting unit M7for controlling the generation of dummy clock data on bus DHIB,control lines333from state engine200to sending/transmitting unit M7and match and general control unit M8for general control purposes,a command token bus line334from state engine200to sending/transmitting unit M7for the transmission of command tokens that shall be transmitted via bus DHIB to the SLCs,a synchronization clock line342that transmits a clock signal to other units of MIC especially while receiving data via bus DHIB. The clock signal is generated inside receiving unit M6.a bus line350between match and general control unit M8and DET control unit204for transmitting data that sets high or low state at the DET and DETB pins of control unit204. FIG.3shows sub units of a bus unit (SLC), for instance of SLC1. There are the following similarities between the MIC shown inFIG.2and the SLC1shown inFIG.3. With regard to the connection of these elements reference is made to the respective elements that have been described with regard toFIG.2above. The corresponding elements are shown in round brackets: state engine400(SLC) (200MIC), receiving unit M6a(M6), transmitting unit M7a(M7), match and general control unit M8a(M8), DET control unit404(204), address and match unit406(206), address register LBAR1(LBAR0), counter register IAAR1(IAAR0), match unit802(800), tristate differential driver TDD1(with special state driving) (TDD0), differential receiver DR1(with special state detect) (DR0), data output line410(to bus) (210), control line412(212), data input line414(from bus) (214), control line416(216), local addressed data bus440(data bus and address bus separate or multiplexed) (240), match control line446(246), exception signaling unit500(300), data buffer register502(302), signaling line520(320), comma signaling line522(322), data token bus528(328), status and control line530(330), dummy clock enable line532(332), control lines533(333), command token line534(334), synchronization clock line542(342), connection lines550(350). There are the following differences:address registers LBAR1to LBARn and counter registers IAAR1to IAARn are mandatory in SLCs,the DET control unit404does not have a second input/output pin, i.e. DETB,an optional switch sample unit409athat is coupled to key switch SW1and that determines how deep key switch SW1is pressed down,an LED control engine409bthat is coupled to one, two or three LEDs, i.e. a red one R, a green one G and a blue one B, or to more than three LEDs,a command token and address bus526between receiving unit M6aand state engine400. There is no bus gate unit in the SLC corresponding to bus gate unit310. Furthermore, there is no bus that corresponds to bus326abecause there is no interface unit M9in SLCs.connection lines552from unit M8ato switch sample unit409aand to LED control engine409. It is for instance possible to transmit the state of control flags via lines552. Furthermore, there is a second part M8bof match and general control unit M8of SLC, SLC1comprising:a register560(ILPCDR—Intermediate LED (light emitting diode) PWM control register) for controlling PWM (pulse width modulation) of the LEDs R, G and B,a register562(ILDCDR and LSTAT—Intermediate LED dot correction control register and LED status register) for controlling further functions of the LEDs, i.e. 
bin correction, on/off etc., andan optional register564(ISSOR—Intermediate switch sample output register) that stores the sample value that is sampled from switch SW1for instance using an ADC. There is a connection line570between register560and LED control engine409b. A further connection line572is between register562and LED control engine409b. A third optional connection line574is between register564and switch sample unit409a. All three registers560,562and564are also connected to local addressed data bus440, i.e. register560for write access, register562for read or write access and register560for read access. Further registers DCR0to DCR3of match and general control unit M8aand M8bwill be described below. The receiving unit M6, M6amay include:an edge detector and filter unit that receives its input from receiver DR0or DR1,a clock recuperation and synchronization unit that may receive its input from the edge detector and filter unit,a phase alignment unit that may receive input from receiver DR0or DR1and from clock recuperation and synchronization unit,a 10 bit shifter unit that may be coupled to the phase alignment unit,a history buffer that may store the previously received symbol,a modified 8b/10b decoder, the optional modifications may be made with regard to a decoder as described in the article of A. X. Widmer, Peter A. Franaszek, “A DC-Balanced, Partitioned-Block, 8B/10B Transmission Code”, IBM J. RES. DEVELOP., Vol. 27, No. 5 Sep. 1983, pp. 440 to 451, and in the literature listed at the end of this article. Some of the modifications will be explained below in more detail. The modified 8b/10b decoder may receive its input from the 10 bit shifter and from the history buffer.a comma detection unit that detects the comma separator of the frames transmitted on bus DHIB and signals its presence to the respective state engine200or400. The comma detection unit may be closely coupled to the modified 8b/10b decoder.a command detection unit for detecting commands that have been transmitted via bus DHIB. An output of the clock recuperation and synchronization unit may output a synchronization clock on line342or542for other units of the MIC or SLC. Furthermore clock recuperation and synchronization units may be coupled to control lines244(544). The command detection unit may be coupled to received token bus326b(526). The transmitting unit M7, M7amay include:a data out buffer and special code insertion unit,an out FIFO unit that may store 4 tokens for example and that receives its inputs from the data out buffer and special code insertion unit,a modified or not modified 8b/10b encoder unit that receives its input from the out FIFO unit, anda 10b (bit) output shifter unit that receives its input from the modified 8b/10b encoder. The local addressed data bus240or440is connected to the input of data out buffer and special code insertion unit which also receives command tokens via command token line(s)332respectively532. Dummy clock enable line332is also connected with data out buffer and special code insertion unit. The output of the 10b output shifter unit is connected with the input of driver TDD0or TDD1. All units except the FIFO unit are controlled by the control lines333. The match and general control unit M8comprises the registers that are mentioned in the following in addition to the registers LBAR0(Local Bus Address Register, address register) and IAAR0(Imminent Access Address Register, counter register) as well as the match unit800. 
The match and general control unit M8aalso comprises the registers that are mentioned in the following in addition to the registers LBAR1(Local Bus Address Register) and IAAR1(Imminent Access Address Register, counter register) as well as the match unit802:register DCR0that has a bidirectional connection to DET (Determine) control unit204or to DET control unit404,register DCR1that is connected with lines552in unit M8a. These may be several lines carrying the control bits from [DCR1]: enable, mode bits, test flags, etc.register DCR2that is connected with control lines244,544, andregister DCR3that may be used for other purposes. Local addressed data bus240,440is connected bidirectional, i.e. for sending and receiving, to all four registers DCR0to DCR3in both units M8and M8a. Control lines244,544may carry control bits, mostly clock mode controls, from DCR2to receiving unit M6and M6aand allow the read back of some status bits from the receiving unit M6, M6a. FIG.4shows sub units of an interface unit M9within the bus control unit (MIC). The interface unit M9includes:a second part900of state machine/engine of bus control unit MIC,an SPI (Serial Peripheral Interface) engine902that is available in the market,a command and data separator unit904,an input FIFO906(write W-FIFO—First In First Out),an output FIFO908(read R-FIFO),a command token generator (CTG) unit and internal arbitration unit910creating internal command tokens to be executed by the state machine200upon receiving a transfer from SPI engine902for the DHIB or for local register access. Some very basic commands may be directly processed by the CTG by arbitrating internal control lines, for example “hard” resetting the chip. Since the state engine200is built for processing DHIB commands, any command coming in via SPI engine902may be translated into an appropriate local command token, which will be executed the normal way by the state engine200, like in an SLC. In order to distinguish those locally created tokens from those received via the DHIB, tokens may be used that have no legal symbol encoding on the DHIB, but nevertheless share most of the bit pattern with their functional DHIB equivalent. In execution there is no difference except for the data flow: Commands transferring data to DHIB may use the W-FIFO as data source instead of the register file of M8, while commands transferring data from DHIB may use the R-FIFO instead of the register file. Local transfers (between the local register file and the SPI engine902) may replace receiving unit M6and sending/transmitting unit M7by the appropriate FIFO. However, a few commands may not fit into this scheme like “RESET”, local power down and unlocking setup bits that in their present state are explicitly protected from changing by a DHIB access. 
These commands may be directly executed by the CTG by directly arbitrating the appropriate control lines.a bus gate912between the output of unit910and command token and address bus326a,an exception output line/EXCP as part of control lines22,a “ready” output line/Ready as part of control lines22a “wait” output line/Wait as part of control lines22an “enable” input line/EN as part of the standard SPI interface20,a clock line SCLK as part of the standard SPI interface20,an input line MOSI as part of the standard SPI interface20,an output line MISO as part of the standard SPI interface20,a transaction indicator line920between SPI engine902and unit904indicating a continuous transaction,a clock line922between SPI engine902and unit904,a start signaling line924between SPI engine902and unit904,a data line926between SPI engine902and unit904,a clock line930for R-FIFO908between SPI engine902and output FIFO908,a data output line932of R-FIFO908connected with an input of SPI engine902,an input clock line940of input or W-FIFO906coming from command and data separator unit904,a data input line942of input or W-FIFO906coming from command and data separator unit904,an error signaling line950(FF_Err) coming from FIFOs906,906and going to the second part900of the state engine of the bus control unit MIC signalling an overflow or underflow,an output clock line960of W-FIFO906going to second part900of state engine,an input clock line962of R-FIFO908coming from second part900of state engine,a bus wait line964coming from output NE (Nearly Empty) of input FIFO906and from output NF (Nearly Full) of output FIFO908and connected to second part900of state machine, i.e. forming a signal DHIBFF_Wait. InFIG.4, these lines are shown as a “wire or” which may not be available in modern chips any more. So the creation of DHIBFF_Wait probably may be implemented using a “real” or gate.an output enable/disable line966connected to a respective input of W-FIFO906for controlling and synchronizing data output to the local addressed data bus240,a control line970(WFF_NFull) coming from a respective control output of input FIFO906and going to the second part900of state engine for signaling that input FIFO906is nearly full,a command signal line980from command and data separator unit904to command token unit and internal arbitration unit910,a control line990(SPI_Pend) from unit910to second part900of state engine200for signaling that SPI data has been received, andbus gate control line992from second part900of state engine200to bus gate912for opening or closing this electronic gate912. Bus gate control line992is also connected to bus gate310, seeFIG.2. Local addressed data bus240is also connected with data output of input FIFO906and with data input of output FIFO908. Alternatively, it is possible to use a parallel bus system in bus system BS for parallel data transmission but without separate address lines and without using multiplexing of data and address bus between bus units SLC and/or bus control unit MIC. Moreover, the bus system may be replaced by wireless connections in other application examples of implicit addressing, for instance for a light chain or for an input arrangement, especially a keyboard. FIG.5illustrates a method500for implicit addressing. The method starts in method step S502. There are preparing steps that have to be executed only once, i.e. 
step S504:providing S504within a first unit SLC1and within a second unit SLC2respectively a counter unit IAARn, a comparison unit802and a storing unit LBARn for the storage of an identifier, i.e. the SLCs have to be produced. Step S506is a preparing step that has to be performed only once if programming of the identifiers or addresses is used. Step S506has to be performed after each power on if flexible address/identifier allocation is used, using for instance chain4of electronic elements. Examples for this allocation step are described in more detail in the last part of this application, for instance using analogue-digital-converters in SLCs or using Schmitt-trigger-circuits. In the example the following steps are performed:allocating S506a first identifier to the first unit SLC1, for instance 1, andallocating S506a second identifier that is different from the first identifier to the second unit SLC2, for instance 2. Step S508has to be performed before each block read access or block write access to several SLCs if not all SLCs should be included within the access, i.e. step S508is optional. Using for instance a broadcast command the MIC informs the SLCs of a start value for the counter registers IAAR. The start value may be zero or one depending on the implementation, i.e. for starting with the first SLC. Alternatively, it is possible to select any other SLC as the first for the next block/bulk access. There are several methods for restricting a bulk access to only a part of the SLCs:defining a standard value, for instance 10, 20 etc., i.e. 10 SLCs etc. are accessed in a bulk access that relates only to a part of the SLCs.communicating an end value for the counter registers IAAR, see optional Step S510, orcommunicating a number of SLCs that have to be included. If a mixed block read/write is prepared, it has to be specified from which SLCs data has to be read and to which SLCs data has to be written. A separate preparation block write access may be used for this purpose, i.e. a write access to all bus units SLC or to only a part of the bus units SLC. In the example the following steps are performed:setting S508the same counter value in the counter units of both units SLC1, SLC2, for instance the value 1. After step S508/S510step S512is performed by the MIC. The bus control unit MIC sends a block read command or a block write command or a mixed read/write command to the SLCs. In an implementation concept the bulk commands may be addressed, i.e. containing the start address, meaning the start address does not have to be set beforehand and that bulk commands are not broadcasted since all addressed commands (bulk and single) may set the IAAR of all SLCs. After setting S508the counter values and after receiving the command, SLC1and SLC2perform the following steps:comparing S520the counter value in the first unit SLC1to the first identifier and comparing S530the counter value in the second unit SLC2to the second identifier,based on equality (1 equal 1) of the comparison in the first unit SLC1sending S522of first data from the first unit in case of a block read command or S522assigning of first data to the first unit SLC1in case of a block write command. 
Data is transmitted via bus DHIB in both cases.based on inequality (1 unequal 2) S531of the comparison in the second unit SLC2no sending of data or assigning of data to the second unit SLC2,after step S522and S532each SLC, for instance SLC1and SLC2check whether the block access is finished or not,after steps S522and S532the counter value within counter register IAAR is counted up S526, S536or down in both units SLC1, SLC2. There are several possibilities for implementation of steps S524and S534:a flag may be set in the last bus unit of all bus units, i.e. in bus unit SLCn, or alternatively in the last bus unit that is involved in a partly block access. This may be done by the bus control unit MIC during allocation for SLCn, see step S506, or during preparing the block write,comparing current counter value in register IAAR with an end value that was transmitted earlier,counting the number of loops that have performed and comparing this number to a predefined number or to a number that was transmitted earlier. At the moment it is assumed that the end is not reached yet. The method steps that are performed independently within each SLC are now repeated, i.e. there is a loop of method steps S520to S526in SLC1and of method steps S530to S536in SLC2, and in further SLCs, see steps S550. Within the second loop the following steps are performed:after the first counting up S526, S536or down comparing S520the counter value in the first unit SLC1to the first identifier and comparing S530the counter value in the second unit SLC2to the second identifier,based on inequality (2 unequal 1) of the comparison in the first unit SLC1no sending S521of data from the first unit or S521no assigning of data to the first unit SLC1,based on equality (2 equal 2) S531of the comparison in the second unit SLC2sending of second data, i.e. as part of a block read access, or assigning of second data as data that is dedicated to the second unit SLC2, i.e. as part of a block read access. Data is transmitted via bus DHIB in both cases.further counting up S526, S536or down of the counter value in both units SLC1, SLC2. The loops of method steps S520to S526, S530to S536are left in step S524, S534, etc. if the end of block access is reached. There may be a flag that is set in all bus units SLC indicating whether the respective bus unit is the last unit with regard to a bulk access. The flag has for instance the value 1 if the respective SLC is the last one. Therefore, it is possible to check this flag in all bus units. If this flag is set and if the bus unit in which this flag is set has already performed the bus access it is clear that the end of bus access is reached. Therefore the last bus unit may inform the other bus units and optionally also the MIC about the end of block access. An implementation may for instance use a RELEASEF message instead of a RELEASE message that has to be used if the flag is not set and if there was a bus access of the unit that checks its flag. RELEASEF may also have the advantage, that the SLCs do not have to find out themselves if the transaction is complete by actively checking for command tokens on the DHIB. So using RELESEF may reduce necessary state engine complexity. If it is assumed that only the two bus units SLC1and SLC2are involved in a block access to only these two units the loops S520to S526, S530to S536are left after the second time. This means that the flag indicating the last unit is set in SLC2in the example. The method ends therefore in step S540. 
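The loop just described can be summarized in a few lines of code. The following Python sketch is purely illustrative of the implicit-addressing principle of FIG. 5: the class and method names are hypothetical, and the DHIB framing, the block read case, and the last-unit flag or RELEASEF handling are omitted.

```python
# Hedged sketch of the implicit-addressing block write loop (steps S508 to S536).
class Slc:
    def __init__(self, identifier: int):
        self.lbar = identifier      # allocated identifier (steps S504/S506)
        self.iaar = 0               # counter register
        self.data = None

    def set_counter(self, start: int):
        self.iaar = start           # broadcast start value (step S508)

    def block_write_slot(self, token):
        """One transfer slot of a block write (steps S520/S530 to S526/S536)."""
        if self.iaar == self.lbar:  # comparison: counter value vs. identifier
            self.data = token       # only the matching SLC takes the token (S522)
        self.iaar += 1              # every SLC counts up (S526/S536)

# Block write of three tokens to SLC1..SLC3 without an explicit address per token.
slcs = [Slc(identifier=i) for i in (1, 2, 3)]
for s in slcs:
    s.set_counter(1)
for token in ("red", "green", "blue"):   # one token per transfer slot
    for s in slcs:
        s.block_write_slot(token)
assert [s.data for s in slcs] == ["red", "green", "blue"]
```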
Further SLCs may perform the same steps as SLC1and SLC2, see method steps S550, if more than two SLCs are involved. The number of involved SLCs depends on the kind of block access that is performed, i.e. all SLCs or only a part of the SLCs. Alternatively, it is possible to use the same method500with a parallel bus system for parallel data transmission but without separate address lines and without using multiplexing of data and address bus between bus units SLC and/or bus control unit MIC. The bus system may be replaced by wireless connections in other application examples of the method500of implicit addressing. FIG.6illustrates a 2D (two dimensional) display600. The display600comprises a bus system BS2which is the same as bus system BS described with reference toFIGS.1to4above. However, no input switches SW1etc. are used. A Cartesian system of coordinates602comprises a horizontal x-axis604and a vertical y-axis606. The bus system BS2in display600has straight sections610to616extending along the x-axis604and each comprising four SLCs. These straight sections610to616are parallel with regard to each other and are connected by intermediate sections that do not include SLCs. Further sections618of bus system BS2are indicated by dots. Thus, bus system BS2extends in display600in a meandering way within a plane. InFIG.6the following straight sections610to618are shown:straight sections610including SLC1to SLC4(from right to left), i.e. optical output elements OE1to OE4,straight sections612including SLC5to SLC8(from left to right), i.e. optical output elements OE5to OE8,straight sections614including SLC9to SLC12(from right to left), i.e. optical output elements OE9to OEl12, andstraight sections616including SLC13to SLC16(from left to right), i.e. optical output elements OE13to OE16. There are equal distances between adjacent bus units/output elements, for instance between OE1and OE2, OE2and OE3, and so on. Alternatively, there may be applications with unequal distances. However, the 2D display600may comprise more than 100, more than one thousand or even more bus units in order to have the resolution that is necessary for the respective application. Connection lines620connect the MIC and the MCU of bus system BS2within display600. The MCU may be connected to a local or remote computer system by an interface622, for instance an USB (Universal Serial Bus), Bluetooth, LAN (Local Area Network), WLAN (Wireless LAN), WIMAX (Worldwide Interoperability for Microwave Access) etc. Optical output elements OE1to OE16etc. comprise only one LED for a monochrome display. Alternatively optical output elements OE1to OE16etc. comprise groups of three LEDs, for instance a red R one, green G one and blue B one for a color display. There may also be more than three LEDs in each of the groups. 2D Display600may be used to display more static information as in industrial control panels, displays in stadiums, displays in bus or train stations or airports etc. Alternatively, 2D display600may be used to display 2D light shows for advertising, for instance. More sophisticated displays600may show movies, TV (television) programs, video data, blue ray data etc. More sophisticated displays600may comprise further bus systems626, for instance for each section610to616etc. a separate bus system. The first bus system would comprise only the SLCs and OEs of the first straight section610in this case, i.e. SLC1to SLC4and further SLCs that are not depicted. The second bus system is shown in more detail, i.e. 
comprising a bus control unit MIC2that has the same functions as the first bus control unit MIC, seeFIGS.1and2and corresponding description. Connection lines, for instance624, of further bus systems626may be connected to the MCU of the first bus system or to their own MCU. In the last case the MCUs of the bus systems may be connected to a super ordinated MCU. This super ordinated MCU may have an interface to a local or remote computer system. Alternatively, the MCUs of each bus system may have their own interfaces. There may be units that allow synchronization of transmitted data for the case in which several bus systems are used within the same display600. However this synchronization is optional. Alternatively, it is possible to use a parallel bus system in display600for parallel data transmission but without separate address lines and without using multiplexing of data and address bus between bus units SLC and/or bus control unit MIC. The bus system may be replaced by wireless connections in other display application examples. There may be applications for 3D displays using a stack of 2D displays. Alternatively, bus system BS2may extend within several two dimensional planes of a 3D display that are arranged in parallel to the plane that is shown inFIG.6, i.e. the x-y-plane. A two dimensional (2D) field of sensors or a three dimensional (3D) field of sensors may be produced correspondingly to a 2D display, for instance600, or 3D display replacing the optical output elements by sensors and using read commands instead of write commands, especially bulk/block commands. FIG.7illustrates a bus system BS3with sensors and actuators. Bus system BS3is the same as bus system BS described with reference toFIGS.1to4above. However, no input switches SW1or LEDs etc. are used. Line termination units12,14may be optional, for instance for short busses DHIB, as well as chain4of electronic elements, especially with nonvolatile addressing. Bus system BS3may be part of a heating and/or ventilation and/or air condition installation BS3within a building700. However the heating and/or ventilation and/or air condition installation BS3may also be installed within an airplane, a ship or within a bus or car. The building700comprises windows702and one or more doors704. The building may have several floors, for instance between 5 and ten floors or more. Alternatively, building700may be a house for a family, e.g. a detached house, a semi-detached house, etc. However, building700may also be another type of building, for instance a factory, etc. Bus system BS3comprises lots of SLCs, for instance more than 10 or more than 100. Sensor elements and/or actuating elements are connected to respective bus units SLCa to SLCn, for instance:a temperature sensor710ais connected to SLCa,a smoke sensor710bis connected to SLCb,an actuating element710cfor a valve of a heating element is connected to SLCc,an actuating element710dfor a ventilation flap is connected to SLCd,a pressure sensor,a light sensor, anda sensor for measuring the speed of wind at the outside of the building. A connection712is used between bus unit SLCa and sensor710a. There are further connections between the other SLCs, i.e. SLCb, SLCc and SLCd and the sensor710b, the actuator710cand the actuator710drespectively. A central processor may be part of a control system of building700. The central processor communicates via the MCU with the MIC of bus system BS3. The control system operates the actuators710c,710detc. 
depending on the outputs of the sensors710a,710betc., e.g. heating elements are controlled using actuators (for instance710c) dependent on outputs of temperature sensors (for instance710a) in the same room as the respective heating element, if smoke is detected (using for instance sensor710b) ventilation flaps (using for instance actuator710d) are closed, etc. Alternatively, it is possible to use a parallel bus system in building700for parallel data transmission but without separate address lines and without using multiplexing of data and address bus between bus units SLC and/or bus control unit MIC. The bus system may be replaced by wireless connections in other building700application examples. Alternatively, bus system BS3may be used for operating the electronic circuits within a vehicle750, a ship or an airplane. The vehicle750may have four or more wheels752. The vehicle750may be a passenger car, a van, a bus, a truck or another type of vehicle. Bus system BS3comprises lots of SLCs, for instance more than 10 or more than 100. Sensor elements and/or actuating elements are connected to respective bus units SLCa to SLCn, for instance:a sensor760aof a steering device connected to SLCa,a sensor760bof a device for changing the velocity connected to SLCb,an actuation element760cfor changing the direction of movement connected to SLCc,an actuation element760dfor changing the velocity of movement connected to SLCd,a sensor of a device for indicating a change of direction,a sensor that measures a physical entity on a driving unit, preferably on a motor, andan actuation element for displaying a change of the direction of movement. A connection762is used between bus unit SLCa and sensor760a. There are further connections between the other SLCs, i.e. SLCb, SLCc and SLCd and the sensor760b, the actuator760cand the actuator760drespectively. A central processor may be part of a control system of the car. The central processor communicates via the MCU with the MIC of bus system BS3. The control system operates the actuators760c,760detc. depending on the outputs of the sensors760a,760betc., e.g. steering, braking, accelerating, etc. There may be a redundant bus system BS3bin addition to bus system BS3within the same vehicle/car750as well as a second, redundant central processor for safety reasons with regard to the health of the driver and the passengers of the vehicle. Alternatively, it is possible to use a parallel bus system in vehicle750for parallel data transmission but without separate address lines and without using multiplexing of data and address bus between bus units SLC and/or bus control unit MIC. The bus system may be replaced by wireless connections in other vehicle750application examples. The function of bus system BS is described in the following. The following terms are used as synonyms in the following: “station” for bus units SLCs and bus control units MICs connected to the bus DHIB and “block” for the respective unit. There are for instance the following methods for allocating addresses to bus units SLCs and/or to subordinated bus control units MICs at bus DHIB. First Method:using ADCs within the bus units SLC and/or within the subordinated bus control units MIC and a chain4of resistors R0to Rn,pull first end of chain4to low and pull second end of chain4to high potential,sample all taps of chain4at the same time, anduse sample values as part of addresses for the SLCs/subordinated MICs,optionally: read all possible addresses and rearrange in order to get address space without gaps. 
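Before turning to the variations of this first method, a minimal Python sketch may help to make the final, optional rearrangement step concrete: because the tap potentials are monotonic along chain 4, the sampled ADC codes can be ranked to yield a gap-free address space. The function name, the base address, and the example ADC codes are hypothetical; ADC resolution, noise handling, and the DHIB commands used to collect the codes are not modeled.

```python
# Hedged sketch of the first allocation method: ADC codes sampled at the taps
# are ranked to assign gap-free addresses in physical order along the chain.
def allocate_addresses(adc_codes: list, base_address: int = 0) -> list:
    """adc_codes[i] is the value sampled by SLC i at its tap while one chain end
    is pulled low and the other end high; sorting gives the physical order."""
    order = sorted(range(len(adc_codes)), key=lambda i: adc_codes[i])
    addresses = [0] * len(adc_codes)
    for rank, slc_index in enumerate(order):
        addresses[slc_index] = base_address + rank   # gap-free address space
    return addresses

# e.g. four SLCs sampling a 3.3 V divider with an 8-bit ADC (illustrative codes)
print(allocate_addresses([51, 102, 154, 205]))       # -> [0, 1, 2, 3]
```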
Second Method:same as first method but partitioning of address space is used in order to form partitions that allow sampling of the values on the taps of resistor chain4only for a segment/partition. SLCs in previous partition may pull taps to low and SLCs in following partitions may pull taps to high. The resolution of potential values in the respective “middle” partition is improved considerably reducing detection errors and influence of interference. This may be done for all segments/partitions. Third Method:same as second method but with using a uniting of two adjacent partitions combined with sampling of values only within the united partition. This may reduce further errors during the allocation of addresses. Fourth method: using Schmitt Trigger circuits on the taps of chain4of resistors R0to Rn. Fifth method: Using one of the first to fourth method and storing the addresses that have been allocated in a non-volatile memory for further use after allocation. Using the process flow shown inFIG.8A to8Ethe allocation goes on as shown in the following table. Z means a high ohmic output state on the DET pins of DET control units404of SLCs and subordinated MICs if any. The Schmitt trigger circuits may be centered to half Vdd and may have a range of for instance 0.8 Volt if Vdd is 3.3 Volt for instance. The letters A to D that are shown inFIGS.8A to8Eare also used in the following table in order to ease the orientation, i.e. the mapping between both kinds of descriptions for the same allocation method. The table has a left part, a middle part and a right part which have to be put together using the same line numeration. There is a command TSTPRES (<tstadr>) that was not mentioned above but which has the same function as the command RDREG (<tstadr>.[LADR]) that was mentioned above. Basically it replaces the RDREG(<tstadr>.[LADR]) and the subsequent decision must be replaced by a decision like “SLC found ?”. The decisions to be replaced are at the end ofFIG.5B(step ST11) and at the upper right ofFIG.5C(step ST14). Step ST14has to be replaced by TSTPRES (<tstadr>+1). The directly following decision has to be rewritten as “SLC found?”, i.e. step ST12and step ST15. Register R1refers to the DET control unit. The left bit stands for the pin value. A write to the DET pin sets the DET pin to the pin value of the left bit. A read to the DET pin reads the external to the left pin. The second bit from left is 1 for output mode and 0 for input mode. If input mode is active, i.e. the second bit is 0 this means that the DET pin is high ohmic connected to chain4, i.e. state “Z”. If the DET pin is in output mode, i.e. the second bit is 1 the DET pin is driven with the value set by the first bit. x0 (00 or 10): DET pin is in input mode, for instance step ST23, high ohmic, result of input read is 0 if DET pin is pulled high externally and 1 if it is pulled low externally. The output bit value (first bit) is ignored in input mode. A read always directly will read the external value. 01: output zero, for instance step ST8, 11: output one, for instance step ST10, ST27. The addresses of all SLCs are not shown in every line of the table. In order to ease understanding the addresses are mainly shown if there is a change in addresses. 
This is the left part of the table: 1Command (Symbolic)MarkCommand/StateDet InDet Out2Reset & Ends High111111ZZZZZZ34ANear End := Low0010115WRADR_E (<wrkadr>), <tstadr>WRADR_E (#FFFFh), #F000hWRREG (#F001h.Ra),6WRREG (<tstadr>+1.R1), #11...b#11...b011111ZZ1Z117TSTPRES (<tstadr>)BTSTPRES (#F000h)8FNear End := High111111ZZ1Z11910ANear End := Low011111ZZ1Z1111WRADR_E (<wrkadr>), <tstadr>WRADR_E (#F000h), #F002hWRREG (#F003h.R1),12WRREG (<tstadr>+1.R1), #11...b#11...b011111Z11Z1113TSTPRES (<tstadr>)BTSTPRES (#F002h)14FNear End := High111111Z11Z111516ANear End := Low111111Z11Z1117WRADR_E (<wrkadr>), <tstadr>WRADR_E (#F002h), #F004hWRREG (#F005h.R1),18WRREG (<tstadr>+1.R1), #11...b#11...b111111111Z1119TSTPRES (<tstadr>)BTSTPRES (#F004h)20TSTPRES (<tstadr>+1)CTSTPRES (#F005h)21WRADR(<tstadr>+1), <desta_cnt>WRADR(#F005h), #0000h22WRADR(<wrkadr>+1), <wrkadr>WRADR(#F003h), #F002h23(Rollback shelved)24WRADR(<wrkadr>+2), <wrkadr>WRADR(#F002h), #F000h25WRADR(<wrkadr>+1), <wrkadr>WRADR(#F001h), #F000hWRREG (#F000h.R1),26WRREG (<wrkadr>.R1), #00...b#00...b1111111ZZZZZ27FNear End := High (1stSLC)1111111ZZZZZ2829ANear End := Low (1stSLC)0001110ZZZZZ30WRADR_E (<wrkadr>), <tstadr>WRADR_E (#F000h), #F002hWRREG (#F003h.R1),31WRREG (<tstadr>+1.R1), #11...b#11...b0011110ZZ11132TSTPRES (<tstadr>)BTSTPRES (#F002h)33FNear End := High (1stSLC)1111110ZZ1113435ANear End := Low (1stSLC)0011110ZZ11136WRADR_E (<wrkadr>), <tstadr>WRADR_E (#F002h), #F004hWRREG (#F005h.R1),37WRREG (<tstadr>+1.R1), #11...b#11...b0011110Z111138TSTPRES (<tstadr>)BTSTPRES (#F004h)39FNear End := High (1stSLC)1111111Z11114041ANear End := Low (1stSLC)0111110Z111142WRADR_E (<wrkadr>), <tstadr>WRADR_E (#F004h), #F006hWRREG (#F007h.R1),43WRREG (<tstadr>+1.R1), #11...b#11...b00111101111144TSTPRES (<tstadr>)BTSTPRES (#F004h)45TSTPRES (<tstadr>+1)CTSTPRES (#F005h)46WRADR(<tstadr>+1), <desta_cnt>WRADR(#F007h), #0001h47WRADR(<wrkadr>+1), <wrkadr>WRADR(#F005h), #F004h48(Rollback to shelved)49WRADR(<wrkadr>+2), <wrkadr>WRADR(#F004h), #F002h50WRADR(<wrkadr>+1), <wrkadr>WRADR(#F003h), #F002hWRREG (#F002h.R1),51WRREG (<wrkadr>.R1), #00...b#00...b00011101ZZZZ52FNear End := High (2ndSLC)01111101ZZZZ53 This is the middle part of the table: 1Adr(SLC1)Adr(SLC2)Adr(SLC3)Adr(SLC4)Adr(SLC5)Adr(SLC6)2#FFFFh#FFFFh#FFFFh#FFFFh#FFFFh#FFFFh34#FFFFh#FFFFh#FFFFh#FFFFh#FFFFh#FFFFh5#F000h#F000h#F001h#F000h#F001h#F001h6#F000h#F000h#F001h#F000h#F001h#F001h7891011#F002h#F003h#F001h#F003h#F001h#F001h121314151617#F005h#F003h#F001h#F003h#F001h#F001h18192021#0000h#F003h#F001h#F003h#F001h#F001h22#0000h#F002h#F001h#F002h#F001h#F001h2324#0000h#F000h#F001h#F000h#F001h#F001h25#0000h#F000h#F000h#F000h#F000h#F000h2627282930#0000h#F002h#F002h#F003h#F003h#F003h313233343536#0000h#F004h#F005h#F003h#F003h#F003h373839404142#0000h#F007h#F005h#F003h#F003h#F003h43444546#0000h#0001h#F005h#F003h#F003h#F003h47#0000h#0001h#F004h#F003h#F003h#F003h4849#0000h#0001h#F002h#F003h#F003h#F003h50#0000h#0001h#F002h#F002h#F002h#F002h515253 And this is the right part of the table: 1Commentwrkadrtstadrdesta_cnt2#FFFFh#F000h#0000h34567Some SLC with Adr.LSB:=0 ? → Here: Yes#F000h#F002h89101112ST of SLC1 still low !13Some SLC with Adr.LSB:=0 ? → Here: Yes#F002h#F004h141516With only one SLC in low end portion the Schmitt-T stays high171819Some SLC with Adr.LSB:=0 ? → Here: No20Some SLC with Adr.LSB:=1 ? → Here: Yes21#0001h22Rollback to shelved SLCs. Stop and assess, if wrkadr23becomes <#F000h#F000h#F002h242526272829303132Some SLC with Adr.LSB:=0 ? → Here: Yes#F002h#F004h333435363738Some SLC with Adr.LSB:=0 ? 
→ Here: Yes#F004h#F006h394041424344Some SLC with Adr.LSB:=0 ? → Here: No45Some SLC with Adr.LSB:=1 ? → Here: Yes46#0002h47Rollback to shelved SLCs. Stop and assess, if wrkadr48becomes <#F000h#F002h#F004h4950515253 The steps are repeated until all SLCs have their final address, i.e. in the example also SLC3to SLC6. At the end of the procedure some steps may be performed to clear some variables etc. Using the gist of the shown embodiment for the Schmitt trigger circuits and using the messages and tokens used in this embodiment it is possible for the person skilled in the art to realize also the first three methods for allocating addresses mentioned above without undue burden or effort. Although embodiments of the present invention and their advantages have been described in detail above, it should be understood that various changes, substitutions and alterations can be made therein without departing from the spirit and scope of the invention as defined by the appended claims. For example, it will be readily understood by those skilled in the art that many of the features, functions, processes and methods described herein may be varied while remaining within the scope of the present invention. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the system, process, manufacture, method or steps described in the present invention. As one of ordinary skill in the art will readily appreciate from the disclosure of the invention systems, processes, manufacture, methods or steps presently existing or to be developed later that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such systems, processes, methods or steps. It is possible to combine the embodiments of the introduction with each other. Furthermore, it is possible to combine the examples of the description of Figures with each other. Further, it is possible to combine the embodiments of the introduction and the examples of the description of Figures.
11860802
The foregoing and other features of the present disclosure will become apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. Understanding that these drawings depict only several embodiments in accordance with the disclosure and are therefore, not to be considered limiting of its scope, the disclosure will be described with additional specificity and detail through use of the accompanying drawings. DETAILED DESCRIPTION In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the figures, can be arranged, substituted, combined, and designed in a wide variety of different configurations, all of which are explicitly contemplated and made part of this disclosure. In embodiments lacking the improvements described herein, in which disk data of a workload such as an application of a virtual machine or container is external to a system that the workload, virtual machine, container, or volume group is running on, the system must move disk data into an underlying storage of the system in order to recover the workload, virtual machine, container, or volume group from the disk data. This arrangement needlessly consumes a lot of time, CPU cycles, and storage resources. Herein, in some embodiments, instant recovery refers to immediately restoring one or more workloads, virtual machines, containers, or volume groups by running the one or more workloads, virtual machines, containers, or volume groups from remote storage. The remote storage may include, but is not limited to, backup storage or files that are used for disaster recovery. In other embodiments lacking the improvements described herein, instant recovery is done without a hyper-converged infrastructure (HCI). Specifically, instant recovery can be done by mounting a network file system (NFS) export surfaced by the backup software as a data store on a host of a non-HCI hypervisor and booting the virtual machine from the non-HCI hypervisor. A separate software is used to move the virtual machine to the data store backed by the primary server. Herein, in some embodiments, a hyper-converged infrastructure (HCI) refers to compute resources (e.g., tightly) coupled to storage resources. In some embodiments, the HCI is a hypervisor (e.g., a software component) that is (e.g., tightly) coupled to a virtual disk controller (e.g., a software component). The hypervisor can provide (e.g., an abstraction for, virtualized, etc.) compute resources for workloads, substrates (e.g., VMs or containers), or both, and the virtual disk controller, which can be part of the distributed storage fabric, can provide (e.g., an abstraction for, virtualized, etc.) storage resources of the distributed storage fabric for the workloads, substrates, or both. Tightly coupled herein may refer to binding resources, or software controlling the resources, to a particular use case, a specific instance, a specific frontend, or, more generally, a specific purpose or function. 
In some embodiments, a native or proprietary protocol may be used to communicate between the hypervisor and the virtual disk controller. Each host on the cluster can include a hypervisor and a virtual disk controller. The virtual disk controller can expose the storage fabric (e.g., the storage resources) as being hosted on the same host that the workload or the substrate is hosted on. The compute and storage resources that are provided can be available cluster wide. Thus, the HCI can scale as more hosts are added, with data distributed across the storage resources of the cluster. Disclosed herein are embodiments of systems and methods for instantly recovering workloads (e.g., an application hosted on a substrate, wherein the substrate is either a virtual machine or a container), virtual machines, containers, or volume groups, workload disks, virtual machine disks, or container disks on the HCI from secondary storage. In some embodiments, I/O traffic is translated from a first protocol supported by the HCI to a second protocol supported by the secondary storage. In some embodiments, I/O traffic is forwarded to the secondary storage. In some embodiments, I/O traffic is absorbed locally. In some embodiments, data is brought in as demanded by the running workload or based on a background migration policy. Advantageously, embodiments disclosed herein enable near-zero recovery time objective (RTO) recovery from any storage that can surface a workload disk, virtual machine disk, container disk, or volume group as a data source (e.g., over NFSv3 or S3), including third party backup repositories, turnkey solutions (e.g., Nutanix Mine), or Amazon Snowball-like appliances. Passthrough instant recovery enables testing a product before moving to full scale and performing quick sanity checks before upgrading a product instance of an application. Passthrough instant recovery can be used to instantly create workloads, virtual machines, containers, or volume groups based on images, alleviating the need for having to copy images to individual clusters. Moreover, in some embodiments, database copy-and-paste operations (e.g., Era time machines) are no longer limited by the snapshots that are locally available. This model can allow for centralized administration of gold images and alleviates the need for having to copy images to individual clusters. In addition, instant recovery can be a complementary feature with disaster recovery (e.g., long-term storage service). Workload disks, virtual machine disks, container disks, or volume groups can be received and stored in a third-party/cloud storage system like S3. During a disaster recovery event, the workload disks, virtual machine disks, container disks, or volume groups can be instantly recovered from snapshots stored in a third-party/cloud storage system. FIG.1is an example block diagram of a network environment100, in accordance with some embodiments of the present disclosure. The network environment100includes a host101(e.g., node, computer, machine). The host101may be a part of a cluster105of nodes with access to a pool of resources (e.g., storage resources, compute resources). In some embodiments, the host101is an HCI host. The host101can be part of an HCI or HCI cluster. The host101includes one or more workloads such as workload102(e.g., application or service). The workload102can be hosted on a substrate103. The substrate103can be a virtual machine (VM) or a container. 
The host101includes a hypervisor104in communication with the workload102and a storage fabric108. In some embodiments, the workload102is running on (e.g., receiving compute resources from) the hypervisor104. In some embodiments, the hypervisor104launches/powers on the workload102or the VM/container hosting the workload102irrespective of whether the workload102has access (e.g., local access) to the workload disk data, virtual machine disk data, container disk data, or volume group data. In some embodiments, the hypervisor launches/powers on the workload102or the VM/container hosting the workload102before the workload102has received (e.g., all of) disk data (e.g., workload disk/image, virtual machine disk/image, container disk/image, volume group) from the data source111of the external repository110. The network environment100includes a storage fabric108, which may be referred to as a data plane. The storage fabric108may be distributed across a cluster105of hosts including the host101. The storage fabric may include, but is not limited to, non-volatile memory (NVM) such as non-volatile dual in-line memory modules (NVDIMM), storage devices, optical disks, smart cards, solid state devices, etc. The storage fabric108can be shared with the cluster105of hosts. In some embodiments, the storage fabric108includes direct-attached storage109. In some embodiments, the storage fabric108includes network-attached storage such as a storage area network (SAN). The network-attached storage may be storage hosted on a different host than host101. In some embodiments, the storage fabric108includes direct-attached storage109and network-attached storage. The storage fabric108includes a virtual disk controller116(e.g., a controller virtual machine, a controller container, an independent service, distributed storage fabric component, distributed storage fabric operating system, etc.). In some embodiments, the virtual disk controller116instantly recovers the workload/VM/container/volume group even though the disk data for the workload/VM/container/volume group is not locally available in the underlying cluster105(e.g., the host101and/or a plurality of hosts, the storage fabric108). In some embodiments, the virtual disk controller116live migrates disk data to the cluster105without interrupting the recovered workload/VM/container. In some embodiments, the virtual disk controller116includes instructions that, when executed by a processor associated with the virtual disk controller116, manage (e.g., adapt, convert, translate, generate, forward, encapsulate, de-encapsulate, etc.) input/output (I/O) traffic such as read requests, write requests, and responses to read/write requests. In some embodiments, the I/O traffic is in accordance with internet small computer systems interface (iSCSI) protocol/format. In some embodiments, the I/O traffic is in accordance with a native or proprietary (e.g., HCI) protocol/format. In some embodiments, the virtual disk controller116determines whether a destination of the I/O traffic is the storage fabric108or the external repository110. The virtual disk controller116includes a storage fabric adapter114. In some embodiments, the I/O traffic is received at the storage fabric adapter114. In some embodiments, the I/O traffic is in accordance with internet small computer systems interface (iSCSI) protocol/format. In some embodiments, the storage fabric adapter114terminates the I/O traffic. 
In some embodiments, the storage fabric adapter114strips/removes/de-encapsulates protocol-related information (e.g., headers, protocol-specific commands). In some embodiments, the storage fabric adapter114translates the I/O traffic from an iSCSI protocol/format to an HCI-native protocol/format. The virtual disk controller116includes a virtual disk115. The storage fabric adapter114forwards the I/O traffic to the virtual disk115. The virtual disk115can be a logical storage that is mapped to, backed by, abstracted from, or exposed from direct-attached storage109. The virtual disk controller116can deflect/forward the I/O traffic from the virtual disk115to the external repository adapter118. The virtual disk controller116includes an external repository adapter118. In some embodiments, the external repository adapter118is interposed on the I/O path between the virtual disk115and the external repository110. In some embodiments, the external repository adapter118receives the I/O traffic from the virtual disk (vdisk) controller116(e.g., via the virtual disk115). In some embodiments, the external repository adapter118adapts the I/O traffic to support communication of the I/O traffic with the external repository110. For example, the external repository adapter118adapts the I/O traffic to include protocol/format-type (e.g., NFS) primitives (e.g., read, write, and an indicator to determine a suitable I/O size to use for reading and writing). In some embodiments, the external repository adapter118forwards the adapted I/O traffic to the external repository110. In some embodiments, the vdisk controller116or the external repository adapter118can map the virtual disk115to a data source111of the external repository110. In some embodiments, the vdisk controller116or the external repository adapter118writes to or reads from the virtual disk115mapped to the data source111. The network environment100includes an external repository110. An external repository110is one or more storages/storage facilities/data sources, such as the data source111, that are external to the storage fabric108and the cluster105of hosts and that are managed by a third party. In some embodiments, the external repository110is one or more of a network file system (NFS) export, a server message block (SMB) share, a simple storage service (S3) bucket, etc. In some embodiments, the external repository110surfaces disks over different storage protocols (e.g., NFSv3, NFSv4, S3, or some proprietary application programming interface or API, etc.). An administrator (e.g., cluster administrator or virtualization administrator) of the cluster can create/specify/enable access to an external repository110through a first API call/command. The administrator can issue the first API call via a client device (e.g., the workload102, the hypervisor104, the software stack106, or anywhere on the host101where the API is being hosted). The administrator can specify/assign an identifier (e.g., name) to the external repository110. The administrator can specify a server where the external repository110is located. The administrator can specify a representation of the external repository110(e.g., a path of an NFS export, etc.). The administrator (e.g., virtualization administrator or backup administrator) or a privileged user (e.g., a user with privileges to create and update a workload configuration) can create/launch/power on a workload and attach the data source111as a virtual disk to the workload102through a second API call. 
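As a rough, purely illustrative sketch of the first and second API calls introduced above, the registration of an external repository and the attachment of a data source as a virtual disk might look as follows; the endpoint, paths, and field names are assumptions made only for the sketch and do not describe any particular product's control-plane API.

```python
# Hypothetical sketch only: endpoint, paths, and fields are illustrative assumptions.
import json
import urllib.request

CONTROL_PLANE = "https://control-plane.example.local/api"  # assumed endpoint

def post(path, payload):
    req = urllib.request.Request(
        f"{CONTROL_PLANE}{path}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# First API call: create/enable access to an external repository.
repo = post("/external_repositories", {
    "name": "backup-repo-1",                   # identifier assigned by the administrator
    "server": "backup.example.local",          # server where the repository is located
    "representation": "/exports/vm_backups",   # e.g., the path of an NFS export
})

# Second API call: attach a data source in that repository to a workload
# as a virtual disk, either as a passthrough or as a fast clone.
disk = post("/virtual_disks", {
    "workload_uuid": "4f1c0000-0000-0000-0000-000000000000",  # workload/VM being recovered
    "data_source_url": "nfs://backup.example.local/exports/vm_backups/vm1-disk0.raw",
    "device_index": 0,
    "device_protocol": "scsi",
    "mode": "fast_clone",                      # or "passthrough"
})
print(repo, disk)
```

The specific values an administrator or privileged user can supply for the second call are described next.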
The administrator or privileged user can specify a uniform resource locator (URL) of the data source111. The URL can be used to read from and write to the data source111. The administrator or privileged user can specify a virtual disk, including one or more of a VM disk universally unique identifier (UUID), a device index, a device protocol, an NFS file path, or a volume group UUID. In some embodiments, the second API call can specify the virtual disk as a passthrough or a fast clone, which will be discussed below. The control plane120, discussed below, can service the first and second API calls. In some embodiments, the second API call can be integrated/combined with the first API call. Referring now toFIG.2, a flowchart of an example method200is illustrated, in accordance with some embodiments of the present disclosure. The method200may be implemented using, or performed by, the network environment100, one or more components of the network environment100, a processor associated with the network environment100, or a processor associated with the one or more components of network environment100. Additional, fewer, or different operations may be performed in the method200depending on the embodiment. A processor (e.g., the workload102, the hypervisor104, the software stack106, or anywhere on the host101) receives an indication of/specifies/identifies an external repository (e.g., the external repository110) hosting a virtual machine image (e.g., virtual machine disk, virtual machine disk image), at operation210. In some embodiments, the virtual machine image is included in the data source. In some embodiments, the virtual machine image is a data source. In some embodiments, the indication of the external repository is received/specified/identified as part of a request to recover the virtual machine. In some embodiments, the processor receives an indication of/specifies/identifies the external repository by receiving/issuing a first API call. In some embodiments, receiving an indication of/specifying the external repository enables the processor, or a cluster on which the processor is on, to access data sources (e.g., the data source111) within the external repository. In some embodiments, the processor exposes the virtual machine image as a virtual disk. In some embodiments, the processor maps/links the virtual machine image to a virtual disk (e.g., virtual machine disk) that is located in the cluster that the processor is included in. In some embodiments, the processor specifies a URL of the data source when creating the virtual disk. The processor powers-on a virtual machine, at operation220. The processor attaches the virtual machine image to the virtual machine, at operation230. In some embodiments, the processor exposes and attaches the virtual machine image by issuing a second API call or as part of receiving/issuing the first API call. In some embodiments, the virtual machine image is attached to the virtual machine after the virtual machine is powered-on. In some embodiments, a first portion of the virtual machine image is attached to the virtual machine before the virtual machine is powered-on, and a second portion of the virtual machine image is attached to the virtual machine after the virtual machine is powered-on. Referring now toFIG.3, a flowchart of an example method300is illustrated, in accordance with some embodiments of the present disclosure. 
The method300may be implemented using, or performed by, the network environment100, one or more components of the network environment100, a processor associated with the network environment100, or a processor associated with the one or more components of network environment100. Additional, fewer, or different operations may be performed in the method300depending on the embodiment. One or more operations of the method300may be combined with one or more operations of the method200. A processor (e.g., the storage fabric adapter114, the vdisk controller116, the external repository adapter118, or a combination thereof) receives, from a hypervisor (e.g., the hypervisor104) or from a workload (e.g., the workload102) or substrate recovered/launched/powered on by the hypervisor, I/O traffic (e.g., read request, write request) programmed according to a first I/O traffic protocol (e.g., iSCSI or native protocol) supported by a cluster-wide storage fabric (e.g., the storage fabric108) exposed to the workload as being hosted on the same host that the workload or substrate is on, at operation310. In some embodiments, the workload or substrate is an instantly recovered workload or substrate. In some embodiments, the workload or substrate is recovered by a hypervisor hosted on a host of a cluster of hosts. In some embodiments, the storage fabric is tightly coupled to the hypervisor. In some embodiments, the storage fabric and the hypervisor form an HCI or HCI unit. In some embodiments, the storage fabric and the hypervisor are part of an HCI cluster (e.g., hyperconverged cluster of hosts). In some embodiments, the processor identifies that a destination of the I/O traffic is a repository (e.g., the external repository110) external to the storage fabric. In some embodiments, the processor exposes the repository as a virtual disk. In some embodiments, the processor exposes the data source of the repository as a virtual disk. In some embodiments, the processor identifies a virtual disk (e.g., the virtual disk115) from the I/O traffic and maps the virtual disk to a URL of a data source. In some embodiments, the processor identifies the URL of the data source from the I/O traffic. The processor adapts/converts/translates the I/O traffic to generate second I/O traffic programmed according to a second I/O traffic protocol supported by the repository external to the storage fabric, at operation320. The processor forwards the second I/O traffic to the repository, at operation330. In some embodiments, the data source111is passed through (e.g., exposed) as a first virtual disk to the workload102on the hypervisor104(e.g., the first virtual disk is created and mapped to the data source111). The first virtual disk can reside in, or be associated with, the virtual disk controller116. This type of data source may be referred to as a passthrough data source (e.g., passthrough external data source) and a process of reading/writing for a workload using a passthrough data source may be referred to as forwarding or passing through. In some embodiments, the processor (e.g., the external repository adapter118) translates a read/write request from a first protocol to a second protocol and forwards the read/write request (e.g., from the first virtual disk) to the data source111. In some embodiments, the processor adapts the I/O traffic in the first virtual disk to generate the second I/O traffic. 
In some embodiments, passthrough data sources are used for ephemeral virtual machines (e.g., ephemeral virtual machine disks) or CD-ROM optical disk images (ISOs). In some embodiments, the processor (e.g., the virtual disk controller116) creates one or more additional virtual disks (e.g., virtual machine disks) that are mapped/linked to the first virtual disk backed by the data source111. The one or more additional virtual disks can reside in, or be associated with, the virtual disk controller116. This type of data source may be referred to as a fast-cloned data source (e.g., fast-cloned external data source) and a process of reading/writing for a workload using a fast-cloned data source may be referred to as fast-cloning or absorbing. The virtual disk exposing the fast-cloned data source may be referred to as a parent vdisk and each of the one or more additional virtual disks may be referred to as a child vdisk. In some embodiments, the processor clones the I/O traffic from the first virtual disk (e.g., the child vdisk) to a second virtual disk (e.g., the parent vdisk). In some embodiments, the processor adapts the I/O traffic in the second virtual disk to generate the second I/O traffic. In some embodiments, the virtual disk controller116writes to the child vdisk (e.g., the child disk absorbs the writes to the fast-cloned data source, foreground write). In some embodiments, the processor clones the I/O traffic from the virtual disk (e.g., the child virtual disk) to a second virtual disk (e.g., the parent virtual disk). In some embodiments, the virtual disk controller116determines if reads can be serviced locally (e.g., if data on the fast-cloned data source is also located in storage/memory/cache in the storage fabric108such as the child vdisk). In some embodiments, if the virtual disk controller116determines that the reads cannot be serviced locally (e.g., if a requested range is unavailable), the external repository adapter118translates and forwards the read request to the data source111(e.g., from the parent vdisk). Otherwise, the data can be read locally in the child vdisk (e.g., foreground read). FIG.4is an example block diagram of the network environment100that illustrates forwarding requests, in accordance with some embodiments of the present disclosure. In some embodiments, forwarding can occur for a passthrough data source. At operation401, the workload102sends the request (e.g., an iSCSI request), via the hypervisor104, to the virtual disk controller116, which includes the storage fabric adapter114, the vdisk410, and the external repository adapter118. The workload102can be hosted on a virtual machine or container. The request may specify the vdisk410as the recipient, and the vdisk410may be backed by the data source111. At operation402, the storage fabric adapter114receives and terminates the request (e.g., removes any iSCSI-related headers, etc.). At operation403, the storage fabric adapter114forwards the (e.g., stripped-down) request to the vdisk410. In some embodiments, the storage fabric adapter114translates the request into a request in an HCI-native format that can be interpreted by the vdisk410. The request can pass through the vdisk410and the external repository adapter118can receive the request. At operation404, the external repository adapter118translates the request to a request that is supported by the external repository110. At operation405, the external repository adapter118forwards the translated request to the data source111. 
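A minimal sketch of the passthrough forwarding path of FIG.4 (operations401-405), and of the protocol translation of method300, might look as follows; the message classes, field names, and the zero-filled read placeholder are assumptions chosen only to make the adapter chain concrete, not an actual implementation of the storage fabric adapter or external repository adapter.

```python
# Illustrative sketch: simplified stand-ins for iSCSI, HCI-native, and NFS-style messages.
from dataclasses import dataclass

@dataclass
class IscsiRead:            # operation 401: request issued by the workload via the hypervisor
    target_vdisk: str
    offset: int
    length: int

@dataclass
class NativeRead:           # stripped-down, HCI-native form of the same request
    vdisk: str
    offset: int
    length: int

class StorageFabricAdapter:
    def terminate(self, req: IscsiRead) -> NativeRead:
        # operations 402/403: remove protocol-specific framing and forward to the vdisk
        return NativeRead(req.target_vdisk, req.offset, req.length)

class ExternalRepositoryAdapter:
    def __init__(self, vdisk_to_url):
        self.vdisk_to_url = vdisk_to_url          # virtual disk mapped to a data source URL

    def forward(self, req: NativeRead) -> bytes:
        # operations 404/405: translate to the repository protocol and forward to the data source
        url = self.vdisk_to_url[req.vdisk]
        return nfs_read(url, req.offset, req.length)

def nfs_read(url: str, offset: int, length: int) -> bytes:
    # Placeholder: a real adapter would issue an NFSv3 READ or S3 range GET here.
    return bytes(length)

fabric = StorageFabricAdapter()
adapter = ExternalRepositoryAdapter(
    {"vdisk-410": "nfs://backup.example.local/exports/vm1-disk0.raw"})
data = adapter.forward(fabric.terminate(IscsiRead("vdisk-410", offset=0, length=4096)))
```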
FIG.5is an example block diagram of the network environment100that illustrates servicing requests using a fast clone, in accordance with some embodiments of the present disclosure. In some embodiments, an I/O request is locally absorbed by the fast clone if the I/O request is (a) a read request in which the data is available locally, or (b) a write request. In some embodiments, forwarding to the external repository can occur for a read request on a fast-cloned data source where the data to be read is not available locally. The operations501and502are similar to the operations401and402ofFIG.4. At operation503, the storage fabric adapter114forwards a request to the child vdisk512. In some embodiments, the request includes instructions for whether to absorb locally or forward the request. Additionally or alternatively, in some embodiments, if the request is a read, the storage fabric adapter114may indicate to absorb the read locally. In some embodiments, if the child vdisk512returns, to the storage fabric adapter114, a cache miss for the requested data on a cache associated with the child vdisk512, the storage fabric adapter114sends instructions to the child vdisk512to forward the request to the parent vdisk510. If the request is forwarded to the parent vdisk510, then operations504and505are performed. The operations504and505are similar to the operations404and405ofFIG.4. In some embodiments, the virtual disk controller116includes mechanisms for protecting the storage fabric108. In some embodiments, the virtual disk controller116prevents zero-filling the empty regions or incomplete blocks of disks during writes and copy block map operations. In some embodiments, read ahead, deduplication, coalescing, and creation of merged vblocks are delayed until storage migration, which is discussed below, is complete. Herein, a vblock can refer to a block of a virtual disk. In some embodiments, the virtual disk controller116can handle issues arising out of inconsistent caches in various hosts of the HCI cluster due to managing data disks originating on external repositories. In some embodiments, on encountering an absence of vblock metadata, the range is forwarded to the root/parent vdisk. Using the feedback from the forwarded request, the cache can be cleared on the original node. In some embodiments, the virtual disk controller116clears the vblock map cache entries on a node before hosting a data source vdisk. In some embodiments, once the storage migration is complete, the virtual disk controller116clears the vblock map cache entries on nodes for data source vdisks before updating their configurations. In some embodiments, fast-cloned external data sources support foreground migration, background migration, snapshots, clones, and back-ups of virtual disks. In some embodiments, the processes passing through and fast cloning can be performed together (e.g., mirroring). In some embodiments, the virtual disk controller116live migrates disk data of the external repository110to the storage fabric108without interruption to the recovered workload102. In some embodiments, the storage migration is performed on page boundaries with a predetermined page size (e.g., 1 MB). For every vblock that is being fetched from the external repository110, there can be a metadata update. The fetched data can be written to the storage fabric108(e.g., an extent store). Advantageously, having metadata updates prevents future requests to that region from being forwarded to the external repository110. 
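The fast-clone servicing of FIG.5 might be sketched as follows, with an in-memory dictionary standing in for the ranges absorbed locally by the child vdisk; this is an illustrative assumption, not the actual child/parent vdisk implementation, and the callable passed in stands in for the parent-vdisk path to the external data source.

```python
# Hedged sketch: writes are absorbed by the child vdisk; reads are served locally when the
# range is present, otherwise forwarded via the parent vdisk to the external data source.
class FastClone:
    def __init__(self, read_from_data_source):
        self.local = {}                                   # offset -> bytes absorbed locally
        self.read_from_data_source = read_from_data_source  # parent vdisk / external repo path

    def write(self, offset: int, data: bytes) -> None:
        # Foreground write: absorbed by the child vdisk, never forwarded.
        self.local[offset] = data

    def read(self, offset: int, length: int) -> bytes:
        blk = self.local.get(offset)
        if blk is not None and len(blk) >= length:
            return blk[:length]                           # foreground read served locally
        # Cache miss: forward the read toward the parent vdisk / data source.
        data = self.read_from_data_source(offset, length)
        self.local[offset] = data                         # absorb so future reads are local
        return data

clone = FastClone(lambda off, ln: bytes(ln))              # parent path stubbed with zero-filled reads
clone.write(0, b"local")
assert clone.read(0, 5) == b"local"                       # absorbed locally, no forwarding
```

Storage migration of the remaining ranges is discussed next.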
In some embodiments, storage migration is performed if a predetermined amount of storage is available in the storage fabric108. In some embodiments, a number of pages migrated in parallel is controlled per data source. In some embodiments, storage migration (e.g., foreground migration) is initiated by read requests from the running workload102. The data source111can be represented by one or more regions (e.g., blocks, vblocks, extents, etc.). In some embodiments, a read request to the data source111causes a foreground migration of the region associated with the read request. In some embodiments, the virtual disk controller116makes a determination on whether to perform a foreground migration based on one or more of (a) a foreground read size, (b) a max read I/O size supported by the external storage system, (c) a current load on the extent store, or (d) available storage space in the cluster (a sketch of this determination appears after this passage). Referring toFIG.1, in some embodiments, the network environment100includes a control plane120. In some embodiments, the control plane120provisions passthrough and fast-cloned data sources in the cluster that the host101is a part of. In some embodiments, the control plane120administers foreground and background policies for fast-cloned data sources. In some embodiments, the control plane120manages a lifecycle of data sources. In some embodiments, the control plane120initiates/manages/drives storage migration (e.g., background migration). In some embodiments, managing the background migration includes setting policies for one or more of enabling background migration, suspending background migration, or resuming background migration. The control plane120and/or an administrator can control what state a background migration is in. The state of a background migration can include one or more of disabled, enabled, running, suspended, or completed. The network environment100may include (e.g., run on) one or more clusters of nodes (e.g., hosts, computers, machines). Each node may include one or more virtualized workloads (one or more virtual machines, containers, virtual disks, etc.) that run services/applications/operating systems by using storage and compute resources virtualized through the hypervisor (e.g., KVM, AHV, ESXi, Hyper-V, etc.) and distributed across the cluster. The cluster of nodes may be in one data center (on-premises), in a cloud (off-premises), or distributed across one or more of multiple data centers, multiple clouds, or a hybrid of data centers and clouds. At least one of the workloads (e.g., a controller virtual machine or container) in the cluster or one of the hypervisors may run core services that manage and maintain the cluster or the workloads. The core services may include a cluster manager, a health/wellness check manager, an I/O storage manager, and the like. In some embodiments, the core services manage multiple clusters. Each of the components (e.g., elements, entities) of the network environment100(e.g., the host101, the workload102, the hypervisor104, the software stack106, the storage fabric108, the virtual disk controller116, the storage fabric adapter114, and the external repository adapter118) is implemented using hardware, software, or a combination of hardware or software, in one or more embodiments. Each of the components of the network environment100may be a processor with instructions or an apparatus/device (e.g., server) including a processor with instructions, in some embodiments. 
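Returning to the foreground-migration determination described at the start of this passage, one hedged way to sketch it is shown below; the thresholds and field names are assumptions chosen only to make the four criteria (a) through (d) concrete.

```python
# Illustrative decision sketch; thresholds and names are assumptions, not values from the source.
from dataclasses import dataclass

@dataclass
class MigrationContext:
    foreground_read_size: int      # bytes requested by the running workload
    max_external_io_size: int      # max read I/O size supported by the external storage system
    extent_store_load: float       # 0.0 (idle) .. 1.0 (saturated)
    free_space_bytes: int          # available storage space in the cluster

PAGE_SIZE = 1 << 20                # 1 MB page boundary, as in the description

def should_foreground_migrate(ctx: MigrationContext) -> bool:
    if ctx.free_space_bytes < PAGE_SIZE:
        return False               # not enough room to absorb the page locally
    if ctx.extent_store_load > 0.9:
        return False               # extent store too busy; just forward the read
    # Only migrate whole pages that the external storage can serve in a single read.
    return ctx.foreground_read_size <= min(PAGE_SIZE, ctx.max_external_io_size)
```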
In some embodiments, multiple components may be part of a same apparatus and/or processor. Each of the components of the network environment100can include any application, program, library, script, task, service, process, or any type and form of executable instructions executed by one or more processors, in one or more embodiments. Each of the one or more processors is hardware, in some embodiments. The instructions may be stored on one or more computer readable and/or executable storage media including non-transitory storage media. The herein described subject matter sometimes illustrates different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected,” or “operably coupled,” to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being “operably couplable,” to each other to achieve the desired functionality. Specific examples of operably couplable include but are not limited to physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components. With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity. It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. 
However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to inventions containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should typically be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, typically means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, and C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to “at least one of A, B, or C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, or C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.” Further, unless otherwise noted, the use of the words “approximate,” “about,” “around,” “substantially,” etc., mean plus or minus ten percent. The foregoing description of illustrative embodiments has been presented for purposes of illustration and of description. It is not intended to be exhaustive or limiting with respect to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of the disclosed embodiments. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents.
37,357
11860803
DETAILED DESCRIPTION FIG.1illustrates a block diagram of a memory device according to an example embodiment. A memory device100amay relate to a processor in memory (PIM) or a function in memory (FIM) and may further execute a data processing operation in addition to reading and writing data. The memory device100amay correspond to a computational memory device including a random access memory (RAM) and a processing element (PE) integrated in the same die. The memory device100amay include core dies110and120and a buffer die190. For example, each of the core dies110and120may be also referred to as a “memory die”, a “PIM die”, an “FIM die”, or a “slave die”, and the buffer die190may be also referred to as an “interface die”, a “logic die”, or a “master die”. A die may be also referred to as a “chip”. The core die110may be stacked on the buffer die190, and the core die120may be stacked on the core die110. The memory device100amay have a three-dimensional memory structure in which the plurality of dies110,120, and190are stacked. The memory device100amay include a common command and address bus101including paths through which command and address (or command/address) signals CA1and CA2are transmitted. The common command and address bus101may include through silicon vias (TSVs)111passing through the core die110, TSVs121passing through the core die120, TSVs191passing through the buffer die190, and micro bumps (refer to1102ofFIG.15) providing an electrical connection or contact between the TSVs111,121, and191. The memory device100amay include a common data input/output bus102including paths through which data input/output signals DQ1and DQ2are transmitted. The common data input/output bus102may include TSVs112passing through the core die110, TSVs122passing through the core die120, TSVs192passing through the buffer die190, and micro bumps (refer to1102ofFIG.15) providing an electrical connection or contact between the TSVs112,122, and192. The core dies110and120and the buffer die190may be electrically interconnected through the common command and address bus101and the common data input/output bus102. The core die110may include the TSVs111and112, a memory cell array113allocated to a channel CH1, a command decoder114, a data input/output circuit115, a processing circuit118, and an internal data input/output bus119. The channel CH1may include access (or communication) paths capable of accessing the memory device100a. For example, an external device (e.g., a memory controller, a system on chip (SoC), an application processor (AP), or a host) may access the memory cell array113through the channel CH1, may transmit commands and addresses associated with the memory cell array113, and may exchange data on the memory cell array113. The memory cell array113may include memory cells connected to word lines (or rows) and bit lines (or columns). For example, a memory cell may be a dynamic random access memory (DRAM) cell, a static random access memory (SRAM) cell, a NAND flash memory cell, a NOR flash memory cell, a resistive random access memory (RRAM) cell, a ferroelectric random access memory (FRAM) cell, a phase change random access memory (PRAM) cell, a thyristor random access memory (TRAM) cell, a magnetic random access memory (MRAM) cell, etc. The command decoder114may decode commands transmitted through the buffer die190and the common command and address bus101. 
For example, the commands may include the following command associated with the memory cell array113: an active command, a precharge command, a write command, and a refresh command; alternatively, the commands may include the following commands associated with an operation mode of the core die110: a mode register set command, a mode register write command, and a mode register read command. Additionally, the commands may include a broadcast command requesting a data movement within the core dies110and120and a processing command requesting a processing operation on data of the core dies110and120. The broadcast command may be also referred to as a “move command” or a “transfer command”. The command decoder114may decode a command and may control the memory cell array113and the data input/output circuit115. The data input/output circuit115may receive write data through the common data input/output bus102and may transmit the write data to the memory cell array113. The write data may be written, stored, or programmed to the memory cell array113. The write data may be transmitted from the external device through the channel CH1, the buffer die190, and the common data input/output bus102, may be transmitted from the processing circuit118through the internal data input/output bus119, or may be transmitted from the core die120through the common data input/output bus102. The data input/output circuit115may read data stored in the memory cell array113or may output (or transmit) the read data to the common data input/output bus102. The read data of the common data input/output bus102may be transmitted to the external device through the buffer die190and the channel CH1, may be transmitted to the core die120through the common data input/output bus102, or may be transmitted to the processing circuit118through the internal data input/output bus119. The processing circuit118may perform a processing operation on data of the internal data input/output bus119or data of the common data input/output bus102. For example, the data of the internal data input/output bus119may mean data output from the data input/output circuit115to the internal data input/output bus119. For example, the data of the common data input/output bus102may mean data output from the data input/output circuit115, the core die120, or the buffer die190to the common data input/output bus102. The processing circuit118may be disposed on the same core die110together with the memory cell array113and may be referred to as a “PE” or a “processor”. For example, the processing circuit118may be accessed through the channel CH1capable of accessing the memory cell array113and may be allocated to the channel CH1. The paths of the internal data input/output bus119may be electrically connected to the paths of the common data input/output bus102, respectively. Accordingly, the paths of the internal data input/output bus119may be included in the paths of the common data input/output bus102. For another example, the paths of the internal data input/output bus119may be electrically separated from the paths of the common data input/output bus102, respectively. In any case, the data input/output circuit115may output the same data to the internal data input/output bus119or the common data input/output bus102. The core die120may include the TSVs121and122, a memory cell array123, a command decoder124, a data input/output circuit125, a processing circuit128, and an internal data input/output bus129. For example, the core die120may be stacked on the core die110. 
As in the channel CH1, a channel CH2may include access (or communication) paths capable of accessing the memory device100aand may be independent of the channel CH1. For example, the external device may access the memory cell array123or the processing circuit128through the channel CH2. The core die120may be implemented to be substantially identical to the core die110. Operations of the components123to129of the core die120may be similar or substantially identical to the operations of the components113to119of the core die110except that the core die120is accessible through the channel CH2and the core die110is accessible through the channel CH1. The buffer die190may include the TSVs191and192, pins193and194, a command and address buffering circuit198and a data input/output buffering circuit199. The pins193may be allocated to the channel CH1. The pins193may include command and address pins receiving command and address signals CA1transmitted from the external device through the channel CH1and data input/output pins receiving the data input/output signals DQ1transmitted from the external device through the channel CH1. Because the data input/output signals DQ1are bidirectional, the data input/output signals DQ1may be transmitted from the above data input/output pins to the external device through the channel CH1. Here, “DQ” may mean a data input/output. The pins194may be implemented to be substantially identical to the pins193except that the pins194are allocated to the channel CH2. The command and address buffering circuit198may buffer (i.e., receive and amplify) the command and address signals CA1transmitted through the pins193and may output the command and address signals CA1to the common command and address bus101. The command and address signals CA1may relate to the core die110and may in detail include commands for the memory cell array113, addresses (e.g., including a bank address, a row address, and a column address) indicating memory cells of the memory cell array113, or commands for the processing circuit118. For example, the command and address buffering circuit198may output an identification (ID) (or an address) indicating the channel CH1(or the core die110) to the common command and address bus101together with the command and address signals CA1. The command decoder114may identify the ID of the channel CH1, may decode the command and address signals CA1, and may not decode the command and address signals CA2. As in the above manner, the command and address buffering circuit198may output the command and address signals CA2to the common command and address bus101. As in the command decoder114, the command decoder124may identify the ID of the channel CH2and may decode the command and address signals CA2. The data input/output buffering circuit199may receive or output the data input/output signals DQ1through the channel CH1and the pins193. The data input/output buffering circuit199may output the data input/output signals DQ1and may further output the ID of the channel CH1to the common data input/output bus102. The data input/output circuit115may identify the ID of the channel CH1and may receive the data input/output signals DQ1. The data input/output circuit115may also receive the data input/output signals DQ1without identifying the ID of the channel CH1under control of the command decoder114. 
The data input/output circuit115may output the data input/output signals DQ1to the common data input/output bus102and may further output the ID of the channel CH1to the common data input/output bus102. The data input/output buffering circuit199may receive the data input/output signals DQ1through the common data input/output bus102. For example, the data input/output buffering circuit199may identify the ID of the channel CH1and may output the data input/output signals DQ1to the external device through the pins193and the channel CH1. For another example, under control of the command and address buffering circuit198, the data input/output buffering circuit199may output the data input/output signals DQ1to the external device through the pins193and the channel CH1without identifying the ID. As in the above manner, the data input/output buffering circuit199may output the data input/output signals DQ2. As in the data input/output circuit115, the data input/output circuit125may identify the ID of the channel CH2and output the data input/output signals DQ2. Referring toFIG.1, the memory device100amay receive broadcast commands BCMD1and BCMD2from the external device through the channels CH1and CH2, respectively. The broadcast commands BCMD1and BCMD2may request the movement of data from the memory cell array113allocated to the channel CH1to the memory cell array123allocated to the channel CH2. For example, the broadcast command BCMD1may indicate a start and a start location of the data movement (i.e., the memory cell array113allocated to the channel CH1), and the broadcast command BCMD2may indicate an end (or target) and an end location of the data movement (i.e., the memory cell array123allocated to the channel CH2). The command and address buffering circuit198may receive the broadcast command BCMD1through the channel CH1and the pins193, may generate the ID of the channel CH1, and may output the broadcast command BCMD1and the ID of the channel CH1to the common command and address bus101. The command decoder114may identify the ID of the channel CH1, may decode the broadcast command BCMD1, and may control the memory cell array113and the data input/output circuit115. The data input/output circuit115may receive data stored in memory cells of the memory cell array113, which addresses of the broadcast command BCMD1indicate, and may output data MD to the common data input/output bus102. For example, the operations of the memory cell array113, the command decoder114, and the data input/output circuit115, which are performed in response to the broadcast command BCMD1, may correspond to a read operation of the core die110, but the data MD may not be output to the external device unlike a normal read operation. The command and address buffering circuit198may receive the broadcast command BCMD2through the channel CH2and the pins194, may generate the ID of the channel CH2, and may output the broadcast command BCMD2and the ID of the channel CH2to the common command and address bus101. The command decoder124may identify the ID of the channel CH2, may decode the broadcast command BCMD2, and may control the memory cell array123and the data input/output circuit125. The data input/output circuit125may receive the data MD of the common data input/output bus102. The received data MD of the data input/output circuit125may be transmitted to memory cells of the memory cell array123, which addresses of the broadcast command BCMD2or a next command indicate. 
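As a toy, software-level model of this broadcast data movement (not the actual hardware implementation), the channel-ID matching performed by the command decoders and the hand-off of the data MD over the common data input/output bus might be sketched as follows; all class, command, and field names are illustrative assumptions.

```python
# Illustrative model: each core die decodes only commands tagged with its own channel ID;
# the source die drives the data MD onto the shared bus, and the target die latches it.
class CoreDie:
    def __init__(self, channel_id, cells):
        self.channel_id = channel_id
        self.cells = cells                     # memory cell array: address -> data

    def on_command(self, cmd, channel_id, addr, bus):
        if channel_id != self.channel_id:      # command decoder: ID mismatch, do not decode
            return
        if cmd == "BCMD_START":                # e.g., BCMD1: drive the read data MD onto the bus
            bus["MD"] = self.cells[addr]
        elif cmd == "BCMD_END":                # e.g., BCMD2: latch MD from the bus into the array
            self.cells[addr] = bus["MD"]

common_data_bus = {}                           # stands in for the common data input/output bus
die_ch1 = CoreDie("CH1", {0x10: b"moved-data"})
die_ch2 = CoreDie("CH2", {})

# The buffer die broadcasts each command together with its channel ID; every core die
# sees it, but only the die whose channel matches decodes it.
for die in (die_ch1, die_ch2):
    die.on_command("BCMD_START", "CH1", 0x10, common_data_bus)
for die in (die_ch1, die_ch2):
    die.on_command("BCMD_END", "CH2", 0x10, common_data_bus)

assert die_ch2.cells[0x10] == b"moved-data"    # data moved from CH1's array to CH2's array
```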
For example, the operations of the memory cell array123, the command decoder124, and the data input/output circuit125, which are performed in response to the broadcast command BCMD2, may correspond to a write operation of the core die120, but the write data MD may be provided from the core die110and not from the external device unlike a normal write operation. For example, the broadcast command BCMD1may be transmitted to the memory device100aprior to the broadcast command BCMD2. The order of transmitting the broadcast commands BCMD1and BCMD2may depend on a start and an end of the data movement. The broadcast command BCMD2may be transmitted to the memory device100aprior to the broadcast command BCMD1. For another example, the broadcast commands BCMD1and BCMD2may be simultaneously transmitted to the memory device100a. In this case, a bit (or logical value) of a specific signal of the broadcast command BCMD1may indicate the start of the data movement, a bit (or logical value) of a specific signal of the broadcast command BCMD2may indicate the end of the data movement, and the values of both bits may be different from each other. The memory device100aaccording to an example embodiment may support the data movement between the core dies110and120based on (or by using) the broadcast commands BCMD1and BCMD2issued or output from the external device. The external device may set the start and end of the data movement by adjusting the order of transmitting (or issuing or generating) the broadcast commands BCMD1and BCMD2or setting the values of the specific bits of the broadcast commands BCMD1and BCMD2. Because the memory device100asupports the data movement through the broadcast commands BCMD1and BCMD2, the external device may execute (or accomplish) the data movement through the broadcast commands BCMD1and BCMD2without needing to read data from the core die110of the memory device100aand transmit the read data to the core die120of the memory device100a, for the data movement. Accordingly, a latency between the memory device100aand the external device, which is caused due to the data movement, may be removed or decreased. FIG.2illustrates a block diagram of a memory device according to another example embodiment. A difference between a memory device100band the memory device100awill be mainly described, and additional description associated with components having the same reference numerals will be omitted to avoid redundancy. Also, in the case of describing other drawings, additional description associated with components having the same reference numerals will be omitted to avoid redundancy. The processing circuit118may include a command decoder118_1, an arithmetic logic unit (ALU)118_2, and a data buffer118_3. As in the command decoder114, the command decoder118_1may identify the ID of the channel CH1, may decode the command and address signals CA1, and may control the ALU118_2and the data buffer118_3. The ALU118_2may execute various arithmetic or logic operations under control of the command decoder118_1, such as addition, subtraction, multiplication, division, shift, AND, NAND, OR, NOR, XNOR, and XOR. The above operation may be also referred to as a “processing operation”. The data buffer118_3may include a plurality of registers or latches. The data buffer118_3may store data of the internal data input/output bus119or the common data input/output bus102, or intermediate or final data of the arithmetic logic operation executed by the ALU118_2. 
The processing circuit128may include a command decoder128_1, an ALU128_2, and the data buffer128_3that operate to be similar to the components118_1to118_3. As in the command decoder124, the command decoder128_1may identify the ID of the channel CH2, may decode the command and address signals CA2, and may control the ALU128_2and the data buffer128_3. For example, a processing operation that is executed by the ALU118_2may be identical to or different from a processing operation that is executed by the ALU128_2. The memory device100bmay further receive a processing command PCMD2. For example, the external device may further issue the processing command PCMD2, and the memory device100bmay receive the processing command PCMD2after the broadcast commands BCMD1and BCMD2. Compared to the case of the memory device100a, the external device may further request the processing circuit128to perform the processing operation on moved data by issuing the broadcast commands BCMD1and BCMD2and then issuing the processing command PCMD2. The command and address buffering circuit198may receive the processing command PCMD2through the channel CH2and the pins194and may output the processing command PCMD2together with the ID of the channel CH2to the common command and address bus101. The command decoder128_1may identify the ID of the channel CH2and may decode the processing command PCMD2. The data buffer128_3may store the data MD transmitted from the data input/output circuit125through the internal data input/output bus129or the common data input/output bus102. The data MD may be data that have not yet been stored in the memory cell array123, such as data received from the core die110, or data that are stored in the memory cell array123and are then read from the memory cell array123. The ALU128_2may execute a processing operation with respect to the data MD of the data buffer128_3to generate the data PD. The data buffer128_3may store data PD, and the data PD may be transmitted to the data input/output circuit125through the internal data input/output bus129or the common data input/output bus102. The data input/output circuit125may transmit the data PD to the memory cell array123under control of the command decoder124decoding the broadcast command BCMD2or may transmit the data PD to the memory cell array123under control of the command decoder124decoding both the broadcast command BCMD2and the processing command PCMD2, and the data PD may be stored in the memory cell array123. The data PD, not the data MD, may be stored in the memory cell array123. In an example embodiment, the processing command PCMD2may be merged to the broadcast command BCMD2. The broadcast command BCMD2may include information for requesting a processing operation of the processing circuit128on an end location, as well as an end and the end location of the data movement. The broadcast commands BCMD1and BCMD2may be distinguished from each other by the order of transmission or by a different bit value(s) of a specific signal(s) of the broadcast commands BCMD1and BCMD2. FIG.3illustrates a block diagram of a memory device according to another example embodiment. A difference between a memory device100cand the memory device100bwill be mainly described. The external device may further generate external data ED. 
The data input/output buffering circuit199may receive the data input/output signals DQ2including the external data ED through the channel CH2and the pins194and may output the data input/output signals DQ2including the external data ED to the common data input/output bus102. The data buffer128_3may further receive and store the external data ED through the common data input/output bus102. The ALU128_2may execute a processing operation with respect to the data MD and the external data ED to generate the data PD. The data PD may be transmitted to the memory cell array123by the data input/output circuit125under control of the command decoder124decoding the broadcast command BCMD2or the processing command PCMD2. The external device may request the movement of the data MD from the core die110to the core die120and the processing operation of the data MD and the external data ED, by further inputting the external data ED to the memory device100c. The external data ED may be based on the processing command PCMD2or the broadcast command BCMD2. FIG.4is a flowchart illustrating an operating method of memory devices ofFIGS.1to3. The flowchart ofFIG.4may be applied to the memory devices100ato100c. In operation S105, the command and address buffering circuit198of the buffer die190may output the broadcast command BCMD1and the ID of the channel CH1received through the channel CH1to the common command and address bus101. In operation S110, the command decoder114of the core die110may check the ID associated with the broadcast command BCMD1, may identify the ID of the channel CH1and may decode the broadcast command BCMD1. In operation S115, the command decoder124of the core die120may check the ID associated with the broadcast command BCMD1, may identify the ID of the channel CH1and may not decode the broadcast command BCMD1. For example, each of the command decoders114and124may compare the previously programmed or stored ID of each of the channels CH1and CH2and the ID transmitted through the common command and address bus101and may decode a command depending on whether the IDs are matched. In operation S120, under control of the command decoder114, data stored in the memory cell array113may be read, and the data input/output circuit115may output the data MD to the common data input/output bus102. In operation S125, the command and address buffering circuit198of the buffer die190may output the broadcast command BCMD2and the ID of the channel CH2received through the channel CH2to the common command and address bus101. In operation S130, the command decoder114of the core die110may check the ID associated with the broadcast command BCMD2, may identify the ID of the channel CH2and may not decode the broadcast command BCMD2. In operation S135, the command decoder124of the core die120may check the ID associated with the broadcast command BCMD2, may identify the ID of the channel CH2and may decode the broadcast command BCMD2. For example, while the data MD are output from the memory cell array113to the common data input/output bus102in operation S120, operation S125to operation S135may be performed. In operation S140, the data input/output circuit125of the core die120may receive the data MD of the common data input/output bus102. Depending on a next command (e.g., the processing command PCMD2) or the broadcast command BCMD2, the data input/output circuit125may transmit the data MD or the data PD being a result of executing the processing operation of the processing circuit128on the data MD. 
In operation S145, the command and address buffering circuit198of the buffer die190may output the processing command PCMD2and the ID of the channel CH2received through the channel CH2to the common command and address bus101. In operation S150, the command decoder118_1of the core die110may check the ID associated with the processing command PCMD2, may identify the ID of the channel CH2and may not decode the processing command PCMD2. In operation S155, the command decoder128_1of the core die120may check the ID associated with the processing command PCMD2, may identify the ID of the channel CH2and may decode the processing command PCMD2. In operation S160, the ALU128_2may execute the processing operation on the data MD of the internal data input/output bus129or data of the common data input/output bus102. When the data input/output buffering circuit199of the buffer die190receives the external data ED through the channel CH2, the ALU128_2may execute the processing operation on the data MD and the external data ED of the common data input/output bus102. In operation S165, the data input/output circuit125may transmit the data PD to the memory cell array123, and the data PD may be written to the memory cell array123. In an example embodiment, an operation mode of the memory device100a/100b/100cmay be classified as a normal mode or a processing mode (or a broadcast mode). The external device may transmit, to the memory device100a/100b/100c, normal commands (e.g., an active command, a precharge command, a read command, and a write command) for writing data to the memory device100a/100b/100cor reading data from the memory device100a/100b/100c, and the memory device100a/100b/100cmay operate in the normal mode. The external device may transmit, to the memory device100a/100b/100c, a broadcast command or a processing command for requesting processing operations supported by the processing circuit118/128of the memory device100a/100b/100c, and the memory device100a/100b/100cmay operate in the processing mode, not in the normal mode. The processing mode and the normal mode may be distinguished from each other depending on a kind of a command transmitted to the memory device100a/100b/100c. FIG.5illustrates a block diagram of a memory device according to another example embodiment. A difference between a memory device100dand the memory device100b/100cwill be mainly described. The memory device100b/100cmay receive the processing command PCMD2after the broadcast commands BCMD1and BCMD2. The command and address buffering circuit198of the memory device100dmay receive the broadcast command BCMD1through the channel CH1and the pins193, may then receive the processing command PCMD2through the channel CH2and the pins194, and may then receive the broadcast command BCMD2through the channel CH2and the pins194. In one example embodiment, unlike the example illustrated inFIG.5, the processing command PCMD2may be included in the broadcast command BCMD2. The command and address buffering circuit198may output the broadcast command BCMD1and the ID of the channel CH1, the processing command PCMD2and the ID of the channel CH2, and the broadcast command BCMD2and the ID of the channel CH2to the common command and address bus101. 
The data input/output circuit115may output the data MD to the common data input/output bus102under control of the command decoder114decoding the broadcast command BCMD1, the ALU128_2may execute the processing operation on the data MD of the common data input/output bus102under control of the command decoder128_1decoding the processing command PCMD2, the data buffer128_3may output the data PD to the common data input/output bus102or the internal data input/output bus129, and the data input/output circuit125may receive the data PD under control of the command decoder124decoding the broadcast command BCMD2. The data input/output buffering circuit199of the memory device100dmay further receive the external data ED through the channel CH2and the pins194, and the ALU128_2may execute the processing operation on the data MD and the external data ED to generate the data PD. FIG.6illustrates a block diagram of a memory device according to another example embodiment. A difference between a memory device100eand the memory device100dwill be mainly described. The command and address buffering circuit198of the memory device100emay receive the broadcast command BCMD1through the channel CH1and the pins193, may then receive a processing command PCMD1through the channel CH1and the pins193, and may then receive the broadcast command BCMD2through the channel CH2and the pins194. In one example embodiment, unlike the example illustrated inFIG.6, the processing command PCMD1may be included in the broadcast command BCMD1. The command and address buffering circuit198may output the broadcast command BCMD1and the ID of the channel CH1, the processing command PCMD1and the ID of the channel CH1, and the broadcast command BCMD2and the ID of the channel CH2to the common command and address bus101. The data input/output circuit115may output the data MD to the common data input/output bus102under control of the command decoder114decoding the broadcast command BCMD1, the ALU118_2may execute the processing operation on the data MD of the common data input/output bus102under control of the command decoder118_1decoding the processing command PCMD1, the data buffer118_3may output the data PD to the common data input/output bus102, and the data input/output circuit125may receive the data PD of the common data input/output bus102under control of the command decoder124decoding the broadcast command BCMD2. The data input/output buffering circuit199of the memory device100emay further receive the external data ED through the channel CH1and the pins193, and the ALU118_2may execute the processing operation on the data MD and the external data ED to generate the data PD. FIG.7illustrates a block diagram of a memory device according to another example embodiment. A difference between a memory device100fand the memory device100d/100ewill be mainly described. The command and address buffering circuit198of the memory device100fmay receive the broadcast command BCMD1through the channel CH1and the pins193, may then receive the processing command PCMD1through the channel CH1and the pins193, may then receive the processing command PCMD2through the channel CH2and the pins194, and may then receive the broadcast command BCMD2through the channel CH2and the pins194. In one example embodiment, unlike the examples illustrated inFIG.7, the processing command PCMD1may be included in the broadcast command BCMD1and the processing command PCMD2may be included in the broadcast command BCMD2. 
The command and address buffering circuit198may output the broadcast command BCMD1and the ID of the channel CH1, the processing command PCMD1and the ID of the channel CH1, the processing command PCMD2and the ID of the channel CH2, and the broadcast command BCMD2and the ID of the channel CH2to the common command and address bus101. The data input/output circuit115may output the data MD to the common data input/output bus102under control of the command decoder114decoding the broadcast command BCMD1, the ALU118_2may execute the processing operation on the data MD of the common data input/output bus102under control of the command decoder118_1decoding the processing command PCMD1, the data buffer118_3may output the data PD to the common data input/output bus102, the ALU128_2may execute the processing operation on the data PD of the common data input/output bus102under control of the command decoder128_1decoding the processing command PCMD2, the data buffer128_3may output the data PD to the common data input/output bus102or the internal data input/output bus129, and the data input/output circuit125may receive the data PD under control of the command decoder124decoding the broadcast command BCMD2. The data input/output buffering circuit199of the memory device100fmay further receive the external data ED through the channel CH1and the pins193or may receive the external data ED through the channel CH2and the pins194; the ALU118_2may execute the processing operation on the data MD and the external data ED to generate the data PD, or the ALU128_2may execute the processing operation on the data MD and the external data ED to generate the data PD. FIG.8is a flowchart illustrating an operation method of a memory device ofFIG.7. The flowchart ofFIG.8may be applied to the memory devices100dto100f. Operation S205to operation S220are substantially the same as operation S105to operation S120. Operation S225to operation S240are substantially identical to operation S145to operation S160except that operation S225to operation S240relate to the channel CH1. Operation S245to operation S260are substantially identical to operation S145to operation S160. Operation S265to operation S285are substantially identical to operation S125to operation S140and operation S165. FIG.9illustrates a block diagram of a memory device according to another example embodiment. A difference between a memory device100gand the memory device100awill be mainly described. The memory device100gmay further include a core die130. The core die130may include TSVs131and132, a memory cell array133, a command decoder134, a data input/output circuit135, a processing circuit138, and an internal data input/output bus139. The core die130may be stacked on the core die120. As in the channels CH1and CH2, a channel CH3may include access (or communication) paths capable of accessing the memory device100gand may be independent of the channels CH1and CH2. The external device may access the memory cell array133or the processing circuit138through the channel CH3. Operations of the components133to139of the core die130may be similar or substantially identical to the operations of the components113to119/123to129of the core die110/120except that the core die130is accessible through the channel CH3and the core die110/120is accessible through the channel CH1/CH2. The buffer die190may further include pins195. The pins195may be implemented to be substantially identical to the pins193except that the pins195are allocated to the channel CH3. 
As in the command and address signals CA1/CA2, the command and address buffering circuit 198 may output command and address signals CA3 to the common command and address bus 101. As in the data input/output signals DQ1/DQ2, the data input/output buffering circuit 199 may transmit data input/output signals DQ3. The memory device 100g may further receive a broadcast command BCMD3 from the external device through the channel CH3. For example, the broadcast command BCMD3 may be transmitted to the memory device 100g together with the broadcast command BCMD2 after the broadcast command BCMD1 or may be transmitted to the memory device 100g after the broadcast command BCMD2. Alternatively, the broadcast command BCMD3 may be transmitted after the broadcast command BCMD1, and then the broadcast command BCMD2 may be transmitted. The broadcast commands BCMD1, BCMD2, and BCMD3 may request the movement of data from the memory cell array 113 allocated to the channel CH1 to the memory cell arrays 123 and 133 respectively allocated to the channels CH2 and CH3. For example, the broadcast command BCMD3 transmitted to the memory device 100g through the channel CH3 may indicate an end and an end location of the data movement (i.e., the memory cell array 133 allocated to the channel CH3). For example, the number of memory cell arrays or channels corresponding to the end location may be one or more. Also, a processing command may be further merged into the broadcast command BCMD3. After operations of the command and address buffering circuit 198, the memory cell array 113, the command decoder 114, and the data input/output circuit 115, which are performed in response to the broadcast command BCMD1, are completed, the data MD may be output to the common data input/output bus 102. After operations of the command and address buffering circuit 198, the memory cell array 123, the command decoder 124, and the data input/output circuit 125, which are performed in response to the broadcast command BCMD2, are completed, the data input/output circuit 125 may receive the data MD of the common data input/output bus 102. Likewise, after operations of the command and address buffering circuit 198, the memory cell array 133, the command decoder 134, and the data input/output circuit 135, which are performed in response to the broadcast command BCMD3, are completed, the data input/output circuit 135 may receive the data MD of the common data input/output bus 102. For example, the command and address buffering circuit 198 may output the command and address signals CA1 including the broadcast command BCMD1 to the common command and address bus 101, may then output the command and address signals CA2 including the broadcast command BCMD2 to the common command and address bus 101, and may then output the command and address signals CA3 including the broadcast command BCMD3 to the common command and address bus 101. In this case, the data input/output circuit 125 may receive the data MD of the common data input/output bus 102 before the data input/output circuit 135. For another example, the command and address buffering circuit 198 may output the command and address signals CA3 including the broadcast command BCMD3 to the common command and address bus 101 before the command and address signals CA2 including the broadcast command BCMD2. 
The data input/output circuit135may receive the data MD of the common data input/output bus102before the data input/output circuit125. FIG.10illustrates a block diagram of a memory device according to another example embodiment. A difference between a memory device100hand the memory device100gwill be mainly described. The command and address buffering circuit198may output the command and address signals CA1including the broadcast command BCMD1to the common command and address bus101and may output a command and address signals, to which the broadcast command BCMD2and the broadcast command BCMD3are merged, to the common command and address bus101. Logical values of the command and address signals CA2and CA3respectively indicating the broadcast commands BCMD2and BCMD3may be identical. The command and address buffering circuit198may merge the broadcast commands BCMD2and BCMD3and may output the merged broadcast command having the same logical values to the common command and address bus101. The command and address buffering circuit198may merge IDs of the channels CH2and CH3or may set, to “Don't care”, a specific bit capable of identifying the channels CH2and CH3from among bits of the IDs of the channels CH2and CH3. The command and address buffering circuit198may output the merged ID simultaneously indicating the channels CH2and CH3to the common command and address bus101, together with the merged broadcast command. Accordingly, the command decoders124and134may simultaneously decode the merged broadcast command, and the data input/output circuits125and135may simultaneously receive the data MD of the common data input/output bus102. FIG.11is a flowchart illustrating an operation method of a memory device ofFIG.10. The flowchart ofFIG.11may be applied to the memory devices100gand100h. Operation S305, operation S310, and operation S325may be substantially identical to operation S105, operation S110, and operation S120. Each of operation S315and operation S320may be substantially identical to operation S115. In operation S330, the command and address buffering circuit198of the buffer die190may merge the broadcast commands BCMD2and BCMD3received through the channels CH2and CH3and may output a merged broadcast command BCMD23and a merged ID indicating the channels CH2and CH3to the common command and address bus101. Operation S335may be substantially identical to operation S150. Operation S340and operation S350may be substantially identical to operation S135and operation S140. Operation S345and operation S355may be substantially identical to operation S135and operation S140. In an example embodiment, the memory device100g/100hmay further receive a processing command in addition to the broadcast commands BCMD1to BCMD3. For example, as described with reference toFIGS.2to4, the processing command may be transmitted to the memory device100g/100hafter the broadcast commands BCMD1to BCMD3. For another example, as described with reference toFIGS.5to8, the processing command may be transmitted to the memory device100g/100hafter the broadcast command BCMD1is transmitted and before the broadcast commands BCMD2and BCMD3are transmitted. The processing command may request one of the processing circuits118to138to perform a processing operation on the data MD of the common data input/output bus102or the data PD being a result of a processing operation performed by another of the processing circuits118to138. 
Two or more processing commands may be transmitted to the memory device 100g/100h, and the processing commands may request two or more of the processing circuits 118 to 138 of the core dies 110 to 130 to perform processing operations on the data MD or the data PD of the common data input/output bus 102. Also, as described with reference to FIGS. 5 to 8, the external data ED may be further transmitted to the memory device 100g/100h together with the processing command. The processing command may request one of the processing circuits 118 to 138 of the core dies 110 to 130 to perform a processing operation on the data MD, the data PD, or the external data ED of the common data input/output bus 102. Example embodiments are not limited to those described with reference to FIGS. 1 to 11. For example, the number of core dies stacked on the buffer die 190 may be one or more. For example, a start location of the data movement may be another memory cell array allocated to another channel without limitation to the memory cell array 113 allocated to the channel CH1, and an end location of the data movement may be one or more memory cell arrays. For example, one or more processing commands may be transmitted to the buffer die 190 together with broadcast commands or may be included in broadcast commands, and one or more processing operations may be performed on the data MD/PD of the common data input/output bus 102 or the external data ED. FIG. 12 illustrates a block diagram of a memory device according to another example embodiment. A difference between a memory device 200a and the memory device 100b will be mainly described. The memory device 200a may include core dies 210 and 220 and a buffer die 290. Each of the core dies 110 to 130 of the memory devices 100a to 100h described above may be allocated to one channel, but each of the core dies 210 and 220 may be allocated to two channels. Of course, each of the core dies 210 and 220 may be allocated to two or more channels. The core die 210 may include TSVs 211_1 and 212_1, a memory cell array 213_1, a command decoder 214_1, a data input/output circuit 215_1, a processing circuit 218_1 including a command decoder 218_11, an ALU 218_12, and a data buffer 218_13, and an internal data input/output bus 219_1. The components 211_1 to 219_1 may be allocated to the channel CH1 so as to be accessed through the channel CH1 and may be implemented to be identical to the components 111 to 119 of the memory devices 100a to 100h. The core die 210 may further include TSVs 211_2 and 212_2, a memory cell array 213_2, a command decoder 214_2, a data input/output circuit 215_2, a processing circuit 218_2 including a command decoder 218_21, an ALU 218_22, and a data buffer 218_23, and an internal data input/output bus 219_2. The components 211_2 to 219_2 may be allocated to the channel CH2 so as to be accessed through the channel CH2 and may be implemented to be identical to the components 111 to 119 of the memory devices 100a to 100h. The core die 220 may include TSVs 221_3 and 222_3, a memory cell array 223_3, a command decoder 224_3, a data input/output circuit 225_3, a processing circuit 228_3 including a command decoder 228_31, an ALU 228_32, and a data buffer 228_33, and an internal data input/output bus 229_3. The components 221_3 to 229_3 may be allocated to the channel CH3 so as to be accessed through the channel CH3 and may be implemented to be identical to the components 111 to 119 of the memory devices 100a to 100h. 
The core die220may further include TSVs221_4and222_4, a memory cell array223_4, a command decoder224_4, a data input/output circuit225_4, a processing circuit228_4including a command decoder228_41, an ALU228_42, and a data buffer228_43, and an internal data input/output bus229_4. The components221_4to229_4may be allocated to the channel CH4so as to be accessed through the channel CH4and may be implemented to be identical to the components111to119of the memory devices100ato100h. The buffer die290may include TSVs291_1and292_1, pins293_1and294_1, a command and address buffering circuit298_1and a data input/output buffering circuit299_1. The components291_1to294_1,298_1, and299_1may be allocated to the channels CH1and CH3and may be implemented to be identical to the components191to199of the memory devices100ato100h. The buffer die290may further include TSVs291_2and292_2, pins293_2and294_2, a command and address buffering circuit298_2and a data input/output buffering circuit299_2. The components291_2to294_2,298_2and299_2may be allocated to the channels CH2and CH4and may be implemented to be identical to the components191to199of the memory devices100ato100h. For example, unlike the example illustrated inFIG.12, the command and address buffering circuits298_1and298_2may be merged to one circuit, and the data input/output buffering circuits299_1and299_2may also be merged to one circuit. The memory device200amay include a common command and address bus201_1being common to the channels CH1and CH3, a common command and address bus201_2being common to the channels CH2and CH4, a common data input/output bus202_1being common to the channels CH1and CH3, and a common data input/output bus202_2being common to the channels CH2and CH4. The external device may transmit broadcast commands for requesting the data movement between the memory cell arrays213_1and223_3allocated to the channels CH1and CH3to the memory device200aand may transmit broadcast commands for requesting the data movement between the memory cell arrays213_2and223_4allocated to the channels CH2and CH4to the memory device200a. The external device may transmit a processing command(s) for requesting a processing operation(s) on the data MD, the data PD, or the external data ED by the movement between the memory cell arrays213_1and223_3allocated to the channels CH1and CH3to the memory device200athrough at least one of the channels CH1and CH3and may transmit a processing command(s) for requesting a processing operation(s) on the data MD, the data PD, or the external data ED by the movement between the memory cell arrays213_2and223_4allocated to the channels CH2and CH4to the memory device200athrough at least one of the channels CH2and CH4. As in the memory devices100gand100h, the memory device200amay further include another core die that is stacked on the core die220, is implemented to be identical to the core dies210and220, and is allocated to channels CH5and CH6. In an example embodiment, processing operations that are executed by the ALUs218_12and228_32may be identical to or different from each other. Processing operations that are executed by the ALUs218_22and228_42may be identical to or different from each other. Processing operations that are executed by the ALUs218_12and218_22may be identical to or different from each other. Processing operations that are executed by the ALUs228_32and228_42may be identical to or different from each other. FIG.13illustrates a block diagram of a memory device according to another example embodiment. 
A difference between a memory device200band the memory device200awill be mainly described. The core die210may further include a common data input/output bus212_12that electrically connects the common data input/output buses202_1and202_2and is common to the channels CH1and CH2. For example, the internal data input/output bus212_12may be electrically connected to the TSVs212_1and212_2and with the internal data input/output buses219_1and219_2. The core die220may further include a common data input/output bus222_34that electrically connects the common data input/output buses202_1and202_2and is common to the channels CH3and CH4. For example, the internal data input/output bus222_34may be electrically connected to the TSVs222_3and222_4and with the internal data input/output buses229_3and229_4. Compared with the memory device200a, the memory device200bmay further include the common data input/output bus212_12and222_34, and thus, the memory device200bmay further support the data movement between the channels CH1and CH2, between the channels CH1and CH4, between the channels CH3and CH4, and between the channels CH3and CH2, in addition to the data movement between the channels CH1and CH3and between the channels CH2and CH4. The external device may transmit broadcast commands for requesting the data movement between the memory cell arrays213_1,213_2,223_3, and223_4allocated to the channels CH1to CH4to the memory device200bthrough the channels CH1to CH4. The external device may transmit a processing command(s) for requesting a processing operation(s) on the data MD, the data PD, or the external data ED by the movement between the memory cell arrays213_1,213_2,223_3, and223_4allocated to the channels CH1to CH4to the memory device200bthrough at least one of the channels CH1to CH4. Also, as in the memory devices100gand100h, the memory device200bmay further include another core die that is stacked on the core die220, is implemented to be identical to the core dies210and220, and is allocated to the channels CH5and CH6. FIG.14illustrates a block diagram of a memory area of a core die included in a memory device according to an example embodiment. A memory area300may indicate the remaining area other than processing areas where the processing circuits118to138,218_1,218_2,228_3, and228_4of the core dies110to130and210to220are disposed and TSV areas where the TSVs111,112,121,122,211_1,212_1,211_2,212_2,221_3,222_3,221_4, and222_4of the core dies110to130and210to220are disposed. The memory area300may include a bank303, a row decoder301, a column decoder302, a command decoder304, an address demultiplexer308, a write driver309, an input/output sense amplifier310, and a data input/output circuit305. The bank303may be a unit for dividing each of the above memory cell arrays113,123,133,213_1,213_2,223_3, and223_4. The number of banks303may be one or more, and a plurality of banks303may be allocated to a channel. The command decoder304may correspond to each of the above command decoders114,124,134,214_1,214_2,224_3, and224_4, may decode a command (e.g., a broadcast command, an active command, a precharge command, a read command, a write command, and a refresh command) included in command and address signals CA, and may control components of the memory area300. The row decoder301may select a word line(s) WL of the bank303corresponding to a row address provided from the address demultiplexer308. 
The column decoder302may select a column selection line(s) of the bank303corresponding to a column address provided from the address demultiplexer308and bit lines connected to the column selection line(s). The address demultiplexer308may provide the row address to the row decoder301and the column address to the column decoder302. The write driver309may write the write data of the data input/output circuit305to memory cells selected by the row decoder301and the column decoder302. The input/output sense amplifier310may read data from the selected memory cells and may provide the read data to the data input/output circuit305. The data input/output circuit305may correspond to each of the above data input/output circuits115,125,135,215_1,215_2,225_3, and225_4. The data input/output circuit305may include a write circuit306and a read circuit307. The write circuit306may receive data included in the data input/output signals DQ of the common data input/output bus102/202_1/202_2based on a write data strobe signal WDQS, may parallelize the received data, and may provide the parallelized data to the write driver309. The read circuit307may serialize data from the input/output sense amplifier310and may output the data input/output signals DQ including the serialized data to the common data input/output bus102/202_1/202_2based on a read data strobe signal RDQS. Instead of a bidirectional data strobe signal DQS, the write data strobe signal WDQS and the read data strobe signal RDQS may be used to capture the data input/output signals DQ. For another example, instead of the write data strobe signal WDQS and the read data strobe signal RDQS, the bidirectional data strobe signal DQS may be used to capture the data input/output signals DQ. In any case, the (write/read) data strobe signals WDQS/RDQS/DQS may be bidrectionally transmitted between the data input/output circuit115/125/135/215_1/215_2/225_3/225_4and the external device through the common data input/output bus102/202_1/202_2, the data input/output buffering circuit199/299_1/299_2, the pins193/194/195/293_1/294_1/293_2/294_2, and the channels CH1/CH2/CH3/CH4. FIG.15illustrates an electronic device according to an example embodiment. An electronic device1000(or a computing/electronic system) may include a memory device1100, an interposer1200, and a system on chip1300. The memory device1100may include core dies1110to1180and a buffer die1190. Each of the core dies1110to1180may correspond to each of the above core dies110to130,210, and220and may be identically manufactured, and the buffer die1190may correspond to each of the above buffer dies190and290. The number of core dies1110to1180is not limited to the example illustrated inFIG.15. Each of the core dies1110to1180may include a memory area1183corresponding to the memory area300and a processing area1188where the processing circuit118/128/138/218_1/218_2/228_3/228_4is disposed. The buffer die1190may include a physical layer1194(hereinafter referred to as a “PHY”), and the PHY1194may include the components193to199,293_1to299_1, and293_2to299_2described above. The core dies1110to1180and the buffer die1190may be electrically interconnected through a plurality of TSVs1101and a plurality of micro bumps1102. The TSVs1101may include the above TSVs111,112,121,122,131,132,211_1,212_1,211_2,212_2,221_3,222_3,221_4, and222_4. 
In an example embodiment, the memory device 1100 may be a general-purpose DRAM device, such as a double data rate synchronous dynamic random access memory (DDR SDRAM), a mobile DRAM device, such as a low power double data rate (LPDDR) SDRAM, a graphics DRAM device, such as a graphics double data rate (GDDR) synchronous graphics dynamic random access memory (SGDRAM), or a DRAM device which provides a high capacity and a high bandwidth, such as Wide I/O, a high bandwidth memory (HBM), HBM2, HBM3, or a hybrid memory cube (HMC). The interposer 1200 may connect the memory device 1100 and the system on chip 1300. The interposer 1200 may provide physical paths which connect the PHY 1194 of the memory device 1100 and a PHY 1340 of the system on chip 1300 and are formed of conductive materials for an electrical connection. The system on chip 1300 may correspond to the external device described above. The system on chip 1300 may execute applications, which the electronic device 1000 supports, by using the memory device 1100 and may also be referred to as an "application processor (AP)". The system on chip 1300 may include the PHY 1340 that is electrically connected to the PHY 1194 of the buffer die 1190 through the interposer 1200. The system on chip 1300 may store data to the memory device 1100 or may read data from the memory device 1100. The system on chip 1300 may generate various commands (e.g., commands associated with a read or write operation of the memory device 1100, the broadcast commands BCMD1 to BCMD3, and the processing commands PCMD1 and PCMD2) described with reference to FIGS. 1 to 14 and may transmit the generated commands to the memory device 1100. FIG. 16 is a diagram illustrating an electronic device according to another example embodiment. An electronic device 2000 may include a memory device 2100 including core dies 2110 to 2180 and a buffer die 2190, and a system on chip 2300 including a PHY 2340, and each of the core dies 2110 to 2180 may include a memory area 2183 and a processing area 2188 electrically interconnected through TSVs 2201 and micro bumps 2202 and respectively corresponding to the memory area 1183 and the processing area 1188. The memory device 2100 and the system on chip 2300 may correspond to the memory device 1100 and the system on chip 1300, respectively. The memory device 2100 may be disposed on the system on chip 2300, and the system on chip 2300 may further include the TSVs 2201 that are used to implement electrical connections with the memory device 2100. FIG. 17 illustrates a block diagram of system on chips of FIGS. 15 and 16. A system on chip 3300 may correspond to the system on chips 1300 and 2300 described above, and may include a processor 3310, a cache memory 3320, a memory controller 3330, and a PHY 3340. A bus 3350 may provide a communication path between the processor 3310, the cache memory 3320, the memory controller 3330, and the PHY 3340. The processor 3310 may execute various software (e.g., an application program, an operating system, a file system, and a device driver) loaded to the cache memory 3320. The processor 3310 may include a homogeneous multi-core or a heterogeneous multi-core. For example, the processor 3310 may include at least one or more of a central processing unit (CPU), an image signal processing unit (ISP), a digital signal processing unit (DSP), a graphics processing unit (GPU), a vision processing unit (VPU), and a neural processing unit (NPU), and the number of processors 3310 may be one or more. An application program, an operating system, a file system, a device driver, etc. 
for driving the electronic device1000/2000may be loaded to the cache memory3320. For example, the cache memory3320may be an SRAM device that has a faster data input/output speed than the memory device1100/2100. The memory controller3330may access the memory device1100/2100in a direct memory access (DMA) manner. The memory controller3330may include a command queue3331, a command scheduler3332, a read data queue3333, and a write data queue3334. The command queue3331may store commands and addresses that are generated by the processor3310or are generated under control of the processor3310. A command and an address stored in the command queue3331may be provided to the PHY3340under control of the command scheduler3332. The command scheduler3332may adjust an order of commands and addresses stored in the command queue3331, a time point when a command(s) and an address(es) are input to the command queue3331, a time point when a command(s) and an address(es) are output from the command queue3331, etc. The read data queue3333may store read data that the memory device1100/2100transmits through the PHY3340in response to the read command. The read data stored in the read data queue3333may be provided to the cache memory3320and may be processed by the processor3310. The write data queue3334may store write data to be stored to the memory device1100/2100. The write data stored to the write data queue3334by the write command may be transmitted to the memory device1100/2100through the PHY3340. The components3331to3334may be implemented in the system on chip3300in the form of hardware, software, or a combination thereof. The PHY3340may include a clock generator3341, a command and address generator3342, a data receiver3343, and a data transmitter3344. The clock generator3341may generate a clock CK to be output to the memory device1100/2100, and the number of clocks CK may correspond to the number of channels between the system on chip3300and the memory device1100/2100. The command and address generator3342may receive a command or an address from the command queue3331and may transmit a command CMD or an address ADD to the memory device1100/2100. For example, the command CMD may be one of various commands (e.g., commands associated with a read or write operation of the memory device1100, the broadcast commands BCMD1to BCMD3, and the processing commands PCMD1and PCMD2) described with reference toFIGS.1to14. The data receiver3343may receive read data of the data input/output signal DQ based on the read data strobe signal RDQS (or DQS) from the memory device1100/2100. The data receiver3343may provide the received read data to the read data queue3333. The data transmitter3344may receive write data from the write data queue3334. The data transmitter3344may transmit the received write data to the memory device1100/2100based on the write data strobe signal WDQS (or DQS). A memory device according to an example embodiment may execute the data movement between core dies based on a broadcast command from an external device, and thus, a latency between the memory device and the external device due to the data movement may be decreased. While example embodiments have been described, it will be apparent to those of ordinary skill in the art that various changes and modifications may be made thereto without departing from the spirit and scope of the inventive concept as set forth in the following claims.
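As a rough, non-limiting illustration of the broadcast-command flow described above, the following Python sketch models how a buffer die could forward each command together with a channel ID onto a common bus, and how each core die could decode only the commands whose ID matches its own channel. The class names, dictionary fields, and the CH1-to-CH2 example are assumptions introduced for illustration only, and the processing command is applied before the end-side write, loosely following the ordering of FIG. 5; none of this is the disclosed circuit itself.

    # Hedged behavioral sketch (not the patented circuit): a buffer die forwards
    # broadcast/processing commands with a channel ID; core dies act only on IDs
    # that match their own channel. Names are illustrative assumptions.

    class CoreDie:
        def __init__(self, channel_id, cells):
            self.channel_id = channel_id      # ID programmed per channel (e.g., "CH1")
            self.cells = cells                # stands in for the memory cell array

        def on_broadcast(self, channel_id, cmd, common_data_bus):
            if channel_id != self.channel_id:   # command decoder ignores non-matching IDs
                return
            if cmd["role"] == "start":          # start location: read and drive the common data bus
                common_data_bus.append(self.cells[cmd["addr"]])
            elif cmd["role"] == "end":          # end location: latch data MD (or PD) and write it
                self.cells[cmd["addr"]] = common_data_bus[-1]

        def on_processing(self, channel_id, alu_op, common_data_bus):
            if channel_id != self.channel_id:
                return
            md = common_data_bus[-1]            # data MD currently on the common data bus
            common_data_bus.append(alu_op(md))  # data PD produced by the ALU

    def buffer_die_forward(core_dies, commands):
        """Forwards (channel_id, command) pairs, as a command and address
        buffering circuit might, and returns the shared data bus contents."""
        common_data_bus = []
        for channel_id, cmd in commands:
            for die in core_dies:
                if cmd["kind"] == "broadcast":
                    die.on_broadcast(channel_id, cmd, common_data_bus)
                else:  # "processing"
                    die.on_processing(channel_id, cmd["alu_op"], common_data_bus)
        return common_data_bus

    # Example: move data from CH1 to CH2, doubling it on CH2 before the write-back.
    dies = [CoreDie("CH1", {0x10: 7}), CoreDie("CH2", {0x20: 0})]
    buffer_die_forward(dies, [
        ("CH1", {"kind": "broadcast", "role": "start", "addr": 0x10}),  # BCMD1-like
        ("CH2", {"kind": "processing", "alu_op": lambda x: x * 2}),     # PCMD2-like
        ("CH2", {"kind": "broadcast", "role": "end", "addr": 0x20}),    # BCMD2-like
    ])
    assert dies[1].cells[0x20] == 14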
DETAILED DESCRIPTION OF THE EMBODIMENTS The following description is written by referring to terms of this technical field. If any term is defined in this specification, such term should be interpreted accordingly. The disclosure herein includes a direct memory access (DMA) controller, an electronic device using the DMA controller, and a method of operating the DMA controller. On account of that some or all elements of the controller and the electronic device could be known, the detail of such elements is omitted provided that such detail has little to do with the features of this disclosure, and that this omission nowhere dissatisfies the specification and enablement requirements. A person having ordinary skill in the art can choose components or steps equivalent to those described in this specification to carry out the present invention, which means that the scope of this invention is not limited to the embodiments in the specification. In the following embodiments, the DMA controller includes two DMA channels or more. However, in other embodiments, the DMA controller may include only one DMA channel. When the DMA controller includes only one DMA channel, operating the DMA controller is equivalent to operating the DMA channel, and vice versa. FIG.1is a functional block diagram of a DMA controller according to an embodiment of the present invention. The DMA controller100includes a control circuit110, multiple DMA channels120(including, but not exclusively, the two DMA channels depicted in the figure: the DMA channel0(120-0) and the DMA channel1(120-1)), a configuration interface130, and a master interface140. Each DMA channel120(120-0or120-1) includes a register file121(121-0or121-1), and each register file121(121-0or121-1) includes a mode register122(122-0or122-1), a memory address register124(124-0or124-1), and a byte count register126(126-0or126-1). The control circuit110may also be referred to as an arbitrator of the DMA controller100and may be hardware or a combination of software and hardware. When the control circuit110is embodied by hardware, the control circuit110may be a finite state-machine (FSM) embodied by logic circuits. When the control circuit110is a combination of software and hardware, the control circuit110includes a computing unit and a memory. A computing unit is a circuit or electronic component (such as a microprocessor, a micro-processing unit, a digital signal processor, or an application specific integrated circuit (ASIC)) that has the program execution capability. The computing unit executes the program codes or program instructions stored in the memory to carry out the functions of the control circuit110. The DMA channel120can operate in a privilege mode or a normal mode, and the DMA channel0(120-0) and the DMA channel1(120-1) are independent of each other. For example, the DMA channel0(120-0) and the DMA channel1(120-1) can both be operating in the privilege mode or the normal mode at the same time, or alternatively, one of which can be operating in the privilege mode while the other of which is operating in the normal mode. When the register value of the mode register122of the DMA channel120is the first value (e.g., logic 1), the DMA channel120operates in the privilege mode, and when the register value of the mode register122of the DMA channel120is the second value (e.g., logic 0), the DMA channel120operates in a normal mode. 
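As a minimal sketch of the mode register described above, together with the command rejection and value masking behavior elaborated in the following paragraphs, the snippet below models, in Python, a per-channel mode register that is set from the privilege attribute of a control command. The names DmaChannel, set_mode, configure, and read_settings are illustrative assumptions, not the disclosed hardware.

    # Hedged sketch: a mode register value selects the privilege mode (logic 1)
    # or the normal mode (logic 0) for a DMA channel.

    PRIVILEGE = 1   # first value  (e.g., logic 1)
    NORMAL    = 0   # second value (e.g., logic 0)

    class DmaChannel:
        def __init__(self):
            self.mode_register = NORMAL   # plays the role of the mode register 122
            self.memory_address = 0       # plays the role of the memory address register 124
            self.byte_count = 0           # plays the role of the byte count register 126

        def set_mode(self, privilege_attribute):
            # The control circuit copies the control command's privilege
            # attribute into the mode register of the target channel.
            self.mode_register = PRIVILEGE if privilege_attribute == 1 else NORMAL

        def configure(self, privilege_attribute, address, count):
            # A privilege-mode channel rejects settings issued by a normal-mode command.
            if self.mode_register == PRIVILEGE and privilege_attribute != 1:
                return False
            self.memory_address, self.byte_count = address, count
            return True

        def read_settings(self, privilege_attribute):
            # A normal-mode read of a privilege-mode channel gets zeros, not genuine values.
            if self.mode_register == PRIVILEGE and privilege_attribute != 1:
                return (0, 0)
            return (self.memory_address, self.byte_count)

    ch0 = DmaChannel()
    ch0.set_mode(1)                                # privilege-mode control command
    assert ch0.configure(0, 0x2000, 64) is False   # normal-mode command is rejected
    assert ch0.configure(1, 0x2000, 64) is True
    assert ch0.read_settings(0) == (0, 0)          # genuine settings are hidden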
When the DMA controller100or the DMA channel120is set to the privilege mode, the DMA controller100or the DMA channel120operates in the privilege mode, and all subsequent setting or reading operations must be performed using the privilege mode control commands. When the normal mode control command attempts to read the DMA controller100or the DMA channel120, or attempts to transfer data by setting the DMA controller100or the DMA channel120, the DMA controller100operating in the privilege mode or the DMA channel120operating in the privilege mode rejects these operations. In some embodiments, when the normal mode software or hardware attempts to read the settings of the DMA controller100operating in the privilege mode or the settings of the DMA channel120operating in the privilege mode, the DMA controller100operating in the privilege mode or the DMA channel120operating in privilege mode replies “0,” reserved value(s), or random value(s), rather than the genuine value, to prevent the normal mode software or hardware from knowing the settings of the DMA controller100operating in the privilege mode or the settings of the DMA channel120operating in the privilege mode. The configuration interface130and the master interface140are coupled, through a bus200, to the processor300(such as a central processing unit, a microprocessor, a micro-processing unit, a digital signal processor, or an ASIC), a privilege memory400, and a normal memory500. The bus200may also be an interconnect matrix or a bus matrix. The privilege memory400and the normal memory500can be two separate physical memories (such as dynamic random access memories (DRAM)) or different blocks or areas of the same physical memory (i.e., privilege/normal block, or privilege/normal area). The processor300transmits the control command CM through the bus200, and the DMA controller100receives the control command CM through the configuration interface130. The control command CM can be used to set the register value of the mode register122of the DMA channel120. The control command CM includes a privilege attribute, and the processor300generates a privilege mode control command CM or a normal mode control command CM by controlling the value of the privilege attribute. More specifically, when operating in the privilege mode, the processor300generates a control command CM whose privilege attribute is of a first logical value (e.g., logic 1); when operating in the normal mode, the processor300generates a control command CM whose privilege attribute is of a second logical value (e.g., logic 0). In some embodiments, the control circuit110sets the mode register122of the target DMA channel120according to the control command CM. More specifically, the control circuit110sets the mode register122of the target DMA channel120based on the privilege attribute. For example, when the privilege attribute of the control command CM is of the first logical value (e.g., logic 1), the control circuit110sets the register value of the mode register122of the target DMA channel120to the first logical value; when the privilege attribute of the control command CM is of the second logical value (e.g., logic 0), the control circuit110sets the register value of the mode register122of the target DMA channel120to the second logical value. In some embodiments, the configuration interface130may be an Advanced Peripheral Bus (APB), and the privilege attribute is one of the bits (e.g., bit zero, namely, Pprot[0]) of the protection signal (Pprot). 
In other embodiments, the configuration interface130may be an Advanced High-performance Bus (AHB) or other interfaces. The processor300, when operating in the privilege mode, can read the settings that another processor, which is not shown and operates in the normal mode, made to the DMA controller100or the DMA channel120. The processor300, when operating in the privilege mode, can further control the behavior of the DMA controller100and/or the DMA channel120. For example, when the DMA controller100or the DMA channel120is set to the privilege mode, another processor operating in the normal mode cannot obtain the DMA controller100or the DMA channel120and use it to transfer data. When the DMA channel120is operating in the privilege mode, the DMA channel120can access the privilege memory400and the normal memory500. When the DMA channel120is operating in the normal mode, the DMA channel120can access the normal memory500but cannot access the privilege memory400. More specifically, the DMA channel120transmits the read/write command CRW to the privilege memory400and/or the normal memory500through the master interface140and the bus200. The master interface140can distinguish between privilege mode commands and normal mode commands. The read/write command CRW contains the privilege attribute, and the privilege memory400determines whether to allow read and/or write operations based on the privilege attribute. For example, when the DMA channel120is operating in the privilege mode, the privilege attribute of the read/write command CRW that the DMA channel120issues is of the first logical value (corresponding to the privilege mode), causing the privilege memory400and the normal memory500to permit read and/or write operations; when, on the other hand, the DMA channel120is operating in the normal mode, the privilege attribute of the read/write command CRW that the DMA channel120issues is of the second logical value (corresponding to the normal mode), causing the privilege memory400not to permit read and/or write operations but causing the normal memory500to permit read and/or write operations. In some embodiments, the master interface140may be the APB, the AHB, or an Advanced eXtensible Interface, (AXI). In some embodiments, the bus200determines whether to allow the DMA controller100or the DMA channel120to access the privilege memory400. FIG.2is a flowchart of a method of operating a DMA controller according to an embodiment of the present invention. At first, the processor300operates in the privilege mode (step S210) and needs to find an idle (i.e., not in use) DMA controller or DMA channel (step S220). In some embodiments, the processor300queries the DMA controller100or the DMA channel120about its state with a query command QM, for example, visiting each DMA channel120of the DMA controller100by polling. In response to the polling signal, the DMA channel120generates a reply content RC, and the reply content RC is associated with the operation mode of the processor300(i.e., the privilege mode or the normal mode), as illustrated inFIGS.3A and3B. FIG.3Ashows the reply content RC that the DMA channel120generates in response to the polling signal from a processor (e.g., the processor300) operating in the privilege mode, andFIG.3Bshows the reply content RC that the DMA channel120generates in response to the polling signal from a processor (e.g., the processor300) operating in the normal mode. 
It is assumed in the examples of FIGS. 3A and 3B that the current operation modes of DMA channel 0 to DMA channel 3 are normal mode, normal mode, privilege mode, and privilege mode, respectively, and that the current states of DMA channel 0 to DMA channel 3 are busy, idle, idle, and busy, respectively. In reference to FIG. 3A, when the processor 300 is operating in the privilege mode, the reply content RC that the DMA channel 120 generates includes the current operation mode and the genuine state (i.e., idle or busy) of the DMA channel 120. The genuine state refers to the current state of the DMA channel 120, which has not been adjusted, modified, or changed. Therefore, the processor 300, when operating in the privilege mode, can know the current operation mode and the genuine state of the DMA channel 120. In reference to FIG. 3B, when the processor 300 is operating in the normal mode, the reply content RC that the DMA channel 120 generates includes the state but not the operation mode, and the state in the reply content RC may not necessarily be the current genuine state of the DMA channel 120. More specifically, when the DMA channel operating in the normal mode receives a polling signal from the processor 300 which is operating in the normal mode, the DMA channel operating in the normal mode replies with the current state but does not reply with the operation mode; when the DMA channel operating in the privilege mode receives a polling signal from the processor 300 which is operating in the normal mode, the DMA channel operating in the privilege mode always replies "busy" and does not reply with the operation mode. In other words, in spite of being in the idle state, the DMA channel 2 replies with a fake state or a dummy state to prevent the processor 300 which is operating in the normal mode from accessing the DMA channel which is operating in the privilege mode. Therefore, the processor 300, when operating in the normal mode, can know the genuine state of the DMA channel operating in the normal mode but cannot know the genuine state of the DMA channel operating in the privilege mode, and the processor 300, when operating in the normal mode, cannot know the operation mode of the DMA channel. With such a design, the processor 300, when operating in the normal mode, cannot set the DMA controller 100 operating in the privilege mode or the DMA channel 120 operating in the privilege mode. In some embodiments, the processor 300, when operating in the normal mode, can query the DMA controller 100 or the DMA channel 120 about whether it is idle but cannot stop the DMA controller 100 or the DMA channel 120, and cannot control the DMA controller 100 or the DMA channel 120 to leave the privilege mode. In some embodiments (as shown in FIG. 4), the DMA channel 120 operating in the privilege mode utilizes the selection circuit 600 (e.g., a multiplexer) to reply with the genuine state or the dummy state based on the privilege attribute of the control command CM. When the privilege attribute is logic 1 (corresponding to the privilege mode), the DMA channel 120 replies with the genuine state; when the privilege attribute is logic 0 (corresponding to the normal mode), the DMA channel 120 replies with the dummy state. Returning to FIG. 2, when the processor 300 does not find an idle DMA controller or DMA channel (i.e., the result of step S220 is NO), the processor 300 continues to search for an idle DMA controller or DMA channel (step S220). 
Upon finding an idle DMA controller or DMA channel (i.e., the result of step S220is YES), the processor300controls the idle DMA channel to operate in the privilege mode by changing the register value of the mode register122of the idle DMA channel (step S230). After setting the mode register122, the processor300proceeds to set the memory address register124and the byte count register126of the DMA channel through other control commands (step S240). For example, the processor300can store the address of the to-be-read/written memory block in the memory address register124and store the amount of data in the byte count register126. After that, the DMA channel120performs data transfer by sending a read/write command CRW through the master interface140according to the register value in the memory address register124and the register value in the byte count register126(step S250). After the data transfer is finished (i.e., the result of step S260is YES), the DMA channel120issues an interrupt to notify the processor300that the data transfer has been finished, and then the processor300determines whether to control the DMA channel to operate in the normal mode (step S270). When the processor300wants to continue using the DMA channel, the processor300does not control the DMA channel to operate in the normal mode (i.e., the result of step S270is NO) and then continues to select the same DMA channel in step S220. When the processor300does not continue using the DMA channel, the processor300controls the DMA channel to operate in the normal mode (i.e., the result of step S270is YES). After clearing other registers of the DMA channel (including but not exclusively the memory address register124and the byte count register126), the processor300controls the DMA channel to operate in the normal mode by changing the register value of the mode register122of the DMA channel (step S280), so that other processors operating in the normal mode can find the DMA channel in step S220. Although the flow inFIG.2takes the processor300operating in the privilege mode as an example, people having ordinary skill in the art can apply the present invention to the processor300operating in the normal mode according to the above discussions, and the details are omitted for brevity. The DMA controller or DMA channel of the present invention is applied to an electronic device10(e.g., devices with computing capabilities and data storage capabilities (such as a computer and a portable electronic device), or a system-on-a-chip (SoC)), and the processor300may be the central processing unit, microprocessor, micro-processing unit, digital signal processor, or ASIC of the electronic device10. In some embodiments, the DMA controller or DMA channel of the present invention has a privilege mechanism to protect confidential or sensitive data in the electronic device10.FIG.5shows the flow of the privilege mechanism. The DMA channel or DMA controller operating in the privilege mode keeps monitoring whether the number of the normal mode control commands received is greater than the threshold value (steps S510and S520). A normal mode control command is the command whose privilege attribute corresponds to the normal mode and which is usually issued by a processor operating in the normal mode. 
When the number of normal mode control commands that the DMA channel operating in the privilege mode has received is greater than the threshold value (i.e., the result of step S520is YES, which means it is likely that a malicious person is attempting to steal the data in the privilege memory400), the DMA channel operating in the privilege mode issues an interrupt INTR (step S530). Then, the processor300, when operating in the privilege mode, receives the interrupt INTR and restarts or shuts down the electronic device10in response to the interrupt INTR (step S540) to reduce the risk of data theft. In some embodiments, the threshold value can be zero, in which case steps S530and S540are performed provided that the DMA channel or DMA controller operating in the privilege mode receives one normal mode control command. In some embodiments, the reliability of the privilege mechanism is improved by restraining the processor operating in the normal mode from receiving (or even knowing the presence of) the interrupt INTR. In summary, the present invention provides a DMA controller and/or DMA channel that can operate in the privilege mode or the normal mode, and a method of operating the DMA controller and/or DMA channel. The DMA controller operating in the normal mode and the DMA channel operating in the normal mode cannot obtain the data transferred by the DMA controller operating in the privilege mode and the data transferred by the DMA channel operating in the privilege mode. Since a person having ordinary skill in the art can appreciate the implementation detail and the modification thereto of the present method invention through the disclosure of the device invention, repeated and redundant description is thus omitted. Please note that there is no step sequence limitation for the method inventions as long as the execution of each step is applicable. Furthermore, the shape, size, and ratio of any element and the step sequence of any flowchart in the disclosed figures are exemplary for understanding, not for limiting the scope of this invention. The aforementioned descriptions represent merely the preferred embodiments of the present invention, without any intention to limit the scope of the present invention thereto. Various equivalent changes, alterations, or modifications based on the claims of the present invention are all consequently viewed as being embraced by the scope of the present invention.
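The following Python sketch summarizes, under stated assumptions, the privilege mechanism of FIG. 5 described above: a privilege-mode channel counts the normal mode control commands it receives, raises an interrupt once the count exceeds the threshold value, and privilege-mode software then restarts or shuts down the device. The class PrivilegeGuard, the function privilege_mode_isr, and the return strings are illustrative only, not a definitive implementation.

    # Hedged sketch of the monitoring flow of FIG. 5.

    class PrivilegeGuard:
        def __init__(self, threshold=0):
            self.threshold = threshold       # threshold 0: a single normal-mode command triggers the interrupt
            self.normal_cmd_count = 0
            self.interrupt_pending = False   # INTR, intended to be visible only to privilege-mode software

        def on_control_command(self, privilege_attribute):
            if privilege_attribute == 1:     # privilege-mode commands are not counted
                return
            self.normal_cmd_count += 1       # steps S510 and S520: count and compare
            if self.normal_cmd_count > self.threshold:
                self.interrupt_pending = True   # step S530: issue interrupt INTR

    def privilege_mode_isr(guard):
        # Step S540: the privilege-mode processor reacts to INTR, e.g., restart or shutdown.
        if guard.interrupt_pending:
            return "restart_or_shutdown"
        return "no_action"

    guard = PrivilegeGuard(threshold=0)
    guard.on_control_command(privilege_attribute=0)   # suspicious normal-mode command
    assert privilege_mode_isr(guard) == "restart_or_shutdown"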
19,499
11860805
DETAILED DESCRIPTION OF EMBODIMENTS The following clearly describes the technical solutions in the embodiments of the present disclosure with reference to the accompanying drawings in the embodiments of the present disclosure. Apparently, the described embodiments are some but not all of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure shall fall within the protection scope of the present disclosure. When there is only Type-C interface and there is no 3.5 mm earphone interface, performing charging and earphone functions simultaneously requires a one-to-two adapter. This solution is ordinary charging and the charging speed is slow. Various fast charging methods commonly used need to occupy D+ and D− pins of the interface, and an audio output left channel L pin and an audio output right channel R pin of the earphone are generally used as these two pins. As a result, when the earphone function is used, the fast charge mode cannot be used, which affects user experience. On the mobile terminal in the related technology and the one-to-two adapter (switching between charging and earphone functions), a DP pin and a DN pin are occupied by an L pin and an R pin respectively. As a result, when the earphone is used, only ordinary voltage bus (VBUS) charging can be used and the fast charging mode cannot be used, and charging cannot be controlled, affecting security. On this basis, an embodiment of the present disclosure provides a terminal device. As shown inFIG.1, the terminal device includes: a first Type-C interface, a first switch unit, a second switch unit, and a controller. The first Type-C interface is connected to a converter, and as shown inFIG.2, the first Type-C interface includes: a first group of pins and a second group of pins. The first group of pins may be connected to the earphone interface or the Type-C charging interface of the converter, and the second group of pins may be connected to the Type-C charging interface or the earphone interface of the converter. When the first group of pins are connected to the earphone interface of the converter, the second group of pins are connected to the Type-C charging interface of the converter. When the first group of pins are connected to the Type-C charging interface of the converter, the second group of pins are connected to the earphone interface of the converter. The first group of pins include a first pin and a second pin, the second group of pins include a third pin and a fourth pin; where the first pin is used as a first earphone pin and a first data transmission pin of the first Type-C interface, and the second pin is used as a second earphone pin and a second data transmission pin of the first Type-C interface. The third pin is used as a first earphone pin and a first data transmission pin of the first Type-C interface, and the fourth pin is used as a second earphone pin and a second data transmission pin of the first Type-C interface. For example, the first group of pins are A6and A7pins of the first Type-C interface, and the second group of pins are B6and B7pins of the first Type-C interface. Certainly, the first group of pins may also be B6and B7pins, and the second group of pins may also be A6and A7pins. A6is the first pin, B6is the third pin, and A6and B6are both used as the first earphone pin and the first data transmission pin of the first Type-C interface and may be DP/R pin. 
A7is the second pin, B7is the fourth pin, and A7and B7are both used as the second earphone pin and the second data transmission pin of the first Type-C interface and may be DN/L pin. That is, the first earphone pin (Rpin) is used as the first data transmission pin (DPpin), and the second earphone pin (Lpin) is used as the second data transmission pin (DNpin). When the first Type-C interface shown inFIG.2is plugged into a digital earphone, A6, A7, B6, and B7pins output digital audio signals. When the first Type-C interface shown inFIG.2is plugged into a universal serial bus (USB) charger, A6, A7, B6, and B7pins transmit USB signals and enter the fast charging mode. The earphone interface is a 3.5 mm earphone interface. The first group of pins are connected to an application processor earphone interface or an application processor fast charge interface of the terminal device through the first switch unit. The second group of pins are connected to the application processor earphone interface or the application processor fast charge interface of the terminal device through the second switch unit. The application processor earphone interface includes an R pin and an L pin. The application processor fast charging interface can be used for fast charging through related fast charging protocols. When the grounding impedance value of the first group of pins is within a preset range, the controller determines that the first group of pins are connected to the earphone interface of the converter, controls the first switch unit to connect the first group of pins to the application processor earphone interface, and controls the second switch unit to connect the second group of pins to the application processor fast charge interface;or when the grounding impedance value of the second group of pins is within the preset range, determines that the second group of pins are connected to the earphone interface of the converter, controls the second switch unit to connect the second group of pins to the application processor earphone interface, and controls the first switch unit to connect the first group of pins to the application processor fast charge interface. Herein, when the grounding impedance value of the first group of pins is within a preset range, the controller determines that the first group of pins are used as the earphone interface, controls the first switch unit to connect the first group of pins to the application processor earphone interface, determines that the second group of pins are used as the data transmission interface, and controls the second switch unit to connect the second group of pins to the application processor fast charge interface. When the grounding impedance value of the second group of pins is within the preset range, the controller determines that the second group of pins are used as the earphone interface, controls the second switch unit to connect the second group of pins to the application processor earphone interface, determines that the first group of pins are used as the data transmission interface, and controls the first switch unit to connect the first group of pins to the application processor fast charge interface. 
When connecting the first group of pins to the application processor fast charge interface, a master control unit (MCU) can perform fast charging according to the fast charge protocol, and when connecting the second group of pins to the application processor fast charge interface, a main control unit MCU can perform fast charging according to the fast charging protocol. The controller can be a control module of an application processor (AP). For the terminal device of the embodiments of the present disclosure, a group of pins in the first group of pins and the second group of pins are first determined as the earphone pin, and then the remaining group of pins are used as the charging pin, so that the earphone function and the fast charging function can be implemented at the same time. According to the embodiments of the present disclosure, the controller controls one of the first switch unit and the second switch unit to connect to the application processor earphone interface according to the grounding impedance values of the first group of pins or the second group of pins, and controls the remaining switch unit to connect to the application processor fast charge interface, so that users can also perform fast charging in fast charge mode while using the earphone function. According to the terminal device of the embodiments of the present disclosure, the controller controls one of the first switch unit and the second switch unit to connect to the application processor earphone interface according to the grounding impedance values of the first group of pins or the second group of pins, and the remaining switch unit may connect to the application processor fast charge interface, so that users can also perform fast charging in fast charge mode while using the earphone function. Optionally, the first switch unit includes a first switching switch and a second switching switch. The second switch unit includes a third switching switch and a fourth switching switch. The first switching switch (switch01) is respectively connected to the second pin (A7pin) of the first group of pins and the first earphone pin (R pin) of the application processor earphone interface, or the first switching switch is respectively connected to the second pin (A7pin) of the first group of pins and the first data transmission pin (DP pin) of the application processor fast charge interface. The second switching switch (switch02) is respectively connected to the first pin (A6pin) of the first group of pins and the second earphone pin (L pin) of the application processor earphone interface, or the second switching switch is respectively connected to the first pin (A6pin) of the first group of pins and the second data transmission pin (DN pin) of the application processor fast charge interface. The third switching switch (switch03) is respectively connected to the fourth pin (B7pin) of the second group of pins and the first earphone pin (R pin) of the application processor earphone interface, or the third switching switch is respectively connected to the fourth pin (B7pin) of the second group of pins and the first data transmission pin (DP pin) of the application processor fast charge interface. 
The fourth switching switch (switch04) is respectively connected to the third pin (B6pin) of the second group of pins and the second earphone pin (L pin) of the application processor earphone interface, or the fourth switching switch is respectively connected to the third pin (B6pin) of the second group of pins and the second data transmission pin (DN pin) of the application processor fast charge interface. Optionally, the first switching switch, the second switching switch, the third switching switch, and the fourth switching switch are all single-pole double-throw switches;a non-movable port of the first switching switch is connected to the second pin of the first group of pins, and a movable port of the first switching switch is connected to the first earphone pin of the application processor earphone interface or the first data transmission pin of the application processor fast charge interface;a non-movable port of the second switching switch is connected to the first pin of the first group of pins, and a movable port of the second switching switch is connected to the second earphone pin of the application processor earphone interface or the second data transmission pin of the application processor fast charge interface;a non-movable port of the third switching switch is connected to the fourth pin of the second group of pins, and a movable port of the third switching switch is connected to the first earphone pin of the application processor earphone interface or the first data transmission pin of the application processor fast charge interface; anda non-movable port of the fourth switching switch is connected to the third pin of the second group of pins, and a movable port of the fourth switching switch is connected to the second earphone pin of the application processor earphone interface or the second data transmission pin of the application processor fast charge interface. In the terminal device of the embodiments of the present disclosure, A6pin and B7pin of the first Type-C interface are separated and are connected to different switch units, and A7pin and B6pin are separated and are connected to different switch units. Correspondingly, in the adapter, DP pin and R pin are separated and DN pin and L pin are separated. A6and A7are DP pin and DN pin respectively, and B6and B7are R pin and L pin respectively. A6pin and A7pin of the adapter are connected to A6pin and A7pin of a Type-C female socket (Type-C charging interface) of a one-to-two adapter cable (can also be connected to B6pin and B7pin of the female socket), and B6and B7of the one-to-two adapter are connected to R pin and L pin of the 3.5 mm earphone socket (earphone interface). In the embodiments of the present disclosure, after the 3.5 mm analog earphone and the charger are inserted into the terminal device through the adapter, CC1pin and CC2pin of the first Type-C interface are short-circuited to the ground. The mobile terminal detects through CC1pin and CC2pin to determine whether a slave device is an analog earphone. For example, if voltages of CC1pin and CC2pin are less than a preset voltage threshold, it is determined that the slave device is an analog earphone device. Grounding impedances of A6pin, A7pin, B6pin, and B7pin are detected, and if the grounding impedances are within the preset range, it is determined that it is the earphone type. 
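The detection and switching behaviour just described can be pictured with the following C sketch. The CC-voltage threshold, the impedance range, and all helper functions (ADC reads, switch-unit control, fast-charge start) are hypothetical names introduced here for illustration; the disclosure itself only requires that the controller compare the grounding impedance of each pin group against a preset range and drive the first and second switch units accordingly.

/* Hedged sketch of the detection/switching logic. All helper functions,
 * thresholds, and identifiers are assumptions for illustration only. */
#include <stdbool.h>
#include <stdint.h>

typedef enum { GROUP_A, GROUP_B } pin_group_t;        /* A6/A7 vs. B6/B7 */
typedef enum { ROUTE_EARPHONE, ROUTE_FAST_CHARGE } route_t;

extern uint32_t cc_pin_voltage_mv(int cc_index);       /* CC1 = 1, CC2 = 2 */
extern uint32_t ground_impedance_ohm(pin_group_t g);   /* measured on A6/A7 or B6/B7 */
extern void     set_switch_unit(pin_group_t g, route_t r); /* drives the SPDT switches */
extern void     start_fast_charge_protocol(void);

#define CC_EARPHONE_MV_MAX  200u   /* assumed: both CC pins pulled to ground by adapter */
#define IMP_EARPHONE_MIN    1u     /* assumed preset impedance range */
#define IMP_EARPHONE_MAX    100u

static bool group_is_earphone(pin_group_t g)
{
    uint32_t z = ground_impedance_ohm(g);
    return (z >= IMP_EARPHONE_MIN) && (z <= IMP_EARPHONE_MAX);
}

void typec_adapter_attached(void)
{
    /* Analog earphone detection: CC1 and CC2 are shorted to ground by the adapter. */
    if (cc_pin_voltage_mv(1) > CC_EARPHONE_MV_MAX ||
        cc_pin_voltage_mv(2) > CC_EARPHONE_MV_MAX)
        return;                       /* not an analog-earphone adapter */

    if (group_is_earphone(GROUP_A)) {         /* A6/A7 wired to the 3.5 mm jack */
        set_switch_unit(GROUP_A, ROUTE_EARPHONE);
        set_switch_unit(GROUP_B, ROUTE_FAST_CHARGE);
    } else if (group_is_earphone(GROUP_B)) {  /* B6/B7 wired to the 3.5 mm jack */
        set_switch_unit(GROUP_B, ROUTE_EARPHONE);
        set_switch_unit(GROUP_A, ROUTE_FAST_CHARGE);
    } else {
        return;                       /* plain USB device: keep default routing */
    }
    start_fast_charge_protocol();     /* ordinary VBUS charging is then upgraded */
}

In this sketch the pin group whose grounding impedance falls inside the preset range is routed through its switch unit to the application processor earphone interface (R/L), while the other group is routed to the application processor fast charge interface (DP/DN), so the earphone and fast charging functions coexist; the prose example that follows walks through the same decision.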
For example, if it is detected that grounding impedances of A6and A7pins are within the preset range, it is determined that it is the earphone type, and the switch is controlled to switch to a corresponding channel, that is, A6and A7pins are connected to the application processor earphone interface of the terminal device. If the impedances of B6and B7pins are outside the preset range, it is determined that it is the USB type, and the switch is controlled to switch to a corresponding channel, that is, B6and B7pins are connected to the application processor fast charge interface, and an earphone type channel is switched to earphone output and a USB type channel is switched to a USB channel. The charger performs VBUS charging on the mobile terminal. After the mobile terminal detects the VBUS charging, the mobile terminal starts fast charging protocol charging to perform fast charging and earphone functions. As shown inFIG.1, an embodiment of the present disclosure further provides an adapter, including:a second Type-C interface, and an earphone interface and a Type-C charging interface that are connected to the second Type-C interface. The earphone interface may be a 3.5 mm earphone interface, and the earphone is connected through the earphone interface. The Type-C charging interface is used to connect to the charger, and fast charging can be achieved through the Type-C charging interface. The earphone pin is connected to the earphone interface, the data transmission pin is connected to the Type-C charging interface, the second Type-C interface is connected to the terminal device, and the second Type-C interface includes the earphone pin and the data transmission pin. The earphone pin is connected to the first group of pins of the terminal device, and the data transmission pin is connected to the second group of pins of the terminal device;or the earphone pin is connected to the second group of pins of the terminal device, and the data transmission pin is connected to the first group of pins of the terminal device. Optionally, the earphone pin includes: a first earphone pin (R pin) and a second earphone pin (L pin), and the data transmission pin includes a first data transmission pin (DP pin) and a second data transmission pin (DN pin). The second Type-C interface may be a Type-C male socket used to connect to the terminal device, and the first Type-C interface may be a Type-C female socket that is connected to the adapter. The first group of pins may be A6and A7pins of the first Type-C interface of the terminal device, and the second group of pins may be B6and B7pins of the first Type-C interface. Certainly, the first group of pins can also be B6and B7pins of the first Type-C interface, and the second group of pins are A6and A7pins of the first Type-C interface. The first earphone pin and the second earphone pin are connected to two pins of the first group of pins in a one-to-one correspondence, and the first data transmission pin and the second data transmission pin are connected to two pins of the second group of pins in a one-to-one correspondence. Alternatively, the first earphone pin and the second earphone pin are connected to two pins of the second group of pins in a one-to-one correspondence, and the first data transmission pin and the second data transmission pin are connected to two pins of the first group of pins in a one-to-one correspondence. Optionally, R pin is connected to R pin, L pin is connected to L pin, DP pin is connected to DP pin, and DN pin is connected to DN pin. 
In the adapter of the embodiments of the present disclosure, the first data transmission pin (DP pin) and the first earphone pin (R pin) of the second Type-C interface are separated, and the second data transmission pin (DN pin) and the first earphone pin (L pin) are separated. DP and DN pins are connected to the Type-C charging interface, and R pin and L pin are connected to the earphone interface. Correspondingly, A6pin and B7pin of the first Type-C interface of the terminal device are separated and are connected to different switch units, and A7pin and B6pin are separated and are connected to different switch units, so that the terminal device can achieve the fast charging function and the earphone function at the same time. Optionally, the Type-C charging interface includes: a first data transmission pin and a second data transmission pin. The first data transmission pin of the Type-C charging interface is connected to the first data transmission pin of the second Type-C interface. The second data transmission pin of the Type-C charging interface is connected to the second data transmission pin of the second Type-C interface. The first earphone pin of the earphone interface is connected to the first earphone pin of the second Type-C interface. The second earphone pin of the earphone interface is connected to the second earphone pin of the second Type-C interface. As shown inFIG.1, DP pin of the Type-C charging interface is connected to DP pin of the second Type-C interface, DN pin of the Type-C charging interface is connected to DN pin of the second Type-C interface, R pin of the earphone interface is connected to R pin of the second Type-C interface, and L pin of the earphone interface is connected to L pin of the second Type-C interface. In the embodiments of the present disclosure, DP pin and R pin of the second Type-C interface of the adapter are separated, and DN and L pin are separated. A6and A7pins of the adapter are DP and DN pins respectively, and B6and B7pins are R and L pins respectively. A6and A7pins of the adapter are connected to A6and A7pins of the Type-C female socket (Type-C charging interface) of the one-to-two adapter cable (can also be connected to B6and B7pins of the female socket), and B6and B7of the one-to-two adapter are connected to R and L pins of the 3.5 mm earphone socket (earphone interface). An embodiment of the present disclosure further provides a charging method, applied to the foregoing terminal device. As shown inFIG.3, the charging method includes:Step301: Detect a grounding impedance value of a target pin, to obtain a detection result, where the target pin includes a first group of pins and/or a second group of pins. In the embodiments of the present disclosure, the terminal device is connected to the adapter, and this step may include: detecting whether an external device connected to the adapter is an earphone device; and when the external device connected to the adapter is an earphone device, detecting a grounding impedance value of a target pin, to obtain a detection result, where the target pin includes a first group of pins and/or a second group of pins. As shown inFIG.4, in the embodiments of the present disclosure, the terminal device includes an application processor AP control module, an AP earphone interface (including R pin and L pin), an AP USB interface (including DP pin and DN pin), a switch module (a first switch unit and a second switch unit), and a first Type-C interface. 
The first Type-C interface is connected to a one-to-two adapter, and the one-to-two adapter is connected to a charger and a 3.5 mm earphone. The AP control module is responsible for detecting CC logic, determining an inserted device, and controlling switching of a state of the switch module, or the like. The AP earphone interface outputs an analog earphone signal and is usually a codec. The AP USB interface is a transmission charging protocol interface. The switch module is responsible for switching between left and right channel output and a USB port of the earphone. Optionally, the AP control module pulls CC1pin and CC2pin to ground. After pulling CC1pin and CC2pin to ground, the resistance is less than Ra and is generally 1.2K. If voltages of CC1pin and CC2pin are less than a preset voltage threshold, it is determined that the external device is the earphone device; otherwise, it is determined that the external device is another device.Step302: When the detection result is that the grounding impedance value of the first group of pins is within a preset range, control the first group of pins to connect to the application processor earphone interface of the terminal device, and control the second group of pins to connect to the application processor fast charge interface of the terminal device. Optionally, the first switch unit is controlled to connect the first group of pins to the application processor earphone interface, and the second switch unit is controlled to connect the second group of pins to the application processor fast charge interface.Step303: When the detection result is that the grounding impedance value of the second group of pins is within the preset range, control the second group of pins to connect to the application processor earphone interface, and control the first group of pins to connect to the application processor fast charge interface. Optionally, the second switch unit is controlled to connect the second group of pins to the application processor earphone interface, and the first switch unit is controlled to connect the first group of pins to the application processor fast charge interface. The foregoing step302and step303are in parallel. In the technical solutions of the embodiments of the present disclosure, by changing the peripheral circuit structure of the Type-C interface of the mobile terminal, adding electronic switches, separating a Type-C earphone (L/R) channel and a USB (DP/DN) channel, and designing reasonable system logic, users can achieve the fast charging function and the earphone function at the same time after inserting a related device. In the charging method of the embodiments of the present disclosure, it is detected whether the external device connected to the adapter is an earphone device; when the external device is an earphone device, a grounding impedance value of a target pin is detected, to obtain a detection result, where the target pin includes a first group of pins and/or a second group of pins. 
When the detection result is that the grounding impedance value of the first group of pins is within a preset range, it is determined that the first group of pins are connected to the earphone interface of the converter, the first group of pins are controlled to connect to the application processor earphone interface of the terminal device, and the second group of pins are controlled to connect to the application processor fast charge interface of the terminal device; or when the detection result is that the grounding impedance value of the second group of pins is within the preset range, it is determined that the second group of pins are connected to the earphone interface of the converter, the second group of pins are controlled to connect to the application processor earphone interface, and the first group of pins are controlled to connect to the application processor fast charge interface. In this way, users can also perform fast charging in fast charging mode while using the earphone function. The terminal device in the embodiments of this disclosure includes a device formed by components such as a radio frequency unit, a network module, an audio output unit, an input unit, a sensor, a display unit, a user input unit, an interface unit, a memory, a processor, and a power supply. The embodiments in this specification are described in a progressive manner. Each embodiment focuses on a difference from another embodiment. For a same or similar part of the embodiments, refer to each other. Although some optional embodiments in the embodiments of this disclosure have been described, persons skilled in the art can make changes and modifications to these embodiments once they learn the basic inventive concept. Therefore, the following claims are intended to be construed as to cover optional embodiments and all changes and modifications falling within the scope of the embodiments of this disclosure. Finally, it should be noted that in this specification, relationship terms such as “first” and “second” are merely used to distinguish one entity or operation from another, and do not necessarily require or imply that any actual relationship or sequence exists between these entities or operations. Moreover, the terms “include”, “comprise”, or their any other variants are intended to cover a non-exclusive inclusion, so that a process, a method, an article, or a terminal device that includes a list of elements not only includes those elements but also includes other elements that are not listed, or further includes elements inherent to such a process, method, article, or terminal device. Without being subject to further limitations, an element defined by a phrase “including” does not exclude presence of other identical elements in the process, method, article, or terminal device that includes the very element. The foregoing descriptions are merely optional embodiments of this disclosure, but are not intended to limit this disclosure. Any modification, equivalent replacement, or improvement made without departing from the spirit and principle of this disclosure shall fall within the protection scope of this disclosure.
26,591
11860806
An exemplary microcontroller system100as shown inFIG.1comprises a master microcontroller unit102and a general purpose input/output (GPIO)108. The microcontroller system100further comprises a first slave microcontroller unit104and a second slave microcontroller unit106. The slave microcontroller units104,106, the master microcontroller unit102and GPIO108are connected via a bus110. This allows signals to be transmitted between the different components of the microcontroller system102. In an initial state, for example, when the microcontroller system100is started up from a ‘standby/OFF’ state, the GPIO108is controlled by the master microcontroller unit102, so that neither the first slave microcontroller unit104nor the second slave microcontroller unit106has control of the GPIO108. As such, these components are initially inactive. After start-up, the microcontroller system100may then engage in an initial configuration. This initial configuration may be initiated by the master microcontroller unit102. During initial configuration the master microcontroller unit102transmits a selection signal to the register field corresponding to the GPIO108. This selection signal may indicate that the GPIO108is now to be controlled by the first slave microcontroller unit104. The first slave microcontroller unit104may then begin communicating with and controlling the GPIO108. The first slave microcontroller unit104begins communicating with and controlling the GPIO108as a result of reading the active configuration from the master microcontroller unit102through a generic microcontroller unit to microcontroller bus interface such as the bus110shown inFIG.1. However, the first slave microcontroller104may only read the active configuration from the master microcontroller unit102at the discretion of the master microcontroller unit. At a later point, for example when the second microcontroller unit is required to be actively functioning, the master microcontroller unit102transmits another, different selection signal to the register field corresponding to the GPIO108. This selection signal indicates that the GPIO is now controlled by the second slave microcontroller unit106. As such, the first slave microcontroller unit104ceases to communicate with and control the GPIO108. The second slave microcontroller unit106instead begins to communicate with and control the GPIO108. Using a selection signal transmitted by the master microcontroller unit102helps to avoid errors occurring due to the two slave microcontrollers104,106trying to control the GPIO simultaneously. At a further later point, the master microcontroller unit102may transmit another, different selection signal to the register field corresponding to the GPIO108. This selection signal indicates that the GPIO108is now controlled by the master microcontroller unit once more. The master microcontroller unit102may transmit selection signals to change the microcontroller unit with control over the GPIO at any point/as required for functioning of the microcontroller system100. A second exemplary microcontroller system200is shown inFIG.2. The microcontroller system200includes a master microcontroller unit202, two slave microcontroller units204,206, a bus210and a first GPIO208. All of these components are similar to those in the microcontroller system100shown inFIG.1. The microcontroller system200ofFIG.2however further comprises a second GPIO216, and two peripherals212,214. The first peripheral212is associated with the first slave microcontroller unit204. 
The second peripheral214is independent of the slave microcontroller units204,206and may be directly controlled by the master microcontroller. The master microcontroller unit202can transmit a selection signal to the register field corresponding to the first GPIO208which indicates that the first GPIO208is now controlled by the first slave microcontroller unit204. The first slave microcontroller unit may then transmit a selection signal which assigns the first GPIO208to the first peripheral212which forms part of the first slave microcontroller unit204. The first peripheral212then has control over the first GPIO208. The first peripheral212may begin communicating with and controlling the GPIO208as a result of reading the active configuration from the master microcontroller unit202through a bus interface such as the bus210shown inFIG.2, or as a result of reading the active configuration from the first slave microcontroller unit204. However, the first slave microcontroller204may only read the active configuration at the discretion of the master microcontroller unit202and/or the first slave microcontroller unit204. It may also be possible (although not shown in the embodiments shown inFIG.2) for the master microcontroller to directly assign control of a GPIO to the first peripheral which comprises part of a slave microcontroller unit204. The master microcontroller unit202will initially also have control over the second GPIO216. The master microcontroller unit202can transmit a selection signal to the register field corresponding to the second GPIO216which indicates that the second GPIO216is now controlled by the second slave microcontroller unit206. At a later point, the master microcontroller unit may transmit a selection signal to the register field corresponding to the second GPIO216which indicates that the first GPIO216is now to be controlled by the second peripheral unit214. The second slave microcontroller unit206therefore no longer communicates with the second GPIO216, and the second peripheral unit214communicates with the second GPIO216. FIG.3shows an exemplary microcontroller system300with three microcontroller systems302,304,306. The application microcontroller unit302acts as the master microcontroller system. The network microcontroller unit304and the digital signal processor microcontroller unit306both act as slave microcontroller units. During ‘start-up’ or upon reset of the microcontroller system300, the application microcontroller unit302controls (and therefore has access to) a plurality of GPIOs308. The network microcontroller unit304and digital signal processor microcontroller unit306remain inactive/disconnected from the plurality of GPIOs308during ‘start-up’ or reset. After ‘start-up’ or reset, the application microcontroller unit302assigns a subset314of the plurality of GPIOs308to the network microcontroller unit304. The network microcontroller unit304may then manage external low-noise amplifiers, power amplifiers and the wireless co-existence interface connected to the subset of GPIOs314. After ‘start-up’ or reset, the application microcontroller unit302assigns a second subset312of the plurality of GPIOs308to the digital signal processor microcontroller unit306. The digital signal processor microcontroller unit306may then communicate with an external digital-to-analog converter and pulse-density modulation unit connected to the second subset of GPIOs312. 
The application microcontroller unit302will separately communicate with an external power management integrated circuit (not shown) to control the power provided to the microcontroller system300. By assigning smaller sets of GPIOs314,312to the network microcontroller unit304and digital signal processor microcontroller unit306respectively, the number of GPIOs308through which unintentional bugs or malicious code in the slave microcontroller units304,306can enter the rest of the microcontroller system is reduced (i.e. the ‘attack surface’ of the microcontroller system is reduced). These unintentional bugs or malicious code can also not tamper with critical components such as the external power management integrated circuit in the arrangement described above, as the external power management integrated circuit remains separated from the plurality of GPIOs308and slave microcontroller units304,306. In the configuration seen inFIG.3, it may not be possible for the network microcontroller unit304to read the active configuration of the GPIOs from the application microcontroller unit302through a bus interface such as the bus310. However, in such configurations it would be possible for the information regarding the assignment of control of the GPIOs to be shared through a software based scheme. Thus it will be appreciated by those skilled in the art that a microcontroller system according to embodiments of the present invention may help to reduce unintentional or malicious GPIO usage, and may help to decrease start-up times associated with the microcontroller system. It will further be appreciated however that many variations of the specific arrangements described herein are possible within the scope of the invention as defined in the claims.
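One way to picture the selection-signal mechanism of this disclosure is as a per-GPIO ownership field in a register visible on the shared bus and written only by the master microcontroller unit. The field layout, owner encodings, and function names below are illustrative assumptions, not the actual hardware interface.

/* Hedged sketch of master-controlled GPIO ownership. Register layout,
 * encodings, and names are assumptions for illustration only. */
#include <stdint.h>

enum gpio_owner {                 /* possible values of a GPIO's ownership field */
    OWNER_MASTER = 0,             /* default after start-up or reset             */
    OWNER_SLAVE1 = 1,             /* e.g. the network microcontroller unit       */
    OWNER_SLAVE2 = 2,             /* e.g. the DSP microcontroller unit           */
    OWNER_PERIPH = 3,             /* a peripheral belonging to a slave unit      */
};

#define GPIO_COUNT 32
/* Assumed: one ownership register field per GPIO, readable over the bus. */
static volatile uint8_t gpio_owner_reg[GPIO_COUNT];

/* Only the master microcontroller unit transmits the selection signal
 * (i.e. writes the register field corresponding to a GPIO). */
void master_assign_gpio(unsigned gpio, enum gpio_owner owner)
{
    gpio_owner_reg[gpio] = (uint8_t)owner;
}

/* Assigning a whole subset at once, as in the FIG. 3 example where the
 * network and DSP microcontroller units each receive a small subset of
 * the GPIOs while the remaining pins (and the external power management
 * integrated circuit) stay under the master's exclusive control. */
void master_assign_subset(const unsigned *gpios, unsigned n, enum gpio_owner owner)
{
    for (unsigned i = 0; i < n; i++)
        master_assign_gpio(gpios[i], owner);
}

/* A slave unit (or one of its peripherals) drives a GPIO only if it is
 * the current owner; it may read the active configuration over the bus
 * at the master's discretion, or learn it through a software scheme. */
int slave_may_drive(unsigned gpio, enum gpio_owner self)
{
    return gpio_owner_reg[gpio] == (uint8_t)self;
}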
8,742
11860807
DESCRIPTION OF THE EMBODIMENTS The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings. Apparently, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application. The USB data communication method based on a hybrid USB Network according to the present application may be applied in an application environment as shown in FIG. 1. The USB data communication method based on a hybrid USB Network may be applied in a USB data communication device based on a hybrid USB Network, which includes a docking station terminal and a client terminal. In particular, the docking station terminal communicates with the client terminal via the hybrid USB Network, and the client terminal is a program configured to provide a local service for the client. As illustrated in FIG. 2, the docking station terminal includes a USB docking station, a network module, and a soft switching module. The USB docking station is a traditional USB docking module. The network module in the docking station terminal is configured to transmit USB communication data via a network. The client terminal may select, according to its requirements, either the USB docking function provided by the USB docking module or the network USB function provided by the network module in the docking station terminal. The USB data communication method based on a hybrid USB Network includes the following steps executed by the docking station terminal: obtaining a USB data monitoring command carrying an operation mode; when the operation mode is an automatic mode, monitoring a data communication status of a USB input and output interface, where the USB input and output interface is an upstream interface; when the data communication status indicates no input and output information, monitoring data of a network input data interface of a network module in the docking station terminal, where the network module in the docking station includes a wireless chipset and an Ethernet chipset; when the network input data interface obtains a data sending request sent by a client terminal via the hybrid USB Network, in which the data sending request includes network data and a target transmission device, converting the network data into USB communication data via a soft switching module in the docking station terminal, that is, converting between the network communication data and the USB communication data via a software converting module of the docking station terminal; and sending, via a USB output data interface, the USB communication data to the target transmission device. 
The USB data communication method based on a hybrid USB Network includes the following steps executed by a client terminal:sending a docking station terminal finding request to the hybrid USB Network via a network module in the client terminal;when there is any docking station terminal on the hybrid USB Network, connecting to the docking station to form a communication channel between the client terminal and the docking station terminal;sending a data sending request to the docking station terminal via the hybrid USB Network, in which the data sending request comprises a network data and a target transmission device;if the communication channel is in a sharing mode, sending the network data to the target transmission device via the docking station on the hybrid USB Network: In an embodiment, as illustrated inFIG.3, a USB data communication method based on a hybrid USB Network is provided. In an example, the method, which is applied in a client terminal and a docking station terminal of a USB data communication device based on a hybrid USB Network inFIG.1, particularly includes the following steps:S11: the client terminal sends a docking station terminal finding request to the hybrid USB Network via a network module in the client terminal. In particular, the client terminal is a terminal configured to connect to the (USB) docking station via the hybrid USB Network (including a wired Ethernet and a wireless WIFI network, etc.) and control the device (e.g., target transmission devices such as projector and printer) connected to the docking station terminal via a USB cable. Particularly, docking station terminal finding request is a request that the client terminal finds that whether an available docking station terminal exists in the current hybrid USB Network.S12: when there is any docking station terminal on the hybrid USB Network, the client terminal connects to the docking station terminal to form a communication channel between the client terminal and the docking station terminal. Particularly, the docking station terminal and the client terminal can establish a communication channel therebetween with a handshake protocol etc., which may include an Ethernet and a wireless network, for transmitting network data.S13: the client terminal sends a data sending request to the docking station terminal via the hybrid USB Network, in which the data sending request comprises a network data and a target transmission device.S21: the docking station terminal obtains a USB data monitoring command carrying an operation mode. In particular, in the present embodiment, the operation mode can be set to an automatic mode and a manual mode. The automatic mode can automatically determine whether the docking station terminal currently adapts a USB mode or a network mode. It can be understood that, in the manual mode, the current operation mode is manually set as the USB mode or the network mode. The USB data monitoring command is a request for obtaining USB communication data via an input or output data interface in docking station terminal.S22: when the operation mode is an automatic mode, the docking station terminal monitors a data communication status of a USB input and output interface. Particularly, when the operation mode of the docking station terminal is the automatic mode, the docking station terminal determines whether the current operation mode is the USB mode or the network mode in the automatic mode. 
It can be understood that, when there is a USB cable in the USB input and output interface of the docking station terminal, and the USB input and output interface is in a status of presence of input or output data, the docking station terminal determines that the current operation mode is the USB mode in the automatic mode; otherwise, determines that the current operation mode is the network mode.S23: when the data communication status is a no input and output information status, the docking station terminal monitors data of a network input data interface of a network module in the docking station terminal. In particular, the network module in the docking station terminal illustrated inFIG.4includes a USB hub, a MCU component configured to convert the USB communication data into the network data, as well as a wireless chipset and Ethernet chipset configured for network processing. Particularly, when the data communication status is a no input and output information status, the docking station terminal automatically monitors data of the network input data interface of the network module in the docking station terminal, to adjust the current operation mode to the network module in the automatic mode.S24: when the network input data interface obtains a data sending request sent by a client terminal via the hybrid USB Network, wherein the data sending request includes a network data and a target transmission device, the docking station terminal converts the network data into a USB communication data via a soft switching module in the docking station terminal. In particular, the network data is command data sent to the USB device terminal by the client terminal via the hybrid USB Network. The soft switching module in the docking station is configured to convert network data into USB communication data with the soft handoff, and send, via the USB docking station terminal, the converted USB communication data to a target transmission device that can identify the data.S25: the docking station terminal sends the USB communication data to the target transmission device via a USB output data interface. Particularly, the target transmission device, such as a printer or a screen, receives the USB communication data from the client terminal, and prints or displays the same.S14: when the communication channel is in a sharing mode, the client terminal sends the network data to the target transmission device via the docking station terminal on the hybrid USB Network. In particular, the sharing mode refers to a mode that a plurality of client terminals can share a plurality of USB devices (target transmission device) via the docking station terminal. Particularly, when the hybrid USB Network mode is adapted, as illustrated inFIG.5, a plurality of docking stations may exist in a network, and devices of each docking station may be accessed and used by a plurality of client terminals at the same time. In the present embodiment, a plurality of client terminals, a plurality of docking station terminals as well as a plurality of USB devices connected to each docking station terminal, all of which belong to the same network, can transmit network communication data or USB communication data with each other under deployment by a network device (e.g., a router). 
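A compact C sketch of the docking-station side of steps S21 to S25 is given below. The monitoring helpers, the conversion routine of the soft switching module, and the mode enumeration are hypothetical names used only to illustrate the automatic-mode decision; the disclosure does not prescribe this particular firmware structure.

/* Hedged sketch of the docking-station automatic mode (steps S21-S25).
 * All helper functions and types are assumptions for illustration.     */
#include <stdbool.h>
#include <stddef.h>

typedef enum { MODE_AUTOMATIC, MODE_USB, MODE_NETWORK } op_mode_t;

typedef struct {
    const void *network_data;    /* payload received over the hybrid USB Network */
    size_t      length;
    int         target_device;   /* e.g. a printer or projector on a USB port    */
} data_request_t;

extern bool   usb_upstream_has_io(void);                               /* S22 */
extern bool   network_receive(data_request_t *req);                    /* S23 */
extern size_t soft_switch_to_usb(const data_request_t *req,
                                 void *usb_buf, size_t cap);           /* S24 */
extern void   usb_send(int target_device, const void *buf, size_t len);/* S25 */

void docking_station_poll(op_mode_t mode)
{
    /* S21: a USB data monitoring command carrying the operation mode has
     * already been received and is passed in here as 'mode'.            */
    if (mode == MODE_USB ||
        (mode == MODE_AUTOMATIC && usb_upstream_has_io())) {
        /* USB cable active: keep servicing the USB input/output interface. */
        return;
    }

    /* Automatic mode with no USB input/output, or explicit network mode:
     * monitor the network input data interface of the network module.   */
    data_request_t req;
    if (!network_receive(&req))
        return;

    /* S24: the soft switching module converts network data to USB data. */
    unsigned char usb_buf[2048];                /* assumed buffer size    */
    size_t n = soft_switch_to_usb(&req, usb_buf, sizeof usb_buf);

    /* S25: forward the converted data to the target transmission device. */
    usb_send(req.target_device, usb_buf, n);
}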
In the method according to the present embodiment, the current target transmission device can be used by a client terminal that currently uses the current target transmission device and another client terminal simultaneously, without waiting for the release by the former client terminal. In the USB data communication method based on a hybrid USB Network according to the present embodiment, the docking station terminal and the client terminal may control the connected USB devices via a hybrid USB Network that includes a wireless network and a wired Ethernet, which can eliminate the limitation that data is transmitted only by means of a USB cable. By expanding the data transmission via a docking station terminal to Ethernet data and wireless data, so that the data transmission will not be limited by a USB cable, the space of data transmission is expanded and the speed of data transmission is accelerated. The docking station terminal can be automatically switched to a network mode by means of the automatic mode of the docking station terminal, which improves the flexibility of the USB data communication method based on a hybrid USB Network. With the sharing mode in the client terminal, no matter whether the target transmission device on the hybrid USB Network is in an occupied state, the target transmission device can be simultaneously occupied, which improves the usage efficiency of the target transmission device and usage convenience of individual client terminals under the sharing network mode. In a particular embodiment, as illustrated inFIGS.6and7, after the step S12, i.e., after forming a communication channel between the client terminal and the docking station terminal, the method particularly includes the following steps:S121: when the communication channel is in a personal mode and the client terminal is not a first connection terminal, the client terminal waits for the docking station terminal to send a communicable command; andS122: when receiving a communicable command sent by the docking station terminal, the client terminal sends the network data to the target transmission device via the docking station terminal on the hybrid USB Network. In particular, the personal mode refers to a mode in which a client terminal (a first connection terminal) that first connects to the docking station terminal occupies all of the USB devices. The other clients can only use the released USB device when the network connection of the currently connected client terminal is terminated. In a particular embodiment, as illustrated inFIGS.8and9, after the step S21, i.e., after obtaining a USB data monitoring command carrying an operation mode, the method further particularly includes the following steps:S211: when the operation mode is a USB mode, the docking station terminal continuously monitors the data of the USB input and output interface; andS212: when the operation mode is a network mode, the docking station terminal continuously monitors the data of the network input data interface of the network module in the docking station terminal. Particularly, the USB data communication method based on a hybrid USB Network according to the present embodiment further includes a manual mode, in which the docking station terminal can be manually set to the USB mode or the network mode. The setting can be achieved with a physical switch and so on. 
In some embodiments, if the network mode is set in the manual mode, and when a USB cable is connected, the connected device can be charged via the docking station and the USB cable. In a particular embodiment, after the step S22, i.e., after monitoring a data communication status of a USB input and output interface, the method further particularly includes the following step:when the data communication status is a status of presence of input and output information, the docking station terminal continuously monitors the data of the USB input and output interface. Particularly, if the data communication status is presence of input and output information, it indicates that there is data transmission between a client terminal and a USB device via the USB cable currently. Therefore, the mode of transmitting data via the USB cable under the automatic mode should be maintained. In the USB data communication method based on a hybrid USB Network according to the present embodiment, the docking station terminal and the client terminal may control the connected USB devices via a hybrid USB Network that includes a wireless network and a wired Ethernet, which can eliminate the limitation that data is transmitted only by means of a USB cable. By expanding the data transmission via a docking station terminal to Ethernet data and wireless data, so that the data transmission will not be limited by a USB cable, the space of data transmission is expanded and the speed of data transmission is accelerated. The docking station terminal can be automatically switched to a network mode by means of the automatic mode of the docking station terminal, which improves the flexibility of the USB data communication method based on a hybrid USB Network. With the sharing mode in the client terminal, no matter whether the target transmission device on the hybrid USB Network is in an occupied state, the target transmission device can be simultaneously occupied, which improves the usage efficiency of the target transmission device and usage convenience of individual client terminals under the sharing network mode. In an embodiment, a USB data communication device based on a hybrid USB Network is provided. The USB data communication device based on a hybrid USB Network corresponds to the USB data communication method based on a hybrid USB Network in above embodiments. As illustrated inFIG.10, the USB data communication device based on a hybrid USB Network includes a client terminal10and a docking station terminal20. In particular, function modules in the client terminal are described in detail below:a docking station terminal finding request unit11, configured to send a docking station terminal finding request to the hybrid USB Network via a network module in the client terminal;a communication channel forming unit12, configured to connect to the docking station, and form a communication channel between the client terminal and the docking station terminal, when there is any docking station terminal on the hybrid USB Network;a data sending request sending unit13, configured to send a data sending request to the docking station terminal via the hybrid USB Network, in which the data sending request comprises a network data and a target transmission device; anda network data sending unit14, configured to send the network data to the target transmission device via the docking station terminal on the hybrid USB Network when the communication channel is in a sharing mode. 
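On the client side, steps S11 to S14 together with the personal-mode wait of steps S121 and S122 can be sketched as follows. The discovery, connection, and messaging helpers are hypothetical; the point is only the ordering: discover a docking station terminal, connect, and either send immediately (sharing mode) or wait for the docking station's communicable command (personal mode, non-first client).

/* Hedged sketch of the client-terminal flow (S11-S14, S121-S122).
 * Helper functions and types are assumptions for illustration only. */
#include <stdbool.h>
#include <stddef.h>

typedef struct docking_station docking_station_t;      /* opaque handle */

typedef enum { CHANNEL_SHARING, CHANNEL_PERSONAL } channel_mode_t;

extern docking_station_t *find_docking_station(void);            /* S11 */
extern bool connect_channel(docking_station_t *ds);               /* S12 */
extern channel_mode_t channel_mode(docking_station_t *ds);
extern bool is_first_connection(docking_station_t *ds);
extern bool wait_communicable_command(docking_station_t *ds);     /* S121 */
extern bool send_data_request(docking_station_t *ds,
                              const void *data, int target);      /* S13/S14 */

bool client_send(const void *network_data, int target_device)
{
    docking_station_t *ds = find_docking_station();   /* S11: discovery  */
    if (ds == NULL || !connect_channel(ds))           /* S12: connect    */
        return false;

    if (channel_mode(ds) == CHANNEL_PERSONAL && !is_first_connection(ds)) {
        /* S121: a later client must wait until the first client releases
         * the devices and the docking station signals that it may proceed. */
        if (!wait_communicable_command(ds))
            return false;
    }
    /* Sharing mode (S14) or permission granted (S122): send the request.   */
    return send_data_request(ds, network_data, target_device);
}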
Optionally, the network module in the client terminal, as illustrated inFIG.11, includes: a USB network transfer protocol module, a connecting manager module, a device manager module, and a virtual USB Device manager module. In particular, the connecting manger module, as illustrated inFIG.7, is configured to try to search for a USB docking station terminal on the hybrid USB Network; try to connect to the USB docking station terminal if the USB docking station is found; and find a connectable USB device on the docking station terminal. The USB network transfer protocol module is configured to convert the USB communication data of an actual USB device provided from the docking station terminal, and communicate with a USB device manager connected to a PC in the network. The device manager module and the virtual USB Device manager module enables a USB device identified on the network to be identified as a device connected to a real client terminal, and allow drive program of the actual USB device to be loaded and operate on the operation system. In an embodiment, a USB data communication device based on a hybrid USB Network is provided. The USB data communication device based on a hybrid USB Network corresponds to the USB data communication method based on a hybrid USB Network mentioned in the above embodiments. The USB data communication device based on a hybrid USB Network includes a docking station terminal. Function modules in the docking station terminal are described in detail below:a monitoring command obtaining unit21, configured to obtain a USB data monitoring command carrying an operation mode;a communication status monitoring unit22, configured to monitor a data communication status of a USB input and output interface when the operation mode is an automatic mode;a data monitoring unit23, configured to monitor a data of a network input data interface of a network module in the docking station terminal when the data communication status is a no input and output information status;a network data conversion unit24, configured to, when the network input data interface obtains a data sending request sent by a client terminal via the hybrid USB Network, wherein the data sending request includes a network data and a target transmission device, convert the network data into a USB communication data via a soft switching module in the docking station; anda USB communication data sending unit25, configured to send, via a USB output data interface, the USB communication data to the target transmission device. Optionally, the software module in the docking station terminal, as illustrated in FIG.12, includes: a USB device management tree, a USB device manager, a terminal connector and a client terminal connecting manager. Particularly, the software module in the docking station terminal includes: a USB device management tree with USB devices managing physical connection, a USB device manager for managing USB device information, an endpoint manager that determines a communication method and processes the actual data communication according to the type of the USB device, and a connecting manager for managing the actual communication connection to the client terminal. A specific limitation related to the USB data communication device based on a hybrid USB Network can refer to the limitation of the USB data communication method based on a hybrid USB Network in the above, which will not be reiterated herein. 
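The module split described for the client terminal and the docking station terminal could be represented, purely as an illustrative sketch, by a pair of structures holding one handler per module. The structure and function-pointer names are assumptions; FIG. 11 and FIG. 12 only name the modules, not their programming interface.

/* Hedged sketch of the FIG. 11 / FIG. 12 module split; names are
 * illustrative assumptions only. */
typedef struct {
    /* client-terminal network module (FIG. 11) */
    void (*usb_network_transfer)(void);       /* converts and forwards USB data over the network   */
    void (*connecting_manager)(void);         /* searches for and connects to docking stations     */
    void (*device_manager)(void);             /* tracks USB devices discovered on the network      */
    void (*virtual_usb_device_manager)(void); /* exposes them as locally connected USB devices so
                                                 the real device drivers can load on the OS        */
} client_network_module_t;

typedef struct {
    /* docking-station software module (FIG. 12) */
    void (*usb_device_tree)(void);            /* manages physically connected USB devices          */
    void (*usb_device_manager)(void);         /* manages USB device information                    */
    void (*endpoint_manager)(void);           /* chooses a communication method per device type
                                                 and processes the actual data communication       */
    void (*connecting_manager)(void);         /* manages the connections to client terminals       */
} docking_station_module_t;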
The above individual modules in the USB data communication device based on a hybrid USB Network can be implemented in whole or in part by software, hardware, or a combination thereof. The above modules can be embedded in or independent of the processor of the computer device, or can be stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above individual modules. In an embodiment, a computer device is provided. The computer device can be the client terminal, and its internal structure diagram can be illustrated as FIG. 13. The computer device includes a processor, a memory, a network interface, and a database that are connected via a device bus. In particular, the processor of the computer device is configured to provide calculation and control capability. The memory of the computer device includes a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program stored in the nonvolatile storage medium. The database of the computer device is configured to store the data that needs to be stored in the USB data communication method based on a hybrid USB Network. The network interface of the computer device is configured to connect to and communicate with an external terminal via a network. The computer program, when executed by the processor, implements the USB data communication method based on a hybrid USB Network. In an embodiment, a computer device is provided, which includes a memory, a processor, and a computer program that is stored in the memory and executable on the processor. The processor, when executing the computer program, implements the USB data communication method based on a hybrid USB Network in the above embodiment, such as step S11 to step S25 illustrated in FIG. 3. Alternatively, the processor, when executing the computer program, implements the functions of the individual modules/units in the USB data communication device based on a hybrid USB Network in the above embodiment, such as the functions of the docking station terminal finding request unit 11 to the USB communication data sending unit 25 illustrated in FIG. 10. To avoid repetition, details are not reiterated herein. It should be appreciated by those skilled in the art that all or part of the processes in the methods of the above embodiments can be implemented by instructing the relevant hardware through a computer program. The computer program can be stored in a nonvolatile computer-readable storage medium and, when executed, can include the processes of the above individual method embodiments. In particular, any reference to memory, storage, database, or other media in the embodiments provided in the present application can include nonvolatile and/or volatile memory. Nonvolatile memory can include read-only memory (ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or an external cache. By way of description and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM). 
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional units and modules is given as an exemplary illustration. In practical applications, the above functions can be distributed among and completed by different functional units and modules as needed, which means that the internal structure of the device is divided into different functional units or modules to complete all or part of the functions described above. The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit them; although a detailed description of the present application has been made with reference to the above embodiments, those skilled in the art should understand that the technical solutions recorded in the above individual embodiments can still be modified, or some of the technical features therein can be replaced by equivalents; however, these modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions in the embodiments of the present application, and they should be included in the scope of protection of the present application.
24,842
11860808
The figures are not necessarily drawn to scale and elements of similar structures or functions are generally represented by like reference numerals for illustrative purposes throughout the figures. The figures are only intended to facilitate the description of the various embodiments described herein. The figures do not describe every aspect of the teachings disclosed herein and do not limit the scope of the claims. DETAILED DESCRIPTION Each of the features and teachings disclosed herein can be utilized separately or in conjunction with other features and teachings to provide a system and method for supporting multi-path and/or multi-mode NVMeoF devices. Representative examples utilizing many of these additional features and teachings, both separately and in combination, are described in further detail with reference to the attached figures. This detailed description is merely intended to teach a person of skill in the art further details for practicing aspects of the present teachings and is not intended to limit the scope of the claims. Therefore, combinations of features disclosed above in the detailed description may not be necessary to practice the teachings in the broadest sense, and are instead taught merely to describe particularly representative examples of the present teachings. In the description below, for purposes of explanation only, specific nomenclature is set forth to provide a thorough understanding of the present disclosure. However, it will be apparent to one skilled in the art that these specific details are not required to practice the teachings of the present disclosure. Some portions of the detailed descriptions herein are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are used by those skilled in the data processing arts to effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the below discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” “displaying,” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. The algorithms presented herein are not inherently related to any particular computer or other apparatus. 
Various general-purpose systems, computer servers, or personal computers may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. It will be appreciated that a variety of programming languages may be used to implement the teachings of the disclosure as described herein. Moreover, the various features of the representative examples and the dependent claims may be combined in ways that are not specifically and explicitly enumerated in order to provide additional useful embodiments of the present teachings. It is also expressly noted that all value ranges or indications of groups of entities disclose every possible intermediate value or intermediate entity for the purpose of an original disclosure, as well as for the purpose of restricting the claimed subject matter. It is also expressly noted that the dimensions and the shapes of the components shown in the figures are designed to help to understand how the present teachings are practiced, but not intended to limit the dimensions and the shapes shown in the examples. The present disclosure describes a system that can support both the NVMe and NVMeoF protocols, and various types of fabric-attached SSDs (eSSDs). In some embodiments, an eSSD refers to an SSD that can support the NVMeoF protocols. When configured to support the NVMeoF standard, the system can support various fabrics including not only Ethernet, but also Fibre Channel, InfiniBand, and other network fabrics. For convenience of illustration, the following examples and embodiments show Ethernet-attached NVMeoF devices. However, it is noted that any other type of NVMeoF device can be used without deviating from the scope of the present disclosure. The present system provides a single platform and common building blocks that can support both single and dual pathing systems compatible with both NVMe and NVMeoF devices. According to one embodiment, the common building blocks that support single pathing and dual pathing NVMe and NVMeoF devices include a mid-plane, a chassis, and a fan assembly. The present system can scale linearly by adding more similar devices and/or chassis. The present system may also include other building blocks including, but not limited to, full-width and half-width switch boards, and an X86 motherboard. The fabric-attached SSD (eSSD) disclosed herein is a single common device that can be used in multiple systems compatible with the NVMe and NVMeoF standards. In this sense, the fabric-attached SSD is also referred to as a multi-mode NVMeoF device. The present system provides a platform that can support various types of NVMe and NVMeoF devices in a non-high availability (non-HA) mode (i.e., single-path input/output (I/O)) or an HA mode (i.e., multi-path I/O) with minimum hardware changes. According to one embodiment, the multi-mode NVMeoF device can support either the NVMe or NVMeoF standard by detecting product information from a known location. For example, the product information used for self-configuration and stored in the chassis is vital product data (VPD). During start-up, the multi-mode NVMeoF device can retrieve the VPD from the chassis and configure itself based on the VPD. However, it is noted that the multi-mode NVMeoF device can be configured in various manners without deviating from the scope of the present disclosure.
For example, the multi-mode NVMeoF device can be configured by a control command over the PCIe bus issued by a BMC of the switch to which the multi-mode NVMeoF device is connected. According to one embodiment, the multi-mode NVMeoF device can be configured in a single port NVMe mode, a dual port NVMe mode, a single port NVMeoF mode, and a dual port NVMeoF mode. Table 1 shows example use of the U.2 connector according to the configuration of the multi-mode NVMeoF device. When configured as an NVMe device, the multi-mode NVMeoF device can be configured in either the single port NVMe mode or the dual port NVMe mode. In the single port NVMe mode, the PCIe lanes 0-3 of the U.2 connector are used to carry PCIe signals. In the dual port NVMe mode, the PCIe lanes are split into 2 by 2 lanes; the PCIe lanes 0 and 1 are used for the first port, and the PCIe lanes 2 and 3 are used for the second port. When configured as an NVMeoF device, the multi-mode NVMeoF device can be configured in either the single port NVMeoF mode or the dual port NVMeoF mode. In the single port NVMeoF mode, the PCIe lanes are split into 2 by 2 lanes but only the PCIe lanes 0 and 1 are used to carry PCIe signals, and the PCIe lanes 2 and 3 are not used. The first pair of the SAS port 0 is used for the Ethernet port 0 (first port), and the SAS port 1 is not used. In the dual port NVMeoF mode, the PCIe lanes are split into 2 by 2 lanes, and the PCIe lanes 0 and 1 are used as a control plane for the first Ethernet port, and the PCIe lanes 2 and 3 are used as a control plane for the second Ethernet port. The first pair of the SAS port 0 is used for the Ethernet port 0 (first port), and the SAS port 1 is used for the Ethernet port 1 (second port).
TABLE 1: Example use of the U.2 connector
Single port NVMe: PCIe lanes 0-3 carry PCIe signals as one x4 port; SAS ports 0 and 1 are not used.
Dual port NVMe: PCIe lanes are split into 2 by 2 lanes; lanes 0 and 1 are used as the first port and lanes 2 and 3 as the second port; SAS ports 0 and 1 are not used.
Single port NVMeoF: PCIe lanes are split into 2 by 2 lanes; lanes 0 and 1 are used as a control plane for the first Ethernet port and lanes 2 and 3 are not used; the first pair of SAS port 0 is used for Ethernet port 0 (first port) and SAS port 1 is not used.
Dual port NVMeoF: PCIe lanes are split into 2 by 2 lanes; lanes 0 and 1 are used as a control plane for the first Ethernet port and lanes 2 and 3 as a control plane for the second Ethernet port; the first pair of SAS port 0 is used for Ethernet port 0 (first port) and the second pair of SAS port 1 is used for Ethernet port 1 (second port).
If the product information is stored in a chassis, the two lanes (in a single port mode) or four lanes (in a dual port mode) of the PCIe bus on the U.2 connector are driven by a PCIe engine. In this case, the multi-mode NVMeoF device can disable the Ethernet engine(s), and the NVMe protocols and functionalities are supported or enabled. If the product information is stored in an NVMeoF chassis, the Ethernet ports use only PCIe lanes 2 and 3, or Serial Attached SCSI (SAS) pins, depending on the design of the multi-mode NVMeoF device. The present multi-mode NVMeoF device can operate in two distinct modes, namely, an NVMe mode and an NVMeoF mode. In the NVMe mode, the multi-mode NVMeoF device behaves as an NVMe device. The PCIe pins of the U.2 connector can be connected to the PCIe x4 module111. The PCIe bus can be shared by data and control. In one embodiment, in the NVMeoF mode, the multi-mode NVMeoF device can be configured in a single-path mode or a dual-path mode. In the single path mode, one PCIe x2 is used for control plane and is connected to one motherboard.
In the dual-path mode, two PCIe x2 buses are used for the control plane and are connected to two motherboards. In another embodiment, the NVMeoF device can use SAS pins for the Ethernet ports in the NVMeoF mode. In the non-HA NVMeoF mode, the two lanes of the PCIe bus are used for standard features through a control plane. In the dual-port HA mode, the four lanes of the PCIe bus are split into two X2 lanes and used for port A and port B, respectively. The existing PCIe software and driver may be used unmodified for the multi-mode NVMeoF device. Because the multi-mode NVMeoF device can operate both in the NVMe and NVMeoF modes, the cost for developing and deploying the devices can be reduced because the same devices can be used in the NVMe mode and the NVMeoF mode. For a similar reason, the multi-mode NVMeoF device can have a faster time to market. The multi-mode NVMeoF device can be used in various products and chassis. The two lanes of the PCIe bus are reserved for standard features through a control plane. A CPU, a baseboard management controller (BMC), and other devices can use the two lanes of the PCIe bus as a control plane to communicate with each NVMeoF device inside the chassis at no additional cost. The NVMe mid-plane can be used unmodified, and there is no need for a new connector on the NVMeoF device to accommodate additional new pins. FIG.1illustrates a block diagram of an example NVMeoF device, according to one embodiment. The NVMeoF device101includes a PCIe X4 module111(e.g., a PCIe X4 Gen3 module) and various hardware and protocol stacks including, but not limited to, an Ethernet network interface card (NIC)112, a TCP/IP offload engine113, an RDMA controller115, and an NVMeoF protocol stack116. The NVMeoF device101can support up to two PCIe X2 buses151and152and two Ethernet ports153and154that are connected to a switch motherboard (not shown) over the mid plane161depending on a mode of operation. The two PCIe X2 buses151and152and the two Ethernet ports153and154are connected to a U.2 connector121of the NVMeoF device101. According to one embodiment, the NVMeoF device101can be configured as an NVMe device. In the NVMe mode, a mode selector160can configure the NVMeoF device101to use all four lanes (in a single port mode) or only two of the four lanes (in a dual port mode) of the PCIe bus to carry PCIe signals. The PCIe x4 bus is connected to a midplane, and the PCIe bus is shared between data and control signals. According to another embodiment, the NVMeoF device101can be configured as an NVMeoF device. In the NVMeoF mode, the mode selector160can configure the NVMeoF device101to use the two lanes of the PCIe X2 bus151to carry PCIe signals. The NVMeoF device101can further configure the remaining two lanes of the PCIe bus to carry Ethernet signals over the two Ethernet ports153and154. In the NVMeoF mode, the two PCIe X2 lanes are directly transported to the PCIe X4 module111, and signals over the remaining two PCIe X2 lanes are carried over the Ethernet ports153and154and buffered in the buffer122to be transported to the Ethernet NIC112of the NVMeoF device101. The operational mode of the NVMeoF device101can be self-configured or externally set. For example, the NVMeoF device101can self-configure its operational mode using a physical pin (e.g., a presence pin on the chassis of the switch motherboard) or by an in-band command from a BMC (e.g., BMC201ofFIG.2) of the switch motherboard.
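As a rough illustration only, the mode selection and lane usage described above and summarized in Table 1 can be restated as a small configuration table. The following Python sketch is not part of the disclosed device or its firmware; the Mode and LaneAssignment names, the VPD dictionary key, and the rule that a BMC command overrides the chassis information are assumptions made for the example.

from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    SINGLE_PORT_NVME = "single port NVMe"
    DUAL_PORT_NVME = "dual port NVMe"
    SINGLE_PORT_NVMEOF = "single port NVMeoF"
    DUAL_PORT_NVMEOF = "dual port NVMeoF"

@dataclass(frozen=True)
class LaneAssignment:
    pcie_lanes_0_1: str  # use of PCIe lanes 0 and 1 of the U.2 connector
    pcie_lanes_2_3: str  # use of PCIe lanes 2 and 3 of the U.2 connector
    sas_ports: str       # use of SAS ports 0 and 1 of the U.2 connector

# Restatement of Table 1 as data (illustrative representation only).
LANE_MAP = {
    Mode.SINGLE_PORT_NVME: LaneAssignment(
        "PCIe signals (part of one x4 port)",
        "PCIe signals (part of one x4 port)",
        "not used"),
    Mode.DUAL_PORT_NVME: LaneAssignment(
        "first PCIe port (x2)",
        "second PCIe port (x2)",
        "not used"),
    Mode.SINGLE_PORT_NVMEOF: LaneAssignment(
        "control plane for first Ethernet port (x2)",
        "not used",
        "first pair of SAS port 0 carries Ethernet port 0; SAS port 1 not used"),
    Mode.DUAL_PORT_NVMEOF: LaneAssignment(
        "control plane for first Ethernet port (x2)",
        "control plane for second Ethernet port (x2)",
        "SAS port 0 carries Ethernet port 0; SAS port 1 carries Ethernet port 1"),
}

def configure_at_startup(vpd: dict, bmc_command=None) -> LaneAssignment:
    # The VPD read from the chassis names a configuration; a command from the
    # BMC of the switch, when present, is assumed here to take precedence.
    mode = bmc_command or Mode(vpd["configuration"])
    return LANE_MAP[mode]

For instance, configure_at_startup({"configuration": "dual port NVMeoF"}) would select the last row of Table 1, with a PCIe x2 control plane per Ethernet port and both SAS port pairs carrying Ethernet signals.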
The manageability information retrieved through Ethernet is referred to as “in-band” information whereas the manageability information retrieved through the PCIe bus is referred to as “out-of-band” information. The NVMeoF device101can push various signals and perform various services over the PCIe ports151and152using the otherwise unused PCIe X2 bus over the U.2 connector. Examples of the signals that can be pushed include, but are not limited to, health status information, field-replaceable unit (FRU) information, and sensor information of the NVMeoF device101. Examples of the services that can be pushed over the PCIe ports include, but are not limited to, discovery services to a BMC or a CPU that is local to the switchboard, and download services for new NVMeoF device firmware for performing a firmware upgrade. The NVMeoF device101can push some device-specific information directly to a BMC of the switch motherboard over the PCIe X2 bus151over a control plane established between the switch motherboard and the NVMeoF device101. Examples of such device-specific information that can be carried over the control plane include, but are not limited to, discovery information and FRU information of the NVMeoF device101. This can reduce the burden of the BMC for polling the status of the NVMeoF device101. The device-specific information may be communicated between the NVMeoF device101and the BMC using a new device command. The NVMeoF device101can support high availability (HA) multipath I/O with only the two PCIe lanes151and152of the PCIe X2 bus. FIG.2illustrates a block diagram of an example switch motherboard, according to one embodiment. The switch motherboard201has uplink Ethernet ports211, downlink Ethernet ports212, a local CPU202, a BMC203, an Ethernet switch204, and a PCIe switch205. A number of eSSDs can be connected to the switch motherboard201. According to one embodiment, the eSSD is an NVMeoF device that can be configured to work as an NVMe device or an NVMeoF device depending on the mode of operation. Each of the eSSDs can be connected to the switch motherboard201via a U.2 connector as shown inFIG.1and configured to connect to the switch motherboard201via several high-speed Molex connectors that collectively carry the PCIe X2 bus213, the downlink Ethernet ports212, and other non-high-speed control signals such as SMBus, reset, clock, etc. The switch motherboard201can push various signals to each of the eSSDs and perform various services on each of the eSSDs over the PCIe X2 bus213and/or the downlink Ethernet ports212over the midplane261. For example, the switch motherboard201can receive device-specific information from each of the eSSDs over the Ethernet ports212, including, but not limited to, health status information, field-replaceable unit (FRU) information, and sensor information of the eSSD. The switch motherboard201can also perform various services over the Ethernet ports212including, but not limited to, discovery services to a BMC or a local host CPU and download services for new eSSD firmware for performing a firmware upgrade. FIG.3illustrates a block diagram of an example NVMeoF device, according to another embodiment. The NVMeoF device301includes a PCIe X4 module311(e.g., a PCIe X4 Gen3 module) and various hardware and protocol stacks including, but not limited to, an Ethernet network interface card (NIC)312, a TCP/IP offload engine313, an RDMA controller315, and an NVMeoF protocol stack316.
The NVMeoF device301can support two PCIe X2 buses351and352and two Ethernet ports353and354that are connected to a switch motherboard (not shown) over the mid plane361. The PCIe X2 buses351and352and the two Ethernet ports353and354are connected to a U.2 connector321of the NVMeoF device301. According to one embodiment, the NVMeoF device301can use the unused SAS pins of the U.2 connector321for Ethernet signals instead of using the PCIe lanes for the Ethernet ports153and154as shown inFIG.1. Because the NVMeoF device301uses the SAS pins for the Ethernet ports353and354, the NVMeoF device301can support multi-path I/Os and multiple protocols without suffering from a bandwidth issue. FIG.4illustrates a block diagram of an example NVMeoF device configured as an NVMe device operating in a HA mode, according to one embodiment. In this example, the NVMeoF device401is configured as an NVMe device and can support multi-path I/Os using a U.2 connector421. A switch including two half-width switch controllers460A and460B is contained in one 2U chassis. The NVMeoF device401is connected to both the switch controllers460A and460B via the U.2 connector over the midplane461. The switch controller460A can support two lanes of the PCIe bus and an Ethernet port A while the switch controller460B can support the remaining two lanes of the PCIe bus and an Ethernet port B. The NVMeoF device401can connect to the switch controller460A over the two-lane PCIe bus451and the Ethernet port A453. In addition, the NVMeoF device401can connect to the switch controller460B over the two-lane PCIe bus452and the Ethernet port B454. FIG.5illustrates a block diagram of an example switch including two switch motherboards, according to one embodiment. The switch500includes two switch motherboards501A and501B to support multi-path I/O in a dual port configuration (in a HA mode). The switch motherboard501A includes an Ethernet switch504A and a PCIe switch505A, and the switch motherboard501B includes an Ethernet switch504B and a PCIe switch505B. Each of the switch motherboards501A and501B can include other components and modules, for example, a local CPU, a BMC, uplink Ethernet ports, downlink Ethernet ports, etc. as shown in the example switch motherboard201shown inFIG.2. Several eSSDs can be plugged into device ports of the switch. For example, each of the eSSDs is connected to the switch using a U.2 connector. Each eSSD can connect to both the switch motherboard501A and the switch motherboard501B. In the present example, the eSSDs plugged into the switch500are configured as NVMeoF devices requiring connectivity to the switch500over the midplane561via the PCIe bus and Ethernet ports. According to one embodiment, the Ethernet signals between the switch500and the eSSDs can use SAS pins S2, S3, S5, and S6 for the primary Ethernet port553to the switch motherboard501A. The Ethernet signals can also use S9, S10, S12, and S13 for the secondary Ethernet port554to the switch motherboard501B. The E25 pin of each U.2 connector can be used to enable the dual port configuration. PCIe signals can be carried over PCIe buses551and552between the respective switch motherboards501A and501B and each of the eSSDs. The eSSD can self-configure its operational mode using a physical pin (e.g., a presence pin on the chassis of the switch) or by an in-band command from a BMC of the switch motherboard. According to one embodiment, the switch500can support 10G Ethernet, and the midplane561is a common midplane that can support both a HA mode and a non-HA mode.
Depending on the system configuration, signal integrity may need to be tested to ensure that the common midplane561can support both configurations. If the signal integrity is not sufficient, the system can have two midplanes including a first midplane for the HA mode and a second midplane for the non-HA mode. According to one embodiment, a system includes a fabric switch including a motherboard, a baseboard management controller (BMC), a network switch configured to transport network signals, and a PCIe switch configured to transport PCIe signals; a midplane; and a plurality of device ports. Each of the plurality of device ports is configured to connect a storage device to the motherboard of the fabric switch over the midplane and carry the network signals and the PCIe signals over the midplane. The storage device is configurable in multiple modes based on a protocol established over a fabric connection between the system and the storage device. The storage device may have a U.2 connector. The storage device may support both NVMe and NVMeoF protocols. The midplane may support both a high availability (HA) mode and a non-HA mode. The network signals may be carried over unused pins of the connector. The network signals may provide discovery services or download services for new firmware of the storage device. The network signals may include device-specific information including one or more of health status information, field-replaceable unit (FRU) information, and sensor information of the storage device, and the device-specific information may be transported to the BMC over the midplane via PCIe lanes. The storage device may be configured to operate in a HA mode or a non-HA mode. According to another embodiment, an NVMeoF device includes: a PCIe module; a network engine; and a connector configured to connect to a switch motherboard over a midplane and carry PCIe signals over the midplane. The PCIe module transports PCIe signals to the switch over the PCIe bus, and the network engine transports network signals to the switch over Serial Attached SCSI (SAS) pins of the connector. The connector may be a U.2 connector. The network signals may include device-specific information including one or more of health status information, FRU information, and sensor information of the NVMeoF device. The device-specific information may be carried to a BMC of the switch over the midplane. The network signals may provide discovery services or download services for new firmware of the NVMeoF device. The switch may include two switch boards including a primary Ethernet port and a secondary Ethernet port. SAS pins S2, S3, S5, and S6 may be used for the primary Ethernet port, and SAS pins S9, S10, S12, and S13 may be used for the secondary Ethernet port. The NVMeoF device may be configured to operate in a HA mode or a non-HA mode. According to yet another embodiment, a system includes: a switch and a plurality of NVMeoF devices. Each NVMeoF device is configured to be coupled to the switch using a connector. The connector is configured to transport the PCIe signals to the switch over a PCIe bus and transport network signals to the switch over a network bus. The connector may be a U.2 connector. The PCIe signals may be transported over two PCIe lanes of the PCIe bus, and the network signals may be transported over the remaining two PCIe lanes of the PCIe bus. The network signals may be transported over SAS pins.
The network signals may include device-specific information including one or more of health status information, FRU information, and sensor information of each NVMeoF device. The network signals may provide discovery services or download services for a new firmware of each NVMeoF device. Each NVMeoF device of the NVMeoF devices may be configured to operate in a HA mode or a non-HA mode. The above example embodiments have been described hereinabove to illustrate various embodiments of implementing a system and method for supporting multi-path and/or multi-mode NVMeoF devices. Various modifications and departures from the disclosed example embodiments will occur to those having ordinary skill in the art. The subject matter that is intended to be within the scope of the invention is set forth in the following claims.
25,351
11860809
Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of embodiments of the present invention. The apparatus and method components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein. DETAILED DESCRIPTION Examples disclosed herein are directed to a computing device, comprising: a housing defining an exterior of the computing device; a controller supported within the housing; a first communication port disposed on the exterior; a second communication port disposed on the exterior; a port-sharing subsystem supported within the housing, having (i) a first state to connect the controller with the first communication port, exclusive of the second communication port, and (ii) a second state to connect the controller with the first communication port and the second communication port; the controller configured to: detect engagement of an external device with the first communication port; obtain connection parameters from the external device; based on the connection parameters, set the port-sharing subsystem in either the first state or the second state; and establish a connection to the external device via the port-sharing subsystem and the first communication port. Additional examples disclosed herein are directed to a method, comprising: detecting engagement of an external device with a first communication port disposed on an exterior of a computing device; obtaining connection parameters from the external device; based on the connection parameters, setting a port-sharing subsystem in either (i) a first state to connect the controller with the first communication port, exclusive of a second communication port disposed on the exterior, or (ii) a second state to connect the controller with the first communication port and the second communication port; and establishing a connection to the external device via the port-sharing subsystem and the first communication port. FIG.1illustrates a computing device100viewed from a front (upper portion ofFIG.1), and from a back thereof (lower portion ofFIG.1). The computing device100(also referred to herein simply as the device100) can be a mobile computing device, such as a smart phone, a tablet computer, a wearable computing device, or the like. In other examples, the computing device need not be mobile. The device100includes a housing104defining an exterior of the device100, and containing or otherwise supporting various components of the device100. As illustrated inFIG.1, the housing104supports a display108of the device100, e.g., controllable by one or more internal components of the device100to render text, graphics and the like thereon. The device100can also include an input device, such as a touch screen integrated with the display108. In other examples, the device100can include other input devices, in addition to or instead of the above-mentioned touch screen. The device100includes a plurality of communication ports, also referred to herein simply as ports. 
The illustrated example includes a first communication port112-1disposed on a bottom116of the device100, as well as a second communication port112-2disposed on a side120of the device100, and a third communication port112-3disposed on a back124of the device100. The ports112-1,112-2, and112-3are also referred to collectively herein as the ports112, and any one of the ports112may also be referred to generically as a port112. In other examples, the device100can include a greater number of ports112, or as few as two ports112. As seen inFIG.1, the port112are disposed on the exterior of the device100, and enable connections to be established between the device100and other electronic devices, also referred to as external devices. A wide variety of such other devices can be connected with the device100via the ports112. Examples of such other devices include peripherals such as a headset (e.g., including one or more speakers and/or one or microphones), a heads-up display (HUD), e.g., mounted on an eyeglass frame, a storage device such as a thumb drive, and the like. Further examples of such other devices include a host computing device, such as a personal computer, to which the device100can be connected to transfer files between the personal computer and the device100, to manage configuration settings of the device100, and the like. The connections mentioned above are implemented via a common communication protocol, or set of related communication protocols. In the present example, the connections are Universal Serial Bus (USB) connections. The ports112are therefore USB ports, although the ports112can have distinct physical configurations and support distinct operational modes, some of which may enable more restricted sets of functionality than others. For example, in the embodiment illustrated inFIG.1, the ports112-1and112-2are USB type-C ports configured to accept type-C connectors on the above-mentioned external devices, or cables connected thereto. The port112-3includes a set of contact pads (e.g., eight, in the illustrated example) configured to engage with a corresponding set of pogo pins or other suitable electrical contacts on an external device. Other physical configurations can also be deployed for one or more of the ports112, as will be apparent to those skilled in the art. Turning toFIG.2, certain internal components of the device100are illustrated. In particular, the device100includes a System on a Chip (SoC)200, e.g., an integrated circuit or set of integrated circuits deployed in a common package. The SoC200can include a central processing unit (CPU), graphics processing unit (GPU), and memory (e.g., a suitable combination of volatile and/or non-volatile memory). The SoC200can also include other components such as wireless transceivers or other networking hardware, input/output controllers (e.g., for the touch screen integrated with the display108), and the like. In other examples, some or all of the above mentioned components can be implemented as discrete integrated circuits, rather than being integrated into the SoC200. The device100also includes a controller204configured to establish connections with external devices via the ports112and manage the exchange of data between such external devices and the device100itself. In the present example, in which the ports112are USB ports, the controller204is a USB controller configured to manage the above-mentioned connections and enable data transfer between external devices and the CPU and/or other components of the device100. 
In some examples, as with the other components mentioned above, the controller204can be implemented as a discrete component rather than integrated with the SoC200. In the present example, although the device100includes three ports112, the controller204implements a single logical port (e.g., the controller204may include only a single set of pins implementing the above-mentioned port). The ports112, in other words, share access to the single internal port implemented by the controller204. In other devices, a multi-port controller may be provided, or a plurality of single-port controllers may be provided. The cost and/or manufacturing complexity of such devices may be increased as a result, however. The inclusion of a comparatively simple controller204in the device100may reduce cost and/or complexity. Sharing access to the internal port implemented by the controller204between the multiple external ports112, however, can lead to interrupted connections at one or more of the ports112, reduced functionality at the ports112, or the like. The device100, for example the controller204in particular, is therefore configured to implement functionality to mitigate such interruptions, as will be discussed below in greater detail. In order to physically interconnect the ports112with the controller204, the device100includes a port-sharing subsystem208. The port-sharing subsystem208includes a hub212(e.g., a USB hub, in this example). The hub212enables physical connections between the controller204and each of the ports112. The hub212therefore enables concurrent use of one or more of the ports112, sharing the available bandwidth of the single internal port implemented by the controller204between any active ports112(i.e., ports112that are currently engaged with external devices). However, as will be apparent to those skilled in the art, the hub212, and/or certain combinations of the hub212and ports112, may not support a complete set of operational modes enabled by the controller204. In the context of USB connections, for example, the controller204may allow connections to be established with external devices with the relevant port112acting as either a downstream-facing port (DFP, in which the device100is a host device, e.g., for a peripheral such as a headset) or an upstream-facing port (UFP, in which the device100is a client and the external device is a host). Further, the controller204may support operational modes with various different transfer speeds. For example, the controller204may support USB SuperSpeed (SS) connections, as well as high speed (HS) connections with lower bandwidth than SS connections, and full speed (FS) connections with lower bandwidth than HS connections. The hub212, on the other hand, may enable only DFP connections to external devices, in which the device100is a host device. Further, the ports112-2and112-3, e.g., due to their pinouts or other restrictions, may support only reduced transfer speeds (e.g., FS and HS, but not SS). The port112-1, meanwhile, may be a dual-role port (DRP, i.e., configurable as either DFP or UFP) supporting the full set of functionality enabled by the controller204, but only when not connected via the hub212(which would otherwise restrict the port112-1to DFP operation). Similarly, some transfer speeds (e.g., SS) may be unavailable to the port112-1via the hub212. The port112-1may therefore also be referred to as an enhanced port, while the ports112-2and112-3may also be referred to as restricted ports.
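The relationship between the controller204, the hub212, and the ports112can be pictured as a small capability table. The Python sketch below is purely illustrative and does not describe actual hardware: the class and constant names are invented, and the role and speed sets simply restate the example above (a DFP-only, FS/HS hub in front of a dual-role controller that also supports SS).

from dataclasses import dataclass

@dataclass(frozen=True)
class PortCapabilities:
    roles: frozenset   # directionality: "DFP" (device 100 acts as host) and/or "UFP"
    speeds: frozenset  # supported transfer speeds, e.g. "FS", "HS", "SS"

# Controller 204: full feature set (dual-role, up to SuperSpeed).
CONTROLLER_204 = PortCapabilities(frozenset({"DFP", "UFP"}), frozenset({"FS", "HS", "SS"}))

# Hub 212: the shared, restricted path (DFP only, no SS).
HUB_212 = PortCapabilities(frozenset({"DFP"}), frozenset({"FS", "HS"}))

# Port 112-1 only reaches the full controller capabilities when the switches
# bypass the hub; ports 112-2 and 112-3 always connect through the hub.
PORT_112_1_BYPASSED = CONTROLLER_204
PORT_112_2 = HUB_212
PORT_112_3 = HUB_212

def hub_supports(required_role: str, required_speed: str) -> bool:
    # True when a requested connection fits within the hub's restricted mode.
    return required_role in HUB_212.roles and required_speed in HUB_212.speeds

Framed this way, the assessment described below amounts to asking whether a newly engaged device fits within HUB_212 or genuinely needs the bypassed, enhanced path at the port112-1.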
As will be seen below, the port-sharing subsystem208has enhanced and restricted states that determine the capabilities available via the ports112. To enable the port112-1to establish UFP connections, connections employing certain transfer speeds, or the like, the port-sharing subsystem208also includes at least one switch216. In the present example, the port-sharing subsystem208includes a first switch216-1, and a second switch216-2. The switches216can be implemented as distinct components, or as portions of a single component, such as a multiplexer or the like. The switches216are controllable by the controller204, as indicated by the dotted lines inFIG.2. As seen inFIG.2, the switch216-1enables connection of the controller204to either the hub212, or the second switch216-2. The second switch216-2, meanwhile, enables connection of the port112-1to either the hub212, or the first switch216-1. Thus, the switches216can cooperate to bypass the hub212on behalf of the port112-1, connecting the port112-1directly to the controller204. Direct connection of the port112-1to the controller204via the switches216excludes connections between the controller204and the ports112-2and112-3. Therefore, subsequent connection of an external device to the port112-2or112-3may require the control of the switches216to reconnect the hub212to the controller204, thereby interrupting any connection established via the port112-1. The connection established via the port112-1may then be re-established via the hub212(with the above-noted operational restrictions), but in the meantime data transfers, audio streams, or the like may fail and to need to be restarted. As will be discussed below, the controller204is configured to implement an assessment process prior to establishing connections to external devices, in order to mitigate interruptions such as those noted above, while also enabling an enhanced operational mode for certain connections via the port112-1under some conditions. Turning toFIG.3, a flowchart of a port-sharing method300is illustrated. The method300will be described in conjunction with its performance in the device100, e.g., by the controller204to control operation of the port-sharing subsystem208. As will be apparent, the method300may also be implemented in other devices100, e.g., including different numbers of enhanced ports and restricted ports. At block305, the controller204is configured to detect engagement of an external device with one of the ports112. Detection of engagement at block305can be done by detecting a predetermined voltage on one or more lines of the relevant port112, as will be apparent to those skilled in the art. At block305, the port-sharing subsystem208may be configured to connect the hub212to the controller204, such that all ports112can be monitored for engagement. In some examples, detection of engagement at block305can also be implemented by querying a software-based state machine configured to track port connections, and/or by querying port drivers (e.g. integrated with an operating system) for port activity status. Following the detection at block305, at block310the controller204is configured to determine whether there are any other active connections to external devices. That is, the controller204is configured to determine whether any other ports112are currently engaged with external devices. 
When the determination at block310is affirmative (i.e., when one or more other ports112are already engaged with external devices), the controller204proceeds to establish a connection at the port detected at block305via the hub212(at block330, discussed in greater detail below). Specifically, to support more than one concurrent connection, the hub212must be activated, and therefore an affirmative determination at block310necessitates use of the hub212, regardless of any operational restrictions imposed thereby. In the present example, it is assumed that the determination at block310is negative, because the engagement detected at block305is the only engagement present with any of the ports112. The controller204therefore proceeds to block315, and determines whether the engagement detected at block305is an engagement of an external device with the enhanced port112-1. When the determination at block315is negative, as will be discussed below, the hub212is employed because the ports112-2and112-3are only connected to the controller204via the hub212. However, in the present example performance of the method300, the determination at block315is assumed to be affirmative. For example, turning toFIG.4, a headset400is shown having been connected to the port112-1. The ports112-2and112-3, on the other hand, are inactive (i.e., no external devices are connected to the ports112-2and112-3). Returning toFIG.3, following an affirmative determination at block315, the controller204proceeds to block320. At block320the controller204obtains connection parameters for the external device, e.g., the headset400shown inFIG.4. To obtain the connection parameters, the controller204can be configured to set the port-sharing subsystem208in a first state. The first state connects the port112-1to the controller204, exclusively of the ports112-2and112-3. As shown inFIG.4, in the first state the switch216-1is controlled to connect the controller204to the second switch216-2, and the second switch216-2is controlled to connect the port112-1to the first switch216-1. Thus, the hub212is bypassed (i.e., not connected to the controller204), and the port112-1is connected to the controller204exclusively. In the first state shown inFIG.4, the port112-1can operate in an enhanced operating mode, in which the full set of functionality implemented by the controller204is available. However, prior to establishing a connection with the headset400in the illustrated state, the controller204is configured to assess whether such an exclusive connection is necessary to support the headset400. The connection parameters obtained at block320include, for example, a transfer speed and/or directionality for the connection. Various protocols(s) will occur to those skilled in the art to obtain the connection parameters, e.g., according to the USB enumeration process. For example, at block320the controller204may obtain connection parameters indicating that the headset400operates as a UFP device, indicating that the port112-1is to operate as a DFP. Further, the connection parameters obtained at block320indicate that the headset400supports FS and HS transfer speeds (e.g., excluding SS transfer speed). A variety of other connection parameters can also be obtained at block320. For example, the controller204may obtain indications of whether the headset400(or any other connected device) supports any of a variety of connection modes supported by the controller204, such as a display mode (e.g., Display Port over Type-C USB), and the like. 
At block325, the controller204is configured to determine whether the connection parameters obtained at block320are compatible with the hub212. That is, the controller204is configured to determine whether, despite the engagement detected at block305being the only current port engagement, the engaged device can be supported via the hub212rather than via the dedicated connection established by bypassing the hub212. The determination at block325includes comparing the connection parameters from block320to stored connection parameters at the controller204, representing the capabilities of the hub212. For example, the controller204can store a maximum supported transfer speed, and/or supported directionality parameters for the hub212. In the present example, it is assumed that the hub212supports transfer speeds up to and including HS (i.e., excluding SS), and supports only DFP connections, in which the device100is the host. Thus, at block325the controller204determines that the connection parameters from block320are compatible with the hub212. Following an affirmative determination at block325, the controller204proceeds to block330. At block330, the hub212is activated, and the connection with the headset400is completed. Completion of the connection with the headset400can include further enumeration processes, loading and execution of a driver for the headset400, and the like. Referring toFIG.5, the headset400is shown connected to the controller204via the hub212. InFIG.5, the port-sharing subsystem208is set to a second state to connect the controller204with the port112-1as well as the ports112-2and112-3, via the hub212. In the second state, the port-sharing subsystem208enables a restricted operational mode for the ports112, due to restrictions in the functionality implemented by the hub212relative to the functionality implemented by the controller204(as well as the port112-1). The port112-1, in other words, is excluded from operating in an enhanced operational mode when the subsystem208is in the second state. As will be apparent, subsequent engagements of other external devices may be detected through additional performances of block305, triggering further performances of the method300. Turning toFIG.6, a thumb drive600or other external storage device has been engaged with the port112-3. Thus, following a detection of the thumb drive600at another performance of block305, the controller204determines, at block310, whether other active connections are present. In this example, the determination at block310is affirmative, because the port112-1is also active. The controller204therefore proceeds directly to block330, to complete a connection with the thumb drive600via the hub212. As will now be apparent, connecting the thumb drive600via the hub212avoids interrupting the connection with the headset400, because the headset400is also already connected via the hub212. If the headset400had been connected by bypassing the hub212, connection of the thumb drive600would require disconnection of the headset400, and reconnection via the hub212. Such a reconnection may result in a dropped audio stream or the like at the headset400. The assessment of connection parameters at block325therefore enables the controller204to set the port-sharing subsystem208in the second state for connections via the port112-1that are compatible with the restricted operating mode enabled in the second state.
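The decisions made at blocks310through335can be collapsed into a short piece of selection logic. The sketch below is a simplified, hypothetical reading of the flow described above rather than the claimed method: the function signature, the hub_supports and ask_operator callables, and the fixed "HS" downgrade target are assumptions, and error handling is omitted.

def choose_connection_path(engaged_port: str,
                           other_ports_active: bool,
                           required_role: str,   # role the port must play, e.g. "DFP"
                           max_speed: str,       # fastest speed the external device supports
                           hub_supports,         # callable(role, speed) -> bool
                           ask_operator) -> str: # callable(message) -> bool
    # Block 310: any other active connection forces use of the hub.
    if other_ports_active:
        return "hub"
    # Block 315: the restricted ports reach the controller only via the hub.
    if engaged_port != "112-1":
        return "hub"
    # Blocks 320/325: compare the device's connection parameters to the hub.
    if hub_supports(required_role, max_speed):
        return "hub"  # second state; the other ports remain usable later
    # Block 335 (simplified): offer a downgraded connection, else bypass the hub.
    if hub_supports(required_role, "HS") and ask_operator(
            "Connect at reduced speed to keep the other ports available?"):
        return "hub_reduced_speed"
    return "bypass"   # first state; enhanced operational mode at the port 112-1

With the values from the headset example (required_role="DFP", max_speed="HS") the sketch returns "hub", matching the second state shown inFIG.5; the SuperSpeed hard drive example described next instead reaches the final branches.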
Using the second state in some cases, rather than simply connecting external devices at the port112-1using the first state regardless of the functional needs of those external devices, allows the controller204to reduce future disruptions to the connection at the port112-1. Turning toFIG.7, a further example performance of the method300will be discussed, in which an external hard drive700is engaged with the port112-1. Following detection of the hard drive700at block305, the controller204makes a negative determination at block310(because no other ports112are active, as shown inFIG.7), and an affirmative determination at block315. At block320, the controller204obtains connection parameters indicating that the hard drive700is a UFP device, such that the port112-1will operate as a DFP (host). Further, the connection parameters indicate that the maximum transfer speed supported by the hard drive700is SS. The determination at block325is therefore negative, because the hub212does not support the transfer speed specified in the connection parameters. The controller204therefore proceeds to block335, and determines whether to modify the connection parameters to parameters that are compatible with the hub212. For example, the data stored by the controller204representing the capabilities of the hub212indicate that the hub212does support connections with the port112-1as a DFP. Further, because the connection parameters indicate that the hard drive700supports the SS transfer speed, the hard drive also supports lower transfer speeds, including those supported by the hub212. Therefore, at block335, the controller204can be configured to select modified connection parameters including DFP and a transfer speed compatible with the hub212(e.g., HS or FS). The controller204can then proceed to block330as noted above. In some examples, prior to selecting modified connection parameters, at block335the controller204can generate a prompt via the display108requesting a command from an operator of the device100to modify the connection parameters or connect the hard drive700in the first state (i.e., in the enhanced operational mode). Turning toFIG.8, the display108is shown rendering a prompt, including a notification800(e.g., a string of text, although other notifications may also be employed in addition to or instead of the text shown inFIG.8). The notification800indicates that the external device (i.e., the hard drive700in the illustrated example) operates at a given transfer speed (SS, in this example), and that operating the device at the specified transfer speed may interrupt the connection if other external devices are connected later. The prompt also includes selectable options804and808, enabling the operator of the device100to connect the hard drive700at a reduced transfer speed (option804), or to continue with the maximum transfer speed supported by the hard drive700(option808). If the option804is selected, the determination at block335is affirmative, and the controller204proceeds to connect the hard drive700via the hub212, at a reduced transfer speed. That is, the controller204sets the port-sharing subsystem208to the second state, enabling a connection with the hard drive700in the restricted operational mode. If the option808is selected, the determination at block335is negative, and the controller204proceeds to connect the hard drive700by bypassing the hub212. 
That is, the controller204sets the port-sharing subsystem208to the first state, enabling a connection with the hard drive700in the enhanced operational mode. As will be apparent, the use of the first state may lead to interruptions in the connection with the hard drive700if another external device is connected to the port112-2or the port112-3. In other examples, the determination at block335may be made automatically by the controller204. For instance, if a host computing device, such as a personal computer, is connected to the port112-1, the determination at block335is negative, and the controller204may omit the generation of a prompt. The host computing device may specify in the connection parameters from block320that it operates as a DFP, meaning the port112-1must operate as a UFP. Because the hub212is incompatible with UFP operation, no modification is available. In the foregoing specification, specific embodiments have been described. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of present teachings. The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential features or elements of any or all the claims. The invention is defined solely by the appended claims including any amendments made during the pendency of this application and all equivalents of those claims as issued. Moreover in this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprises,” “comprising,” “has”, “having,” “includes”, “including,” “contains”, “containing” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises, has, includes, contains a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element proceeded by “comprises . . . a”, “has . . . a”, “includes . . . a”, “contains . . . a” does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, contains the element. The terms “a” and “an” are defined as one or more unless explicitly stated otherwise herein. The terms “substantially”, “essentially”, “approximately”, “about” or any other version thereof, are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the term is defined to be within 10%, in another embodiment within 5%, in another embodiment within 1% and in another embodiment within 0.5%. The term “coupled” as used herein is defined as connected, although not necessarily directly and not necessarily mechanically. A device or structure that is “configured” in a certain way is configured in at least that way, but may also be configured in ways that are not listed. 
Certain expressions may be employed herein to list combinations of elements. Examples of such expressions include: “at least one of A, B, and C”; “one or more of A, B, and C”; “at least one of A, B, or C”; “one or more of A, B, or C”. Unless expressly indicated otherwise, the above expressions encompass any combination of A and/or B and/or C. It will be appreciated that some embodiments may be comprised of one or more specialized processors (or “processing devices”) such as microprocessors, digital signal processors, customized processors and field programmable gate arrays (FPGAs) and unique stored program instructions (including both software and firmware) that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the method and/or apparatus described herein. Alternatively, some or all functions could be implemented by a state machine that has no stored program instructions, or in one or more application specific integrated circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic. Of course, a combination of the two approaches could be used. Moreover, an embodiment can be implemented as a computer-readable storage medium having computer readable code stored thereon for programming a computer (e.g., comprising a processor) to perform a method as described and claimed herein. Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory) and a Flash memory. Further, it is expected that one of ordinary skill, notwithstanding possibly significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, when guided by the concepts and principles disclosed herein will be readily capable of generating such software instructions and programs and ICs with minimal experimentation. The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.
11860810
DETAILED DESCRIPTION One solution for providing specialized computing resources within a set of reusable general computing resources is to provide a server computer comprising a configurable logic platform (such as by providing a server computer with an add-in card including a field-programmable gate array (FPGA)) as a choice among the general computing resources. Configurable logic is hardware that can be programmed or configured to perform a logic function that is specified by configuration data that is applied to or loaded on the configurable logic. For example, a user of the computing resources can provide a specification (such as source code written in a hardware description language) for configuring the configurable logic, the configurable logic can be configured according to the specification, and the configured logic can be used to perform a task for the user. However, allowing a user access to low-level hardware of the computing facility can potentially introduce security and privacy issues within the computing facility. As a specific example, a faulty or malicious design from one user could potentially cause a denial of service to other users if the configured logic caused one or more server computers within the computing facility to malfunction (e.g., crash, hang, or reboot) or be denied network services. As another specific example, a faulty or malicious design from one user could potentially corrupt or read data from another user if the configured logic is able to read and/or write memory of the other user's memory space. As described herein, a compute services facility can include a variety of computing resources, where one type of the computing resources can include a server computer comprising a configurable logic platform. The configurable logic platform can be programmed or configured by a user of the computer system so that hardware (e.g., the configurable logic) of the computing resource is customized by the user. For example, the user can program the configurable logic so that it functions as a hardware accelerator that is tightly coupled to the server computer. As a specific example, the hardware accelerator can be accessible via a local interconnect, such as Peripheral Component Interconnect Express (PCI-Express or PCIe), of the server computer. The user can execute an application on the server computer and tasks of the application can be performed by the hardware accelerator using PCIe transactions. By tightly coupling the hardware accelerator to the server computer, the latency between the accelerator and the server computer can be reduced which can potentially increase the processing speed of the application. The compute services provider can potentially increase the security and/or availability of the computing resources by wrapping or encapsulating the user's hardware accelerator (also referred to herein as application logic) within host logic of the configurable logic platform. Encapsulating the application logic can include limiting or restricting the application logic's access to configuration resources, physical interfaces, hard macros of the configurable logic platform, and various peripherals of the configurable logic platform. For example, the compute services provider can manage the programming of the configurable logic platform so that it includes both the host logic and the application logic. The host logic can provide a framework or sandbox for the application logic to work within. 
In particular, the host logic can communicate with the application logic and constrain the functionality of the application logic. For example, the host logic can perform bridging functions between the local interconnect (e.g., the PCIe interconnect) and the application logic so that the application logic cannot directly control the signaling on the local interconnect. The host logic can be responsible for forming packets or bus transactions on the local interconnect and ensuring that the protocol requirements are met. By controlling transactions on the local interconnect, the host logic can potentially prevent malformed transactions or transactions to out-of-bounds locations. As another example, the host logic can isolate a configuration access port so that the application logic cannot cause the configurable logic platform to be reprogrammed without using services provided by the compute services provider. FIG.1is a system diagram showing an example of a computing system100including a configurable logic platform110and a server computer120. For example, the server computer120can be used to execute an application program for an end-user. Specifically, the server computer120can include a central processing unit (CPU)122, memory124, and a peripheral interface126. The CPU122can be used to execute instructions stored in the memory124. For example, the memory124can be loaded with all or a portion of the application program and the CPU122can execute the instructions of the application program. The application program can communicate with a hardware accelerator of the configurable logic platform110by issuing transactions using the peripheral interface126. As used herein, a transaction is a communication between components. As specific examples, a transaction can be a read request, a write, a read response, a message, an interrupt, or other various exchanges of information between components. The transaction can occur on a bus shared by multiple components. Specifically, values of signal lines of the bus can be modulated to transfer information on the bus using a communications protocol of the bus. The transaction can occur over one or more phases, such as an address phase and one or more data phases. Additionally or alternatively, the transaction can occur using one or more serial lines of a point-to-point interconnect that connects two components. Specifically, the transaction can be sent in a packet that is transmitted over the point-to-point interconnect. The peripheral interface126can include a bridge for communicating between the CPU122using a local or front-side interconnect and components using a peripheral or expansion interconnect. Specifically, the peripheral interface126can be connected to a physical interconnect that is used to connect the server computer120to the configurable logic platform110and/or to other components. For example, the physical interconnect can be an expansion bus for connecting multiple components together using a shared parallel bus or serial point-to-point links. As a specific example, the physical interconnect can be PCI express, PCI, or another physical interconnect that tightly couples the server computer120to the configurable logic platform110. Thus, the server computer120and the configurable logic platform110can communicate using PCI bus transactions or PCIe packets, for example. The configurable logic platform110can include host logic and a reconfigurable logic region140. 
The host logic can include a host interface112, a management function114, and data path function116. The reconfigurable logic region140can include hardware that is configurable to implement the hardware accelerator or application logic. In other words, the reconfigurable logic region140can include logic that is programmable to perform a given function. For example, the reconfigurable logic region140can include programmable logic blocks comprising combinational logic and/or look-up tables (LUTs) and sequential logic elements (such as flip-flops and/or latches), programmable routing and clocking resources, programmable distributed and block random access memories (RAMs), digital signal processing (DSP) bitslices, and programmable input/output pins. The host logic can be used to encapsulate the reconfigurable logic region140. For example, the reconfigurable logic region140can interface with various components of the configurable hardware platform using predefined interfaces so that the reconfigurable logic region140is restricted in the functionality that it can perform. As one example, the reconfigurable logic region can interface with static host logic that is loaded prior to the reconfigurable logic region140being configured. For example, the static host logic can include logic that isolates different components of the configurable logic platform110from the reconfigurable logic region140. As one example, hard macros of the configurable logic platform110(such as a configuration access port or circuits for signaling on the physical interconnect) can be masked off so that the reconfigurable logic region140cannot directly access the hard macros. The host logic can include the host interface112for communicating with the server computer120. Specifically, the host interface112can be used to connect to the physical interconnect and to communicate with the server computer120using a communication protocol of the physical interconnect. As one example, the server computer120can communicate with the configurable logic platform110using a transaction including an address associated with the configurable logic platform110. Similarly, the configurable logic platform110can communicate with the server computer120using a transaction including an address associated with the server computer120. The addresses associated with the various devices connected to the physical interconnect can be predefined by a system architect and programmed into software residing on the devices. Additionally or alternatively, the communication protocol can include an enumeration sequence where the devices connected to the physical interconnect are queried and where addresses are assigned to each of devices as part of the enumeration sequence. As one example, the peripheral interface126can issue queries to each of the devices connected to the physical interconnect. The host interface112can respond to the queries by providing information about the configurable logic platform110, such as how many functions are present on the configurable logic platform110, and a size of an address range associated with each of the functions of the configurable logic platform110. Based on this information, addresses of the computing system100can be allocated such that each function of each device connected to the physical interconnect is assigned a non-overlapping range of addresses. After enumeration, the host interface112can route transactions to functions of the configurable logic platform110based on an address of the transaction. 
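To make the enumeration and routing behavior described above concrete, the following Python sketch queries a list of device functions for the sizes of their address ranges, assigns each function a non-overlapping range, and then routes a transaction by its address. The class and function names (DeviceFunction, assign_address_ranges, route) and the starting address are illustrative assumptions made for this example; they are not part of any defined interface of the host interface112.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DeviceFunction:
    """One function reported by a device during the enumeration sequence."""
    device: str
    name: str
    size: int                    # size of the address range the function needs
    base: Optional[int] = None   # assigned during enumeration

def assign_address_ranges(functions: List[DeviceFunction], start: int = 0x1000_0000) -> None:
    """Assign each function a non-overlapping address range."""
    cursor = start
    for fn in functions:
        fn.base = cursor
        cursor += fn.size

def route(functions: List[DeviceFunction], address: int) -> DeviceFunction:
    """Route a transaction to the function whose range contains the address."""
    for fn in functions:
        if fn.base is not None and fn.base <= address < fn.base + fn.size:
            return fn
    raise ValueError(f"address {address:#x} is not mapped to any function")

# Example: a configurable logic platform reporting two functions.
platform_functions = [
    DeviceFunction(device="configurable_logic_platform", name="management", size=0x1000),
    DeviceFunction(device="configurable_logic_platform", name="data_path", size=0x10000),
]
assign_address_ranges(platform_functions)
target = route(platform_functions, platform_functions[1].base + 0x20)
print(target.name)  # -> data_path
```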
The host logic can include the management function114that can be used for managing and configuring the configurable logic platform110. Commands and data can be sent from the server computer120to the management function114using transactions that target the address range of the management function114. For example, the server computer120can generate transactions to transfer data (e.g., configuration data) and/or write control registers of the configurable logic platform110that are mapped to one or more addresses within the address range of the management function114. Writing the control registers can cause the configurable logic platform110to perform operations, such as configuring and managing the configurable logic platform110. As a specific example, configuration data corresponding to application logic to be implemented in the reconfigurable logic region140can be transmitted from the server computer120to the configurable logic platform110in one or more transactions over the physical interconnect. A transaction150to configure the reconfigurable logic region140with the configuration data can be transmitted from the server computer120to the configurable logic platform110. Specifically, the transaction150can write a value to a control register mapped to the management function114address space that will begin configuring the reconfigurable logic region140. In one embodiment, the configuration data can be transferred from the server computer120to the configurable logic platform110before the configuration of the reconfigurable logic region140begins. For example, the management function114can cause the configuration data to be stored in an on-chip or off-chip memory accessible by the configurable logic platform110, and the configuration data can be read from the memory when the reconfigurable logic region140is being configured. In another embodiment, the configuration data can be transferred from the server computer120to the configurable logic platform110after the configuration of the reconfigurable logic region140begins. For example, a control register can be written to begin configuration of the reconfigurable logic region140and the configuration data can be streamed into or loaded onto the reconfigurable logic region140as transactions including the configuration data are processed by the management function114. The host logic can include a data path function116that can be used to exchange information (e.g., application input/output160) between the server computer120and the configurable logic platform110. Specifically, commands and data can be sent from the server computer120to the data path function116using transactions that target the address range of the data path function116. Similarly, the configurable logic platform110can communicate with the server computer120using a transaction including an address associated with the server computer120. The data path function116can act as a translation layer between the host interface112and the reconfigurable logic region140. Specifically, the data path function116can include an interface for receiving information from the reconfigurable logic region140and the data path function116can format the information for transmission from the host interface112. Formatting the information can include generating control information for one or more transactions and partitioning data into blocks that are sized to meet protocol specifications. Thus, the data path function116can be interposed between the reconfigurable logic region140and the physical interconnect. 
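The translation-layer role of the data path function116can be illustrated with a minimal sketch: raw data handed over by the reconfigurable logic is partitioned into protocol-sized blocks and wrapped in transactions with generated control information, so the application logic never forms bus transactions itself. The 256-byte payload limit, the Transaction record, and the function name are assumptions chosen only for this example.

```python
from dataclasses import dataclass
from typing import List

MAX_PAYLOAD = 256  # hypothetical protocol limit on payload bytes per transaction

@dataclass
class Transaction:
    """A formatted bus transaction: control information plus one payload block."""
    address: int
    length: int
    payload: bytes

def format_for_interconnect(data: bytes, base_address: int) -> List[Transaction]:
    """Partition raw data from the reconfigurable logic into protocol-sized transactions."""
    transactions = []
    for offset in range(0, len(data), MAX_PAYLOAD):
        block = data[offset:offset + MAX_PAYLOAD]
        transactions.append(Transaction(address=base_address + offset,
                                        length=len(block),
                                        payload=block))
    return transactions

# The application logic only hands over raw data; the host logic does the formatting.
raw_results = bytes(1000)                            # stand-in for accelerator output
packets = format_for_interconnect(raw_results, base_address=0x2000_0000)
print(len(packets), [t.length for t in packets])     # -> 4 [256, 256, 256, 232]
```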
In this manner, the reconfigurable logic region140can potentially be blocked from formatting transactions and directly controlling the signals used to drive the physical interconnect so that the reconfigurable logic region140cannot be used to inadvertently or maliciously violate protocols of the physical interconnect. FIG.2is a system diagram showing an example of a system200including a configurable hardware platform210and a server computer220. The server computer220and the configurable hardware platform210can be connected via a physical interconnect230. For example, the physical interconnect230can be PCI express, PCI, or any other interconnect that tightly couples the server computer220to the configurable hardware platform210. The server computer220can include a CPU222, memory224, and an interconnect interface226. For example, the interconnect interface226can provide bridging capability so that the server computer220can access devices that are external to the server computer220. For example, the interconnect interface226can include a host function, such as root complex functionality as used in PCI express. The configurable hardware platform210can include reconfigurable logic blocks and other hardware. The reconfigurable logic blocks can be configured or programmed to perform various functions of the configurable hardware platform210. The reconfigurable logic blocks can be programmed multiple times with different configurations so that the blocks can perform different functions over the lifetime of the device. The functions of the configurable hardware platform210can be categorized based upon the purpose or capabilities of each function, or based upon when the function is loaded into the configurable hardware platform210. For example, the configurable hardware platform210can include static logic, reconfigurable logic, and hard macros. The functionality for the static logic, reconfigurable logic, and hard macros can be configured at different times. Thus, the functionality of the configurable hardware platform210can be loaded incrementally. A hard macro can perform a predefined function and can be available when the configurable hardware platform210is powered on. For example, a hard macro can include hardwired circuits that perform a specific function. As specific examples, the hard macros can include a configuration access port (CAP)211for configuring the configurable hardware platform210, a serializer-deserializer transceiver (SERDES)212for communicating serial data, a memory or dynamic random access memory (DRAM) controller213for signaling and controlling off-chip memory (such as a double data rate (DDR) DRAM281), and a storage controller214for signaling and controlling a storage device282. The static logic can be loaded at boot time onto reconfigurable logic blocks. For example, configuration data specifying the functionality of the static logic can be loaded from an on-chip or off-chip flash memory device during a boot-up sequence. The boot-up sequence can include detecting a power event (such as by detecting that a supply voltage has transitioned from below a threshold value to above the threshold value) and deasserting a reset signal in response to the power event. An initialization sequence can be triggered in response to the power event or the reset being deasserted. 
The initialization sequence can include reading configuration data stored on the flash device and loading the configuration data onto the configurable hardware platform210using the configuration access port211so that at least a portion of the reconfigurable logic blocks are programmed with the functionality of the static logic. After the static logic is loaded, the configurable hardware platform210can transition from a loading state to an operational state that includes the functionality of the static logic. The reconfigurable logic can be loaded onto reconfigurable logic blocks while the configurable hardware platform210is operational (e.g., after the static logic has been loaded). The configuration data corresponding to the reconfigurable logic can be stored in an on-chip or off-chip memory and/or the configuration data can be received or streamed from an interface (e.g., the interconnect interface256) of the configurable hardware platform210. The reconfigurable logic can be divided into non-overlapping regions, which can interface with static logic. For example, the reconfigurable regions can be arranged in an array or other regular or semi-regular structure. For example, the array structure may include holes or blockages where hard macros are placed within the array structure. The different reconfigurable regions can communicate with each other, the static logic, and the hard macros by using signal lines that can be specified as static logic. The different reconfigurable regions can be configured at different points in time so that a first reconfigurable region can be configured at a first point in time and a second reconfigurable region can be configured at a second point in time. The functions of the configurable hardware platform210can be divided or categorized based upon the purpose or capabilities of the functions. For example, the functions can be categorized as control plane functions, data plane functions, and shared functions. A control plane can be used for management and configuration of the configurable hardware platform210. The data plane can be used to manage data transfer between accelerator logic loaded onto the configurable hardware platform210and the server computer220. Shared functions can be used by both the control plane and the data plane. The control plane functionality can be loaded onto the configurable hardware platform210prior to loading the data plane functionality. The data plane can include encapsulated reconfigurable logic configured with application logic240. The control plane can include host logic of the configurable hardware platform210. Generally, the data plane and the control plane can be accessed using different functions of the configurable hardware platform210, where the different functions are assigned to different address ranges. Specifically, the control plane functions can be accessed using a management function252and the data plane functions can be accessed using a data path function or an application function254. An address mapping layer250can differentiate transactions bound for the control plane or the data plane. In particular, transactions from the server computer220bound for the configurable hardware platform210can be identified using an address within the transaction. Specifically, if the address of the transaction falls within the range of addresses assigned to the configurable hardware platform210, the transaction is destined for the configurable hardware platform210. 
The transaction can be sent over the physical interconnect230and received at the interconnect interface256. The interconnect interface256can be an endpoint of the physical interconnect230. It should be understood that the physical interconnect230can include additional devices (e.g., switches and bridges) arranged in a fabric for connecting devices or components to the server computer220. The address mapping layer250can analyze the address of the transaction and determine where to route the transaction within the configurable hardware platform210based on the address. For example, the management function252can be assigned a first range of addresses and different functions of the management plane can be accessed by using different addresses within that range. Transactions with addresses falling within the range assigned to the management function252can be routed through the host logic private fabric260to the different blocks of the control plane. For example, transactions can be addressed to a management and configuration block262. Similarly, the application function254can be assigned a second range of addresses and different functions of the data plane can be accessed by using different addresses within that range. The management and configuration block262can include functions related to managing and configuring the configurable hardware platform210. For example, the management and configuration block262can provide access to the configuration access port211so that the reconfigurable logic blocks can be configured. For example, the server computer220can send a transaction to the management and configuration block262to initiate loading of the application logic within the encapsulated reconfigurable logic240. The configuration data corresponding to the application logic can be sent from the server computer220to the management function252. The management function252can route the configuration data corresponding to the application logic through the host logic fabric260to the configuration access port211so that the application logic can be loaded. As another example, the management and configuration block262can store metadata about the configurable hardware platform210. For example, versions of the different logic blocks, update histories, and other information can be stored in memory of the management and configuration block262. The server computer220can read the memory to retrieve some or all of the metadata. Specifically, the server computer220can send a read request targeting the memory of the management and configuration block262and the management and configuration block262can generate read response data to return to the server computer220. The management function252can also be used to access private peripherals of the configurable hardware platform210. The private peripherals are components that are only accessible from the control plane. For example, the private peripherals can include a JTAG (e.g., IEEE 1149.1) controller270, light emitting diodes (LEDs)271, a microcontroller272, a universal asynchronous receiver/transmitter (UART)273, a memory274(e.g., a serial peripheral interface (SPI) flash memory), and any other components that are accessible from the control plane and not the data plane. The management function252can access the private peripherals by routing commands through the host logic private fabric260and the private peripheral interface(s)275. The private peripheral interface(s)275can directly communicate with the private peripherals. 
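As a rough software model of the loading flow just described, the sketch below initiates loading by writing a control register of a toy management function and then streams configuration data, chunk by chunk, toward a stand-in for the configuration access port. The register offset, class names, and method names are invented for the example and do not correspond to actual registers of the platform.

```python
class ConfigurationPort:
    """Stand-in for a configuration access port that accepts configuration data."""
    def __init__(self):
        self.loaded = bytearray()

    def load(self, chunk: bytes) -> None:
        self.loaded.extend(chunk)

class ManagementFunction:
    """Toy management function: a control register starts loading, then data is streamed in."""
    REG_INITIATE_LOAD = 0x0   # hypothetical control-register offset

    def __init__(self, cap: ConfigurationPort):
        self.cap = cap
        self.loading = False

    def write(self, offset: int, value: int) -> None:
        if offset == self.REG_INITIATE_LOAD and value == 1:
            self.loading = True

    def stream_configuration(self, chunk: bytes) -> None:
        if not self.loading:
            raise RuntimeError("loading was not initiated via the control register")
        self.cap.load(chunk)  # route the configuration data to the configuration access port

cap = ConfigurationPort()
mgmt = ManagementFunction(cap)
mgmt.write(ManagementFunction.REG_INITIATE_LOAD, 1)   # transaction that initiates loading
configuration_data = bytes(64)                         # stand-in for an application logic bitstream
for i in range(0, len(configuration_data), 16):        # streamed in as separate transactions
    mgmt.stream_configuration(configuration_data[i:i + 16])
print(len(cap.loaded), "bytes loaded through the configuration access port")
```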
Public peripherals are shared functions that are accessible from either the control plane or the data plane. For example, the public peripherals can be accessed from the control plane by addressing transactions within the address range assigned to the management function252. The public peripherals can be accessed from the data plane by addressing transactions within the address range assigned to the application function254. Thus, the public peripherals are components that can have multiple address mappings and can be used by both the control plane and the data plane. Examples of the public peripherals are other configurable hardware platform(s) (CHP(s))280, DRAM281(e.g., DDR DRAM), storage devices282(e.g., hard disk drives and solid-state drives), and other various components that can be used to generate, store, or process information. The public peripherals can be accessed via the public peripheral interfaces285. Thus, the public peripheral interfaces285can be an intermediary layer interposed between the public peripherals and the other functions of the configurable hardware platform210. Specifically, the public peripheral interfaces285can translate requests from the control plane or the data plane and format communications to the public peripherals into a native protocol of the public peripherals. Mailboxes290and watchdog timers292are shared functions that are accessible from either the control plane or the data plane. Specifically, the mailboxes290can be used to pass messages and other information between the control plane and the data plane. For example, the mailboxes290can include buffers, control registers (such as semaphores), and status registers. By using the mailboxes290as an intermediary between the control plane and the data plane, isolation between the data plane and the control plane can potentially be increased which can increase the security of the configurable hardware platform210. The watchdog timers292can be used to detect and recover from hardware and/or software malfunctions. For example, a watchdog timer292can monitor an amount of time taken to perform a particular task, and if the amount of time exceeds a threshold, the watchdog timer292can initiate an event, such as writing a value to a control register or causing an interrupt or reset to be asserted. As one example, the watchdog timer292can be initialized with a first value when beginning a first task. The watchdog timer292can automatically count down after it is initialized and if the watchdog timer292reaches a zero value, an event can be generated. Alternatively, if the first task finishes before the watchdog timer292reaches a zero value, the watchdog timer292can be reinitialized with a second value when beginning a second task. The first and second values can be selected based on a complexity or an amount of work to complete in the first and second tasks, respectively. The application function254can be used to access the data plane functions, such as the application logic240. For example, a transaction directed to the application logic240can cause data to be loaded, processed, and/or returned to the server computer220. Specifically, the data plane functions can be accessed using transactions having an address within the range assigned to the application function254. For example, a transaction can be sent from the server computer220to the application logic240via the application function254. 
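The countdown behavior of the watchdog timers292described above can be modeled in a few lines. This is a simplified software sketch under stated assumptions: a hardware watchdog would decrement on a clock rather than on explicit tick() calls, and the class and method names are invented for the example.

```python
class WatchdogTimer:
    """Software model of the countdown watchdog described above."""

    def __init__(self):
        self.count = 0
        self.expired = False

    def start(self, initial_value: int) -> None:
        """(Re)initialize before starting a task; larger values suit more complex tasks."""
        self.count = initial_value
        self.expired = False

    def tick(self) -> None:
        """One countdown step; flags an 'event' when the counter reaches zero."""
        if self.expired or self.count == 0:
            return
        self.count -= 1
        if self.count == 0:
            self.expired = True  # in hardware this might assert an interrupt or reset

wd = WatchdogTimer()
wd.start(5)           # first task expected to finish within 5 ticks
for _ in range(3):
    wd.tick()         # task finished early...
wd.start(10)          # ...so reinitialize with a second value for the next task
print(wd.expired)     # -> False
```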
Specifically, transactions addressed to the application function254can be routed through the peripheral fabric264to the application logic240. Responses from the application logic240can be routed through the peripheral fabric264to the application function254, and then back to the server computer220. Additionally, the data and transactions generated by the application logic240can be monitored using a usage and transaction monitoring layer266. The monitoring layer266can potentially identify transactions or data that violate predefined rules and can generate an alert to be sent over the control plane. Additionally or alternatively, the monitoring layer266can terminate any transactions generated by the application logic240that violate any criteria of the monitoring layer266. Additionally, the monitoring layer266can analyze information moving to or from the application logic240so that statistics about the information can be collected and accessed from the control plane. Data can also be transferred between the server computer220and the application logic by programming a direct memory access (DMA) engine242. The DMA engine242can include control and status registers for programming or specifying DMA transfers from a source location to a destination location. As one example, the DMA engine242can be programmed to pull information stored within the memory224of server computer220into the application logic240or into the public peripherals of the configurable hardware platform210. As another example, the DMA engine242can be programmed to push data that has been generated by the application logic240to the memory224of the server computer220. The data generated by the application logic240can be streamed from the application logic240or can be written to the public peripherals, such as the memory281or storage282. The application logic240can communicate with other configurable hardware platforms280. For example, the other configurable hardware platforms280can be connected by one or more serial lines that are in communication with the SERDES212. The application logic240can generate transactions to the different configurable hardware platforms280, and the transactions can be routed through the CHP fabric244to the corresponding serial lines (via the SERDES212) of the configurable hardware platforms280. Similarly, the application logic240can receive information from other configurable hardware platforms280using the reverse path. In sum, the functions of the configurable hardware platform210can be categorized as control plane functions and application functions. The control plane functions can be used to monitor and restrict the capabilities of the data plane. The data plane functions can be used to accelerate a user's application that is running on the server computer220. By separating the functions of the control and data planes, the security and availability of the server computer220and other computing infrastructure can potentially be increased. For example, the application logic240cannot directly signal onto the physical interconnect230because the intermediary layers of the control plane control the formatting and signaling of transactions of the physical interconnect230. As another example, the application logic240can be prevented from using the private peripherals which could be used to reconfigure the configurable hardware platform210and/or to access management information that may be privileged. 
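The register-level pattern for the DMA engine242described above (program a source, a destination, and a length, then start the transfer and check a status flag) might be sketched as follows. The register names, the in-memory bytearray standing in for addressable memory, and the method names are assumptions made for illustration only, not the actual programming model of the engine.

```python
class DmaEngine:
    """Toy DMA engine: program control registers, then trigger a transfer."""

    def __init__(self, memory: bytearray):
        self.memory = memory   # stand-in for addressable memory
        self.src = 0
        self.dst = 0
        self.length = 0
        self.done = False      # status register

    def program(self, src: int, dst: int, length: int) -> None:
        self.src, self.dst, self.length, self.done = src, dst, length, False

    def start(self) -> None:
        """Pull data from the source location and push it to the destination."""
        data = self.memory[self.src:self.src + self.length]
        self.memory[self.dst:self.dst + self.length] = data
        self.done = True

memory = bytearray(64)
memory[0:4] = b"\xde\xad\xbe\xef"      # data produced by the application logic
dma = DmaEngine(memory)
dma.program(src=0, dst=32, length=4)   # e.g. push results toward host memory
dma.start()
print(memory[32:36], dma.done)         # -> bytearray(b'\xde\xad\xbe\xef') True
```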
As another example, the application logic240can only access hard macros of the configurable hardware platform210through intermediary layers so that any interaction between the application logic240and the hard macros is controlled using the intermediary layers. FIG.3is a system diagram showing an example of a system300including a logic repository service310for managing configuration data that can be used to configure configurable resources within a fleet of compute resources320. A compute services provider can maintain the fleet of computing resources320for users of the services to deploy when a computing task is to be performed. The computing resources320can include server computers340having configurable logic resources342that can be programmed as hardware accelerators. The compute services provider can manage the computing resources320using software services to manage the configuration and operation of the configurable hardware342. As one example, the compute service provider can execute a logic repository service310for ingesting application logic332specified by a user, generating configuration data336for configuring the configurable logic platform based on the logic design of the user, and downloading the validated configuration data362in response to a request360to configure an instance of the configurable logic platform. The download request360can be from the user that developed the application logic332or from a user that has acquired a license to use the application logic332. Thus, the application logic332can be created by the compute services provider, a user, or a third-party separate from the user or the compute services provider. For example, a marketplace of accelerator intellectual property (IP) can be provided to the users of the compute services provider, and the users can potentially increase the speed of their applications by selecting an accelerator from the marketplace. The logic repository service310can be a network-accessible service, such as a web service. Web services are commonly used in cloud computing. A web service is a software function provided at a network address over the web or the cloud. Clients initiate web service requests to servers and servers process the requests and return appropriate responses. The client web service requests are typically initiated using, for example, an API request. For purposes of simplicity, web service requests will be generally described below as API requests, but it is understood that other web service requests can be made. An API request is a programmatic interface to a defined request-response message system, typically expressed in JSON or XML, which is exposed via the web—most commonly by means of an HTTP-based web server. Thus, in certain implementations, an API can be defined as a set of Hypertext Transfer Protocol (HTTP) request interfaces, along with a definition of the structure of the messages used to invoke the API and the response messages, which can be in an Extensible Markup Language (XML) or JavaScript Object Notation (JSON) format. The API can specify a set of functions or routines that perform an action, which includes accomplishing a specific task or allowing interaction with a software component. When a web service receives the API request from a client device, the web service can generate a response to the request and send the response to the endpoint identified in the request. 
Additionally or alternatively, the web service can perform actions in response to the API request without generating a response to the endpoint identified in the request. The logic repository service310can receive an API request330to generate configuration data for a configurable hardware platform, such as the configurable hardware342of the server computer340. For example, the API request330can be originated by a developer or partner user of the compute services provider. The request330can include fields for specifying data and/or metadata about the logic design, the configurable hardware platform, user information, access privileges, production status, and various additional fields for describing information about the inputs, outputs, and users of the logic repository service310. As specific examples, the request can include a description of the design, a production status (such as trial or production), an encrypted status of the input or output of the service, a reference to a location for storing an input file (such as the hardware design source code), a type of the input file, an instance type of the configurable hardware, and a reference to a location for storing an output file or report. In particular, the request can include a reference to a hardware design specifying application logic332for implementation on the configurable hardware platform. Specifically, a specification of the application logic332and/or of the host logic334can be a collection of files, such as source code written in a hardware description language (HDL), a netlist generated by a logic synthesis tool, and/or placed and routed logic gates generated by a place and route tool. The compute resources320can include many different types of hardware and software categorized by instance type. In particular, an instance type specifies at least a portion of the hardware and software of a resource. For example, hardware resources can include servers with central processing units (CPUs) of varying performance levels (e.g., different clock speeds, architectures, cache sizes, and so forth), servers with and without co-processors (such as graphics processing units (GPUs) and configurable logic), servers with varying capacity and performance of memory and/or local storage, and servers with different networking performance levels. Example software resources can include different operating systems, application programs, and drivers. One example instance type can comprise the server computer340including a central processing unit (CPU)344in communication with the configurable hardware342. The configurable hardware342can include programmable logic such as an FPGA, a programmable logic array (PLA), a programmable array logic (PAL), a generic array logic (GAL), or a complex programmable logic device (CPLD), for example. As specific examples, an “F1.small” instance type can include a first type of server computer with one capacity unit of FPGA resources, an “F1.medium” instance type can include the first type of server computer with two capacity units of FPGA resources, an “F1.large” instance type can include the first type of server computer with eight capacity units of FPGA resources, and an “F2.large” instance type can include a second type of server computer with eight capacity units of FPGA resources. The logic repository service310can generate configuration data336in response to receiving the API request330. The generated configuration data336can be based on the application logic332and the host logic334. 
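A generate-configuration-data request of the kind described above might carry fields along the following lines. The key names, values, and storage URIs are invented for illustration; the description does not prescribe a concrete wire format beyond noting that JSON or XML is typical for such APIs.

```python
import json

# Hypothetical payload for a "generate configuration data" API request.
generate_request = {
    "description": "FIR filter accelerator",
    "production_status": "trial",            # e.g. trial or production
    "encrypted": True,
    "input_location": "s3://example-bucket/designs/fir_filter.tar",  # placeholder URI
    "input_type": "hdl_source",              # could also be a netlist, etc.
    "instance_type": "F1.small",
    "output_location": "s3://example-bucket/reports/fir_filter/",    # placeholder URI
}

print(json.dumps(generate_request, indent=2))
```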
Specifically, the generated configuration data336can include information that can be used to program or configure the configurable hardware342so that it performs the functions specified by the application logic332and the host logic334. As one example, the compute services provider can generate the host logic334including logic for interfacing between the CPU344and the configurable hardware342. Specifically, the host logic334can include logic for masking or shielding the application logic332from communicating directly with the CPU344so that all CPU-application logic transactions pass through the host logic334. In this manner, the host logic334can potentially reduce security and availability risks that could be introduced by the application logic332. Generating the configuration data336can include performing checks and/or tests on the application logic332, integrating the application logic332into a host logic334wrapper, synthesizing the application logic332, and/or placing and routing the application logic332. Checking the application logic332can include verifying the application logic332complies with one or more criteria of the compute services provider. For example, the application logic332can be analyzed to determine whether interface signals and/or logic functions are present for interfacing to the host logic334. In particular, the analysis can include analyzing source code and/or running the application logic332against a suite of verification tests. The verification tests can be used to confirm that the application logic is compatible with the host logic. As another example, the application logic332can be analyzed to determine whether the application logic332fits within a designated region of the specified instance type. As another example, the application logic332can be analyzed to determine whether the application logic332includes any prohibited logic functions, such as ring oscillators or other potentially harmful circuits. As another example, the application logic332can be analyzed to determine whether the application logic332has any naming conflicts with the host logic334or any extraneous outputs that do not interface with the host logic334. As another example, the application logic332can be analyzed to determine whether the application logic332attempts to interface to restricted inputs, outputs, or hard macros of the configurable hardware342. If the application logic332passes the checks of the logic repository service310, then the configuration data336can be generated. If any of the checks or tests fail, the generation of the configuration data336can be aborted. Generating the configuration data336can include compiling and/or translating source code of the application logic332and the host logic334into data that can be used to program or configure the configurable hardware342. For example, the logic repository service310can integrate the application logic332into a host logic334wrapper. Specifically, the application logic332can be instantiated in a system design that includes the application logic332and the host logic334. The integrated system design can be synthesized, using a logic synthesis program, to create a netlist for the system design. The netlist can be placed and routed, using a place and route program, for the instance type specified for the system design. The placed and routed design can be converted to configuration data336which can be used to program the configurable hardware342. 
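The pre-synthesis checks described above act as a gate: if any check fails, generation of the configuration data336is aborted. The Python sketch below is schematic only, with entirely illustrative check names and design fields rather than the checks a real logic repository service would run.

```python
def check_application_logic(design: dict) -> list[str]:
    """Return a list of problems; an empty list means the design may proceed."""
    problems = []
    if not design.get("has_host_logic_interfaces"):
        problems.append("missing interface signals required by the host logic")
    if design.get("logic_cells", 0) > design.get("region_capacity", 0):
        problems.append("design does not fit the designated region for the instance type")
    if "ring_oscillator" in design.get("primitives", []):
        problems.append("prohibited logic function detected")
    if set(design.get("names", [])) & set(design.get("host_logic_names", [])):
        problems.append("naming conflict with the host logic")
    return problems

candidate = {
    "has_host_logic_interfaces": True,
    "logic_cells": 12_000,
    "region_capacity": 50_000,
    "primitives": ["lut", "flip_flop"],
    "names": ["fir_core"],
    "host_logic_names": ["host_bridge"],
}
issues = check_application_logic(candidate)
print("generate configuration data" if not issues else f"abort: {issues}")
```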
For example, the configuration data336can be directly output from the place and route program. As one example, the generated configuration data336can include a complete or partial bitstream for configuring all or a portion of the configurable logic of an FPGA. An FPGA can include configurable logic and non-configurable logic. The configurable logic can include programmable logic blocks comprising combinational logic and/or look-up tables (LUTs) and sequential logic elements (such as flip-flops and/or latches), programmable routing and clocking resources, programmable distributed and block random access memories (RAMs), digital signal processing (DSP) bitslices, and programmable input/output pins. The bitstream can be loaded into on-chip memories of the configurable logic using configuration logic (e.g., a configuration access port). The values loaded within the on-chip memories can be used to control the configurable logic so that the configurable logic performs the logic functions that are specified by the bitstream. Additionally, the configurable logic can be divided into different regions which can be configured independently of one another. As one example, a full bitstream can be used to configure the configurable logic across all of the regions and a partial bitstream can be used to configure only a portion of the configurable logic regions. The non-configurable logic can include hard macros that perform a specific function within the FPGA, such as input/output blocks (e.g., serializer and deserializer (SERDES) blocks and gigabit transceivers), analog-to-digital converters, memory control blocks, test access ports, and configuration logic for loading the configuration data onto the configurable logic. The logic repository service310can store the generated configuration data336in a logic repository database350. The logic repository database350can be stored on removable or non-removable media, including magnetic disks, direct-attached storage, network-attached storage (NAS), storage area networks (SAN), redundant arrays of independent disks (RAID), magnetic tapes or cassettes, CD-ROMs, DVDs, or any other medium which can be used to store information in a non-transitory way and which can be accessed by the logic repository service310. Additionally, the logic repository service310can be used to store input files (such as the specifications for the application logic332and the host logic334) and metadata about the logic designs and/or the users of the logic repository service310. The generated configuration data336can be indexed by one or more properties such as a user identifier, an instance type or types, a marketplace identifier, a machine image identifier, and a configurable hardware identifier, for example. The logic repository service310can receive an API request360to download configuration data. For example, the request360can be generated when a user of the compute resources320launches or deploys a new instance (e.g., an F1 instance) within the compute resources320. As another example, the request360can be generated in response to a request from an application executing on an operating instance. 
The request360can include a reference to the source and/or destination instance, a reference to the configuration data to download (e.g., an instance type, a marketplace identifier, a machine image identifier, or a configurable hardware identifier), a user identifier, an authorization token, and/or other information for identifying the configuration data to download and/or authorizing access to the configuration data. If the user requesting the configuration data is authorized to access the configuration data, the configuration data can be retrieved from the logic repository database350, and validated configuration data362(e.g. a full or partial bitstream) can be downloaded to the requesting instance (e.g., server computer340). The validated configuration data362can be used to configure the configurable logic of the destination instance. The logic repository service310can verify that the validated configuration data362can be downloaded to the requesting instance. Validation can occur at multiple different points by the logic repository service310. For example, validation can include verifying that the application logic332is compatible with the host logic334. In particular, a regression suite of tests can be executed on a simulator to verify that the host logic334performs as expected after the application logic332is added to the design. Additionally or alternatively, it can be verified that the application logic332is specified to reside only in reconfigurable regions that are separate from reconfigurable regions of the host logic334. As another example, validation can include verifying that the validated configuration data362is compatible with the instance type to download to. As another example, validation can include verifying that the requestor is authorized to access the validated configuration data362. If any of the validation checks fail, the logic repository service310can deny the request to download the validated configuration data362. Thus, the logic repository service310can potentially safeguard the security and the availability of the computing resources320while enabling a user to customize hardware of the computing resources320. FIG.4is a computing system diagram of a network-based compute service provider400that illustrates one environment in which embodiments described herein can be used. By way of background, the compute service provider400(i.e., the cloud provider) is capable of delivery of computing and storage capacity as a service to a community of end recipients. In an example embodiment, the compute service provider can be established for an organization by or on behalf of the organization. That is, the compute service provider400may offer a “private cloud environment.” In another embodiment, the compute service provider400supports a multi-tenant environment, wherein a plurality of customers operate independently (i.e., a public cloud environment). Generally speaking, the compute service provider400can provide the following models: Infrastructure as a Service (“IaaS”), Platform as a Service (“PaaS”), and/or Software as a Service (“SaaS”). Other models can be provided. For the IaaS model, the compute service provider400can offer computers as physical or virtual machines and other resources. The virtual machines can be run as guests by a hypervisor, as described further below. The PaaS model delivers a computing platform that can include an operating system, programming language execution environment, database, and web server. 
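The validation of a download request described above reduces to a short chain of checks, any one of which can cause the request to be denied. The sketch below is illustrative only; the record and request field names are assumptions, not a schema defined by this description.

```python
def validate_download(request: dict, record: dict) -> bool:
    """Return True only if every validation check passes."""
    if request["user_id"] not in record["authorized_users"]:
        return False   # requester not authorized to access the configuration data
    if request["instance_type"] not in record["compatible_instance_types"]:
        return False   # configuration data built for a different instance type
    if not record["host_logic_regression_passed"]:
        return False   # application/host logic compatibility not verified
    return True

record = {
    "authorized_users": {"user-123"},
    "compatible_instance_types": {"F1.small", "F1.medium"},
    "host_logic_regression_passed": True,
}
request = {"user_id": "user-123", "instance_type": "F1.small"}
print("download" if validate_download(request, record) else "deny")
```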
Application developers can develop and run their software solutions on the compute service provider platform without the cost of buying and managing the underlying hardware and software. Additionally, application developers can develop and run their hardware solutions on configurable hardware of the compute service provider platform. The SaaS model allows installation and operation of application software in the compute service provider. In some embodiments, end users access the compute service provider400using networked client devices, such as desktop computers, laptops, tablets, smartphones, etc. running web browsers or other lightweight client applications. Those skilled in the art will recognize that the compute service provider400can be described as a “cloud” environment. The particular illustrated compute service provider400includes a plurality of server computers402A-402C. While only three server computers are shown, any number can be used, and large centers can include thousands of server computers. The server computers402A-402C can provide computing resources for executing software instances406A-406C. In one embodiment, the software instances406A-406C are virtual machines. As known in the art, a virtual machine is an instance of a software implementation of a machine (i.e. a computer) that executes applications like a physical machine. In the example of a virtual machine, each of the servers402A-402C can be configured to execute a hypervisor408or another type of program configured to enable the execution of multiple software instances406on a single server. Additionally, each of the software instances406can be configured to execute one or more applications. The applications can include user or non-privileged programs, kernel or privileged programs, and/or drivers. In another embodiment (not shown), the software instances can include an operating system and application programs controlled by a single user. Thus, the compute service provider400can partition the resources of a given server computer among multiple customers (such as by providing a different virtual machine to each customer) and/or provide the full resources of a server computer to a single customer. It should be appreciated that although the embodiments disclosed herein are described primarily in the context of virtual machines, other types of instances can be utilized with the concepts and technologies disclosed herein. For instance, the technologies disclosed herein can be utilized with storage resources, data communications resources, and with other types of computing resources. The embodiments disclosed herein might also execute all or a portion of an application directly on a computer system without utilizing virtual machine instances. The server computers402A-402C can include a heterogeneous collection of different hardware resources or instance types. Some of the hardware instance types can include configurable hardware that is at least partially configurable by a user of the compute service provider400. One example of an instance type can include the server computer402A which is in communication with configurable hardware404A. Specifically, the server computer402A and the configurable hardware404A can communicate over a local interconnect such as PCIe. Another example of an instance type can include the server computer402B and configurable hardware404B. For example, the configurable logic404B can be integrated within a multi-chip module or on the same die as a CPU of the server computer402B. 
Yet another example of an instance type can include the server computer402C without any configurable hardware. Thus, hardware instance types with and without configurable logic can be present within the resources of the compute service provider400. One or more server computers420can be reserved for executing software components for managing the operation of the server computers402and the software instances406. For example, the server computer420can execute a management component422. A customer can access the management component422to configure various aspects of the operation of the software instances406purchased by the customer. For example, the customer can purchase, rent or lease instances and make changes to the configuration of the software instances. The configuration information for each of the software instances can be stored as a machine image (MI)442on the network-attached storage440. As a specific example, the MI442can describe the information used to launch a VM instance. The MI can include a template for a root volume of the instance (e.g., an OS and applications), launch permissions for controlling which customer accounts can use the MI, and a block device mapping which specifies volumes to attach to the instance when the instance is launched. The MI can also include a reference to a configurable hardware image (CHI)442which is to be loaded on configurable hardware404when the instance is launched. The CHI includes configuration data for programming or configuring at least a portion of the configurable hardware404. As another specific example, the MI442can describe the information used to launch an instance of an operating system directly on one of the server computers420. The customer can also specify settings regarding how the purchased instances are to be scaled in response to demand. The management component can further include a policy document to implement customer policies. An auto scaling component424can scale the instances406based upon rules defined by the customer. In one embodiment, the auto scaling component424allows a customer to specify scale-up rules for use in determining when new instances should be instantiated and scale-down rules for use in determining when existing instances should be terminated. The auto scaling component424can consist of a number of subcomponents executing on different server computers402or other computing devices. The auto scaling component424can monitor available computing resources over an internal management network and modify resources available based on need. A deployment component426can be used to assist customers in the deployment of new instances406of computing resources. The deployment component can have access to account information associated with the instances, such as who is the owner of the account, credit card information, country of the owner, etc. The deployment component426can receive a configuration from a customer that includes data describing how new instances406should be configured. For example, the configuration can specify one or more applications to be installed in new instances406, provide scripts and/or other types of code to be executed for configuring new instances406, provide cache logic specifying how an application cache should be prepared, and other types of information. The deployment component426can utilize the customer-provided configuration and cache logic to configure, prime, and launch new instances406. 
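A machine image that also references a configurable hardware image, as described above, could be represented by a record along the following lines; every key below is a placeholder chosen for the example rather than a defined schema, and the launch function is a toy stand-in for the deployment machinery.

```python
# Hypothetical machine image (MI) record that references a configurable hardware image (CHI).
machine_image = {
    "mi_id": "mi-0001",
    "root_volume_template": "os-plus-apps-v7",    # OS and applications for the instance
    "launch_permissions": ["account-42"],         # which customer accounts may use this MI
    "block_device_mapping": [{"device": "/dev/sdb", "volume": "vol-data-01"}],
    "chi_id": "chi-0042",                         # configuration data to load on the configurable hardware
}

def launch(mi: dict) -> None:
    """Launching an instance loads the MI and, when present, the referenced CHI."""
    print(f"launching instance from {mi['mi_id']}")
    if "chi_id" in mi:
        print(f"loading configurable hardware image {mi['chi_id']} onto the configurable hardware")

launch(machine_image)
```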
For example, the deployment component426can be invoked when a customer launches an instance from a control console, another instance, or a marketplace page. The control console can be a web-based service that provides an interface to a customer of the compute service provider400so that the customer can manage his or her account and access services. As one example, the control console can enable a user to upload MIs and/or CHIs to a private catalog, and images corresponding to a particular MI or CHI can be selected by the user from the private catalog when an instance is to be deployed. The configuration, cache logic, and other information used for launching instances may be specified by a customer using the management component422or by providing this information directly to the deployment component426. The instance manager can be considered part of the deployment component. Customer account information428can include any desired information associated with a customer of the multi-tenant environment. For example, the customer account information can include a unique identifier for a customer, a customer address, billing information, licensing information, customization parameters for launching instances, scheduling information, auto-scaling parameters, previous IP addresses used to access the account, a listing of the MI's and CHI's accessible to the customer, etc. One or more server computers430can be reserved for executing software components for managing the download of configuration data to configurable hardware404of the server computers402. For example, the server computer430can execute a logic repository service comprising an ingestion component432, a library management component434, and a download component436. The ingestion component432can receive host logic and application logic designs or specifications and generate configuration data that can be used to configure the configurable hardware404. The library management component434can be used to manage source code, user information, and configuration data associated with the logic repository service. For example, the library management component434can be used to store configuration data generated from a user's design in a location specified by the user on the network-attached storage440. In particular, the configuration data can be stored within a configurable hardware image442on the network-attached storage440. Additionally, the library management component434can manage the versioning and storage of input files (such as the specifications for the application logic and the host logic) and metadata about the logic designs and/or the users of the logic repository service. The library management component434can index the generated configuration data by one or more properties such as a user identifier, an instance type, a marketplace identifier, a machine image identifier, and a configurable hardware identifier, for example. The download component436can be used to authenticate requests for configuration data and to transmit the configuration data to the requestor when the request is authenticated. For example, agents on the server computers402A-B can send requests to the download component436when the instances406are launched that use the configurable hardware404. As another example, the agents on the server computers402A-B can send requests to the download component436when the instances406request that the configurable hardware404be partially reconfigured while the configurable hardware404is in operation. 
The network-attached storage (NAS)440can be used to provide storage space and access to files stored on the NAS440. For example, the NAS440can include one or more server computers used for processing requests using a network file sharing protocol, such as Network File System (NFS). The NAS440can include removable or non-removable media, including magnetic disks, storage area networks (SANs), redundant arrays of independent disks (RAID), magnetic tapes or cassettes, CD-ROMs, DVDs, or any other medium which can be used to store information in a non-transitory way and which can be accessed over the network450. The network450can be utilized to interconnect the server computers402A-402C, the server computers420and430, and the storage440. The network450can be a local area network (LAN) and can be connected to a Wide Area Network (WAN)460so that end users can access the compute service provider400. It should be appreciated that the network topology illustrated inFIG.4has been simplified and that many more networks and networking devices can be utilized to interconnect the various computing systems disclosed herein. FIG.5shows further details of an example system500including components of a control plane and a data plane for configuring and interfacing to a configurable hardware platform510. The control plane includes software and hardware functions for initializing, monitoring, reconfiguring, and tearing down the configurable hardware platform510. The data plane includes software and hardware functions for communicating between a user's application and the configurable hardware platform510. The control plane can be accessible by users or services having a higher privilege level and the data plane can be accessible by users or services having a lower privilege level. In one embodiment, the configurable hardware platform510is connected to a server computer520using a local interconnect, such as PCIe. In an alternative embodiment, the configurable hardware platform510can be integrated within the hardware of the server computer520. As one example, the server computer520can be one of the plurality of server computers402A-402B of the compute service provider400ofFIG.4. The server computer520has underlying hardware522including one or more CPUs, memory, storage devices, interconnection hardware, etc. Running a layer above the hardware522is a hypervisor or kernel layer524. The hypervisor or kernel layer can be classified as a type 1 or type 2 hypervisor. A type 1 hypervisor runs directly on the host hardware522to control the hardware and to manage the guest operating systems. A type 2 hypervisor runs within a conventional operating system environment. Thus, in a type 2 environment, the hypervisor can be a distinct layer running above the operating system and the operating system interacts with the system hardware. Different types of hypervisors include Xen-based, Hyper-V, ESXi/ESX, Linux, etc., but other hypervisors can be used. A management partition530(such as Domain 0 of the Xen hypervisor) can be part of the hypervisor or separated therefrom and generally includes device drivers needed for accessing the hardware522. User partitions540are logical units of isolation within the hypervisor. Each user partition540can be allocated its own portion of the hardware layer's memory, CPU allocation, storage, interconnect bandwidth, etc. Additionally, each user partition540can include a virtual machine and its own guest operating system. 
As such, each user partition540is an abstract portion of capacity designed to support its own virtual machine independent of the other partitions. The management partition530can be used to perform management services for the user partitions540and the configurable hardware platform510. The management partition530can communicate with web services (such as a deployment service, a logic repository service550, and a health monitoring service) of the compute service provider, the user partitions540, and the configurable hardware platform510. The management services can include services for launching and terminating user partitions540, and configuring, reconfiguring, and tearing down the configurable logic of the configurable hardware platform510. As a specific example, the management partition530can launch a new user partition540in response to a request from a deployment service (such as the deployment component426ofFIG.4). The request can include a reference to an MI and/or a CHI. The MI can specify programs and drivers to load on the user partition540and the CHI can specify configuration data to load on the configurable hardware platform510. The management partition530can initialize the user partition540based on the information associated with the MI and can cause the configuration data associated with the CHI to be loaded onto the configurable hardware platform510. The initialization of the user partition540and the configurable hardware platform510can occur concurrently so that the time to make the instance operational can be reduced. The management partition530can be used to manage programming and monitoring of the configurable hardware platform510. By using the management partition530for this purpose, access to the configuration data and the configuration ports of the configurable hardware platform510can be restricted. Specifically, users with lower privilege levels can be restricted from directly accessing the management partition530. Thus, the configurable logic cannot be modified without using the infrastructure of the compute services provider and any third party IP used to program the configurable logic can be protected from viewing by unauthorized users. The management partition530can include a software stack for the control plane to configure and interface to a configurable hardware platform510. The control plane software stack can include a configurable logic (CL) application management layer532for communicating with web services (such as the logic repository service550and a health monitoring service), the configurable hardware platform510, and the user partitions540. For example, the CL application management layer532can issue a request to the logic repository service550to fetch configuration data in response to a user partition540being launched. The CL application management layer532can communicate with the user partition540using shared memory of the hardware522or by sending and receiving inter-partition messages over the interconnect connecting the server computer520to the configurable hardware platform510. Specifically, the CL application management layer532can read and write messages to mailbox logic511of the configurable hardware platform510. The messages can include requests by an end-user application541to reconfigure or tear-down the configurable hardware platform510. The CL application management layer532can issue a request to the logic repository service550to fetch configuration data in response to a request to reconfigure the configurable hardware platform510. 
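The concurrent initialization mentioned above, in which the user partition and the configurable hardware platform are brought up in parallel to reduce the time to make the instance operational, can be sketched with two threads. The function bodies below are placeholders and not the actual management partition code.

```python
# Sketch of concurrent initialization; the sleeps stand in for the real work.
import threading
import time

def initialize_user_partition(mi_id: str) -> None:
    time.sleep(0.2)  # stands in for fetching and booting the machine image
    print(f"user partition initialized from {mi_id}")

def load_configurable_hardware(chi_id: str) -> None:
    time.sleep(0.2)  # stands in for loading configuration data through the configuration port
    print(f"configurable hardware programmed from {chi_id}")

t1 = threading.Thread(target=initialize_user_partition, args=("mi-123",))
t2 = threading.Thread(target=load_configurable_hardware, args=("chi-456",))
t1.start(); t2.start()   # both proceed in parallel
t1.join(); t2.join()
```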
The CL application management layer532can initiate a tear-down sequence in response to a request to tear down the configurable hardware platform510. The CL application management layer532can perform watchdog related activities to determine whether the communication path to the user partition540is functional. The control plane software stack can include a CL configuration layer534for accessing the configuration port512(e.g., a configuration access port) of the configurable hardware platform510so that configuration data can be loaded onto the configurable hardware platform510. For example, the CL configuration layer534can send a command or commands to the configuration port512to perform a full or partial configuration of the configurable hardware platform510. The CL configuration layer534can send the configuration data (e.g., a bitstream) to the configuration port512so that the configurable logic can be programmed according to the configuration data. The configuration data can specify host logic and/or application logic. The control plane software stack can include a management driver536for communicating over the physical interconnect connecting the server computer520to the configurable hardware platform510. The management driver536can encapsulate commands, requests, responses, messages, and data originating from the management partition530for transmission over the physical interconnect. Additionally, the management driver536can de-encapsulate commands, requests, responses, messages, and data sent to the management partition530over the physical interconnect. Specifically, the management driver536can communicate with the management function513of the configurable hardware platform510. For example, the management function513can be a physical or virtual function mapped to an address range during an enumeration of devices connected to the physical interconnect. The management driver536can communicate with the management function513by addressing transactions to the address range assigned to the management function513. The control plane software stack can include a CL management and monitoring layer538. The CL management and monitoring layer538can monitor and analyze transactions occurring on the physical interconnect to determine a health of the configurable hardware platform510and/or to determine usage characteristics of the configurable hardware platform510. The configurable hardware platform510can include non-configurable hard macros and configurable logic. The hard macros can perform specific functions within the configurable hardware platform510, such as input/output blocks (e.g., serializer and deserializer (SERDES) blocks and gigabit transceivers), analog-to-digital converters, memory control blocks, test access ports, and a configuration port512. The configurable logic can be programmed or configured by loading configuration data onto the configurable hardware platform510. For example, the configuration port512can be used for loading the configuration data. As one example, configuration data can be stored in a memory (such as a Flash memory) accessible by the configuration port512and the configuration data can be automatically loaded during an initialization sequence (such as during a power-on sequence) of the configurable hardware platform510. Additionally, the configuration port512can be accessed using an off-chip processor or an interface within the configurable hardware platform510. The configurable logic can be programmed to include host logic and application logic. 
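The encapsulation performed by the management driver can be pictured as wrapping a command in a transaction whose address falls within the range assigned to the management function. The base address, range size, and transaction fields below are assumptions made for illustration only.

```python
# Illustrative sketch of encapsulating management commands into address-mapped transactions.
from dataclasses import dataclass

MGMT_FN_BASE = 0x10000000   # assumed base of the management function's address range
MGMT_FN_SIZE = 0x00010000   # assumed size of that range

@dataclass
class Transaction:
    address: int
    payload: bytes

def encapsulate(command: bytes, offset: int) -> Transaction:
    """Wrap a command from the management partition for transmission over the interconnect."""
    assert 0 <= offset < MGMT_FN_SIZE
    return Transaction(address=MGMT_FN_BASE + offset, payload=command)

def de_encapsulate(txn: Transaction) -> bytes:
    """Recover the command when the transaction targets the management function's range."""
    assert MGMT_FN_BASE <= txn.address < MGMT_FN_BASE + MGMT_FN_SIZE
    return txn.payload

txn = encapsulate(b"READ_HEALTH_COUNTERS", offset=0x40)
print(de_encapsulate(txn))
```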
The host logic can shield the interfaces of at least some of the hard macros from the end-users so that the end-users have limited access to the hard macros and to the physical interconnect. For example, the host logic can include the mailbox logic511, the configuration port512, the management function513, the host interface514, and the application function515. The end-users can cause the configurable application logic516to be loaded on the configurable hardware platform510, and can communicate with the configurable application logic516from the user partitions540(via the application function515). The host interface logic514can include circuitry (e.g., hard macros and/or configurable logic) for signaling on the physical interconnect and implementing a communications protocol. The communications protocol specifies the rules and message formats for communicating over the interconnect. The application function515can be used to communicate with drivers of the user partitions540. Specifically, the application function515can be a physical or virtual function mapped to an address range during an enumeration of devices connected to the physical interconnect. The application drivers can communicate with the application function515by addressing transactions to the address range assigned to the application function515. Specifically, the application function515can communicate with an application logic management driver542to exchange commands, requests, responses, messages, and data over the control plane. The application function515can communicate with an application logic data plane driver543to exchange commands, requests, responses, messages, and data over the data plane. The mailbox logic511can include one or more buffers and one or more control registers. For example, a given control register can be associated with a particular buffer and the register can be used as a semaphore to synchronize between the management partition530and the user partition540. As a specific example, if a partition can modify a value of the control register, the partition can write to the buffer. The buffer and the control register can be accessible from both the management function513and the application function515. When the message is written to the buffer, another control register (e.g., the message ready register) can be written to indicate the message is complete. The message ready register can be polled by the partitions to determine if a message is present, or an interrupt can be generated and transmitted to the partitions in response to the message ready register being written. The user partition540can include a software stack for interfacing an end-user application540to the configurable hardware platform510. The application software stack can include functions for communicating with the control plane and the data plane. Specifically, the application software stack can include a CL-Application API544for providing the end-user application540with access to the configurable hardware platform510. The CL-Application API544can include a library of methods or functions for communicating with the configurable hardware platform510and the management partition530. For example, the end-user application541can send a command or data to the configurable application logic516by using an API of the CL-Application API544.
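A simplified model of the mailbox logic just described, with a buffer guarded by a control register used as a semaphore and a message ready register that the other partition polls, is sketched below. The register names and single-buffer layout are assumptions made for illustration.

```python
# Simplified mailbox model; register names and layout are illustrative assumptions.
from typing import Optional

class Mailbox:
    def __init__(self) -> None:
        self.buffer: bytes = b""
        self.control = 0      # semaphore: nonzero while a writer owns the buffer
        self.msg_ready = 0    # written when a complete message is in the buffer

    def write_message(self, data: bytes) -> bool:
        if self.control != 0:   # a partition may write only if it can take the control register
            return False
        self.control = 1
        self.buffer = data
        self.msg_ready = 1      # indicate the message is complete (could instead raise an interrupt)
        self.control = 0
        return True

    def poll_message(self) -> Optional[bytes]:
        if self.msg_ready:
            self.msg_ready = 0
            return self.buffer
        return None

mb = Mailbox()
mb.write_message(b"reconfigure-request")
print(mb.poll_message())   # b'reconfigure-request'
```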
In particular, the API of the CL-Application API544can interface with the application logic (AL) data plane driver543which can generate a transaction targeted to the application function515which can communicate with the configurable application logic516. In this manner, the end-user application541can cause the configurable application logic516to receive, process, and/or respond with data to potentially accelerate tasks of the end-user application541. As another example, the end-user application541can send a command or data to the management partition530by using an API of the CL-Application API544. In particular, the API of the CL-Application API544can interface with the AL management driver542which can generate a transaction targeted to the application function515which can communicate with the mailbox logic511. In this manner, the end-user application541can cause the management partition530to provide operational data or metadata about the configurable hardware platform510and/or to request that the configurable application logic516be reconfigured. The application software stack in conjunction with the hypervisor or kernel524can be used to limit the operations available to be performed over the physical interconnect by the end-user application541. For example, the compute services provider can provide the AL management driver542, the AL data plane driver543, and the CL-Application API544(such as by associating the files with a machine image). These components can be protected from modification by only permitting users and services having a higher privilege level than the end-user to write to the files. The AL management driver542and the AL data plane driver543can be restricted to using only addresses within the address range of the application function515. Additionally, an input/output memory management unit (I/O MMU) can restrict interconnect transactions to be within the address ranges of the application function515or the management function513. FIG.6is a sequence diagram of an example method600of fetching configuration data, configuring an instance of configurable hardware in a multi-tenant environment using the configuration data, and using the instance of the configurable hardware. The sequence diagram illustrates a series of steps used by different elements of the compute services infrastructure that are used to configure the configurable logic. As one example, the infrastructure components of the compute services provider can include a marketplace610, a customer instance612, a control plane614, a configurable hardware platform616, and a logic repository service618. The marketplace service610can receive configuration data for hardware accelerators created by end-users or by independent hardware developers that provide or market their accelerators to end-users of the compute services provider. The marketplace service610can provide a listing of accelerators that are available for purchase or for licensing so that end-users can find a hardware accelerator suited for their needs. The customer instance612can include software (such as a virtual machine, operating system, and/or application software) executing on a server computer, where the software is launched in response to an end-user deploying resources of the compute services provider. The server computer can be executing control plane software614, which can be used to manage the configuration of the configurable hardware platform616. The configurable hardware platform616can include reconfigurable logic and host logic, as described above.
The logic repository service618can include a repository of configuration data that can be indexed by product codes, machine instance identifiers, and/or configurable hardware identifiers, for example. The logic repository service618can receive a request for configuration data using one of the indexes, and can return the configuration data to the control plane. The components of the compute service provider infrastructure can be used at various phases during the deployment and use of a customer instance612. For example, the phases can include a configuration data fetching phase620, an application logic configuration phase630, and an application phase640. The configuration data fetching phase620can include identifying and fetching application logic from a logic repository service618. Specifically, an end-user of the compute services can subscribe and launch622a machine instance using the marketplace service610, a control console, or another instance. The marketplace service610can initiate a flow that causes an instance of a machine image to be loaded624on a server computer so that a customer instance612can be initialized. The machine image can include application software written and/or used by the end-user and control plane software provided by the compute services provider. A request624to load the machine image can be sent from the marketplace service610to the control plane614executing on the server computer. For example, the control plane614can download the machine image from a storage service (not shown) and the machine image can be loaded626on the server computer. Fetching, loading, and booting the machine image within a virtual machine instance can potentially be time-consuming, so the control plane614can send a request to fetch628configuration data corresponding to the application logic from the logic repository service618in parallel with the fetching and loading of the machine image. It should be noted that the operations to fetch the machine image from storage and fetch the application logic from the logic repository service618can occur in series or in parallel. The logic repository service618can reply629with the configuration data. Thus, the control plane software at614can receive a copy of configuration data corresponding to the application logic so that the application logic can be loaded onto the configurable hardware platform. The configuration phase630can include loading the configuration data onto the configurable hardware platform616. The configuration phase630can include cleaning632the configurable hardware platform. For example, cleaning632the configurable hardware platform can include writing to any memories (e.g., the public peripherals) in communication with the configurable hardware platform so that a prior customer's data is not observable by the present customer. Cleaning632the memories can include writing all zeroes, writing all ones, and/or writing random patterns to the storage locations of the memories. Additionally, the configurable logic memory of the configurable hardware platform616can be fully or partially scrubbed. After the configurable hardware platform616is cleaned, a host logic version that is loaded on the configurable hardware platform616can be returned634to the control plane614. The host logic version can be used to verify635whether the application logic is compatible with the host logic that is loaded on the configurable hardware platform616.
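One way to picture the verification at635is a simple comparison of the host logic version reported by the configurable hardware platform against the version the application logic was generated for. The versioning scheme below (matching major versions) is an assumption made for the example, not the actual compatibility rule.

```python
# Illustrative compatibility check; the version format and rule are assumptions.
def is_compatible(loaded_host_logic_version: str, required_host_logic_version: str) -> bool:
    """Assume compatibility requires matching major versions, e.g. '2.1' is compatible with '2.3'."""
    return loaded_host_logic_version.split(".")[0] == required_host_logic_version.split(".")[0]

print(is_compatible("2.1", "2.3"))   # True: configuration can continue
print(is_compatible("1.0", "2.3"))   # False: configuration phase would abort
```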
If the host logic and application logic are not compatible, then the configuration phase630can abort (not shown). Alternatively, if the host logic and the application logic are compatible, then the configuration phase630can continue at636. The application logic can be copied from the control plane614to the configurable hardware platform616so that the application logic can be loaded636into the configurable hardware platform616. After loading636, the configurable hardware platform616can indicate637that the functions (e.g., the application logic) loaded on the configurable hardware platform616are ready. The control plane614can indicate638to the customer instance612that the application logic is initialized and ready for use. Alternatively, the control plane614can prevent the virtual machine from completing an initialization or boot sequence until the indication637is received by the control plane614so that the virtual machine cannot begin executing until the configurable hardware platform616is configured. The application phase640can begin after the application logic is initialized. The application phase640can include executing the application software on the customer instance612and executing the application logic on the configurable hardware platform616. In particular, the application software of the customer instance612can be in communication642with the application logic of the configurable hardware platform616. For example, the application software can cause data to be transferred to the application logic, the data can be processed by the application logic, and the processed data and/or status information can be returned to the application software. The application logic can include specialized or customized hardware that can potentially accelerate processing speed compared to using only software on a general purpose computer. The application logic can perform the same functions for the duration of the customer instance612or the application logic can be adapted or reconfigured while the customer instance612is executing. For example, the application software executing on the customer instance612can request that different application logic be loaded onto the configurable hardware platform616, or additional application logic can be loaded onto a second configurable hardware platform (not illustrated). In particular, the application software can issue a request644to the configurable hardware platform616, which can forward646the request to the control plane614, or a customer can submit an API request to the logic repository service or a compute service specifying an identifier of the instance, an identifier for the application logic, and a parameter that indicates to add a configurable hardware platform to the instance. The control plane614can begin fetching the new application logic at628from the logic repository service618. When new application logic is loaded onto a running customer instance, the cleaning632step can be omitted since the customer is not changing for the customer instance612. Additionally, a tear-down phase (not shown) can be used to clean the configurable hardware platform616so that customer data is further protected. For example, the memories of the configurable hardware platform616can be scrubbed and/or the configuration logic memories associated with the application logic can be scrubbed as part of a tear-down sequence when a customer stops using the customer instance612. FIG.7is a flow diagram of an example method700of using a configurable hardware platform.
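Before turning to the method of FIG.7, the scrubbing used in the cleaning and tear-down phases can be sketched as writing zeroes, ones, or random patterns over a memory, as described above. The memory model and pattern names below are illustrative only.

```python
# Illustrative scrub of a memory so a prior customer's data is not observable.
import os

def scrub_memory(memory: bytearray, pattern: str = "zeros") -> None:
    if pattern == "zeros":
        memory[:] = bytes(len(memory))
    elif pattern == "ones":
        memory[:] = b"\xff" * len(memory)
    elif pattern == "random":
        memory[:] = os.urandom(len(memory))
    else:
        raise ValueError(f"unknown scrub pattern: {pattern}")

public_peripheral_mem = bytearray(b"previous customer data" + bytes(42))
scrub_memory(public_peripheral_mem, "zeros")
assert all(b == 0 for b in public_peripheral_mem)
```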
At710, host logic can be loaded on a first region of reconfigurable logic so that the configurable hardware platform performs operations of the host logic. The host logic can include a control plane function used for enforcing restricted access for transactions from the host interface. For example, the control plane function can reject transactions that are outside of an address range assigned to the control plane function. Additionally, the host logic can include logic for limiting or restricting the application logic from using hard macros of the configurable hardware platform and accessing the physical interfaces to a host device. Thus, the host logic can encapsulate the application logic so that the interfaces to hard macros and to other components of the computing infrastructure are managed by the host logic. The host logic can be loaded at one time or incrementally. For example, the host logic can include static logic that is loaded upon deassertion of a reset signal of the configurable hardware platform. As a specific example, configuration data corresponding to the static logic can be stored in a flash memory of the configurable hardware platform, and the contents of the flash memory can be used to program the configurable hardware platform with the static host logic. In one embodiment, the static logic can be loaded without intervention by a host computer (e.g., a customer instance). Additionally or alternatively, the host logic can include reconfigurable logic that is loaded after the static logic is loaded. For example, the reconfigurable host logic can be added while the static host logic is operating. In particular, the reconfigurable host logic can be loaded upon receiving a transaction requesting that the reconfigurable host logic be loaded. For example, the transaction can be transmitted from a host computer over a physical interconnect connecting the configurable hardware platform to the host computer. By dividing the host logic into a static logic component and a reconfigurable logic component, the host logic can be incrementally loaded onto the configurable hardware platform. For example, the static logic can include base functionality of the host logic, such as communication interfaces, enumeration logic, and configuration management logic. By providing the communication interfaces in the static logic, the configurable hardware platform can be discovered or enumerated on the physical interconnect as the computing system is powered on and/or is reset. The reconfigurable logic can be used to provide updates to the host logic and to provide higher-level functionality to the host logic. For example, some interconnect technologies have time limits for enumerating devices attached to the interconnect. The time to load host logic onto the configurable hardware platform can be included in the time budget allotted for enumeration and so the initial host logic can be sized to be loaded relatively quickly. Thus, the static logic can be a subset of the host logic functionality so that the configurable hardware platform can be operational within the time limits specified by the interconnect technology. The reconfigurable logic can provide additional host logic functionality to be added after the enumeration or boot up sequence is complete. As one example, host logic that is associated with the data plane (such as a DMA engine, CHP fabric, peripheral fabric, or a public peripheral interface) can be loaded as reconfigurable logic after the static logic has been loaded. 
At720, a transaction including a request to load application logic on a second region of the reconfigurable logic can be received. The second region of the reconfigurable logic can be non-overlapping with the first region of the reconfigurable logic so that the host logic is not modified. Additionally, the second region of the reconfigurable logic can have an interface to static host logic. As one example, the transaction including the request to load the application logic can target a control register of the host logic to initiate loading the application logic. At730, the second region of the reconfigurable logic can be configured to perform the operations of the application logic only when the request to load the application logic is authorized. The request can be authorized in a variety of ways. For example, the request can include an address, and the request can be authorized when the address matches a predefined address or falls within a range of addresses assigned to the host logic. As a specific example, a control register for controlling loading of the application logic can be assigned or mapped to an address, and the request can be authorized when it includes the address corresponding to the control register. Additionally or alternatively, the request can include an authorization token that is verified by the host logic to determine whether the request is authorized. At740, information between the host computer and the application logic can be transmitted using a translation layer of the host logic. For example, the application logic can use a streaming interface of the translation layer and the translation layer can format packets or transactions conforming to formatting and size specifications of the interconnection fabric. By using the translation layer, the security and availability of the host computer can potentially be increased because the application logic can be restricted from directly creating transactions and/or viewing transactions of the physical interconnect. Thus, the use of the translation layer can protect the integrity and privacy of transactions occurring on the physical interconnect. At750, information between a public peripheral and the application logic can be transmitted using a translation layer of the host logic. As described above, the public peripherals can include memory and/or other configurable hardware platforms. The translation layer can format all transfers between the public peripheral and the application logic so that the application logic is not burdened with conforming to low-level details of the transfer protocol and so that public peripherals are not misused (such as by causing a malfunction or accessing privileged information). At760, the host logic can be used to analyze transactions of the application logic. For example, the host logic can track operational characteristics, such as bandwidth, latency, and other performance characteristics of the application logic and/or the host logic. As another example, the host logic can analyze transactions to determine if the transactions conform to predefined criteria. If the transactions do not conform to the criteria, then the host logic can potentially cancel transactions originating at the application logic. FIG.8depicts a generalized example of a suitable computing environment800in which the described innovations may be implemented. 
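The authorization described at730can be pictured as honoring a load request only when it targets the control register address assigned to the host logic, or when it carries a token the host logic can verify. The addresses, the shared key, and the HMAC-based token scheme below are assumptions made for the example, not the host logic's actual mechanism.

```python
# Illustrative authorization check; addresses, key, and token scheme are assumptions.
import hashlib
import hmac
from typing import Optional

LOAD_CTRL_REG_ADDR = 0x20000040          # assumed address mapped to the load control register
HOST_LOGIC_KEY = b"host-logic-secret"    # hypothetical shared secret known to the host logic

def token_for(request_id: str) -> str:
    return hmac.new(HOST_LOGIC_KEY, request_id.encode(), hashlib.sha256).hexdigest()

def is_authorized(address: int, request_id: str, token: Optional[str]) -> bool:
    if address == LOAD_CTRL_REG_ADDR:    # request targets the mapped control register
        return True
    if token is not None:                # or it carries a verifiable authorization token
        return hmac.compare_digest(token, token_for(request_id))
    return False

print(is_authorized(LOAD_CTRL_REG_ADDR, "req-1", None))          # True: address match
print(is_authorized(0x30000000, "req-2", token_for("req-2")))    # True: token verified
print(is_authorized(0x30000000, "req-3", None))                  # False: rejected
```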
The computing environment800is not intended to suggest any limitation as to scope of use or functionality, as the innovations may be implemented in diverse general-purpose or special-purpose computing systems. For example, the computing environment800can be any of a variety of computing devices (e.g., desktop computer, laptop computer, server computer, tablet computer, etc.) With reference toFIG.8, the computing environment800includes one or more processing units810,815and memory820,825. InFIG.8, this basic configuration830is included within a dashed line. The processing units810,815execute computer-executable instructions. A processing unit can be a general-purpose central processing unit (CPU), processor in an application-specific integrated circuit (ASIC) or any other type of processor. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power. For example,FIG.8shows a central processing unit810as well as a graphics processing unit or co-processing unit815. The tangible memory820,825may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two, accessible by the processing unit(s). The memory820,825stores software880implementing one or more innovations described herein, in the form of computer-executable instructions suitable for execution by the processing unit(s). A computing system may have additional features. For example, the computing environment800includes storage840, one or more input devices850, one or more output devices860, and one or more communication connections870. An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing environment800. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment800, and coordinates activities of the components of the computing environment800. The tangible storage840may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, DVDs, or any other medium which can be used to store information in a non-transitory way and which can be accessed within the computing environment800. The storage840stores instructions for the software880implementing one or more innovations described herein. The input device(s)850may be a touch input device such as a keyboard, mouse, pen, or trackball, a voice input device, a scanning device, or another device that provides input to the computing environment800. The output device(s)860may be a display, printer, speaker, CD-writer, or another device that provides output from the computing environment800. The communication connection(s)870enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, audio or video input or output, or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media can use an electrical, optical, RF, or other carrier. 
Although the operations of some of the disclosed methods are described in a particular, sequential order for convenient presentation, it should be understood that this manner of description encompasses rearrangement, unless a particular ordering is required by specific language set forth below. For example, operations described sequentially may in some cases be rearranged or performed concurrently. Moreover, for the sake of simplicity, the attached figures may not show the various ways in which the disclosed methods can be used in conjunction with other methods. Any of the disclosed methods can be implemented as computer-executable instructions stored on one or more computer-readable storage media (e.g., one or more optical media discs, volatile memory components (such as DRAM or SRAM), or non-volatile memory components (such as flash memory or hard drives)) and executed on a computer (e.g., any commercially available computer, including smart phones or other mobile devices that include computing hardware). The term computer-readable storage media does not include communication connections, such as signals and carrier waves. Any of the computer-executable instructions for implementing the disclosed techniques as well as any data created and used during implementation of the disclosed embodiments can be stored on one or more computer-readable storage media. The computer-executable instructions can be part of, for example, a dedicated software application or a software application that is accessed or downloaded via a web browser or other software application (such as a remote computing application). Such software can be executed, for example, on a single local computer (e.g., any suitable commercially available computer) or in a network environment (e.g., via the Internet, a wide-area network, a local-area network, a client-server network (such as a cloud computing network), or other such network) using one or more network computers. For clarity, only certain selected aspects of the software-based implementations are described. Other details that are well known in the art are omitted. For example, it should be understood that the disclosed technology is not limited to any specific computer language or program. For instance, the disclosed technology can be implemented by software written in C++, Java, Perl, JavaScript, Adobe Flash, or any other suitable programming language. Likewise, the disclosed technology is not limited to any particular computer or type of hardware. Certain details of suitable computers and hardware are well known and need not be set forth in detail in this disclosure. It should also be well understood that any functionality described herein can be performed, at least in part, by one or more hardware logic components, instead of software. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc. Furthermore, any of the software-based embodiments (comprising, for example, computer-executable instructions for causing a computer to perform any of the disclosed methods) can be uploaded, downloaded, or remotely accessed through a suitable communication means. 
Such suitable communication means include, for example, the Internet, the World Wide Web, an intranet, software applications, cable (including fiber optic cable), magnetic communications, electromagnetic communications (including RF, microwave, and infrared communications), electronic communications, or other such communication means. The disclosed methods, apparatus, and systems should not be construed as limiting in any way. Instead, the present disclosure is directed toward all novel and nonobvious features and aspects of the various disclosed embodiments, alone and in various combinations and subcombinations with one another. The disclosed methods, apparatus, and systems are not limited to any specific aspect or feature or combination thereof, nor do the disclosed embodiments require that any one or more specific advantages be present or problems be solved. In view of the many possible embodiments to which the principles of the disclosed invention may be applied, it should be recognized that the illustrated embodiments are only preferred examples of the invention and should not be taken as limiting the scope of the invention. Rather, the scope of the invention is defined by the following claims. We therefore claim as our invention all that comes within the scope of these claims.
98,273
11860811
DETAILED DESCRIPTION Embodiments of the present disclosure will now be described with reference to the drawing figures, in which like reference numerals refer to like parts throughout. Embodiments of the present disclosure advantageously provide a high efficiency message protocol for a data processing system that includes high speed communication buses and networks that are connected to an interconnect. While applicable to many message protocols that may pass through an interconnect, embodiments of the present disclosure particularly improve the efficiency of peripheral component interconnect express (PCIe) peer-to-peer transactions that are fragmented by the interconnect during transmission, and the like, as discussed in detail below. In one embodiment, a system includes a destination high speed serial (HSS) controller including a processor configured to receive sequences of smaller write requests from an interconnect, where the sequences of smaller write requests are generated from a larger write request from a source, the larger write request has a data size, and each smaller write request has a last identifier and a data size. For each sequence of smaller write requests, the processor is configured to assemble, based on the last identifier, the smaller write requests into an intermediate write request having a data size, and to send the intermediate write request to a destination. Generally, a SoC interconnect may be coupled to an HSS bus or network via a controller, such as, for example, a PCIe controller, a compute express link (CXL) controller, etc., and may be required to efficiently convey traffic between source and destination endpoints that are located on different HSS buses and networks, such as, for example, graphics processing units (GPUs), memories, etc. The protocols used by these HSS buses and networks achieve higher performance with larger packet sizes. For example, the PCIe protocol allows payload sizes up to 4 KB (i.e., 4,096 bytes), which is due, in part, to the transaction header using the same channel as the write data. Typically, a SoC interconnect uses an internal communication protocol that is optimized for cache line-sized transactions, such as, for example, AXI (described below). This communication protocol transports header information and payload data on separate channels, and uses a much smaller packet size than HSS buses and networks, such as, for example, 64 B. Unfortunately, this packet size mismatch introduces inefficiencies when the SoC interconnect conveys traffic between source and destination endpoints that are located on different HSS buses and networks that are coupled to the SoC interconnect. FIG.1Adepicts a block diagram for SoC10, in accordance with an embodiment of the present disclosure. In this embodiment, SoC10includes interconnect100coupled to processor(s)110, accelerator(s) or special processor(s)120, high speed serial (HSS) communication controller(s)130coupled to HSS device(s)132, memory controller(s)140coupled to memory(ies)142, and HSS communication controller(s)150coupled to HSS device(s)152. A number, m, of memory controllers140are depicted inFIG.1A, i.e., memory controllers140-1, . . . ,140-m, and each memory controller140-1, . . . ,140-mis coupled to a respective memory142-1, . . . ,142-m, which may be integrated on SoC10or externally connected. 
Interconnect100is a communication system that transfers data between processor110, accelerator or special processor120, HSS communication controller130and HSS device132, memory controllers140-1, . . . ,140-mand memories142-1, . . . ,142-m, HSS communication controllers150and HSS device152, as well as other components. Certain components of SoC10may be classified as a particular type of interconnect protocol node, as discussed in more detail below. Generally, interconnect100may include, inter alia, a shared or hierarchical bus, a crossbar switch, a packet-based network-on-chip (NoC), etc. In one embodiment, interconnect100has a crossbar topology that provides an ordered network with low latency, and may be particularly suitable for a small-sized interconnect with a small number of protocol nodes, switches and wire counts. In another embodiment, interconnect100has a ring topology that balances wiring efficiency with latency, which increases linearly with the number of protocol nodes, and may be particularly suitable for a medium-sized interconnect. In a further embodiment, interconnect100has a mesh topology that has more wires to provide greater bandwidth, is modular and easily scalable by adding more rows and columns of switches or routers, and may be particularly suitable for a large-sized interconnect. In many embodiments, interconnect100is a coherent mesh network that includes multiple switches or router logic modules (routers) arranged in a two-dimensional rectangular mesh topology, such as, for example, the Arm CoreLink Coherent Mesh Network. In this example, the switches or routers are crosspoints (i.e., XPs). Each XP may connect up to four neighboring XPs using mesh ports, and may connect to one or two components (devices) using device ports. Additionally, each XP may support four coherent hub interface (CHI) channels to transport data from a source device to a destination or target device, as described, for example, in the Arm Advanced Microcontroller Bus Architecture (AMBA) CHI specification. In these embodiments, interconnect100may have an architecture that includes three layers, i.e., an upper protocol layer, a middle network layer, and a lower link layer. The protocol layer generates and processes requests and responses at the protocol nodes, defines the permitted cache state transitions at the protocol nodes that include caches, defines the transaction flows for each request type, and manages the protocol level flow control. The network layer packetizes the protocol message, determines, and adds to the packet, the source and target node IDs required to route the packet over interconnect100to the required destination. The link layer provides flow control between components, and manages link channels to provide deadlock free switching across interconnect100. Processor110is a general-purpose, central processing unit (CPU) that executes instructions to perform various functions for SoC10, such as, for example, control, computation, input/output, etc. More particularly, processor110may include a single processor core or multiple processor cores, which may be arranged in a processor cluster, such as, for example the Arm Cortex A, R and M families of processors. Generally, processor110may execute computer programs or modules, such as an operating system, application software, other software modules, etc., stored within a memory, such as, for example, memory142-1, . . . , memory142-m, etc. Processor110may also include local cache memory. 
Accelerator or special processor120is a specialized processor that is optimized to perform a specific function, such as process graphics, images and/or multimedia data, process digital signal data, process artificial neural network data, etc. For example, accelerator or special processor120may be a GPU, a neural processing unit (NPU), a digital signal processor (DSP), etc. More particularly, accelerator or special processor120may include a single processor core or multiple processor cores, such as, for example the Arm Mali family of GPUs, display processors and video processors, the Arm Machine Learning processor, etc. Accelerator or special processor120may also include local cache memory. Memory controllers140-1, . . . ,140-minclude a microprocessor, microcontroller, application specific integrated circuit (ASIC), field programmable gate array (FPGA), etc., and are configured to provide access to memories142-1, . . . ,142-mthrough interconnect100. Memories142-1, . . . ,142-mmay include a variety of non-transitory computer-readable medium that may be accessed by the other components of SoC10, such as processor110, accelerator or special processor120, etc. For example, memory142-1may store data and instructions for execution by processor110, accelerator or special processor120, etc. In various embodiments, memories142-1, . . . ,142-mmay include volatile and nonvolatile medium, non-removable medium and/or removable medium. For example, memories142-1, . . . ,142-mmay include any combination of random access memory (RAM), dynamic RAM (DRAM), double data rate (DDR) DRAM or synchronous DRAM (SDRAM), static RAM (SRAM), read only memory (ROM), flash memory, cache memory, and/or any other type of non-transitory computer-readable medium. In certain embodiments, memory controllers140-1, . . . ,140-mare dynamic memory controllers that provide data transfers to and from high-density DDR3 or DDR4 DRAM memory, such as, for example, the Arm CoreLink Dynamic Memory Controller (DMC) family, each of which includes a fast, single-port CHI channel interface for connecting to interconnect100. Generally, HSS communication controllers130,150each include a microprocessor, microcontroller, ASIC, FPGA, etc., communicate with interconnect100using one or more AMBA connections with advanced extensible interface (AXI) and/or AXI Coherency Extensions (ACE) Lite protocols, and communicate with HSS devices132,152(respectively) using an HSS communications interface, such as, for example, PCIe, CXL, Ethernet, high-definition multimedia interface (HDMI), Thunderbolt, universal serial bus (USB), serial attached SCSI (SAS), serial advanced technology attachment (SATA), etc. FIG.1Bdepicts a block diagram for SoC10, in accordance with an embodiment of the present disclosure. As depicted inFIG.1B, in many embodiments, HSS communication controller130is a source PCIe/CXL controller130, HSS devices132are source PCIe devices132, HSS communication controller150is a destination PCIe/CXL controller150, and HSS devices152are destination PCIe devices152. Source PCIe/CXL controller130includes a microprocessor, microcontroller, ASIC, FPGA, etc., a number of PCIe ports131and one or more interconnect interfaces133. Each PCIe port131may be coupled to a different source PCIe device132, and communicates therewith using a PCIe connection. Source PCIe/CXL controller130is configured to provide the functionality of a PCIe root complex, and implements a controller instance for each PCIe port131.
Each interconnect interface133is coupled to an interface101of interconnect100, and communicates with the interconnect100using an AMBA connection with AXI and/or ACE Lite protocols. Similarly, destination PCIe/CXL controller150includes a microprocessor, microcontroller, ASIC, FPGA, etc., a number of PCIe ports151and one or more interconnect interfaces153. Each PCIe port151may be coupled to a different destination PCIe device152, and communicates therewith using a PCIe connection. Destination PCIe/CXL controller150is configured to provide the functionality of a PCIe root complex, and implements a controller instance for each PCIe port151. Each interconnect interface153is coupled to an interface103of interconnect100, and communicates with the interconnect100using an AMBA connection with AXI and/or ACE Lite protocols. Generally, source PCIe devices132and destination PCIe devices152exchange PCIe peer-to-peer traffic through source PCIe/CXL controller130, interconnect100, and destination PCIe/CXL controller150. In the embodiment depicted inFIG.1B, source PCIe device132(endpoint A) exchanges PCIe peer-to-peer traffic with destination PCIe device152(endpoint B). FIG.1Cdepicts a block diagram for SoC10using protocol node nomenclature, in accordance with an embodiment of the present disclosure. A requester is represented by a Request Node (RN), which is a protocol node that generates protocol transactions for interconnect100, such as, for example, memory reads and writes, I/O data transfers, etc. An RN-F protocol node represents a “fully” coherent requester, and an RN-I protocol node represents an “I/O” coherent requester. Processor110and accelerator or special processor120are fully coherent requesters, so RN-F210represents processor110, and RN-F220represents accelerator or special processor120. Each RN-I represents a source device that is connected to source PCIe/CXL controller130, and includes a microprocessor, microcontroller, ASIC, FPGA, logic circuits, etc., to provide the relevant functionality. In some embodiments, RN-Is may use the same hardware. Source PCIe device132is an I/O coherent requester, so RN-I232represents source PCIe device132. In this embodiment, source PCIe/CXL controller130acts as a bridge from RN-I232to source PCIe device132. In this embodiment, interconnect100includes several completers, each including a microprocessor, microcontroller, ASIC, FPGA, logic circuits, etc., to provide the relevant functionality. Each completer is represented by a Home Node (HN), which is a protocol node that receives protocol transactions from RNs, and may receive protocol transactions from completers (e.g., memory controllers, etc.), as described below. Each HN is responsible for managing a specific portion of the overall address space for SoC10. Similar to RNs, an HN-F protocol node represents a fully coherent completer, and an HN-I protocol node represents an I/O coherent completer. In many embodiments, the entire address space of memories142-1, . . . ,142-mmay be managed by the HN-Fs202-1, . . . ,202-min SoC10. Each HN-F may include a system level cache and a snoop traffic filter, and acts as the Point-of-Coherency (PoC) and Point of Serialization (PoS) for the memory requests sent to that HN-F. To avoid conflicts when multiple RNs attempt to access the same memory address within memories142-1, . . . ,142-m, HN-Fs202-1, . . . ,202-mact as the PoS, processing read requests, write requests, etc., in a serial manner. 
A fully coherent destination device may be represented by a completer, which is a protocol node that receives and completes requests from the HN-Fs. Memory controllers140-1, . . . ,140-mare fully coherent destination devices. Each HN-I is responsible for managing all of the transactions targeting the address space of a destination device or subsystem, and acts as the Point-of-Coherency (PoC) and Point of Serialization (PoS) for the requests sent to that destination device. Destination PCIe device152is a destination device, so HN-I204manages the address spaces for destination PCIe device152. In certain embodiments, source PCIe device132is a master device, and destination PCIe device152is a slave device. PCIe peer-to-peer (P2P) traffic includes posted and non-posted transactions, such as, for example, read transactions, write transactions, etc., that include requests and may include completions or responses. A posted transaction includes a request that does not require a completion or response, while a non-posted transaction includes a request that does require a completion or response. Posted write transactions follow Ordered Write Observation (OWO) to maintain PCIe ordering rules. FIG.2Adepicts a PCIe transaction layer protocol (TLP) packet310, in accordance with an embodiment of the present disclosure. PCIe TLP packet310includes header312and data payload314, which may include a read request, a read response, a write request, a write response, etc. For a write transaction, source PCIe device132(endpoint A) generates a PCIe TLP packet310that includes data payload314with a write request that includes the write address and the write data, and then transmits the PCIe TLP packet310to source PCIe/CXL controller130over a PCIe network. Source PCIe/CXL controller130converts the PCIe TLP packet310to an AXI write request, which is transmitted to interconnect100over an AMBA connection with AXI protocol. FIG.2Bdepicts an AXI write request320, in accordance with an embodiment of the present disclosure. AXI write request320includes write address channel data322transmitted over the write address (AW) channel, and write data channel data324transmitted over the write data (W) channel. Write address channel data322includes, inter alia, the write address from data payload314(i.e., AWAddr) and an AWID signal set to 0, while the write data channel data324includes the write data from data payload314(i.e., WData) and a WUser signal set to 0. The corresponding AXI write response, if required, includes write response channel data transmitted over the write response (B) channel, such as, for example, BResp, etc. The AXI write request320is received and processed by RN-I232, transmitted through interconnect100to HN-I204according to the AMBA CHI protocol, and then transmitted to destination PCIe/CXL controller150over an AMBA connection with AXI protocol. Destination PCIe/CXL controller150converts the AXI write request320into a PCIe TLP packet310, and then transmits the PCIe TLP packet310to destination PCIe device152(endpoint B) over a different PCIe network. In some cases, destination PCIe device152generates a write response, which is transmitted to source PCIe device132along a similar route in the other direction. Typically, PCIe device132(endpoint A) originates a write request that includes write data that exceeds the amount of data (i.e., the data size) that can be transmitted over interconnect100in a single AXI write request320.
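A minimal sketch of the conversion just described, in which the write address and write data carried in a PCIe TLP payload become AXI write address channel and write data channel fields, is shown below. The field sets are heavily reduced and the class names are illustrative; the fragmentation discussed next covers the case where the write data exceeds the interconnect's transfer size.

```python
# Reduced model of the PCIe TLP to AXI write conversion; not a complete TLP or AXI definition.
from dataclasses import dataclass

@dataclass
class PcieTlp:
    header: dict
    write_address: int
    write_data: bytes

@dataclass
class AxiWrite:
    awaddr: int        # write address (AW) channel
    awid: int
    wdata: bytes       # write data (W) channel
    wuser: int = 0

def tlp_to_axi(tlp: PcieTlp, awid: int = 0) -> AxiWrite:
    return AxiWrite(awaddr=tlp.write_address, awid=awid, wdata=tlp.write_data)

tlp = PcieTlp(header={"type": "MWr"}, write_address=0x80000000, write_data=b"\xaa" * 64)
print(tlp_to_axi(tlp).awaddr == 0x80000000)   # True
```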
So, RN-I232divides or fragments the write request into a number of smaller AXI write requests3201, . . . ,320Nthat each satisfy the AMBA CHI protocol, which are then transmitted to HN-I204. In many embodiments, the amount of write data that can be transmitted over interconnect100in a single AXI write request320is 64 B, so RN-I232divides or fragments the write request into a number of smaller AXI write requests320i, each having a 64 B data size. For example, if the write request has a data size of 4 KB, then RN-I232divides or fragments the write request into 64 smaller AXI write requests3201, . . . ,32064(i.e., 4 KB/64 B=64). Different write request data sizes and interconnect protocols are also supported. FIG.2Cdepicts smaller AXI write requests3201, . . . ,320Nin accordance with an embodiment of the present disclosure. Smaller AXI write request3201includes write address channel data3221transmitted over the write address (AW) channel, and write data channel data3241transmitted over the write data (W) channel. Write address channel data3221includes, inter alia, the original write address from data payload314(i.e., AWAddr1) and an AWID signal set to 0, while the write data channel data3241includes the first 64 B of write data from data payload314(i.e., WData1) and a WUser signal set to 0. The next smaller AXI write request3202(not depicted for clarity) includes write address channel data3222transmitted over the write address (AW) channel, and write data channel data3242transmitted over the write data (W) channel. Write address channel data3222includes, inter alia, the original write address from data payload314(i.e., AWAddr2) advanced by 64 bytes, and an AWID signal set to 0, while the write data channel data3242includes the second 64 B of write data from data payload314(i.e., WData2) and a WUser signal set to 0. The smaller AXI write requests3203, . . . ,320N-1are similarly generated. The last smaller AXI write request320Nincludes write address channel data322Ntransmitted over the write address (AW) channel, and write data channel data324Ntransmitted over the write data (W) channel. Write address channel data322Nincludes, inter alia, the original write address from data payload314(i.e., AWAddrN) advanced by (N−1)·64 bytes, and an AWID signal set to 0, while the write data channel data324Nincludes the last 64 B of write data from data payload314(i.e., WDataN) and a WUser signal set to 0. During the transmission through interconnect100, the smaller AXI write requests320may become interleaved with one or more write requests from other RNs while traversing interconnect100. FIG.2Ddepicts smaller AXI write requests3201, . . . ,320Ninterleaved with AXI write request330, in accordance with an embodiment of the present disclosure. Interleaved AXI write request330includes write address channel data332transmitted over the write address (AW) channel, and write data channel data334transmitted over the write data (W) channel. Write address channel data332includes, inter alia, a write address (i.e., AWAddr) and an AWID signal set to 1, while the write data channel data334includes write data (i.e., WData) and a WUser signal set to 0. The AWID signal differentiates the interleaved AXI write request330(i.e., AWID=1) from the smaller AXI write requests3201, . . . ,320N(i.e., AWID=0) during reassembly by destination PCIe/CXL controller150. HN-I204then transmits the smaller AXI write requests320to destination PCIe/CXL controller150over an AMBA connection with AXI protocol. 
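The fragmentation performed by RN-I232can be sketched as splitting the write data into 64 B pieces whose addresses advance by 64 bytes, all tagged with the same AWID so that they can be told apart from interleaved requests carrying other AWIDs. The dictionary fields below are illustrative.

```python
# Sketch of fragmenting a large write into 64 B AXI writes with advancing addresses.
from typing import Dict, List

BEAT_SIZE = 64   # bytes carried by one smaller AXI write over the interconnect

def fragment(awaddr: int, wdata: bytes, awid: int = 0) -> List[Dict]:
    assert len(wdata) % BEAT_SIZE == 0
    return [
        {"awaddr": awaddr + i * BEAT_SIZE,
         "awid": awid,
         "wdata": wdata[i * BEAT_SIZE:(i + 1) * BEAT_SIZE]}
        for i in range(len(wdata) // BEAT_SIZE)
    ]

smaller = fragment(0x80000000, b"\x55" * 4096)     # a 4 KB write becomes 64 fragments
print(len(smaller), hex(smaller[-1]["awaddr"]))    # 64 0x80000fc0
```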
Destination PCIe/CXL controller150may simply convert each of the smaller AXI write requests320into a respective PCIe TLP packet310, and then transmit the PCIe TLP packets310to destination PCIe device152(endpoint B) over the different PCIe network. Alternatively, destination PCIe/CXL controller150may reassemble the smaller AXI write requests320iinto a single PCIe TLP packet310that includes one large write request, similar to the original PCIe TLP packet310with the original write request, and then transmit the single PCIe TLP packet310to destination PCIe device152(endpoint B). One reason for reassembling the smaller AXI write requests320iinto a single PCIe TLP packet310at the destination PCIe/CXL controller150is to leverage the performance of the PCIe network. For example, for PCIe, transmitting data using smaller (fragmented) transactions that have a data size of 64 B achieves a data transfer rate of about 48 GB/s, while transmitting data using a single large transaction that has a data size of 4 KB achieves a data transfer rate of about 60 GB/s, representing an increase in performance of about 25%. Above a data size of about 256 B, however, the data transfer rate remains essentially the same. For purposes of explanation, the size of the header is considered to be negligible. FIG.2Edepicts PCIe write utilization graph400that presents measured write bandwidth (GB/s) vs. PCIe TLP payload size (Bytes). Rather than reassemble the smaller AXI write requests320iwith a small data size (e.g., 64 B) into a single PCIe TLP packet310with a large data size (e.g., 4 KB), which requires large-sized buffers, embodiments of the present disclosure advantageously divide or fragment the original write request into sequences of smaller AXI write requests3201, . . . ,320nthat each satisfy the AMBA CHI protocol. The sequences of smaller AXI write requests3201, . . . ,320nare transmitted to HN-I204and then to destination PCIe/CXL controller150, which reassembles each sequence of smaller AXI write requests3201, . . . ,320ninto an intermediate (sized) PCIe TLP packet310with an intermediate data size (e.g., 256 B). Destination PCIe/CXL controller150then transmits the intermediate PCIe TLP packets310to destination PCIe device152(endpoint B) over the different PCIe network. Assembling the sequences of smaller AXI write requests into intermediate PCIe TLP packets advantageously maximizes efficiency by, inter alia, greatly reducing the buffer size required for reassembly (e.g., 256 B vs. 4 KB) and greatly reducing latency due to reassembly. For example, if an original write request with a 4 KB data size was divided or fragmented into sixteen (16) sequences of four (4) smaller AXI write requests, each with a 64 B data size, reassembly of each sequence of 4 smaller AXI write requests into an intermediate write request with a 256 B data size advantageously provides the maximum data transfer rate of about 60 GB/s with about 1/16th of the buffer size (i.e., 256 B/4 KB=0.0625), reduced latency, etc. FIG.3depicts a protocol flow500for SoC10depicted inFIGS.1B and1C, in accordance with an embodiment of the present disclosure. Protocol flow500illustrates a write stream for PCIe peer-to-peer traffic flowing between PCIe endpoint A on source PCIe device132and PCIe endpoint B on destination PCIe device152. 
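Before walking through the protocol flow of FIG. 3, the sizes quoted above can be checked with simple arithmetic. The short program below is only a worked example of those figures (64 fragments, 16 sequences of 4, and a 256 B reassembly buffer in place of a 4 KB one); the variable names are illustrative.

#include <cstdio>

int main() {
    const int write_size  = 4096;   // original write request: 4 KB
    const int frag_size   = 64;     // smaller AXI write request data size
    const int interm_size = 256;    // intermediate PCIe TLP packet data size

    const int fragments          = write_size / frag_size;         // 64
    const int frags_per_sequence = interm_size / frag_size;        // 4
    const int sequences          = fragments / frags_per_sequence; // 16
    const double buffer_ratio    = static_cast<double>(interm_size) / write_size; // 0.0625

    std::printf("%d fragments, %d per sequence, %d sequences, buffer ratio %.4f\n",
                fragments, frags_per_sequence, sequences, buffer_ratio);
    return 0;
}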
The PCIe peer-to-peer traffic flows between source PCIe device132and source PCIe/CXL controller130through a PCIe connection, between source PCIe/CXL controller130and RN-I232through an AXI connection, between RN-I232and HN-I204through an AMBA CHI connection, between HN-I204and destination PCIe/CXL controller150through an AXI connection, and between destination PCIe/CXL controller150and destination PCIe device152through a PCIe connection. The protocol nodes are positioned along the horizontal axis, and time is indicated vertically, from top to bottom. The write requests are represented by arrows proceeding to the right, and the write responses are represented by arrows proceeding to the left. Source PCIe device132(endpoint A) generates a PCIe TLP packet310that includes header312and data payload314with a write request that includes the write address and the write data, and then transmits the PCIe TLP packet310to source PCIe/CXL controller130over a PCIe network. Source PCIe/CXL controller130converts the PCIe TLP packet310to an AXI write request320that is transmitted to interconnect100, over the write address (AW) and write data (W) channels of the AXI connection, for processing by RN-I232. The AXI write request320includes AW channel information identified as AW0, and W channel information identified as W0; the subscript "0" indicates that this AXI write request originated at source PCIe device132(endpoint A). AW0includes, inter alia, the AWAddr0signal (i.e., the write address) and the AWID signal (set to 0), and W0includes, inter alia, the WData0signal (i.e., the write data) and the WUser0signal (set to 0). In this embodiment, the size of the write data is 4 KB, the amount of write data that can be transmitted over interconnect100in a single AXI write request is 64 B, and the size of the write data for each intermediate write request is 256 B. In other words, the data size for each smaller AXI write request (i.e., 64 B) is smaller than the data size of the write request (i.e., 4 KB), and the data size for each intermediate write request (i.e., 256 B) is smaller than the data size of the write request and larger than the data size of each smaller AXI write request. RN-I232receives and divides the AXI write request (AW0W0) into 64 smaller AXI write requests, identified as AW1W1, . . . , AW64W64, and arranges them into 16 sequences, each sequence having 4 smaller AXI write requests. The AWID signal in each of the smaller AXI write requests is set to 0 to indicate that all of the smaller AXI write requests are derived from the AXI write request that originated at source PCIe device132(endpoint A). As discussed above, after the first smaller AXI write request (AW1W1) is generated based on the AXI write request (AW0W0), each subsequent smaller AXI write request has a write address that is advanced by 64 bytes, and a successive 64 B of write data. In many embodiments, the sum of the data sizes of the smaller AXI write requests in the sequences of smaller AXI write requests equals the AXI write request data size. In other words, all of the write data from the AXI write request (AW0W0) is divided among the sequences of smaller AXI write requests. In certain embodiments, each sequence of smaller AXI write requests includes the same number of smaller AXI write requests, each smaller AXI write request has the same data size, and each intermediate write request has the same data size. In other embodiments, at least one intermediate write request may have a different data size than the others. 
The first sequence includes smaller AXI write requests AW1W1, AW2W2, AW3W3and AW4W4. The WUser signal identifies the last smaller AXI request in the sequence and, as such, may be known as the last identifier. In this embodiment, the WUser signal is set to 1 for the first three smaller AXI write requests, i.e., AW1W1, AW2W2and AW3W3, and set to 0 for the last smaller AXI write request, i.e., AW4W4. Other last identifier values are also supported. The second sequence includes smaller AXI write requests AW5W5, AW6W6, AW7W7and AW8W8(not depicted for clarity). The WUser signal is set to 1 for the first three smaller AXI write requests, i.e., AW5W5, AW6W6and AW7W7, and set to 0 for the last smaller AXI write request, i.e., AW8W8. And so on for the next 13 sequences of smaller AXI write requests. The last (16th) sequence includes smaller AXI write requests AW61W61, AW62W62, AW63W63and AW64W64. The WUser signal is set to 1 for the first three smaller AXI write requests, i.e., AW61W61, AW62W62and AW63W63, and set to 0 for the last smaller AXI write request, i.e., AW64W64. RN-I232then transmits the sequences of smaller AXI write requests AW1W1, . . . , AW64W64across interconnect100to HN-I204. HN-I204transmits the sequences of smaller AXI write requests AW1W1, . . . , AW64W64, over the AW and W channels of the AXI connection, to destination PCIe/CXL controller150. Destination PCIe/CXL controller150assembles each sequence of smaller AXI write requests into an intermediate (sized) PCIe TLP packet, with an intermediate data size of 256 B, based on the WUser signal. For example, destination PCIe/CXL controller150assembles the first sequence of smaller AXI write requests, i.e., AW1W1, AW2W2, AW3W3and AW4W4, into a first intermediate PCIe TLP packet1based on the WUser signal and the AWID signal. Destination PCIe/CXL controller150then transmits the intermediate PCIe TLP packets, i.e., intermediate PCIe TLP packet1, . . . , intermediate PCIe TLP packet16, to destination PCIe device152(endpoint B) over the PCIe network. Destination PCIe device152(endpoint B) provides a response to each intermediate PCIe TLP packet that is received, and the responses are returned through the transmission path to source PCIe device132(endpoint A), as generally depicted inFIG.3. For posted write requests, however, destination PCIe device152(endpoint B) does not provide a response. In other embodiments, rather than transmit the sequences of smaller AXI write requests to destination PCIe/CXL controller150, HN-I204assembles each sequence of smaller AXI write requests into an intermediate (sized) PCIe TLP packet, with an intermediate data size of 256 B, based on the WUser signal. For example, HN-I204assembles the first sequence of smaller AXI write requests, i.e., AW1W1, AW2W2, AW3W3and AW4W4, into a first intermediate PCIe TLP packet1based on the WUser signal and the AWID signal, then transmits the intermediate PCIe TLP packets, i.e., intermediate PCIe TLP packet1, . . . , intermediate PCIe TLP packet16, to destination PCIe/CXL controller150, which forwards them to destination PCIe device152(endpoint B) over the PCIe network. In many embodiments, if HN-I204receives an interleaved AXI write request, i.e., an AXI write request with an AWID signal different than 0, such as, for example, 1, 2, etc., before the last smaller AXI write request from the last sequence is received, HN-I204simply transmits the interleaved AXI write request to destination PCIe/CXL controller150. 
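Before turning to how interleaved requests are kept out of the reassembled packets, the WUser tagging described above can be sketched as follows. The code is a minimal illustration, assuming the convention stated in this embodiment (WUser set to 1 for the first fragments of a sequence and 0 for the last); the type and function names are hypothetical.

#include <cstdint>
#include <vector>

struct TaggedFragment {
    uint64_t aw_addr;   // write address, advanced 64 B per fragment
    uint8_t  aw_id;     // 0: derived from the request that originated at endpoint A
    uint8_t  w_user;    // last identifier: 1 = more fragments follow in this sequence, 0 = last
};

std::vector<TaggedFragment> tag_sequences(uint64_t base_addr, int num_frags,
                                          int frags_per_seq = 4, int frag_size = 64) {
    std::vector<TaggedFragment> out;
    for (int i = 0; i < num_frags; ++i) {
        const bool last_in_seq = ((i + 1) % frags_per_seq) == 0;
        out.push_back({base_addr + static_cast<uint64_t>(i) * frag_size, 0,
                       static_cast<uint8_t>(last_in_seq ? 0 : 1)});
    }
    return out;
}

int main() {
    auto frags = tag_sequences(0x0, 64);  // 64 fragments arranged as 16 sequences of 4
    return frags[3].w_user;               // AW4W4 closes the first sequence: WUser = 0
}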
Because destination PCIe/CXL controller150assembles each sequence into an intermediate PCIe TLP packet based on the WUser signal and the AWID signal, destination PCIe/CXL controller150will not include the interleaved AXI write request in any intermediate PCIe TLP packet that is based on an AWID signal that is 0. In other embodiments, the interleaved AXI write request may be buffered until the last intermediate write request has been transmitted. In many embodiments, when interconnect100provides a common ID to fragments of a transaction, a first flag may be used to identify transactions originating from another source PCIe/CXL controller, and a second flag may be used to identify the last beat in the original write transaction. In other embodiments, as an alternative to the first flag, transactions from another controller may be identified by matching the subset of the transaction ID used by interconnect100to route responses back to that controller. This may require that interconnect100use AXI IDs for this purpose, as well as knowledge of the IDs used, programmable logic or software to match these IDs, and a mechanism to configure the programmable logic or software. As an alternative to the second flag, the transactions may be monitored for contiguous addresses and writes may be merged when observed, which advantageously works for all traffic except, perhaps, interleaved transactions. In further embodiments, when the destination HSS controller has no free buffers when a smaller write request arrives, which may occur due to interleaving, for example, the data size of the intermediate write request may be dynamically reduced. For example, if the first smaller write request of a new intermediate write request, or a new smaller write request for an intermediate write request that is already being assembled, arrives and there is no buffer space, the destination HSS controller may send out the intermediate write request(s) currently being assembled. This ensures that assembly does not create latency outliers or system congestion due to backpressure. In many embodiments, buffers may be allocated on receipt of the first smaller write request and overflow may be handled by either reporting an error, or by passing through the smaller write requests that cannot be assembled. The former is not transparent, and the latter may lead to a sudden decrease in performance, which limits aggressive minimization of buffer space. FIG.4depicts a flow diagram600representing functionality for transferring data across an interconnect, in accordance with embodiments of the present disclosure. The functionality at610,620and630is performed at a request node of an interconnect, and at least a portion of the functionality at640is performed at a home node of the interconnect. At610, a write request from a source is received from a source HSS controller. The write request has a data size. At620, the write request is divided into sequences of smaller write requests. Each smaller write request has a last identifier and a data size. At630, the sequences of smaller write requests are sent to the home node. At640, the sequences of smaller write requests are sent to the destination HSS controller for assembly into intermediate write requests that are transmitted to a destination. Each sequence of smaller write requests is assembled into an intermediate write request based on the last identifier, and each intermediate write request has a data size. The embodiments described above and summarized below are combinable. 
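A rough sketch of the destination-side behavior described above may also help: fragments whose AWID matches the expected stream are appended to the intermediate packet until the last identifier (WUser equal to 0) is seen, interleaved requests with a different AWID are forwarded untouched, and a partially assembled packet is sent out early if buffer space runs out. The class and member names below are hypothetical, and the sketch ignores addresses and responses.

#include <cstddef>
#include <cstdint>
#include <vector>

struct Frag { uint8_t aw_id; uint8_t w_user; std::vector<uint8_t> data; };
struct TlpPacket { std::vector<uint8_t> payload; };

class Reassembler {
public:
    explicit Reassembler(std::size_t buffer_limit) : limit_(buffer_limit) {}

    // Returns the packet(s) to transmit as a result of receiving one fragment.
    std::vector<TlpPacket> on_fragment(const Frag& f) {
        std::vector<TlpPacket> out;
        if (f.aw_id != 0) {                      // interleaved request from another stream:
            out.push_back({f.data});             // forward it on its own, untouched
            return out;
        }
        if (!buf_.empty() && buf_.size() + f.data.size() > limit_) {
            out.push_back({buf_});               // no buffer space: flush what is assembled so far
            buf_.clear();
        }
        buf_.insert(buf_.end(), f.data.begin(), f.data.end());
        if (f.w_user == 0) {                     // last identifier seen: sequence complete
            out.push_back({buf_});
            buf_.clear();
        }
        return out;
    }

private:
    std::size_t limit_;
    std::vector<uint8_t> buf_;
};

int main() {
    Reassembler r(256);                                     // one 256 B intermediate buffer
    Frag a{0, 1, std::vector<uint8_t>(64)};                 // non-last fragment (WUser = 1)
    Frag b{0, 0, std::vector<uint8_t>(64)};                 // last fragment of the sequence (WUser = 0)
    auto first  = r.on_fragment(a);                         // buffered, nothing transmitted yet
    auto second = r.on_fragment(b);                         // intermediate packet is emitted
    return static_cast<int>(first.size() + second.size());  // 0 + 1
}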
In one embodiment, a system includes a destination high speed serial (HSS) controller including a processor configured to receive sequences of smaller write requests from an interconnect, where the sequences of smaller write requests are generated from a larger write request from a source, the larger write request has a data size, and each smaller write request has a last identifier and a data size. For each sequence of smaller write requests, the processor is configured to assemble, based on the last identifier, the smaller write requests into an intermediate write request having a data size, and send, to a destination, the intermediate write request. In another embodiment of the system, each smaller write request data size is smaller than the write request data size; and each intermediate write request data size is smaller than the write request data size and larger than the smaller write request data size. In another embodiment of the system, the last identifier identifies a last request in each sequence of smaller write requests. In another embodiment of the system, generated includes when a smaller write request is not the last request in the sequence, setting the last identifier to a first value; and when the smaller write request is the last request in the sequence, setting the last identifier to a second value. In another embodiment of the system, assemble includes, for each smaller write request in the sequence, add the smaller write request to the intermediate write request; and when the last identifier is the second value, complete the assemble. In another embodiment of the system, the source is a peripheral component interconnect express (PCIe) endpoint coupled to a source PCIe controller, the destination HSS controller is a PCIe controller, and the destination is a PCIe endpoint. In another embodiment of the system, the write request data size is 4 KB, the intermediate request data size is 256 B and the smaller write request data size is 64 B. In another embodiment of the system, a sum of the data sizes of the smaller write requests in the sequences of smaller write requests equals the write request data size. In another embodiment of the system, each sequence of smaller write requests includes a same number of smaller write requests, each smaller write request has a same data size, and each intermediate write request has a same data size. In one embodiment, a computer-based method for transferring data across an interconnect includes, at a request node, receiving, from a source high speed serial (HSS) controller, a write request from a source, the write request having a data size, dividing the write request into sequences of smaller write requests, each smaller write request having a last identifier and a data size, and sending, to a home node, the sequences of smaller write requests; and, at the home node, sending, to a destination HSS controller, the sequences of smaller write requests for assembly into intermediate write requests that are transmitted to a destination, where each sequence of smaller write requests is assembled into an intermediate write request based on the last identifier, and each intermediate write request has a data size. In another embodiment of the computer-based method, each smaller write request data size is smaller than the write request data size; and each intermediate write request data size is smaller than the write request data size and larger than the smaller write request data size. 
In another embodiment of the computer-based method, the last identifier identifies a last request in each sequence of smaller write requests. In another embodiment of the computer-based method, dividing includes when a smaller write request is not the last request in the sequence, setting the last identifier to a first value; and when the smaller write request is the last request in the sequence, setting the last identifier to a second value. In another embodiment of the computer-based method, assembly includes, for each smaller write request in the sequence, adding the smaller write request to the intermediate write request; and when the last identifier is the second value, completing the assembling. In another embodiment of the computer-based method, the source is a peripheral component interconnect express (PCIe) endpoint, the source HSS controller is a PCIe controller, the destination HSS controller is a PCIe controller, and the destination is a PCIe endpoint. In another embodiment of the computer-based method, the write request data size is 4 KB, the intermediate request data size is 256 B and the smaller write request data size is 64 B. In another embodiment of the computer-based method, a sum of the data sizes of the smaller write requests in the sequences of smaller write requests equals the write request data size. In another embodiment of the computer-based method, each sequence of smaller write requests includes a same number of smaller write requests, each smaller write request has a same data size, and each intermediate write request has a same data size. In another embodiment of the computer-based method, at least one intermediate write request has a different data size than at least one other intermediate write request. In another embodiment, a computer-based method for transferring data across an interconnect includes at a request node, receiving, from a source high speed serial (HSS) controller, a write request from a source, the write request having a data size, dividing the write request into sequences of smaller write requests, each smaller write request having a last identifier and a data size, and sending, to a home node, the sequences of smaller write requests; and, at the home node, for each sequence of smaller write requests, assembling, based on the last identifier, the smaller write requests into an intermediate write request having a data size, and sending, to a destination HSS controller, the intermediate write request. While implementations of the disclosure are susceptible to embodiment in many different forms, there is shown in the drawings and will herein be described in detail specific embodiments, with the understanding that the present disclosure is to be considered as an example of the principles of the disclosure and not intended to limit the disclosure to the specific embodiments shown and described. In the description above, like reference numerals may be used to describe the same, similar or corresponding parts in the several views of the drawings. In this document, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having,” or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “comprises . . . 
a” does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises the element. Reference throughout this document to “one embodiment,” “certain embodiments,” “many embodiment,” “an embodiment,” “implementation(s),” “aspect(s),” or similar terms means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of such phrases or in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments without limitation. The term “or” as used herein is to be interpreted as an inclusive or meaning any one or any combination. Therefore, “A, B or C” means “any of the following: A; B; C; A and B; A and C; B and C; A, B and C.” An exception to this definition will occur only when a combination of elements, functions, steps or acts are in some way inherently mutually exclusive. Also, grammatical conjunctions are intended to express any and all disjunctive and conjunctive combinations of conjoined clauses, sentences, words, and the like, unless otherwise stated or clear from the context. Thus, the term “or” should generally be understood to mean “and/or” and so forth. References to items in the singular should be understood to include items in the plural, and vice versa, unless explicitly stated otherwise or clear from the text. Recitation of ranges of values herein are not intended to be limiting, referring instead individually to any and all values falling within the range, unless otherwise indicated, and each separate value within such a range is incorporated into the specification as if it were individually recited herein. The words “about,” “approximately,” or the like, when accompanying a numerical value, are to be construed as indicating a deviation as would be appreciated by one of ordinary skill in the art to operate satisfactorily for an intended purpose. Ranges of values and/or numeric values are provided herein as examples only, and do not constitute a limitation on the scope of the described embodiments. The use of any and all examples, or exemplary language (“e.g.,” “such as,” “for example,” or the like) provided herein, is intended merely to better illuminate the embodiments and does not pose a limitation on the scope of the embodiments. No language in the specification should be construed as indicating any unclaimed element as essential to the practice of the embodiments. For simplicity and clarity of illustration, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. Numerous details are set forth to provide an understanding of the embodiments described herein. The embodiments may be practiced without these details. In other instances, well-known methods, procedures, and components have not been described in detail to avoid obscuring the embodiments described. The description is not to be considered as limited to the scope of the embodiments described herein. The many features and advantages of the disclosure are apparent from the detailed specification, and, thus, it is intended by the appended claims to cover all such features and advantages of the disclosure which fall within the scope of the disclosure. 
Further, since numerous modifications and variations will readily occur to those skilled in the art, it is not desired to limit the disclosure to the exact construction and operation illustrated and described, and, accordingly, all suitable modifications and equivalents may be resorted to that fall within the scope of the disclosure.
45,638
11860812
DETAILED DESCRIPTION In the following description, numerous specific details are set forth, such as examples of specific types of processors and system configurations, specific hardware structures, specific architectural and micro architectural details, specific register configurations, specific instruction types, specific system components, specific measurements/heights, specific processor pipeline stages, and operation, etc. in order to provide a thorough understanding of the present disclosure. It will be apparent, however, to one skilled in the art that these specific details need not be employed to practice the present disclosure. In other instances, well known components or methods, such as specific and alternative processor architectures, specific logic circuits/code for described algorithms, specific firmware code, specific interconnect operation, specific logic configurations, specific manufacturing techniques and materials, specific compiler implementations, specific expression of algorithms in code, specific power down and gating techniques/logic and other specific operational details of computer systems have not been described in detail in order to avoid unnecessarily obscuring the present disclosure. Although the following embodiments may be described with reference to energy conservation and energy efficiency in specific integrated circuits, such as in computing platforms or microprocessors, other embodiments are applicable to other types of integrated circuits and logic devices. Similar techniques and teachings of embodiments described herein may be applied to other types of circuits or semiconductor devices that may also benefit from better energy efficiency and energy conservation. For example, the disclosed embodiments are not limited to desktop computer systems or Ultrabooks™, and may also be used in other devices, such as handheld devices, tablets, other thin notebooks, systems on a chip (SOC) devices, and embedded applications. Some examples of handheld devices include cellular phones, Internet protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications typically include a microcontroller, a digital signal processor (DSP), a system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform the functions and operations taught below. Moreover, the apparatuses, methods, and systems described herein are not limited to physical computing devices, but may also relate to software optimizations for energy conservation and efficiency. The following detailed description refers to the accompanying drawings. The same reference numbers may be used in different drawings to identify the same or similar elements. In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular structures, architectures, interfaces, techniques, etc. in order to provide a thorough understanding of the various aspects of various embodiments. However, it will be apparent to those skilled in the art having the benefit of the present disclosure that the various aspects of the various embodiments may be practiced in other examples that depart from these specific details. In certain instances, descriptions of well-known devices, circuits, and methods are omitted so as not to obscure the description of the various embodiments with unnecessary detail. 
For the purposes of the present document, the phrase “A or B” means (A), (B), or (A and B). Referring toFIG.1, an embodiment of a block diagram for a computing system including a multicore processor is depicted. Processor100includes any processor or processing device, such as a microprocessor, an embedded processor, a digital signal processor (DSP), a network processor, a handheld processor, an application processor, a co-processor, a system on a chip (SOC), or other device to execute code. Processor100, in one embodiment, includes at least two cores—core101and102, which may include asymmetric cores or symmetric cores (the illustrated embodiment). However, processor100may include any number of processing elements that may be symmetric or asymmetric. In some embodiments, a processing element refers to hardware or logic to support a software thread. Examples of hardware processing elements include: a thread unit, a thread slot, a thread, a process unit, a context, a context unit, a logical processor, a hardware thread, a core, and/or any other element, which is capable of holding a state for a processor, such as an execution state or architectural state. In other words, a processing element, in one embodiment, refers to any hardware capable of being independently associated with code, such as a software thread, operating system, application, or other code. A physical processor (or processor socket) typically refers to an integrated circuit, which potentially includes any number of other processing elements, such as cores or hardware threads. A core often refers to logic located on an integrated circuit capable of maintaining an independent architectural state, wherein each independently maintained architectural state is associated with at least some dedicated execution resources. In contrast to cores, a hardware thread typically refers to any logic located on an integrated circuit capable of maintaining an independent architectural state, wherein the independently maintained architectural states share access to execution resources. As can be seen, when certain resources are shared and others are dedicated to an architectural state, the line between the nomenclature of a hardware thread and core overlaps. Yet often, a core and a hardware thread are viewed by an operating system as individual logical processors, where the operating system is able to individually schedule operations on each logical processor. Physical processor100, as illustrated inFIG.1, includes two cores—core101and102. Here, core101and102are considered symmetric cores, i.e. cores with the same configurations, functional units, and/or logic. In another embodiment, core101includes an out-of-order processor core, while core102includes an in-order processor core. However, cores101and102may be individually selected from any type of core, such as a native core, a software managed core, a core adapted to execute a native Instruction Set Architecture (ISA), a core adapted to execute a translated Instruction Set Architecture (ISA), a co-designed core, or other known core. In a heterogeneous core environment (i.e. asymmetric cores), some form of translation, such a binary translation, may be utilized to schedule or execute code on one or both cores. Yet to further the discussion, the functional units illustrated in core101are described in further detail below, as the units in core102operate in a similar manner in the depicted embodiment. 
As depicted, core101includes two hardware threads101aand101b, which may also be referred to as hardware thread slots101aand101b. Therefore, software entities, such as an operating system, in one embodiment potentially view processor100as four separate processors, i.e., four logical processors or processing elements capable of executing four software threads concurrently. As alluded to above, a first thread is associated with architecture state registers101a, a second thread is associated with architecture state registers101b, a third thread may be associated with architecture state registers102a, and a fourth thread may be associated with architecture state registers102b. Here, each of the architecture state registers (101a,101b,102a, and102b) may be referred to as processing elements, thread slots, or thread units, as described above. As illustrated, architecture state registers101aare replicated in architecture state registers101b, so individual architecture states/contexts are capable of being stored for logical processor101aand logical processor101b. In core101, other smaller resources, such as instruction pointers and renaming logic in allocator and renamer block130may also be replicated for threads101aand101b. Some resources, such as re-order buffers in reorder/retirement unit135, ILTB120, load/store buffers, and queues may be shared through partitioning. Other resources, such as general purpose internal registers, page-table base register(s), low-level data-cache and data-TLB115, execution unit(s)140, and portions of out-of-order unit135are potentially fully shared. Processor100often includes other resources, which may be fully shared, shared through partitioning, or dedicated by/to processing elements. InFIG.1, an embodiment of a purely exemplary processor with illustrative logical units/resources of a processor is illustrated. Note that a processor may include, or omit, any of these functional units, as well as include any other known functional units, logic, or firmware not depicted. As illustrated, core101includes a simplified, representative out-of-order (OOO) processor core. But an in-order processor may be utilized in different embodiments. The OOO core includes a branch target buffer120to predict branches to be executed/taken and an instruction-translation buffer (I-TLB)120to store address translation entries for instructions. Core101further includes decode module125coupled to fetch unit120to decode fetched elements. Fetch logic, in one embodiment, includes individual sequencers associated with thread slots101a,101b, respectively. Usually core101is associated with a first ISA, which defines/specifies instructions executable on processor100. Often machine code instructions that are part of the first ISA include a portion of the instruction (referred to as an opcode), which references/specifies an instruction or operation to be performed. Decode logic125includes circuitry that recognizes these instructions from their opcodes and passes the decoded instructions on in the pipeline for processing as defined by the first ISA. For example, as discussed in more detail below decoders125, in one embodiment, include logic designed or adapted to recognize specific instructions, such as transactional instruction. As a result of the recognition by decoders125, the architecture or core101takes specific, predefined actions to perform tasks associated with the appropriate instruction. 
It is important to note that any of the tasks, blocks, operations, and methods described herein may be performed in response to a single or multiple instructions, some of which may be new or old instructions. Note decoders126, in one embodiment, recognize the same ISA (or a subset thereof). Alternatively, in a heterogeneous core environment, decoders126recognize a second ISA (either a subset of the first ISA or a distinct ISA). In one example, allocator and renamer block130includes an allocator to reserve resources, such as register files to store instruction processing results. However, threads101aand101bare potentially capable of out-of-order execution, where allocator and renamer block130also reserves other resources, such as reorder buffers to track instruction results. Unit130may also include a register renamer to rename program/instruction reference registers to other registers internal to processor100. Reorder/retirement unit135includes components, such as the reorder buffers mentioned above, load buffers, and store buffers, to support out-of-order execution and later in-order retirement of instructions executed out-of-order. Scheduler and execution unit(s) block140, in one embodiment, includes a scheduler unit to schedule instructions/operation on execution units. For example, a floating point instruction is scheduled on a port of an execution unit that has an available floating point execution unit. Register files associated with the execution units are also included to store instruction processing results. Exemplary execution units include a floating point execution unit, an integer execution unit, a jump execution unit, a load execution unit, a store execution unit, and other known execution units. Lower level data cache and data translation buffer (D-TLB)150are coupled to execution unit(s)140. The data cache is to store recently used/operated on elements, such as data operands, which are potentially held in memory coherency states. The D-TLB is to store recent virtual/linear to physical address translations. As a specific example, a processor may include a page table structure to break physical memory into a plurality of virtual pages. Here, cores101and102share access to higher-level or further-out cache, such as a second level cache associated with on-chip interface110. Note that higher-level or further-out refers to cache levels increasing or getting further away from the execution unit(s). In one embodiment, higher-level cache is a last-level data cache—last cache in the memory hierarchy on processor100—such as a second or third level data cache. However, higher level cache is not so limited, as it may be associated with or include an instruction cache. A trace cache—a type of instruction cache—instead may be coupled after decoder125to store recently decoded traces. Here, an instruction potentially refers to a macro-instruction (i.e. a general instruction recognized by the decoders), which may decode into a number of micro-instructions (micro-operations). In the depicted configuration, processor100also includes on-chip interface module110. Historically, a memory controller, which is described in more detail below, has been included in a computing system external to processor100. 
In this scenario, on-chip interface110is to communicate with devices external to processor100, such as system memory175, a chipset (often including a memory controller hub to connect to memory175and an I/O controller hub to connect peripheral devices), a memory controller hub, a northbridge, or other integrated circuit. And in this scenario, bus105may include any known interconnect, such as multi-drop bus, a point-to-point interconnect, a serial interconnect, a parallel bus, a coherent (e.g. cache coherent) bus, a layered protocol architecture, a differential bus, and a GTL bus. Memory175may be dedicated to processor100or shared with other devices in a system. Common examples of types of memory175include DRAM, SRAM, nonvolatile memory (NV memory), and other known storage devices. Device180may include a graphic accelerator, processor, or card coupled to a memory controller hub, data storage coupled to an I/O controller hub, a wireless transceiver, a flash device, an audio controller, a network controller, or other known device. Recently however, as more logic and devices are being integrated on a single die, such as SOC, each of these devices may be incorporated on processor100. For example in one embodiment, a memory controller hub is on the same package and/or die with processor100. Here, a portion of the core (an on-core portion)110includes one or more controller(s) for interfacing with other devices such as memory175or a graphics device180. The configuration including an interconnect and controllers for interfacing with such devices is often referred to as an on-core (or un-core) configuration. As an example, on-chip interface110includes a ring interconnect for on-chip communication and a high-speed serial point-to-point link105for off-chip communication. Yet, in the SOC environment, even more devices, such as the network interface, co-processors, memory175, graphics processor180, and any other known computer devices/interface may be integrated on a single die or integrated circuit to provide small form factor with high functionality and low power consumption. In one embodiment, processor100is capable of executing a compiler, optimization, and/or translator code177to compile, translate, and/or optimize application code176to support the apparatus and methods described herein or to interface therewith. A compiler often includes a program or set of programs to translate source text/code into target text/code. Usually, compilation of program/application code with a compiler is done in multiple phases and passes to transform high-level programming language code into low-level machine or assembly language code. Yet, single pass compilers may still be utilized for simple compilation. A compiler may utilize any known compilation techniques and perform any known compiler operations, such as lexical analysis, preprocessing, parsing, semantic analysis, code generation, code transformation, and code optimization. Larger compilers often include multiple phases, but most often these phases are included within two general phases: (1) a front-end, i.e. generally where syntactic processing, semantic processing, and some transformation/optimization may take place, and (2) a back-end, i.e. generally where analysis, transformations, optimizations, and code generation takes place. Some compilers refer to a middle, which illustrates the blurring of delineation between a front-end and back end of a compiler. 
As a result, reference to insertion, association, generation, or other operation of a compiler may take place in any of the aforementioned phases or passes, as well as any other known phases or passes of a compiler. As an illustrative example, a compiler potentially inserts operations, calls, functions, etc. in one or more phases of compilation, such as insertion of calls/operations in a front-end phase of compilation and then transformation of the calls/operations into lower-level code during a transformation phase. Note that during dynamic compilation, compiler code or dynamic optimization code may insert such operations/calls, as well as optimize the code for execution during runtime. As a specific illustrative example, binary code (already compiled code) may be dynamically optimized during runtime. Here, the program code may include the dynamic optimization code, the binary code, or a combination thereof. Similar to a compiler, a translator, such as a binary translator, translates code either statically or dynamically to optimize and/or translate code. Therefore, reference to execution of code, application code, program code, or other software environment may refer to: (1) execution of a compiler program(s), optimization code optimizer, or translator either dynamically or statically, to compile program code, to maintain software structures, to perform other operations, to optimize code, or to translate code; (2) execution of main program code including operations/calls, such as application code that has been optimized/compiled; (3) execution of other program code, such as libraries, associated with the main program code to maintain software structures, to perform other software related operations, or to optimize code; or (4) a combination thereof. Embodiments herein relate to systems and methods to reduce an amount of time to link train interconnects, such as Peripheral Component Interconnect Express (PCIe). During link training, the transmitter(s) and receiver(s) can undergo equalization. The equalization process runs uninterrupted for about 100 ms for each Data Rate above Gen 2. Hence, equalization is typically performed prior to establishing the link, in order to avoid transactions timing out due to the link being unavailable during equalization. Since equalization is unique for each data rate, equalization is performed sequentially for each of the data rates above Gen 2, as shown inFIG.2. FIG.2is a process flow diagram200for link training of links between two devices. At the outset, the link is trained to L0 at Gen 1 data rates (2.5 GT/s) (202). The Gen 3 and above data rates are advertised by all (pseudo) ports. The link enters recovery in Gen 1 and changes speed to Gen 3 (8 GT/s) for link training (204). The link can perform equalization and trains to L0 at Gen 3. Gen 4 and 5 data rates are not advertised. The equalization portion of the Gen 3 link training can take about 100 ms. The link enters recovery in Gen 3 and changes speed to Gen 4 (16 GT/s) for link training (206). The link can perform equalization, which can take another 100 ms. The link trains to L0 at Gen 4, while the Gen 5 data rate is not advertised. The link enters recovery in Gen 4 and changes speed to Gen 5 (32 GT/s) for link training (208). The link performs equalization at Gen 5 speed, which can take another 100 ms. After that, the link trains to L0 at Gen 5. Flow control initialization can be completed at Gen 5 (210), which results in the link being fully operational at Gen 5 (212). 
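A quick worked example of the latency this sequential process adds is shown below, using the approximate 100 ms per data rate figure quoted above for the three data rates beyond Gen 2; the program is illustrative only.

#include <cstdio>

int main() {
    const double eq_ms_per_rate = 100.0;  // approximate equalization time per data rate
    const int rates_above_gen2  = 3;      // Gen 3 (8 GT/s), Gen 4 (16 GT/s), Gen 5 (32 GT/s)

    const double sequential_eq_ms = eq_ms_per_rate * rates_above_gen2;  // ~300 ms total
    std::printf("Sequential equalization adds roughly %.0f ms to link training\n",
                sequential_eq_ms);
    return 0;
}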
The process flow200can occur each time the devices are initialized (e.g., at the factory, and also upon power-up sequences). The latency that results from the sequential equalization process for each data rate adds to the link training time, during which there are no transactions in the Link. The latency effect can be further pronounced for applications that include resources that may be moved around multiple server nodes by invoking hot-plug flows between the shared resources and the compute nodes. This disclosure addresses the latency for link training by performing equalization once and storing the resulting equalization parameter values in nonvolatile storage. The stored equalization parameter values can be retrieved across multiple boot/power cycles. The equalization and other TX/RX settings can be saved and used in a subsequent link reset or platform reset/power cycle to bypass equalization or to reduce latency associated with equalization during link training. Advantages of the present disclosure are readily apparent to those of skill in the art. Among the advantages is a reduction in the link training time on the order of 300-400 ms by loading settings equivalent to factory settings. Aspects of this disclosure include performing equalization at each of the data rates above Gen 2 as described inFIG.2above, and storing the resulting equalization settings to avoid long latency equalization from that point on. If the devices in the link have not changed, the stored equalization parameter values can be used to link train the links between connected devices in subsequent initialization procedures. When the link(s) are initialized, the link undergoes an equalization procedure. System software, firmware, and/or hardware can determine if the components of the link can support local save/restore of equalization parameters (or this capability can be made mandatory). The system software can instruct the components to store the settings locally. If not, the system software can read out the registers and store them in system storage. On subsequent link initialization, if the same components are present in the link, the system software can set up the DSP to indicate to the other components to skip equalization and either to restore the settings from their local storage or to load the registers through configuration cycle accesses for the DSP/USP. For retimers, the system can access the configuration space indirectly through the DSP registers. This disclosure introduces an architected ordered set to access the retimer registers (for both read and write operations) using 8b/10b encoding at the lower data rates. Note that the retimer configuration registers can also be read by the control SKP OS already defined in the specification at Gen 4 or Gen 5 Data Rates. Once all the components have loaded their equalization and other TX/RX settings, the system software can indicate to the DSP to change the Data Rate to Gen 3 and above. FIG.3Ais a schematic diagram of a system300that includes an upstream port (USP)302connected to a downstream port (DSP)332and two retimers in accordance with embodiments of the present disclosure. The USP302can be part of a processor of a computing device, such as a central processing unit (CPU) or a PCIe-compliant switch. The USP302can be connected to a nonvolatile memory322. Nonvolatile memory322can be a scratch pad memory, flash memory, or other nonvolatile storage. The USP302can include a TX/RX circuit settings control status register (CSR)308. 
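Before detailing the system of FIG. 3A further, the save/restore decision described above can be sketched as follows. The code is a minimal illustration under stated assumptions: the helper names (run_full_equalization, write_csr) and the component identifier are hypothetical stand-ins for the actual equalization procedure and CSR programming.

#include <cstdint>
#include <map>
#include <vector>

using LaneSettings = std::vector<uint32_t>;   // per-lane equalization parameter values
using ComponentId  = uint64_t;                // hypothetical identity of a link component

struct NvStore { std::map<ComponentId, LaneSettings> saved; };  // stands in for nonvolatile memory 322

// Hypothetical helpers: a real controller would run the Gen 3+ equalization procedure
// and program the component's TX/RX circuit settings CSR.
LaneSettings run_full_equalization() { return LaneSettings(16, 0x1F); }
void write_csr(const LaneSettings&) {}

LaneSettings train_link(NvStore& nv, ComponentId id, bool supports_bypass) {
    auto it = nv.saved.find(id);
    if (supports_bypass && it != nv.saved.end()) {
        write_csr(it->second);                   // same component as before: restore and bypass
        return it->second;
    }
    LaneSettings eq = run_full_equalization();   // first boot, or a changed component
    write_csr(eq);
    nv.saved[id] = eq;                           // keep the values for subsequent boot/power cycles
    return eq;
}

int main() {
    NvStore nv;
    const ComponentId dsp = 0x1234;
    train_link(nv, dsp, true);   // first initialization: full equalization, then save
    train_link(nv, dsp, true);   // re-initialization: restore settings, skip long equalization
    return static_cast<int>(nv.saved.size());  // 1
}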
The control status register308can include equalization parameter values as register settings for the USP302, as well as other parameter values that result from link training. The USP302can also include logic to communicate with retimers342and362. For example, the USP302can include a retimer configuration register address and data logic304and a retimer configuration data return logic306. The logic can be implemented in software, hardware, or a combination of software and hardware. The USP302also includes a transmitter TX1310and a receiver RX6312. The transmitter TX1310can be coupled to a receiver at a downstream connected device. InFIGS.3A-C, TX1310is connected by a link382ato a receiver RX1348that is part of a retimer342. It is understood, however, that the USP transmitter TX1310can be directly coupled to a receiver at a downstream port, such as RX3338at DSP332. Likewise, the USP302can include a receiver RX6312that is connected to a downstream connected device. InFIGS.3A-C, RX6312is connected by a link384ato a transmitter TX6350that is part of retimer342. It is understood, however, that the USP receiver RX6312can be directly coupled to a transmitter at a downstream port, such as TX4340at DSP332. The DSP332can include a TX/RX circuit settings CSR334that can include equalization parameter values, as well as other parameter values that result from link training. The DSP332can also include a receiver RX3338connected by a link382cto a retimer transmitter TX3372at retimer362. Likewise, DSP332can also include a transmitter TX4340connected by a link384cto a retimer receiver RX4374at retimer362. The system300is shown to include two retimers connected between the upstream port302and the downstream port332. Though shown as two retimers, it is understood that any number of retimers can be used within the scope of the disclosure, including zero retimers, where the USP302is directly connected to the DSP332. A first retimer342can include a TX/RX circuit settings CSR344that can include equalization parameter values, as well as other parameter values that result from link training. The first retimer342can also include a receiver RX1348connected by a link382ato a USP transmitter TX1310at USP302. RX1348can receive downstream transmissions from USP302. Likewise, first retimer342can also include a transmitter TX2352connected by a link382bto a retimer receiver RX2368at a second retimer362. TX2352can transmit downstream towards a DSP332via retimer362. The first retimer342can also include a receiver RX5354for receiving upstream transmissions from, e.g., the second retimer362across a link384b. The first retimer342can also include a transmitter TX6350for upstream transmissions across a link384ato USP302. A second retimer362can include a TX/RX circuit settings CSR364that can include equalization parameter values, as well as other parameter values that result from link training. The second retimer362can also include a receiver RX2368connected by a link382bto a first retimer transmitter TX2352at first retimer342. RX2368can receive downstream transmissions from first retimer342. Likewise, second retimer362can also include a transmitter TX3372connected by a link382cto a DSP receiver RX3338at DSP332. TX3372can transmit downstream to DSP332. The second retimer362can also include a receiver RX4374for receiving upstream transmissions from, e.g., the DSP332across a link384c. The second retimer362can also include a transmitter TX5370for upstream transmissions across a link384bto first retimer342(e.g., to RX5354). 
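The connection topology of FIG. 3A may be easier to follow restated as data. The listing below only repeats the links named above, with links 382a-c carrying the downstream path and links 384a-c the upstream path; it adds no structure beyond that restatement.

#include <cstdio>

struct Hop { const char* link; const char* tx; const char* rx; };

int main() {
    // Downstream path: USP 302 -> retimer 342 -> retimer 362 -> DSP 332.
    const Hop downstream[] = {
        {"382a", "TX1 (USP 302)",     "RX1 (retimer 342)"},
        {"382b", "TX2 (retimer 342)", "RX2 (retimer 362)"},
        {"382c", "TX3 (retimer 362)", "RX3 (DSP 332)"},
    };
    // Upstream path: DSP 332 -> retimer 362 -> retimer 342 -> USP 302.
    const Hop upstream[] = {
        {"384c", "TX4 (DSP 332)",     "RX4 (retimer 362)"},
        {"384b", "TX5 (retimer 362)", "RX5 (retimer 342)"},
        {"384a", "TX6 (retimer 342)", "RX6 (USP 302)"},
    };
    for (const Hop& h : downstream) std::printf("%s: %s -> %s\n", h.link, h.tx, h.rx);
    for (const Hop& h : upstream)   std::printf("%s: %s -> %s\n", h.link, h.tx, h.rx);
    return 0;
}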
Links382a-cand links384a-ccan be multi-lane links that are compliant with the PCIe protocol. In some embodiments, the first retimer342can include a local nonvolatile memory346for locally storing equalization parameters; the second retimer362can include a local nonvolatile memory366. The local nonvolatile memory can be a scratch pad type of memory that can store equalization parameter values and other link training values for loading during a subsequent initialization of the connected devices. FIG.3Bis a schematic diagram illustrating register read/write access pathways between the upstream port (USP) and the downstream port (DSP) and the two retimers ofFIG.3Ain accordance with embodiments of the present disclosure. In some embodiments, the USP302can be directly connected to the DSP332without using retimers. In such embodiments, the USP302can communicate directly with the DSP332. For example, after the equalization sequence described inFIG.2is performed, the USP302can read the equalization parameters from the TX/RX circuit settings CSR334and store the equalization parameters directly in nonvolatile memory322. Upon re-initialization of the system components, the USP302can retrieve the equalization parameter values for the DSP332from the nonvolatile memory322and write the equalization parameter values for the DSP332directly to the TX/RX circuit settings CSR334. In embodiments that include the use of retimers between the USP302and the DSP332, the USP cannot directly read or write to the CSR within the retimers because the CSR are physical layer entities. Therefore, this disclosure describes a mechanism by which the USP302can communicate indirectly with each retimer using physical layer commands to read and write the equalization parameter values to the retimer CSRs (CSR344or CSR364, respectively). Each component of the system300(e.g., USP302, DSP332, retimer(s)342and362, etc.) can include a set of architected registers (e.g., CSR308,344,364, and334) that capture the equalization settings, as well as other Transmitter (TX) and Receiver (RX) settings, on a per-lane basis.FIGS.4A-Cillustrate the capability structure for hosting these registers. FIGS.4A-Care schematic diagrams of an architected capability structure and register structure400for equalization and other transmission and reception circuit settings in accordance with embodiments of the present disclosure. The architected capability structure400can include a plurality of bit fields, including a plurality of set-up bit fields that include lane-specific set-up registers (e.g., bit fields402a-p). The plurality of set-up bit fields includes a bit field for each lane of the link; in this example, the link includes 16 lanes, including lane 0. Each component may have a local persistent storage (e.g., flash, or other nonvolatile memory346or366inFIGS.3A-B) where the retimers can save/restore these registers locally. The architected capability structure can include a set bit404that can indicate the presence or absence of a local nonvolatile memory in the retimer and/or the downstream device. If such a local store is not available, system software can read the contents of these registers and store it in its persistent storage (e.g., flash, disk, other nonvolatile memory322) after the first equalization and use the stored values on subsequent equalizations. The registers in the retimers342and362can be accessed through a set of architected registers, as shown inFIGS.3A-Bas CSR344and364, respectively. 
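The choice between local and system-software save/restore discussed above can be sketched briefly. The capability word and bit position below are illustrative assumptions standing in for the set bit 404 that advertises a local nonvolatile memory.

#include <cstdio>

// Hypothetical capability word: one bit advertises a local nonvolatile memory in the
// component (the role described for set bit 404); the bit position is illustrative.
constexpr unsigned kLocalNvmPresent = 1u << 0;

enum class SavePolicy { LocalSaveRestore, SystemSoftwareSaveRestore };

SavePolicy choose_policy(unsigned capability_word) {
    // If the component can save/restore locally, it is told to keep the settings itself;
    // otherwise system software reads the registers and stores them in platform storage.
    return (capability_word & kLocalNvmPresent) ? SavePolicy::LocalSaveRestore
                                                : SavePolicy::SystemSoftwareSaveRestore;
}

int main() {
    const SavePolicy with_local    = choose_policy(kLocalNvmPresent);  // retimer with local flash
    const SavePolicy without_local = choose_policy(0);                 // fall back to NVM 322
    std::printf("%d %d\n", static_cast<int>(with_local), static_cast<int>(without_local));
    return 0;
}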
FIG.4Billustrates an example set-up bit field402pfor lane 15 of the link. The set-up bit field402pincludes equalization parameter values representative of an equalization procedure performed on the link and for the specific lane (here, lane 15). The equalization parameter values included in the set-up bit field include TX EQ pre-cursor and cursor bits (414), TX EQ post-cursor bits (416), and proprietary RX/TX/PLL set-up registers (418). The set-up bit field can also include FS, LF, and TX Preset bits (412). These bit fields (i.e., the FS, LF, and TX Preset bits) characterize the set-up of the transmitter design, including the fine-grain control it has, as architected in the PCIe specification. The FS and LF bits indicate the range of finer-grain coefficients with which the receiver can operate the transmitter. The TX Preset field indicates which of the 11 architected preset fields the TX needs to start with if the RX needs to perform equalization using coefficients. Some reserved bits are also included, which can be used for additional functionality at a later time. In addition to the TX preset, FS, LF, pre-cursor, post-cursor, and cursor values used in the equalization process, the system described herein also permits each design to have implementation specific registers (such as parameters for various control loops, CTLE/DFE settings, etc.) mapped into the configuration space. Thus, a variable "Number of DWs per Lane per Data Rate" can be defined, which indicates to software the number of double words (DWs) it needs to save/restore for that component. The retimer registers are accessed through a windowing mechanism using bits[9:0] at offset 06h through the DSP CSRs. For local save/restore as well as other commands, there are command (bits 11:10)/response (bits 15:12) handshake register bits in offset 06h. Table 1 illustrates an example ordered set for retimer register read/writes. The ordered set can access retimer registers using 8b/10b encoding at the lower data rates. The retimer configuration registers can also be read by the control SKP OS already defined in the specification at Gen 4 or Gen 5 Data Rates. Once all the components have loaded their equalization and other TX/RX settings, the system software can indicate to the DSP to change the Data Rate to Gen 3 and above. Each component in the link advertises its ability to bypass equalization by setting a previously Reserved bit (e.g., Bit 6 of Symbol 5, a Training Control symbol—renamed herein as "Highest Data Rate Equalization only") in Training Sets (TS1 and TS2). If any retimer in a topology (such as the one shown inFIGS.3A-B) does not support the ability to bypass EQ, the retimer resets the 'Highest Data Rate Equalization only' bit to 0b. Thus, by the time the link is trained to L0 for the first time, all components know whether the components of the linked system have the ability to participate in the mechanism to save and restore the equalization settings to bypass equalization. The architected capability structure400also includes a command control/status register bit field404.FIG.4Cillustrates the command control/status register bit field404. 
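The per-lane set-up fields of FIG. 4B can be pictured with a simple data layout. The structure below is an illustration only; the field widths and packing are assumptions and do not reproduce the architected register encoding.

#include <cstdint>
#include <vector>

// One per-lane set-up entry, loosely modeled on bit field 402p for lane 15.
// Field widths below are illustrative, not the architected encoding.
struct LaneSetup {
    uint8_t fs;           // Full Swing: bounds the finer-grain coefficient range
    uint8_t lf;           // Low Frequency
    uint8_t tx_preset;    // architected preset the TX starts from
    uint8_t pre_cursor;   // TX EQ pre-cursor coefficient
    uint8_t cursor;       // TX EQ cursor coefficient
    uint8_t post_cursor;  // TX EQ post-cursor coefficient
    std::vector<uint32_t> impl_specific;  // proprietary RX/TX/PLL set-up registers (control loops, CTLE/DFE)
};

// A 16-lane link carries one entry per lane, plus the "Number of DWs per Lane per
// Data Rate" value telling software how much implementation-specific state to save/restore.
struct CapabilityStructure {
    LaneSetup lanes[16];
    uint8_t   dws_per_lane_per_data_rate;
};

int main() {
    CapabilityStructure cap{};
    cap.lanes[15].tx_preset = 4;       // example: lane 15 starts from preset P4
    return cap.lanes[15].tx_preset;    // 4
}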
TABLE 1: Ordered Set for Retimer Register Read/Write

Symbol 0: K28.4: Start of Control/Status Ordered Set.

Symbol 1: Cmd/Resp[3:0]: 0000b: NOP; 0001b: Load EQ; 0010b: Load EQ Response; 0011b: Vendor Defined (Vendor ID in symbols 2, 3); 0100b-1111b: Reserved. Rcvr No[3:0]: only retimer receivers are targeted on a Load EQ; Broadcast is used for a Load EQ Response.

Symbols 2-5: If Cmd/Resp=Load EQ: 8B Addr Offset (32 bit), carried as RX (B), RX (C), RX (D), and RX (E) 8B Addr Offset [7:0], each 00h by default.

Symbols 6-13: If Cmd/Resp=Load EQ: 64 bit value; otherwise Reserved.

Symbols 14-15: {2'b00, Per-symbol parity for symbols 13:1, 1'b0}.

Returning toFIG.3B, the upstream port302can include software and/or hardware that can send commands downstream to the downstream port332and to the retimers342and362. After link training and equalization, the retimer config reg addr/data logic304can use the ordered set or a variant of the ordered set of Table 1 to request that the retimer342read equalization parameter values from the TX/RX circuit settings CSR344. The retimer342can send the equalization parameter values to the retimer config data return logic306of the upstream port302. The upstream port302can store the equalization parameter values in nonvolatile memory322. Similarly, the retimer config reg addr/data logic304can use the ordered set or a variant of the ordered set of Table 1 to request that the retimer362read equalization parameter values from the TX/RX circuit settings CSR364. The retimer362can send the equalization parameter values to the retimer config data return logic306of the upstream port302. The upstream port302can store the equalization parameter values in nonvolatile memory322. Note that the Ordered Set defined in Table 1 can be sent even when the Link is operating with transactions in the L0 state to ensure that equalization settings save/restore can occur on an operational link.

For a subsequent link training (e.g., for a re-initialization of the system), the retimer config reg addr/data logic304can use the ordered set or a variant of the ordered set of Table 1 to write equalization parameter values stored in nonvolatile memory322to the TX/RX circuit settings CSR344of the retimer342. Similarly, the retimer config reg addr/data logic304can use the ordered set or a variant of the ordered set of Table 1 to write equalization parameter values stored in nonvolatile memory322to the TX/RX circuit settings CSR364of the retimer362.

In embodiments, the retimer342can include a nonvolatile memory346. The retimer342can be instructed to store the equalization parameter values in the nonvolatile memory346instead of or in addition to the nonvolatile memory322. The retimer config reg addr/data logic304can instruct the retimer342to write the equalization parameter values stored in local nonvolatile memory346to the TX/RX circuit settings CSR344. Similarly, the retimer362can include a nonvolatile memory366. The retimer362can be instructed to store the equalization parameter values in the nonvolatile memory366instead of or in addition to the nonvolatile memory322. The retimer config reg addr/data logic304can instruct the retimer362to write the equalization parameter values stored in local nonvolatile memory366to the TX/RX circuit settings CSR364.
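A minimal sketch of how system software might assemble the 16-symbol Load EQ ordered set of Table 1 is shown below. The symbol-0 byte is only a placeholder for the K28.4 control character, the nibble ordering within symbol 1 and the parity computation are simplified, and the function and constant names are assumptions for this example.

    #include <stdint.h>
    #include <string.h>

    enum { OS_SYMBOLS = 16 };

    enum {
        CMD_NOP              = 0x0,
        CMD_LOAD_EQ          = 0x1,
        CMD_LOAD_EQ_RESPONSE = 0x2,
        CMD_VENDOR_DEFINED   = 0x3,
    };

    /* Assemble a Load EQ ordered set per Table 1 (simplified, illustrative). */
    static void build_load_eq_os(uint8_t os[OS_SYMBOLS],
                                 uint8_t rcvr_no,      /* targeted retimer receiver */
                                 uint32_t addr_offset, /* 32-bit register offset */
                                 uint64_t value)       /* 64-bit value to load */
    {
        memset(os, 0, OS_SYMBOLS);
        os[0] = 0xBC;                                            /* placeholder for K28.4 */
        os[1] = (uint8_t)(((rcvr_no & 0xF) << 4) | CMD_LOAD_EQ); /* Rcvr No + Cmd */
        for (int i = 0; i < 4; i++)                              /* symbols 2-5: offset */
            os[2 + i] = (uint8_t)(addr_offset >> (8 * i));
        for (int i = 0; i < 8; i++)                              /* symbols 6-13: value */
            os[6 + i] = (uint8_t)(value >> (8 * i));
        for (int i = 1; i <= 13; i++) {                          /* symbols 14-15: parity */
            os[14] ^= (uint8_t)(os[i] & 0x3F);                   /* simplified per-symbol parity */
            os[15] ^= (uint8_t)(os[i] >> 6);
        }
    }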
To perform the retimer register accesses, the USP302sends a physical layer command through the retimer config reg logic304that goes to the retimer. The register access command can include a read (or write) request to a specific register (or registers) of the TX/RX circuit settings CSR inside a Retimer. The USP302uses Ordered Sets (such as the one starting with K28.4 defined above for 8b/10b or a Control SKP Ordered set in 128b/130b encoding) to convey this command to retimer(s). The response from the retimer(s)342and/or362is carried back in an Ordered Set flowing in the Upstream direction and recorded in the retimer config data return logic306. System software can then read the values from the registers and store them to the nonvolatile memory322on a read, or use the response as an indication of completion on a write. As mentioned previously, the upstream port302can include logic to read the equalization parameter values from the TX/RX circuit settings CSR334in the DSP332, and can store the equalization parameter values from the DSP332to the nonvolatile memory322. The USP302can write the equalization parameter values from the nonvolatile memory322to the DSP TX/RX circuit settings CSR334during a re-initialization link training equalization process. The USP302can also store the equalization parameter values for itself in the nonvolatile memory322, and can use the equalization parameter values stored in the nonvolatile memory322during a re-initialization link training equalization process. The use of stored equalization parameter values allows the system to skip equalization processes during subsequent link trainings. Or, the system can use the equalization parameter values from a previous link training as a starting point for a subsequent equalization process to reduce the latency associated with a full equalization process for link training.

FIG.5is a process flow diagram500for performing link training including equalization in accordance with embodiments of the present disclosure. The process flow diagram500can be performed, for example, at a manufacturer site or for the initial set up of the linked components. At the outset, an upstream port (USP) and a downstream port (DSP) coupled to each other by one or more PCIe compliant links can undergo a link training, which can include an equalization procedure to determine one or more equalization parameter values (502). One or more retimers can be coupled to the transmitter and receiver (e.g., linked between the transmitter and receiver), and the links coupling the retimers can also undergo link training and equalization procedures. The upstream port can include logic to retrieve and store equalization parameter values from components coupled to the upstream port across the PCIe compliant link, such as the downstream port (510). The upstream port can store the equalization parameter values in a nonvolatile memory coupled to or associated with the upstream port. In some embodiments, the USP can include logic to determine whether each retimer includes a local nonvolatile memory (504). For example, the USP logic can receive a data structure that includes a set bit indicating the presence or absence of a nonvolatile memory local to the retimer(s). The upstream port can include logic implemented in hardware, software, or a combination of hardware and software. If the retimer does not include a local nonvolatile memory, the upstream port logic can read equalization parameter values from the retimer settings register (506).
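The save path just described can be summarized by the following host-side sketch, assuming hypothetical platform hooks for the configuration-space reads and the nonvolatile-memory store; none of these function names are defined by the PCIe specification.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_LANES 16
    typedef struct { uint32_t dw[4]; } lane_eq_t;   /* per-lane equalization image */

    /* Assumed platform hooks (illustrative names only). */
    extern void csr_read_usp_eq(lane_eq_t out[NUM_LANES]);
    extern void csr_read_dsp_eq(lane_eq_t out[NUM_LANES]);
    extern void csr_read_retimer_eq(int retimer_id, lane_eq_t out[NUM_LANES]);
    extern bool retimer_has_local_nvm(int retimer_id);
    extern void nvm_store(const char *key, const void *data, size_t len);

    /* Save flow after the initial equalization (roughly FIG. 5, 502-512). */
    void save_equalization(int num_retimers)
    {
        lane_eq_t eq[NUM_LANES];

        csr_read_usp_eq(eq);                    /* upstream port's own settings */
        nvm_store("usp_eq", eq, sizeof eq);
        csr_read_dsp_eq(eq);                    /* DSP settings via config reads */
        nvm_store("dsp_eq", eq, sizeof eq);

        for (int r = 0; r < num_retimers; r++) {
            if (retimer_has_local_nvm(r))
                continue;                       /* retimer saves its own settings */
            csr_read_retimer_eq(r, eq);         /* via Table 1 ordered sets */
            char key[16];
            snprintf(key, sizeof key, "rt%d_eq", r);
            nvm_store(key, eq, sizeof eq);
        }
    }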
For example, the upstream port logic can cause the retimer settings register to provide the contents of one or more specified registers that contain equalization parameter values. The upstream port logic can store the equalization parameter values in a nonvolatile memory coupled to the upstream port (or associated with the upstream port) (508). If the retimer does include a local nonvolatile memory, the upstream port logic can instruct the retimer to store equalization parameter values in local nonvolatile memory (512). In embodiments, the retimer can automatically store equalization parameter values in the nonvolatile memory local to the retimer. The equalization parameter values for the upstream port and the downstream port can be stored in the nonvolatile memory associated with the upstream port (510).

FIG.5Bis a process flow diagram520for loading equalization and other parameters for link training in accordance with embodiments of the present disclosure. The process flow diagram520can be performed after the linked components (e.g., upstream port, downstream port, and any retimers) are re-initialized. As part of a re-initialization process, the system can undergo link training. For re-initialization, however, the link training can forgo the equalization process and can use stored equalization parameter values. The upstream port can include logic implemented in hardware, software, or a combination of hardware and software. The upstream port logic can write equalization parameter values to a register of the upstream port and a register of the downstream port (528). The system can use the equalization parameter values to conclude link training (530). In embodiments, the upstream port logic can determine whether one or more retimers coupled to the upstream port and/or the downstream port across one or more PCIe links include a local nonvolatile memory (524). If the retimer includes a local nonvolatile memory, then the upstream port logic can instruct the retimer to write the equalization parameter values stored in the local nonvolatile memory to the retimer register. In embodiments, the upstream port logic can determine that the retimer(s) do not include a local nonvolatile memory. The upstream port logic can write equalization parameters to the retimer register(s) (526).

FIG.6Ais a process flow diagram600for a retimer for storing equalization parameter values in accordance with embodiments of the present disclosure. The retimer can undergo an initial link training, which includes an equalization process to determine one or more equalization parameter values (602). If the retimer includes a local nonvolatile memory (604), then the retimer can store the equalization parameter values in the local nonvolatile memory (612). If the retimer does not include a local nonvolatile memory, then the retimer can temporarily store the equalization parameter values in a settings register of the retimer (606). The retimer can receive a read request from an upstream port for reading the equalization parameter values stored in the settings register (608). The retimer can read the equalization parameter values and provide the equalization parameter values to the upstream port for storage (610).

FIG.6Bis a process flow diagram620for writing equalization parameter values into a retimer register in accordance with embodiments of the present disclosure. The retimer can undergo a link training as part of a re-initialization of the connected system.
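For the restore path of FIG.5B, a complementary host-side sketch is shown below, again with hypothetical hook names. A retimer that reports a local nonvolatile memory reloads its own register; otherwise the upstream port writes the values it saved earlier.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_LANES 16
    typedef struct { uint32_t dw[4]; } lane_eq_t;

    /* Assumed platform hooks (illustrative names only). */
    extern bool retimer_has_local_nvm(int retimer_id);
    extern void retimer_load_from_local_nvm(int retimer_id);  /* retimer writes its own CSR */
    extern void csr_write_retimer_eq(int retimer_id, const lane_eq_t eq[NUM_LANES]);
    extern void csr_write_usp_eq(const lane_eq_t eq[NUM_LANES]);
    extern void csr_write_dsp_eq(const lane_eq_t eq[NUM_LANES]);
    extern void nvm_load(const char *key, void *data, size_t len);

    /* Restore flow on re-initialization (roughly FIG. 5B): reload stored
     * equalization values instead of repeating the full equalization sequence. */
    void restore_equalization(int num_retimers)
    {
        lane_eq_t eq[NUM_LANES];

        nvm_load("usp_eq", eq, sizeof eq);
        csr_write_usp_eq(eq);                   /* (528) upstream port register */
        nvm_load("dsp_eq", eq, sizeof eq);
        csr_write_dsp_eq(eq);                   /* (528) downstream port register */

        for (int r = 0; r < num_retimers; r++) {
            if (retimer_has_local_nvm(r)) {     /* (524) */
                retimer_load_from_local_nvm(r); /* retimer restores its own CSR */
            } else {
                char key[16];
                snprintf(key, sizeof key, "rt%d_eq", r);
                nvm_load(key, eq, sizeof eq);
                csr_write_retimer_eq(r, eq);    /* (526) via Table 1 ordered sets */
            }
        }
        /* Link training then concludes using the restored values (530). */
    }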
If the retimer includes a local nonvolatile memory (624), then the retimer can read the equalization parameter values from the local nonvolatile memory and write the equalization parameter values into the retimer settings register (628). If the retimer does not include a local nonvolatile memory, then the retimer can receive from an upstream port one or more equalization parameter values stored in a nonvolatile memory associated with the upstream port (626). The retimer can write the equalization parameter values into the settings register (628). The system can use the equalization parameter values for link training, as opposed to performing a sequential equalization procedure for each data rate (630).

One interconnect fabric architecture includes the Peripheral Component Interconnect (PCI) Express (PCIe) architecture. A primary goal of PCIe is to enable components and devices from different vendors to inter-operate in an open architecture, spanning multiple market segments: Clients (Desktops and Mobile), Servers (Standard and Enterprise), and Embedded and Communication devices. PCIe is a high performance, general purpose I/O interconnect protocol defined for a wide variety of future computing and communication platforms. Some PCI attributes, such as its usage model, load-store architecture, and software interfaces, have been maintained through its revisions, whereas previous parallel bus implementations have been replaced by a highly scalable, fully serial interface. The more recent versions of PCI Express take advantage of advances in point-to-point interconnects, Switch-based technology, and packetized protocol to deliver new levels of performance and features. Power Management, Quality Of Service (QoS), Hot-Plug/Hot-Swap support, Data Integrity, and Error Handling are among some of the advanced features supported by PCI Express.

Referring toFIG.7, an embodiment of a fabric composed of point-to-point Links that interconnect a set of components is illustrated. System700includes processor705and system memory710coupled to controller hub715. Processor705includes any processing element, such as a microprocessor, a host processor, an embedded processor, a co-processor, or other processor. Processor705is coupled to controller hub715through front-side bus (FSB)706. In one embodiment, FSB706is a serial point-to-point interconnect as described below. In another embodiment, link706includes a serial, differential interconnect architecture that is compliant with a different interconnect standard. System memory710includes any memory device, such as random access memory (RAM), nonvolatile (NV) memory, or other memory accessible by devices in system700. System memory710is coupled to controller hub715through memory interface716. Examples of a memory interface include a double-data rate (DDR) memory interface, a dual-channel DDR memory interface, and a dynamic RAM (DRAM) memory interface. In one embodiment, controller hub715is a root hub, root complex, or root controller in a Peripheral Component Interconnect Express (PCIe or PCIE) interconnection hierarchy. Examples of controller hub715include a chipset, a memory controller hub (MCH), a northbridge, an interconnect controller hub (ICH), a southbridge, and a root controller/hub. Often the term chipset refers to two physically separate controller hubs, i.e. a memory controller hub (MCH) coupled to an interconnect controller hub (ICH).
Note that current systems often include the MCH integrated with processor705, while controller715is to communicate with I/O devices, in a similar manner as described below. In some embodiments, peer-to-peer routing is optionally supported through root complex715. Here, controller hub715is coupled to switch/bridge720through serial link719. Input/output modules717and721, which may also be referred to as interfaces/ports717and721, include/implement a layered protocol stack to provide communication between controller hub715and switch720. In one embodiment, multiple devices are capable of being coupled to switch720. Switch/bridge720routes packets/messages from device725upstream, i.e. up a hierarchy towards a root complex, to controller hub715and downstream, i.e. down a hierarchy away from a root controller, from processor705or system memory710to device725. Switch720, in one embodiment, is referred to as a logical assembly of multiple virtual PCI-to-PCI bridge devices. Device725includes any internal or external device or component to be coupled to an electronic system, such as an I/O device, a Network Interface Controller (NIC), an add-in card, an audio processor, a network processor, a hard-drive, a storage device, a CD/DVD ROM, a monitor, a printer, a mouse, a keyboard, a router, a portable storage device, a Firewire device, a Universal Serial Bus (USB) device, a scanner, and other input/output devices. Often in the PCIe vernacular, such a device is referred to as an endpoint. Although not specifically shown, device725may include a PCIe to PCI/PCI-X bridge to support legacy or other version PCI devices. Endpoint devices in PCIe are often classified as legacy, PCIe, or root complex integrated endpoints. Graphics accelerator730is also coupled to controller hub715through serial link732. In one embodiment, graphics accelerator730is coupled to an MCH, which is coupled to an ICH. Switch720, and accordingly I/O device725, is then coupled to the ICH. I/O modules731and718are also to implement a layered protocol stack to communicate between graphics accelerator730and controller hub715. Similar to the MCH discussion above, a graphics controller or the graphics accelerator730itself may be integrated in processor705.

Turning toFIG.8, an embodiment of a layered protocol stack is illustrated. Layered protocol stack800includes any form of a layered communication stack, such as a Quick Path Interconnect (QPI) stack, a PCIe stack, a next generation high performance computing interconnect stack, or other layered stack. Although the discussion immediately below in reference toFIGS.7-9is in relation to a PCIe stack, the same concepts may be applied to other interconnect stacks. In one embodiment, protocol stack800is a PCIe protocol stack including transaction layer805, link layer810, and physical layer820. An interface, such as interfaces717,718,721,722,726, and731inFIG.7, may be represented as communication protocol stack800. Representation as a communication protocol stack may also be referred to as a module or interface implementing/including a protocol stack.

PCI Express uses packets to communicate information between components. Packets are formed in the Transaction Layer805and Data Link Layer810to carry the information from the transmitting component to the receiving component. As the transmitted packets flow through the other layers, they are extended with additional information necessary to handle packets at those layers.
At the receiving side the reverse process occurs and packets get transformed from their Physical Layer820representation to the Data Link Layer810representation and finally (for Transaction Layer Packets) to the form that can be processed by the Transaction Layer805of the receiving device.

Transaction Layer

In one embodiment, transaction layer805is to provide an interface between a device's processing core and the interconnect architecture, such as data link layer810and physical layer820. In this regard, a primary responsibility of the transaction layer805is the assembly and disassembly of packets (i.e., transaction layer packets, or TLPs). The transaction layer805typically manages credit-based flow control for TLPs. PCIe implements split transactions, i.e. transactions with request and response separated by time, allowing a link to carry other traffic while the target device gathers data for the response. In addition, PCIe utilizes credit-based flow control. In this scheme, a device advertises an initial amount of credit for each of the receive buffers in Transaction Layer805. An external device at the opposite end of the link, such as controller hub715inFIG.7, counts the number of credits consumed by each TLP. A transaction may be transmitted if the transaction does not exceed a credit limit. Upon receiving a response, an amount of credit is restored. An advantage of a credit scheme is that the latency of credit return does not affect performance, provided that the credit limit is not encountered.

In one embodiment, four transaction address spaces include a configuration address space, a memory address space, an input/output address space, and a message address space. Memory space transactions include one or more of read requests and write requests to transfer data to/from a memory-mapped location. In one embodiment, memory space transactions are capable of using two different address formats, e.g., a short address format, such as a 32-bit address, or a long address format, such as a 64-bit address. Configuration space transactions are used to access configuration space of the PCIe devices. Transactions to the configuration space include read requests and write requests. Message space transactions (or, simply messages) are defined to support in-band communication between PCIe agents. Therefore, in one embodiment, transaction layer805assembles packet header/payload806. Format for current packet headers/payloads may be found in the PCIe specification at the PCIe specification website.

Referring briefly toFIG.9, an embodiment of a PCIe transaction descriptor is illustrated. In one embodiment, transaction descriptor900is a mechanism for carrying transaction information. In this regard, transaction descriptor900supports identification of transactions in a system. Other potential uses include tracking modifications of default transaction ordering and association of transactions with channels. Transaction descriptor900includes global identifier field902, attributes field904and channel identifier field906. In the illustrated example, global identifier field902is depicted comprising local transaction identifier field908and source identifier field910. In one embodiment, global transaction identifier902is unique for all outstanding requests. According to one implementation, local transaction identifier field908is a field generated by a requesting agent, and it is unique for all outstanding requests that require a completion for that requesting agent.
Furthermore, in this example, source identifier910uniquely identifies the requestor agent within a PCIe hierarchy. Accordingly, together with source ID910, local transaction identifier field908provides global identification of a transaction within a hierarchy domain. Attributes field904specifies characteristics and relationships of the transaction. In this regard, attributes field904is potentially used to provide additional information that allows modification of the default handling of transactions. In one embodiment, attributes field904includes priority field912, reserved field914, ordering field916, and no-snoop field918. Here, priority sub-field912may be modified by an initiator to assign a priority to the transaction. Reserved attribute field914is left reserved for future, or vendor-defined usage. Possible usage models using priority or security attributes may be implemented using the reserved attribute field. In this example, ordering attribute field916is used to supply optional information conveying the type of ordering that may modify default ordering rules. According to one example implementation, an ordering attribute of "0" denotes default ordering rules are to apply, wherein an ordering attribute of "1" denotes relaxed ordering, wherein writes can pass writes in the same direction, and read completions can pass writes in the same direction. No-snoop attribute field918is utilized to determine if transactions are snooped. As shown, channel ID field906identifies a channel that a transaction is associated with.

Link Layer

Link layer810, also referred to as data link layer810, acts as an intermediate stage between transaction layer805and the physical layer820. In one embodiment, a responsibility of the data link layer810is providing a reliable mechanism for exchanging Transaction Layer Packets (TLPs) between two components of a link. One side of the Data Link Layer810accepts TLPs assembled by the Transaction Layer805, applies packet sequence identifier811, i.e. an identification number or packet number, calculates and applies an error detection code, i.e. CRC 812, and submits the modified TLPs to the Physical Layer820for transmission across a physical medium to an external device.

Physical Layer

In one embodiment, physical layer820includes logical sub-block821and electrical sub-block822to physically transmit a packet to an external device. Here, logical sub-block821is responsible for the "digital" functions of Physical Layer820. In this regard, the logical sub-block includes a transmit section to prepare outgoing information for transmission by physical sub-block822, and a receiver section to identify and prepare received information before passing it to the Link Layer810. Physical block822includes a transmitter and a receiver. The transmitter is supplied by logical sub-block821with symbols, which the transmitter serializes and transmits to an external device. The receiver is supplied with serialized symbols from an external device and transforms the received signals into a bit-stream. The bit-stream is de-serialized and supplied to logical sub-block821. In one embodiment, an 8b/10b transmission code is employed, where ten-bit symbols are transmitted/received. Here, special symbols are used to frame a packet with frames823. In addition, in one example, the receiver also provides a symbol clock recovered from the incoming serial stream.
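For reference, the transaction descriptor fields discussed in connection with FIG.9 can be collected in a simple structure like the following; the field widths and names are illustrative only and do not reproduce the layout in the PCIe specification.

    #include <stdint.h>

    /* Illustrative grouping of the transaction descriptor (900) fields. */
    typedef struct {
        /* global identifier field (902) */
        uint16_t local_transaction_id; /* (908) unique per outstanding request of the requester */
        uint16_t source_id;            /* (910) identifies the requester within the hierarchy */

        /* attributes field (904) */
        uint8_t  priority;             /* (912) assigned by the initiator */
        uint8_t  reserved;             /* (914) reserved for future/vendor usage */
        uint8_t  ordering;             /* (916) 0 = default ordering, 1 = relaxed */
        uint8_t  no_snoop;             /* (918) whether the transaction is snooped */

        /* channel identifier field (906) */
        uint8_t  channel_id;
    } txn_descriptor_t;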
As stated above, although transaction layer805, link layer810, and physical layer820are discussed in reference to a specific embodiment of a PCIe protocol stack, a layered protocol stack is not so limited. In fact, any layered protocol may be included/implemented. As an example, a port/interface that is represented as a layered protocol includes: (1) a first layer to assemble packets, i.e. a transaction layer; (2) a second layer to sequence packets, i.e. a link layer; and (3) a third layer to transmit the packets, i.e. a physical layer. As a specific example, a common standard interface (CSI) layered protocol is utilized.

Referring next toFIG.10, an embodiment of a PCIe serial point-to-point fabric is illustrated. Although an embodiment of a PCIe serial point-to-point link is illustrated, a serial point-to-point link is not so limited, as it includes any transmission path for transmitting serial data. In the embodiment shown, a basic PCIe link includes two, low-voltage, differentially driven signal pairs: a transmit pair1006/1011and a receive pair1012/1007. Accordingly, device1005includes transmission logic1006to transmit data to device1010and receiving logic1007to receive data from device1010. In other words, two transmitting paths, i.e. paths1016and1017, and two receiving paths, i.e. paths1018and1019, are included in a PCIe link. A transmission path refers to any path for transmitting data, such as a transmission line, a copper line, an optical line, a wireless communication channel, an infrared communication link, or other communication path. A connection between two devices, such as device1005and device1010, is referred to as a link, such as link1015. A link may support one lane—each lane representing a set of differential signal pairs (one pair for transmission, one pair for reception). To scale bandwidth, a link may aggregate multiple lanes denoted by xN, where N is any supported Link width, such as 1, 2, 4, 8, 12, 16, 32, 64, or wider. A differential pair refers to two transmission paths, such as lines1016and1017, to transmit differential signals. As an example, when line1016toggles from a low voltage level to a high voltage level, i.e. a rising edge, line1017drives from a high logic level to a low logic level, i.e. a falling edge. Differential signals potentially demonstrate better electrical characteristics, such as better signal integrity, i.e. cross-coupling, voltage overshoot/undershoot, ringing, etc. This allows for a better timing window, which enables faster transmission frequencies.

Referring now toFIG.11, shown is a block diagram of an embodiment of a multicore processor. As shown in the embodiment ofFIG.11, processor1100includes multiple domains. Specifically, a core domain1130includes a plurality of cores1130A-1130N, a graphics domain1160includes one or more graphics engines having a media engine1165, and a system agent domain1110. In various embodiments, system agent domain1110handles power control events and power management, such that individual units of domains1130and1160(e.g. cores and/or graphics engines) are independently controllable to dynamically operate at an appropriate power mode/level (e.g. active, turbo, sleep, hibernate, deep sleep, or other Advanced Configuration Power Interface-like state) in light of the activity (or inactivity) occurring in the given unit. Each of domains1130and1160may operate at different voltage and/or power, and furthermore the individual units within the domains each potentially operate at an independent frequency and voltage.
Note that while only shown with three domains, understand the scope of the present disclosure is not limited in this regard and additional domains may be present in other embodiments. As shown, each core1130further includes low level caches in addition to various execution units and additional processing elements. Here, the various cores are coupled to each other and to a shared cache memory that is formed of a plurality of units or slices of a last level cache (LLC)1140A-1140N; these LLCs often include storage and cache controller functionality and are shared amongst the cores, as well as potentially among the graphics engine too. As seen, a ring interconnect1150couples the cores together, and provides interconnection between the core domain1130, graphics domain1160and system agent circuitry1110, via a plurality of ring stops1152A-1152N, each at a coupling between a core and LLC slice. As seen inFIG.11, interconnect1150is used to carry various information, including address information, data information, acknowledgement information, and snoop/invalid information. Although a ring interconnect is illustrated, any known on-die interconnect or fabric may be utilized. As an illustrative example, some of the fabrics discussed above (e.g. another on-die interconnect, Intel On-chip System Fabric (IOSF), an Advanced Microcontroller Bus Architecture (AMBA) interconnect, a multi-dimensional mesh fabric, or other known interconnect architecture) may be utilized in a similar fashion. As further depicted, system agent domain1110includes display engine1112which is to provide control of and an interface to an associated display. System agent domain1110may include other units, such as: an integrated memory controller1120that provides for an interface to a system memory (e.g., a DRAM implemented with multiple DIMMs); coherence logic1122to perform memory coherence operations. Multiple interfaces may be present to enable interconnection between the processor and other circuitry. For example, in one embodiment at least one direct media interface (DMI)1116is provided as well as one or more PCIe™ interfaces1114. The display engine and these interfaces typically couple to memory via a PCIe™ bridge1118. Still further, to provide for communications between other agents, such as additional processors or other circuitry, one or more other interfaces (e.g. an Intel® Quick Path Interconnect (QPI) fabric) may be provided.

Turning next toFIG.12, an embodiment of a system on-chip (SOC) design in accordance with the disclosure is depicted. As a specific illustrative example, SOC1200is included in user equipment (UE). In one embodiment, UE refers to any device to be used by an end-user to communicate, such as a hand-held phone, smartphone, tablet, ultra-thin notebook, notebook with broadband adapter, or any other similar communication device. Often a UE connects to a base station or node, which potentially corresponds in nature to a mobile station (MS) in a GSM network. Here, SOC1200includes 2 cores—1206and1207. Similar to the discussion above, cores1206and1207may conform to an Instruction Set Architecture, such as an Intel® Architecture Core™-based processor, an Advanced Micro Devices, Inc. (AMD) processor, a MIPS-based processor, an ARM-based processor design, or a customer thereof, as well as their licensees or adopters. Cores1206and1207are coupled to cache control1208that is associated with bus interface unit1209and L2 cache1210to communicate with other parts of system1200.
Interconnect1210includes an on-chip interconnect, such as an IOSF, AMBA, or other interconnect discussed above, which potentially implements one or more aspects of the described disclosure. Interface1210provides communication channels to the other components, such as a Subscriber Identity Module (SIM)1230to interface with a SIM card, a boot ROM1235to hold boot code for execution by cores1206and1207to initialize and boot SOC1200, an SDRAM controller1240to interface with external memory (e.g. DRAM1260), a flash controller1245to interface with nonvolatile memory (e.g. Flash1265), a peripheral control1250(e.g. Serial Peripheral Interface) to interface with peripherals, video codecs1220and video interface1225to display and receive input (e.g. touch enabled input), GPU1215to perform graphics related computations, etc. Any of these interfaces may incorporate aspects of the disclosure described herein. In addition, the system illustrates peripherals for communication, such as a Bluetooth module1270, 3G modem1275, GPS1285, and Wi-Fi1285. Note as stated above, a UE includes a radio for communication. As a result, these peripheral communication modules are not all required. However, in a UE some form of radio for external communication is to be included.

Referring now toFIG.13, shown is a block diagram of a second system1300in accordance with an embodiment of the present disclosure. As shown inFIG.13, multiprocessor system1300is a point-to-point interconnect system, and includes a first processor1370and a second processor1380coupled via a point-to-point interconnect1350. Each of processors1370and1380may be some version of a processor. In one embodiment, interfaces1352and1354are part of a serial, point-to-point coherent interconnect fabric, such as Intel's Quick Path Interconnect (QPI) architecture. As a result, the disclosure may be implemented within the QPI architecture. While shown with only two processors1370,1380, it is to be understood that the scope of the present disclosure is not so limited. In other embodiments, one or more additional processors may be present in a given processor. Processors1370and1380are shown including integrated memory controller units1372and1382, respectively. Processor1370also includes as part of its bus controller units point-to-point (P-P) interfaces1376and1378; similarly, second processor1380includes P-P interfaces1386and1388. Processors1370,1380may exchange information via a point-to-point (P-P) interface1350using P-P interface circuits1378,1388. As shown inFIG.13, IMCs1372and1382couple the processors to respective memories, namely a memory1332and a memory1334, which may be portions of main memory locally attached to the respective processors. Processors1370,1380each exchange information with a chipset1390via individual P-P interfaces1352,1354using point-to-point interface circuits1376,1394,1386,1398. Chipset1390also exchanges information with a high-performance graphics circuit1338via an interface circuit1392along a high-performance graphics interconnect1339. A shared cache (not shown) may be included in either processor or outside of both processors; yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode. Chipset1390may be coupled to a first bus1316via an interface1396.
In one embodiment, first bus1316may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present disclosure is not so limited. As shown inFIG.13, various I/O devices1314are coupled to first bus1316, along with a bus bridge1318which couples first bus1316to a second bus1320. In one embodiment, second bus1320includes a low pin count (LPC) bus. Various devices are coupled to second bus1320including, for example, a keyboard and/or mouse1322, communication devices1327and a storage unit1328such as a disk drive or other mass storage device which often includes instructions/code and data1330, in one embodiment. Further, an audio I/O1324is shown coupled to second bus1320. Note that other architectures are possible, where the included components and interconnect architectures vary. For example, instead of the point-to-point architecture ofFIG.13, a system may implement a multi-drop bus or other such architecture.

FIG.14illustrates an example computing device1400that may include and/or be suitable for use with various components described herein. As shown, computing device1400may include one or more processors or processor cores1402and system memory1404. For the purpose of this application, including the claims, the terms "processor" and "processor cores" may be considered synonymous, unless the context clearly requires otherwise. The processor1402may include any type of processor, such as a central processing unit (CPU), a microprocessor, and the like. The processor1402may be implemented as an integrated circuit having multi-cores, e.g., a multi-core microprocessor. The computing device1400may include mass storage devices1406(such as diskette, hard drive, volatile memory (e.g., dynamic random access memory (DRAM)), compact disc read-only memory (CD-ROM), digital versatile disk (DVD), and so forth). In general, system memory1404and/or mass storage devices1406may be temporal and/or persistent storage of any type, including, but not limited to, volatile and nonvolatile memory, optical, magnetic, and/or solid state mass storage, and so forth. Volatile memory may include, but is not limited to, static and/or dynamic random access memory. Nonvolatile memory may include, but is not limited to, electrically erasable programmable read-only memory, phase change memory, resistive memory, and so forth. The processor(s)1402, mass storage1406and/or system memory1404may together or separately be considered to be, or implement, the BIOS and/or EC in whole or in part.

The computing device1400may further include I/O devices1408(such as a display (e.g., a touchscreen display), keyboard, cursor control, remote control, gaming controller, image capture device, and so forth) and communication interfaces1410(such as network interface cards, modems, infrared receivers, radio receivers (e.g., Bluetooth), and so forth). In some embodiments, the I/O devices1408may be coupled with the other components of the computing device1400via a PCI-e 4 connection as described herein. The communication interfaces1410may include communication chips (not shown) that may be configured to operate the device1400in accordance with a Global System for Mobile Communication (GSM), General Packet Radio Service (GPRS), Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Evolved HSPA (EHSPA), or Long-Term Evolution (LTE) network.
The communication chips may also be configured to operate in accordance with Enhanced Data for GSM Evolution (EDGE), GSM EDGE Radio Access Network (GERAN), Universal Terrestrial Radio Access Network (UTRAN), or Evolved UTRAN (E-UTRAN). The communication chips may be configured to operate in accordance with Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), Evolution-Data Optimized (EV-DO), derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The communication interfaces1410may operate in accordance with other wireless protocols in other embodiments. In some embodiments, the communication interfaces1410may be, may include, and/or may be coupled with the EC and/or TCPM as described herein. The above-described computing device1400elements may be coupled to each other via system bus1412, which may represent one or more buses. In the case of multiple buses, they may be bridged by one or more bus bridges (not shown). Each of these elements may perform its conventional functions known in the art. In particular, system memory1404and mass storage devices1406may be employed to store a working copy and a permanent copy of the programming instructions for the operation of various components of computing device1400, including but not limited to an operating system of computing device1400and/or one or more applications. The various elements may be implemented by assembler instructions supported by processor(s)1402or high-level languages that may be compiled into such instructions. The permanent copy of the programming instructions may be placed into mass storage devices1406in the factory, or in the field through, for example, a distribution medium (not shown), such as a compact disc (CD), or through communication interface1410(from a distribution server (not shown)). That is, one or more distribution media having an implementation of the agent program may be employed to distribute the agent and to program various computing devices. The number, capability, and/or capacity of the elements1408,1410,1412may vary, depending on whether computing device1400is used as a stationary computing device, such as a set-top box or desktop computer, or a mobile computing device, such as a tablet computing device, laptop computer, game console, or smartphone. Their constitutions are otherwise known, and accordingly will not be further described. In embodiments, memory1404may include computational logic1422configured to implement various firmware and/or software services associated with operations of the computing device1400. For some embodiments, at least one of processors1402may be packaged together with computational logic1422configured to practice aspects of embodiments described herein to form a System in Package (SiP) or a System on Chip (SoC). In various implementations, the computing device1400may comprise one or more components of a data center, a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, or a digital camera. In further implementations, the computing device1400may be any other electronic device that processes data. FIG.15is a process flow diagram for using stored equalization parameters for link training in accordance with embodiments of the present disclosure. 
In some embodiments, the electronic device(s), network(s), system(s), chip(s) or component(s), or portions or implementations thereof, of Figures herein may be configured to perform one or more processes, techniques, or methods as described herein, or portions thereof. One such process is depicted inFIG.15. For example, the process may include storing or causing to store one or more parameters related to a link training (1502); identifying or causing to identify, in a subsequent link operation, the one or more parameters (1504); and using or causing to use the one or more parameters in the subsequent link operation in place of performing a subsequent full equalization for link training (1506). Note that the apparatuses, methods, and systems described above may be implemented in any electronic device or system as aforementioned. As specific illustrations, the figures below provide exemplary systems for utilizing the disclosure as described herein. As the systems below are described in more detail, a number of different interconnects are disclosed, described, and revisited from the discussion above. And as is readily apparent, the advances described above may be applied to any of those interconnects, fabrics, or architectures. While the present disclosure has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present disclosure. A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners. First, as is useful in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally, a circuit level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model. In the case where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In any representation of the design, the data may be stored in any form of a machine readable medium. A memory or a magnetic or optical storage such as a disc may be the machine readable medium to store information transmitted via optical or electrical wave modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made. Thus, a communication provider or a network provider may store on a tangible, machine-readable medium, at least temporarily, an article, such as information encoded into a carrier wave, embodying techniques of embodiments of the present disclosure. A module as used herein refers to any combination of hardware, software, and/or firmware. As an example, a module includes hardware, such as a micro-controller, associated with a non-transitory medium to store code adapted to be executed by the micro-controller. 
Therefore, reference to a module, in one embodiment, refers to the hardware, which is specifically configured to recognize and/or execute the code to be held on a non-transitory medium. Furthermore, in another embodiment, use of a module refers to the non-transitory medium including the code, which is specifically adapted to be executed by the microcontroller to perform predetermined operations. And as can be inferred, in yet another embodiment, the term module (in this example) may refer to the combination of the microcontroller and the non-transitory medium. Often module boundaries that are illustrated as separate commonly vary and potentially overlap. For example, a first and a second module may share hardware, software, firmware, or a combination thereof, while potentially retaining some independent hardware, software, or firmware. In one embodiment, use of the term logic includes hardware, such as transistors, registers, or other hardware, such as programmable logic devices. Use of the phrase 'to' or 'configured to,' in one embodiment, refers to arranging, putting together, manufacturing, offering to sell, importing and/or designing an apparatus, hardware, logic, or element to perform a designated or determined task. In this example, an apparatus or element thereof that is not operating is still 'configured to' perform a designated task if it is designed, coupled, and/or interconnected to perform said designated task. As a purely illustrative example, a logic gate may provide a 0 or a 1 during operation. But a logic gate 'configured to' provide an enable signal to a clock does not include every potential logic gate that may provide a 1 or 0. Instead, the logic gate is one coupled in some manner that during operation the 1 or 0 output is to enable the clock. Note once again that use of the term 'configured to' does not require operation, but instead focuses on the latent state of an apparatus, hardware, and/or element, where in the latent state the apparatus, hardware, and/or element is designed to perform a particular task when the apparatus, hardware, and/or element is operating. Furthermore, use of the phrases 'capable of/to' and/or 'operable to,' in one embodiment, refers to some apparatus, logic, hardware, and/or element designed in such a way to enable use of the apparatus, logic, hardware, and/or element in a specified manner. Note as above that use of to, capable to, or operable to, in one embodiment, refers to the latent state of an apparatus, logic, hardware, and/or element, where the apparatus, logic, hardware, and/or element is not operating but is designed in such a manner to enable use of an apparatus in a specified manner.

A value, as used herein, includes any known representation of a number, a state, a logical state, or a binary logical state. Often, the use of logic levels, logic values, or logical values is also referred to as 1's and 0's, which simply represents binary logic states. For example, a 1 refers to a high logic level and 0 refers to a low logic level. In one embodiment, a storage cell, such as a transistor or flash cell, may be capable of holding a single logical value or multiple logical values. However, other representations of values in computer systems have been used. For example, the decimal number ten may also be represented as a binary value of 1010 and a hexadecimal letter A. Therefore, a value includes any representation of information capable of being held in a computer system. Moreover, states may be represented by values or portions of values.
As an example, a first value, such as a logical one, may represent a default or initial state, while a second value, such as a logical zero, may represent a non-default state. In addition, the terms reset and set, in one embodiment, refer to a default and an updated value or state, respectively. For example, a default value potentially includes a high logical value, i.e. reset, while an updated value potentially includes a low logical value, i.e. set. Note that any combination of values may be utilized to represent any number of states.

The embodiments of methods, hardware, software, firmware or code set forth above may be implemented via instructions or code stored on a machine-accessible, machine readable, computer accessible, or computer readable medium which are executable by a processing element. A non-transitory machine-accessible/readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system. For example, a non-transitory machine-accessible medium includes random-access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage medium; flash memory devices; electrical storage devices; optical storage devices; acoustical storage devices; other forms of storage devices for holding information received from transitory (propagated) signals (e.g., carrier waves, infrared signals, digital signals); etc., which are to be distinguished from the non-transitory mediums that may receive information therefrom. Instructions used to program logic to perform embodiments of the disclosure may be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions can be distributed via a network or by way of other computer readable media. Thus a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, Compact Disc Read-Only Memory (CD-ROMs), and magneto-optical disks, Read-Only Memory (ROMs), Random Access Memory (RAM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), magnetic or optical cards, flash memory, or a tangible, machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, the computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).

Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the foregoing specification, a detailed description has been given with reference to specific exemplary embodiments.
It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the disclosure as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. Furthermore, the foregoing use of embodiment and other exemplary language does not necessarily refer to the same embodiment or the same example, but may refer to different and distinct embodiments, as well as potentially the same embodiment.

The following paragraphs provide examples of various ones of the embodiments disclosed herein.

Example 1 is a method for performing link training for one or more input/output links interconnecting an upstream port with a downstream port, the method including storing one or more equalization parameter values for the downstream port in a nonvolatile memory associated with the upstream port; performing an initialization sequence of the one or more input/output links between the upstream port and the downstream port, the initialization sequence comprising link training the one or more input/output links; retrieving the stored equalization parameter values for the downstream port from the nonvolatile memory; writing the stored equalization parameter values for the downstream port to a register associated with the downstream port; and using the equalization parameter values as equalization parameters for the downstream port for operating the one or more links interconnecting the upstream port with the downstream port.

Example 2 may include the subject matter of example 1, and can also include storing one or more equalization parameter values for the upstream port in the nonvolatile memory; retrieving the stored equalization parameter values for the upstream port from the nonvolatile memory; writing the stored equalization parameter values for the upstream port to a register associated with the upstream port; and using the equalization parameter values as equalization parameters for the upstream port for operating the one or more links interconnecting the upstream port with the downstream port.

Example 3 may include the subject matter of example 1, and can also include providing a read request for a register associated with a retimer connected to one or both of the upstream port or the downstream port by one or more links; receiving, from the register associated with the retimer, one or more equalization parameter values for the retimer; and storing the one or more equalization parameter values for the retimer in the nonvolatile memory.

Example 4 may include the subject matter of example 3, and can also include retrieving, from the nonvolatile memory, the stored equalization parameter values for the retimer; writing the stored equalization parameter values for the retimer to the register associated with the retimer; and using the equalization parameter values as equalization parameters for the retimer for operating one or more links interconnecting the retimer to the upstream port or one or more links interconnecting the retimer to the downstream port.

Example 5 may include the subject matter of example 3, wherein providing a read request to the register associated with the retimer comprises providing an ordered set to the register specifying one or more registers from which the retimer is to provide values to the upstream port.
Example 6 may include the subject matter of any of examples 1-3, and can also include determining that a retimer connected to one or both of the upstream port or the downstream port by one or more links comprises a nonvolatile memory local to the retimer; and providing an instruction to the retimer to store one or more equalization parameter values associated with the retimer in the nonvolatile memory local to the retimer.

Example 7 may include the subject matter of example 6, and can also include instructing the retimer to write the equalization parameter values from the nonvolatile memory local to the retimer to a register associated with the retimer; and using the equalization parameter values in the register associated with the retimer as equalization parameters for the retimer for operating one or more links interconnecting the retimer to the upstream port or one or more links interconnecting the retimer to the downstream port.

Example 8 may include the subject matter of any of examples 1-7, wherein the input/output links are compliant with a Peripheral Component Interconnect Express (PCIe) protocol.

Example 9 may include the subject matter of any of examples 1-8, wherein using the equalization parameter values as equalization parameters for the downstream port for operating the one or more links interconnecting the upstream port with the downstream port comprises bypassing an equalization procedure of a link training of the one or more links interconnecting the upstream port with the downstream port.

Example 10 may include the subject matter of any of examples 1-9, wherein using the equalization parameter values as equalization parameters for the downstream port for operating the one or more links interconnecting the upstream port with the downstream port comprises using the equalization parameters in the register associated with the downstream port as an initial value for performing an equalization procedure of a link training of the one or more links interconnecting the upstream port with the downstream port.

Example 11 is a system that includes an upstream port; a downstream port coupled to the upstream port by one or more links compliant with a Peripheral Component Interconnect Express (PCIe) protocol, the downstream port comprising a transmission/reception settings register; a nonvolatile memory associated with the upstream port; the upstream port comprising upstream port logic to read equalization parameter values from the downstream port; store the equalization parameter values in the nonvolatile memory associated with the upstream port; write equalization parameter values from the nonvolatile memory to the transmission/reception settings register of the downstream port; and use the equalization parameter values to operate the one or more links.

Example 12 may include the subject matter of example 11, and can also include a retimer coupled to the upstream port by one or more links compliant with the PCIe protocol, the retimer residing upstream of the downstream port, the retimer comprising a retimer settings register; the upstream port comprising logic to provide a read access request to the retimer settings register; receive equalization parameter values from the retimer settings register; and store the equalization parameter values in the nonvolatile memory.
Example 13 may include the subject matter of example 12, the upstream port comprising logic to provide a write access request to the retimer settings register to write the equalization parameter values to a specified register address. Example 14 may include the subject matter of any of examples 11-13, and can also include a retimer coupled to the upstream port by one or more links compliant with the PCIe protocol, the retimer residing upstream of the downstream port, the retimer comprising a settings register; the upstream port comprising logic to determine that the retimer comprises a nonvolatile memory local to the retimer; and provide an instruction to the retimer to store equalization parameter values to the nonvolatile memory local to the retimer. Example 15 may include the subject matter of example 14, the upstream port further comprising logic to provide a write instruction to the retimer to write the equalization parameter values from the nonvolatile memory local to the retimer to the retimer settings register; and use the equalization parameter values written to the retimer register to operate the one or more links interconnecting the retimer with the upstream port. Example 16 may include the subject matter of any of examples 11-15, wherein the upstream port logic comprises a retimer configuration register logic implemented at least partially in hardware and a retimer configuration data return logic implemented at least partially in hardware. Example 17 may include the subject matter of any of examples 11-16, wherein the transmission/reception settings register of the downstream port comprises a command/status register. Example 18 may include the subject matter of any of examples 11-17, wherein the nonvolatile memory comprises a flash memory coupled to the upstream port. Example 19 may include the subject matter of any of examples 11-18, wherein the upstream port logic is to provide an ordered set to the register specifying one or more registers from which to provide values to the upstream port. Example 20 is a computer program product tangibly embodied on non-transitory computer-readable media, the computer program product comprising code that when executed causes an upstream port to store one or more equalization parameter values for a downstream port in a nonvolatile memory associated with the upstream port, the downstream port coupled to the upstream port by one or more links compliant with a Peripheral Component Interconnect Express (PCIe) protocol; perform an initialization sequence of one or more input/output links between the upstream port and the downstream port, the initialization sequence comprising link training the one or more input/output links; retrieve the stored equalization parameter values for the downstream port from the nonvolatile memory; write the stored equalization parameter values for the downstream port to a register associated with the downstream port; and use the equalization parameter values as equalization parameters for the downstream port for operating the one or more links interconnecting the upstream port with the downstream port.
Example 21 may include the subject matter of example 20, wherein the code, when executed, causes the upstream port to store one or more equalization parameter values for the upstream port in the nonvolatile memory; retrieve the stored equalization parameter values for the upstream port from the nonvolatile memory; write the stored equalization parameter values for the upstream port to a register associated with the upstream port; and use the equalization parameter values as equalization parameters for the upstream port for operating the one or more links interconnecting the upstream port with the downstream port. Example 22 may include the subject matter of example 20, wherein the code, when executed, causes the upstream port to provide a read request to a register associated with a retimer connected to one or both of the upstream port or the downstream port by one or more links; receive, from the register associated with the retimer, one or more equalization parameter values for the retimer; and store the one or more equalization parameter values for the retimer in the nonvolatile memory. Example 23 may include the subject matter of example 22, wherein the code, when executed, causes the upstream port to retrieve, from the nonvolatile memory, the stored equalization parameter values for the retimer; write the stored equalization parameter values for the retimer to the register associated with the retimer; and use the equalization parameter values as equalization parameters for the retimer for operating one or more links interconnecting the retimer to the upstream port or one or more links interconnecting the retimer to the downstream port. Example 24 may include the subject matter of example 20, wherein the code, when executed, causes the upstream port to determine that a retimer connected to one or both of the upstream port or the downstream port by one or more links comprises a nonvolatile memory local to the retimer; and provide an instruction to the retimer to store one or more equalization parameter values associated with the retimer in the nonvolatile memory local to the retimer. Example 25 may include the subject matter of example 24, wherein the code, when executed, causes the upstream port to instruct the retimer to write the equalization parameter values from the nonvolatile memory local to the retimer to a register associated with the retimer; and use the equalization parameter values in the register associated with the retimer as equalization parameters for the retimer for operating one or more links interconnecting the retimer to the upstream port or one or more links interconnecting the retimer to the downstream port. Example 26 is an apparatus that may include a hardware processor coupled to a nonvolatile memory, a peripheral component interconnect express (PCIe) compliant interface coupled to a downstream device, a transmitter and receiver, and logic implemented at least partially in hardware to request from the downstream device one or more equalization parameter values; store or cause to be stored the one or more equalization parameter values in the nonvolatile memory; and instruct the downstream device to use the equalization parameter values for link training purposes. Example 27 may include the subject matter of example 26, wherein the downstream device is a retimer. Example 28 may include the subject matter of example 27, wherein the request comprises an identification of one or more settings register contents.
Example 29 may include the subject matter of example 27, wherein the request comprises an instruction to locally store the equalization parameter values in a nonvolatile memory local to the downstream device. Example 30 may include the subject matter of example 26, wherein the downstream device is a peripheral component. Example 31 is a retimer apparatus that includes a receiver coupled to a transmitter of an upstream port across a link compliant with a Peripheral Component Interconnect Express (PCIe) protocol, a transmitter coupled to a receiver of the upstream port across another PCIe compliant link; a settings register to temporarily store link training and equalization parameter values; and means for processing a request from the upstream port to store the equalization parameter values; and means for storing the equalization parameter values. Example 32 may include the subject matter of example 31, wherein the means for processing the request includes means for reading the settings register values. Example 33 may include the subject matter of any of examples 31 or 32, wherein means for storing the equalization parameter values includes a means for providing the equalization parameter values to the upstream port. Example 34 may include the subject matter of any of examples 31 or 32, wherein means for storing the equalization parameter values includes a means for storing the equalization parameter values in a nonvolatile memory local to the retimer. Example 35 may include the subject matter of example 31, and can also include means for receiving an instruction from the upstream port to write equalization parameter values to the settings register. Example 36 may include the subject matter of example 35, wherein the means for receiving an instruction can include means for receiving the equalization parameter values from the upstream port. Example 37 may include the subject matter of example 35, wherein the means for receiving an instruction can include means for receiving an instruction to write equalization parameter values from a nonvolatile memory local to the retimer to the settings register.
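For illustration purposes only, the following is a minimal C sketch of the flow recited in Examples 1, 9, and 10, in which equalization parameter values for the downstream port are saved to, and later restored from, a nonvolatile memory associated with the upstream port. All structure, register, and helper names (nvm_read, nvm_write, dsp_read_eq, dsp_write_eq, train_link) are hypothetical stand-ins for implementation-specific register and flash accesses; the sketch is not a definitive implementation of the examples.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_LANES 16

/* Hypothetical per-lane equalization settings (e.g., transmitter presets, hints). */
struct eq_params {
    uint8_t tx_preset[NUM_LANES];
    uint8_t rx_hint[NUM_LANES];
    bool    valid;
};

/* Hypothetical platform hooks; a real design would issue PCIe configuration accesses. */
extern int nvm_read(struct eq_params *out);          /* read saved values from flash */
extern int nvm_write(const struct eq_params *in);    /* persist values to flash */
extern int dsp_read_eq(struct eq_params *out);       /* read downstream-port lane registers */
extern int dsp_write_eq(const struct eq_params *in); /* write downstream-port lane registers */
extern int train_link(bool run_full_equalization);   /* link training, optionally with EQ phases */

/* After a successful full equalization, capture the negotiated values (Examples 1-3). */
int save_downstream_eq(void)
{
    struct eq_params p;
    if (dsp_read_eq(&p) != 0)
        return -1;
    p.valid = true;
    return nvm_write(&p);
}

/* On a later initialization, restore the values and either bypass the equalization
 * procedure (Example 9) or use them as the initial value for equalization (Example 10). */
int restore_downstream_eq(bool bypass_eq)
{
    struct eq_params p;

    if (nvm_read(&p) != 0 || !p.valid)
        return train_link(true);        /* no saved values: run normal equalization */

    if (dsp_write_eq(&p) != 0)          /* seed the downstream-port settings register */
        return -1;

    return train_link(!bypass_eq);      /* skip, or start from, the restored parameters */
}
```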
11860813
DETAILED DESCRIPTION Reference will now be made in detail to the various embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings. Furthermore, in the following detailed description of the present disclosure, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. However, it will be understood that the present disclosure may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the present disclosure. Memory Appliance System FIG.1Ais a block diagram of a memory appliance system100A, in accordance with one embodiment of the present disclosure. In one embodiment, the memory appliance system100A provides for higher capacity and higher bandwidth scaling of memory and computation offloading to the memory with the use of programmable memory interfaces between network interface125and SMCs140A-N. In another embodiment, the memory appliance system100A provides for a higher rate of scaling of memory with the use of hardware implemented ASIC memory interfaces. Both the programmable and ASIC implementable memory interfaces on the memory side of an interface are configured to control and perform application specific primitive operations on memory that are typically controlled by a processor on the other side of the interface. Memory appliance system100A is configured to receive high level commands or instructions (e.g., OSI layer 7 protocol or interface command from a client system), and to translate the instructions into lower-level assembly code style primitive operations that are executable by a plurality of SMC controllers. By controlling and performing these primitive operations at the memory, data from each primitive operation need not be delivered back-and-forth over the interface, thereby greatly reducing and/or avoiding the latency buildup normally experienced with increased scaling of memory. The memory appliance100A includes a plurality of smart memory units or Smart Memory Cubes (SMCs)140A-N, each of which includes memory. The term “SMCs” is used throughout this disclosure for ease of reference but is not meant to impart a special definition or suggest that particular functions or aspects are required. As such, memory is distributed throughout the memory appliance100A in the plurality of SMCs140A-N. The memory appliance100A can be configured as a stand-alone unit, or as a scalable unit. That is, in a scalable configuration a plurality of similarly configured memory appliances may be combined to form a non-limited and scalable configuration of memory. In either the stand-alone or scalable configurations, an appliance controller120is coupled to the plurality of SMCs140A-N through a command interface in order to provide configuration information for memory contained within the SMCs140A-N. The appliance controller120may be coupled to a higher level controller that remotely manages one or more memory appliances through an external management network108. For example, operations performed by the appliance controller120alone or in cooperation with a remote manager include discovery of memory, provision of memory (e.g., within a virtual memory device), event logging, remote management, power and/or thermal management, monitoring, and control.
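To make the partitioning concrete, the following C sketch models the appliance of FIG. 1A as simple data structures; the type and field names are hypothetical and are chosen only to mirror the reference numerals used in the description, not to prescribe a particular implementation.

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_SMCS 16   /* illustrative bound; the text describes SMCs 140A-N */

/* One lower-level operation executed by an SMC controller (detailed in later sections). */
struct primitive_op {
    uint32_t opcode;       /* e.g., hash lookup, read, write, compare */
    uint64_t addr;         /* target location within the SMC's memory */
    uint64_t len;
};

/* A Smart Memory Cube: an SMC controller (FPGA/ASIC) plus its memory devices. */
struct smc {
    void *smc_controller;  /* programmable interface that executes primitive_ops (141) */
    void *memory;          /* DIMMs of DRAM and/or non-volatile devices (142) */
};

/* The memory appliance of FIG. 1A. */
struct memory_appliance {
    void       *appliance_controller;  /* configuration, logging, power/thermal (120) */
    void       *host_controller;       /* processor plus optional switch (110) */
    struct smc  smcs[MAX_SMCS];        /* memory distributed across SMCs 140A-N */
    size_t      num_smcs;
};
```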
As shown inFIG.1A, the memory appliance system includes a host controller110that is configured to perform processing and switching operations. More particularly, host controller110manages memory distributed throughout the plurality of SMCs140A-N in the memory appliance system100A. Additionally, the host controller110is operable to be coupled to one or more communication channels with a command interface, wherein the communication channels are coupled over an interface125to memory. Some form of notification (e.g., pointers to memory) or results is also delivered through the interface125back to the host controller110. The host controller110includes a processor112and an optional switch114, in one implementation. The processor112generates and communicates commands over the one or more communication channels, wherein the commands are configured for accessing memory distributed throughout a plurality of SMCs. For example, the processor112is configured to receive high level commands (e.g., from a client side database application implementing Memcached) and translate those commands to a series of primitive commands that are operable within each of the SMCs for accessing and/or operating on data stored in memory. In addition, the switch114is configurable to deliver a corresponding command or series of commands to the proper SMC for accessing and/or performing operations on memory. The processor112in the host controller110is configured to receive and send communications over an external network105. In one example, the external network provides an interface with a client device. In another example, an external network106is configured to provide communications between memory appliances. In one embodiment, the external networks105and106are similarly configured. In one embodiment, the processor112is coupled to a NIC to provide access to the external network. In another embodiment, the processor112is configured as a NPU that includes an internal communication interface for communicating with the external network. In still another embodiment, the processor112is configured as an FPGA. Various configurations are supported for the host controller. For illustration purposes only, as shown inFIG.1A, a first configuration131includes a CPU (e.g., an Intel XEON® processor); a second configuration132includes an NPU configured for performing processing operations, and a switch for performing switching operations; a third configuration133includes an FPGA configured for performing processing operations, and a switch for performing switching operations; and a fourth configuration134includes an NPU configured for performing processing operations, and an FPGA configured for performing switching operations. Other configurations are supported, such as an Intel XEON® processor and a switch for performing switching operations. A specific configuration including an NPU as a host controller is further described inFIG.1B, in accordance with one embodiment of the present disclosure. Specifically, the memory appliance100B includes a plurality of SMCs180A-N, each of which includes memory. An appliance controller165is coupled to the plurality of SMCs180A-N through an interface that is a PCIe switch150to provide configuration information to the memory. In one implementation, the appliance controller165is coupled to a higher level controller through the external management network170for remote management.
In addition, the memory appliance system100B includes a host controller that is an NPU160, and is configured for managing memory distributed throughout the plurality of SMCs180A-N. Each of the SMCs includes a programmable SMC controller (e.g., FPGA)181and memory182. Communication between the NPU160and the plurality of SMCs180A-N is achieved through the PCIe switch150. As such, commands generated by the NPU160and configured to access and operate on memory in the SMCs180A-N are delivered through the PCIe switch150for operation by the corresponding programmable SMC controller. Some form of notification or results is also delivered through the PCIe switch150back to the NPU160. Returning toFIG.1A, as previously presented, the processor112is configured to manage memory throughout the plurality of SMCs in the memory appliance system when performing host controller duties. For example, the processor112in the host controller110is configured to provide memory services, such as, load balancing, quality of service, connection management, and traffic routing. Further, in one embodiment, the host controller110manages memory in the memory appliance system as a virtual memory system. The plurality of SMCs140A-N is coupled to the processor112through one or more communication channels established through a command interface125, also referred to as the SMC interface125. In that manner, commands generated by or passed through the processor112are delivered to the plurality of SMCs140A-N through the command interface125. In one embodiment, the communication channels in the command interface125comprise a network interface for providing communication between the host controller110and the plurality of SMCs140A-N. That is, communication between the processor and the plurality of SMCs is accomplished using networking protocols. For instance, the network interface may be configured using one of the following protocols: a TCP; a UDP; Ethernet; Infiniband; Fiber Channel, and other networking protocols. In another embodiment, the communication channels in the command interface125comprise a direct interface. That is, the processor112and each of the plurality of SMCs communicate over a point-to-point communication channel or link between two ports. For example, the link may establish a point-to-point communication using the PCIe interface, or one of its derivatives, that is a high-speed serial computer expansion bus standard. Each SMC includes a brick or unit controller (also referred to as the SMC controller) that is hardwired or programmable to execute application specific commands and/or operations generated by an external client and/or application. For illustration, SMC140A, including its components, is representative of each of the plurality of SMCs140A-N. For example, SMC controller141is configured to perform data operations on the content that is included in memory142. In one embodiment, the data operations are performed transparently to the command interface and/or requesting client (communicatively coupled through the external network105). That is, once a high level command or instruction is delivered over the command interface from the requesting client, control over execution of the primitive data operations based on the high level command is handed over to the SMC controller141. For example, data operations include search, sort, and other custom accelerations.
In one embodiment, the SMC controller141in SMC140A is configured as an FPGA that is pre-programmed with the proper functionality to handle a requested command. In another embodiment, the FPGA is programmed on-the-fly depending on the request made on the memory142contained within SMC140A. For example, the FPGA is configured to generate and compile primitive operations when receiving one or more high level commands, wherein the primitive operations are executable by the FPGA. In another embodiment, the FPGA is configured to access configuration files for programming with the proper functionality. In still another embodiment, the SMC controller141is implemented through an ASIC device providing application specific operations. In embodiments, the SMC controller141is configured to respond to primitive commands delivered over the command/SMC interface125to access and/or perform operations on content stored in memory142. More specifically, processor112is configured to receive high level commands over the external network105(e.g., from a client application) and translate each of the commands to one or more primitive operations. The primitive operations are delivered over the command/SMC interface125for handling by the SMC controller141. In that manner, by handling these primitive operations at the memory, the step by step control of the primitive operations associated with a particular high level command need not be controlled by processor112, thereby reducing and/or avoiding any latency due to increased scaling of memory in the plurality of SMCs140A-N. For example, the plurality of memory devices in memory appliance100A may be configured as a Memcached memory system that is a general-purpose distributed memory caching system. As such, the primitive commands are designed to implement access and manipulation of data within the Memcached memory system. In particular, access to memory in the Memcached memory system is performed using a key value pair or key value functions as implemented through the primitive operations. For example, using one or more primitive operations, a key within a command is hashed using the appropriate algorithm in order to determine proper addressing within the memory. Typical key value commands/functions include "GET", "SET", and "DELETE" operations that are each further translated into one or more primitive operations handled by the corresponding SMC. Further, in one embodiment the SMC controller141in SMC140A is configured to respond to high level commands delivered over the command/SMC interface125to access and/or perform operations on content stored in memory142. That is, the SMC controller141can be configured to translate the high level commands into a format suitable for use within the SMC controller141when interfacing with memory142. That is, instead of performing translation at processor112, the translation of high level commands into primitive operations suitable for use within the SMC controller141is performed locally. In one embodiment, SMC controller141is configured to provide custom acceleration of data operations. Some examples of custom accelerations include, but are not limited to, error recovery, data manipulation, and data compression. For example, SMC controller141may be configured to handle one or more application specific operations (e.g., a Memcached search operation). In one embodiment, SMC controller141is programmable such as through an FPGA to handle a specific operation.
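As one illustrative sketch of this translation, the following C code expands a key-value "GET" into a short sequence of primitive operations destined for an SMC controller. The opcode set, the hash function, and the enqueue_to_smc helper are hypothetical, since the disclosure does not fix a particular primitive encoding; a real Memcached-style system would use its own hashing algorithm.

```c
#include <stdint.h>

/* Hypothetical primitive opcodes executed by the SMC controller. */
enum prim_opcode { PRIM_HASH_LOOKUP, PRIM_KEY_COMPARE, PRIM_READ_VALUE, PRIM_UPDATE_META };

struct primitive_op {
    enum prim_opcode op;
    uint64_t bucket;                 /* hash bucket / address within SMC memory */
    const void *key;
    uint32_t key_len;
};

/* Hypothetical hash used to determine proper addressing within the memory. */
static uint64_t hash_key(const char *key, uint32_t len)
{
    uint64_t h = 1469598103934665603ULL;               /* FNV-1a style constants */
    for (uint32_t i = 0; i < len; i++)
        h = (h ^ (uint8_t)key[i]) * 1099511628211ULL;
    return h;
}

/* Hypothetical delivery over the command/SMC interface (e.g., interface 125). */
extern void enqueue_to_smc(int smc_id, const struct primitive_op *ops, int n);

/* Translate a high-level "GET key" into primitives: hash the key to find the bucket,
 * verify the stored key matches, read the value, and update key-value metadata. */
void translate_get(int smc_id, const char *key, uint32_t key_len, uint64_t num_buckets)
{
    if (num_buckets == 0)
        return;
    uint64_t bucket = hash_key(key, key_len) % num_buckets;
    struct primitive_op ops[] = {
        { PRIM_HASH_LOOKUP, bucket, key, key_len },
        { PRIM_KEY_COMPARE, bucket, key, key_len },
        { PRIM_READ_VALUE,  bucket, NULL, 0 },
        { PRIM_UPDATE_META, bucket, NULL, 0 },          /* e.g., LRU / hit statistics */
    };
    enqueue_to_smc(smc_id, ops, 4);
}
```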
In another embodiment, SMC controller141is programmed on-the-fly to handle an incoming operation. In still another embodiment, SMC controller is implemented through an ASIC that is configured to handle one or more application specific operations. Further, the SMC controller141may include an additional processor for handling less time sensitive functions, such as, management and control of the memory devices. For instance, instructions coming from the appliance controller120are handled by this additional processor (e.g., SMC micro-controller described inFIG.4). In addition, each SMC includes a plurality of memory devices. For example, SMC140A includes memory142. In one embodiment, the plurality of memory devices in a corresponding SMC includes memory devices packaged in a DIMM, registered memory module (RDIMM), and/or load reduced memory module (LRDIMM). In one further embodiment, the memory devices packaged in a corresponding DIMM include DRAM memory devices. In another embodiment, the memory devices packaged in a corresponding DIMM include non-volatile read/write memory (e.g., FLASH). In still another embodiment, the memory devices packaged in a corresponding DIMM include non-volatile memory devices (e.g., FLASH, EEPROM). In one embodiment, each SMC is configured with multiple channels (e.g., four), each of which is suitable for handling multiple DIMMs (e.g., six). In an example, SMC140A is able to handle twenty-four or more DIMMs given four channels and six DIMMs per channel. As demonstrated, embodiments of the present disclosure provide for a larger amount of DIMMs per SMC for increased scalability. FIG.2is a block diagram of a memory system200including a plurality of memory appliances260A-N, in accordance with one embodiment of the present disclosure. The plurality of memory appliances260A-N provide access to internal memory devices. That is, each of the memory appliances260A-N provides access to corresponding memory. In particular, the plurality of memory appliances260A-N includes a first memory appliance system (e.g.,260A) and at least one other, or second, memory appliance system (e.g.,260B). Both memory appliance systems are similarly configured, such as, that described inFIGS.1A-B. For example, each of the memory appliance systems include a host controller for managing data across a corresponding plurality of SMCs. For illustration, memory appliance260A provides access to memory262A through host controller261A, wherein memory262A includes one or more SMCs; memory appliance260B provides access to memory262B through host controller261B, wherein memory262B includes one or more SMCs; and memory appliance260N provides access to memory262N through host controller261N, wherein memory262N includes one or more SMCs. In one embodiment, the memory devices are configured as virtual memory, wherein distributed memory devices are accessible by each of the host controllers of the plurality of memory appliances. In one embodiment, the host controllers of the plurality of memory appliances260A-N are in communication to facilitate a distributed memory system200. For example, an external communication interface is configured to provide communication between host controllers within the plurality of memory appliances260A-N to provide access to memory virtualized across one or more memory appliance systems.
The communication interface can include a fat pipe configured as a higher speed and higher bandwidth communications channel for communicating data, and a skinny pipe as a lower speed and lower bandwidth communications channel configured for communicating instructions/control. FIG.3is an illustration of various implementations of a memory appliance command interface within a memory appliance system310, wherein the interface is established to facilitate communication between a host controller and one or more SMCs within a memory appliance310, in accordance with one embodiment of the present disclosure. These examples are provided for illustration only as various other implementations of a memory appliance interface are supported. In the first example, the memory appliance system310is implemented as a network based memory appliance system310A. For instance, the memory appliance system310A is supported by a network interface, and includes a NPU321that is coupled to one or more SMCs (e.g., four as shown inFIG.3), wherein each SMC includes a programmable FPGA322and memory323, as previously described. For example, NPU321is coupled to a host controller via a network interface in order to pass commands and data. That is, the network interface relies on network addresses identifying the network nodes of the host controller and the network based memory appliance system310A to deliver communications. In the second example, the memory appliance system310is implemented as a PCIe memory appliance system310B, wherein the PCIe provides a direct interface between the PCIe switch331of the host controller and the one or more SMCs (e.g., four as shown inFIG.3). Each of the SMCs includes a programmable FPGA332and memory333. For example, PCIe switch331is coupled to a host controller via a direct interface (e.g., PCIe) in order to pass commands and data. PCIe devices communicate via a point-to-point connection or interconnect, wherein a direct channel is established between two PCIe ports of a computing device allowing both to send/receive ordinary PCIe requests and interrupts. In the third example, the memory appliance system310is implemented as a PCIe fabric memory appliance system310C. For instance, the memory appliance system310C is supported by a PCIe fabric providing a direct interface between the PCIe switch and fabric controller341and one or more SMCs (e.g., four as shown inFIG.3). Each of the SMCs in the memory appliance system310C includes an FPGA342and memory343. For example, a PCIe-based fabric enables straightforward sharing of I/O devices at low cost while utilizing a low power envelope. Direct coupling of the host controller to the PCIe fabric, and then to memory does not require other intermediary devices, as in an Infiniband network. For example, the PCIe fabric controller341is coupled to a host controller via a direct interface through a PCIe-based network fabric in order to pass commands and data. The PCIe based fabric is used as a unified fabric to replace traditional communication interconnects (e.g., replace small Infiniband clusters) to achieve high-speed clustering. FIG.4is a flow diagram400illustrating steps in a method for an SMC power up sequence, in accordance with one embodiment of the present disclosure. Diagram400is described within the context of a memory controller including an SMC having an SMC controller implementable as an FPGA communicating over a PCIe interface with a host controller, though other SMC configurations are contemplated and supported.
In still another embodiment, flow diagram400illustrates a computer implemented method for implementing an SMC power up sequence within a corresponding SMC of a memory appliance. In another embodiment, flow diagram400is implemented within a computer system including a processor and memory coupled to the processor and having stored therein instructions that, if executed by the computer system, cause the system to execute a method for implementing an SMC power up sequence within a corresponding SMC of a memory appliance. In still another embodiment, instructions for performing a method as outlined in flow diagram400are stored on a non-transitory computer-readable storage medium having computer-executable instructions for implementing an SMC power up sequence within a corresponding SMC of a memory appliance. The method outlined in flow diagram400is implementable by one or more components of the computer system1700, storage system1800, and memory appliance systems100A-B ofFIGS.1A-B. Flow chart400describes operations which can be implemented by an SMC including an FPGA and separate microcontroller, wherein the FPGA acts as a memory controller and the microcontroller performs general management. As such, in some embodiments, the microcontroller can perform the power-up sequence illustrated in flow chart400, while in other embodiments, the microcontroller is implemented within the FPGA, and the FPGA can perform the power-up sequence illustrated in flow chart400. At410, the method includes booting up the SMC controller from non-volatile memory (e.g., FLASH). At420, the method includes having the SMC controller power up all the FPGA and memory power supplies in a prescribed sequence. At430, the method includes having the SMC controller read the DIMM configuration for the attached memory. At440, the SMC controller loads the PCIe and self-test configuration to the FPGA and initiates a self-test sequence. At450, the SMC controller responds to the host controller PCIe discovery, while simultaneously checking the DIMM memories. At460, the SMC controller loads a default operational configuration to the FPGA if the FPGA passes the test. In another implementation, the host controller is configured to load the operational configuration via the PCIe interface. At470, the SMC controller reports the SMC, brick or unit identifier, configuration and initialization status to the host controller. At480, the SMC controller executes system management commands, monitors sensors, and handles critical system errors. For example, the SMC controller executes system management commands received from the host controller (e.g., loads custom FPGA configuration, updates its own and FPGA boot flash, enters/exits power stand-by or power off, sets clock, etc.). Also, the SMC controller monitors all sensors (e.g., temperature, power supplies, etc.), and FPGA status periodically, and reports it back to the host controller. In another case, the SMC controller handles critical system errors (e.g., power brown-out, overheating, hardware failures, etc.). Application Aware Acceleration of Programmable Memory Interfaces in a Memory Appliance System In one embodiment, the memory appliance100A ofFIG.1Aincludes a plurality of programmable SMCs, wherein a host controller communicates with the programmable SMCs to control management of data across the memory appliance100A. Each of the SMCs includes a programmable interface or SMC controller for independently controlling one or more groupings of memory devices within that SMC.
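Returning briefly to flow diagram 400, the power up sequence of blocks 410 through 480 can be summarized in the following C sketch; every function below is a hypothetical stand-in for the corresponding board-level action, and error handling is reduced to early returns. It is offered only as one possible reading of the sequence, not as a definitive implementation.

```c
#include <stdbool.h>

/* Hypothetical hooks for the actions named in blocks 410-480 of flow diagram 400. */
extern void boot_from_flash(void);                    /* 410 */
extern void power_up_fpga_and_memory_rails(void);     /* 420: prescribed supply sequence */
extern void read_dimm_configuration(void);            /* 430: read attached DIMM configuration */
extern bool load_fpga_image(const char *name);        /* 440/460: PCIe+self-test, then operational */
extern bool run_self_test(void);                      /* 440 */
extern void answer_pcie_discovery(void);              /* 450 */
extern bool check_dimm_memories(void);                /* 450 */
extern void report_status_to_host(bool ok);           /* 470: identifier, configuration, status */
extern void service_host_commands_and_sensors(void);  /* 480: management, sensors, errors */

void smc_power_up_sequence(void)
{
    boot_from_flash();                                 /* 410 */
    power_up_fpga_and_memory_rails();                  /* 420 */
    read_dimm_configuration();                         /* 430 */

    if (!load_fpga_image("pcie_and_self_test")) {      /* 440 */
        report_status_to_host(false);
        return;
    }
    bool fpga_ok = run_self_test();                    /* 440 */

    answer_pcie_discovery();                           /* 450: while DIMMs are checked */
    bool dimms_ok = check_dimm_memories();

    if (fpga_ok)                                       /* 460: default operational image; the   */
        load_fpga_image("default_operational");        /* host may instead load it over PCIe    */

    report_status_to_host(fpga_ok && dimms_ok);        /* 470 */

    for (;;)                                           /* 480 */
        service_host_commands_and_sensors();
}
```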
For example, in SMC140A, programmable SMC controller141is configured to perform one of a plurality of predefined or on-the-fly, compiled functionalities for managing data within memory142. In one embodiment, each SMC controller is configured to provide custom acceleration of data operations performed on corresponding memory or memories (e.g., memory device or devices). For example, SMC controller141may be configured to handle one or more application specific operations (e.g., search, get, store, and/or delete operations used for accessing memory using key-value functions in a Memcached memory system). In another example, a memory appliance including one or more SMCs is configured as a fast and large capacity disk, which can be used as a burst buffer in high performance applications, or as a fast swap space for virtual machines/operating systems, or as an intermediate storage used in a Map Reduce framework. In one embodiment, SMC controller141is programmable such as through an FPGA to handle a specific operation. In another embodiment, SMC controller141is programmed on-the-fly to handle an incoming operation. In still another embodiment, SMC controller is implemented through an ASIC that is configured to handle one or more application specific operations. Some examples of programmable functionalities are listed as follows, but are not limited to: get, store, delete, minimum, finding a maximum, performing a summation, performing a table join operation, finding and replacing, moving data, counting, error recovery, data manipulation, data compression, and other data manipulation operations. In another embodiment, the function that is programmed includes a Hadoop operation within the open-source software framework (e.g., Apache Hadoop) that is configured for enterprise storage and/or large-scale processing of data sets. For example, the Hadoop operations include a map reduce operation. In one embodiment, the function that is programmed for acceleration within the SMC controller141includes a DPA operation configured for protecting bit streams entering or exiting a corresponding SMC140A. Specifically, DPA is performed to analyze the power signature of SMC140A to extract any keys within a bit stream. DPA countermeasures can then be performed to secure SMC140A from releasing information through analysis of power consumption by altering the power signature. In one embodiment, a counter DPA module is located within SMC140A and is configured for performing DPA countermeasures on the SMC controller141. For instance, control messages are delivered from the SMC controller141over a control channel through a control/network interface. These control messages may include a key (e.g., used within a Memcached memory system). Encryption may be performed to generate an encrypted bit stream that includes the key. DPA countermeasures are taken on the encrypted bit stream at the counter DPA module in order to prevent extraction of other encryption keys, in one embodiment. In another embodiment, DPA countermeasures are taken within the SMC controller141to mask its power signature when executing commands in the encrypted bit stream. In still another embodiment, a counter DPA module is located at the host controller to perform DPA at the host controller110level. In still another embodiment, the function that is programmed includes a recovery operation to recover from failures within the memory appliance (e.g., DIMM, SMC, bit, etc.).
In one embodiment, the programmability of a corresponding SMC controller, such as, SMC controller141in SMC140A, may be performed through the selection of one or more configuration files in a library. The configuration files are used to reconfigure the corresponding programmable interface of programmable SMC controller141to perform one of a plurality of predefined or on-the-fly generated functionalities. In one embodiment, the host controller110accesses one of the configuration files in order to reconfigure programmable SMC memory controller141in association with a command directed to the SMC140A. In another embodiment, SMC memory controller141accesses one of the configuration files in order to reconfigure itself in association with a command directed to the programmable SMC140A. In another embodiment, the programmability of a particular SMC controller, such as, SMC controller141of SMC140A, may be performed on-the-fly through the compilation of acceleration functions to generate a configuration file. A configuration file is used to reconfigure the corresponding programmable interface of programmable SMC controller141to perform one of a plurality of predefined or on-the-fly generated functionalities. That is, programmable SMC controller141is reconfigured on-the-fly in response to a command directed to memory associated with the programmable SMC140A that is delivered from the host controller110. FIG.5is a flow diagram illustrating a method for a memory appliance implementing application aware acceleration within a corresponding SMC, in accordance with one embodiment of the present disclosure. In still another embodiment, flow diagram500illustrates a computer implemented method for implementing application aware acceleration within a corresponding SMC of a memory appliance. In another embodiment, flow diagram500is implemented within a computer system including a processor and memory coupled to the processor and having stored therein instructions that, if executed by the computer system, cause the system to execute a method for implementing application aware acceleration within a corresponding SMC of a memory appliance. In still another embodiment, instructions for performing a method as outlined in flow diagram500are stored on a non-transitory computer-readable storage medium having computer-executable instructions for implementing application aware acceleration within a corresponding SMC of a memory appliance. The method outlined in flow diagram500is implementable by one or more components of the computer system1700, storage system1800, and memory appliance systems100A-B ofFIGS.17,18, and1A-B, respectively. At510, the method includes receiving a command at a host controller of a memory appliance system. As previously described in relation toFIG.1A, the host controller manages data across one or more of a plurality of SMCs communicatively coupled together through a network. Each SMC comprises memory (e.g., one or more memory devices packaged into one or more DIMMs) and a programmable SMC memory controller for managing data within the memory. The command is directed to a first programmable SMC memory controller. At520, the method includes determining a function type corresponding to the command. The function type is determined on-the-fly at the host controller, in one embodiment. For example, the client application sends the function type when also sending the command and/or request.
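A minimal C sketch of the configuration-file mechanism just described is shown below; the library layout, the lookup by function type, and the reprogramming entry point are hypothetical, and a real system might instead compile a bitstream on the fly as the description notes.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical entry in a library of FPGA configuration (bit) files, keyed by function type. */
struct config_entry {
    const char *function_type;   /* e.g., "get", "sort", "search", "compress" */
    const void *bitstream;
    size_t      bitstream_len;
};

extern const struct config_entry config_library[];
extern const size_t config_library_len;

/* Hypothetical hook that delivers a bitstream to the SMC controller and reprograms it. */
extern int smc_load_bitstream(int smc_id, const void *bits, size_t len);

/* Host side: look up the configuration file matching the command's function type and
 * reprogram the target SMC controller before the command is executed. */
int reconfigure_smc_for(int smc_id, const char *function_type)
{
    for (size_t i = 0; i < config_library_len; i++) {
        if (strcmp(config_library[i].function_type, function_type) == 0)
            return smc_load_bitstream(smc_id,
                                      config_library[i].bitstream,
                                      config_library[i].bitstream_len);
    }
    return -1;  /* not in the library: would fall back to on-the-fly compilation */
}
```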
In that manner, the host controller can forward the information to the corresponding SMC, or can retrieve the proper configuration file for delivery to the SMC in association with the command and/or request. In one embodiment, the function type is associated with a first configuration file, wherein the first configuration file is used to reconfigure the first programmable SMC memory controller in order to execute the command and/or request. In one embodiment, the configuration file is a bit file. In another embodiment, the configuration file is compiled from the command and/or request by the host controller, and then delivered to the programmable SMC controller. Once the function type is known, the method includes accessing the first configuration file from a library of configuration files. As such, the first configuration file can be used to reconfigure, or reprogram, or preprogram the first programmable SMC memory controller in association with the command and/or request. In another embodiment, the first configuration file is compiled from an application including the command. That is, the first configuration file is generated on-the-fly. The first configuration file is then provided to the first programmable SMC controller. As such, the method includes receiving the first configuration file at the first programmable SMC memory controller. The method also includes loading the first configuration file at the first programmable SMC memory controller, and reprogramming the first programmable SMC memory controller using the first configuration file. As a result, the first programmable SMC memory controller is configured to and executes the command. Some examples of programmable functions include, but are not limited to, the following: get, store, delete, minimum, finding a maximum, performing a summation, performing a table join operation, finding and replacing, counting, a DPA operation configured for protecting bit streams entering or exiting a corresponding SMC, an authentication operation configured to authenticate components of a corresponding SMC against authorized signatures, and a recovery operation. Reliability, Availability, and Serviceability (RAS) within a Memory Appliance Including Programmable Memory Interfaces RAS features are included within a memory appliance system to maintain throughput with acceptable latencies, and to address memory errors without unduly disrupting access to memory. Reliability gives an indication of how long a memory system will give correct data outputs, and utilizes detection of errors, and correction of those errors. Availability gives the probability that a memory system is available at any given point in time. Serviceability or maintainability gives an indication as to how simple or complicated a memory system's recovery process is, and provides a clue as to the performance of reliability and availability of the memory system. The RAS features are implemented within the memory appliance systems100A-B ofFIGS.1A-B, in some embodiments. A memory appliance system (e.g., memory appliance system100A ofFIG.1) comprises a plurality of SMCs, wherein each SMC includes a programmable SMC controller for independently controlling one or more groupings of memory within that SMC. A host controller communicates with the programmable SMC controllers combined to control management of data across the memory appliance system.
Each SMC comprises memory and a programmable SMC controller, wherein the SMC controller includes a programmable interface for managing data within the memory. In particular, the programmable interface is used to accelerate functions performed on a corresponding memory or memories, as previously described. Redundancy of data within an SMC is provided using memory in other SMCs. Further, during recovery of a particular SMC, the programmable FPGA within the SMC is reconfigured to perform recovery functionality, and in particular communicates with the other SMCs to retrieve backup data in order to reconstruct the data files in the crashed SMC. In particular, the memory appliance includes first memory in a first SMC for storing first data. Redundancy of the first data is located on one or more remaining SMCs in the memory appliance, or across one or more memory appliances. In particular, the memory appliance includes second memory that is included in the one or more remaining SMCs for storing second data, wherein the second data comprises redundant data of the first data. The remaining SMCs may be located within one or more memory appliances. In one embodiment, the second data comprises a mirrored copy of the first data. That is, a mirrored copy of memory groupings in one SMC is mirrored within another memory grouping in another SMC. As an example of mirroring, two SMC controllers are configured to execute the same instructions (e.g., nearly simultaneously). Mirroring may occur in any grouping of data (e.g., RANK, DIMM, etc.). In other embodiments, explicit copying or moving of data is performed for data redundancy. In one implementation, the copying or movement of data is performed via programmed I/O. In another implementation, the copying or movement of data is performed directly via a DMA channel. As examples, a RANK of memory within a DIMM may be copied or moved to another RANK. Also, a DIMM may be copied or moved to another DIMM. Other groupings of data are supported. In another embodiment, the redundant second data is striped across one or more remaining SMCs, wherein the SMCs are included within a memory appliance, or are included across one or more memory appliances. As such, data is interleaved across the one or more remaining SMCs, thereby providing increased prevention of data loss, and quicker access to data. In one embodiment, the redundant data is managed between host controllers at the memory appliance level. For instance, a plurality of memory appliances includes a first memory appliance system and another, or second, memory appliance system. Both memory appliance systems are similarly configured, such as, that described inFIGS.1A-B. Each of the memory appliance systems include a host controller for managing data across a corresponding plurality of SMCs. Further, an external communication interface is configured to provide communication between host controllers of the plurality of memory appliances to provide access to memory virtualized across one or more memory appliance systems. The external communication interface also provides redundancy of data and recovery of data. For example, the communication interface includes a fat pipe as a higher speed and higher bandwidth communications channel configured for communicating data, and a skinny pipe as a lower speed and lower bandwidth communications channel configured for communicating instructions/control. In still another embodiment, redundant data is managed at the programmable SMC controller level.
That is, SMC controllers communicate with each other to manage storage of redundant data, and recovery of redundant data. That is, a communication interface is established to provide communication between a plurality of SMCs in order to provide redundancy and recovery of data. As previously described, each programmable SMC controller includes a programmable interface for managing data within corresponding memory. In particular, the programmable interface is used to accelerate functions performed on corresponding memory or memories (e.g., memory device or devices). In one embodiment, the programmable interface is configured to perform reconstruction of data within the corresponding memory. In another embodiment, an SMC is configured to provide for internal redundancy to protect against catastrophic failure. For example, memory within an SMC platform includes DRAM memory devices for storing data, and non-volatile memory devices (e.g., FLASH, EEPROM) configured for backing-up the DRAM memory devices during failover. For example, the density of FLASH devices can be typically five to ten times that of DRAM memory devices. In this example, one-tenth of the number of DRAM devices, in the form of FLASH devices, can be used to back-up a number of DRAM devices. The backing-up may occur periodically, or upon failure, wherein upon failure, the data from DRAM is immediately stored in the FLASH devices. In another embodiment, for serviceability, an SMC is a field replaceable item, and designed to be hot-swap capable. In another embodiment, the SMC is configured to provide another way for internal redundancy to protect against catastrophic failure. Specifically, a back-up power source (e.g., battery, capacitors, ultra-capacitors, super-capacitors, electrical double-layer capacitors, pseudo-capacitors, etc.) is provided to supply back-up power to the memory devices. In that manner, data is preserved until more permanent back-up of the data is performed. For example, the battery back-up provides power to memory devices packaged in a DIMM of DRAMs of a corresponding SMC. The DRAMs are powered to enable further copying of the data to more permanent devices, such as, FLASH memory devices, previously described. Reducing Latency within a Memory Appliance A reduction in latency is required for acceptable performance of a memory controller. Latency may be incurred throughout the delivery of high level commands, and the returned results. In particular, the communication process includes receiving high level commands from a client, delivering the high level commands from a host controller to one or more SMCs executing related primitive commands over an SMC interface, and returning results back to the client device. The reduction in latency is achieved within the memory appliance systems100A-B ofFIGS.1A-B, in some embodiments. Embodiments of the present disclosure provide for improved memory density and power efficiency for network-attached DRAMs in a distributed memory environment, such as memory appliance systems100A-B ofFIGS.1A-B. Specifically, embodiments of the present disclosure reduce the amount of time a host controller/processor110/112handles data movement and I/O through translating high level commands to primitive operations that are handled and controlled by corresponding SMCs140A-N.
As the memory size increases for each SMC, an increased reduction of processor I/O is realized because network latency has a disproportionate effect on payloads inversely proportional to their size in embodiments of the present disclosure. More succinctly, the larger the data, the less it is actually impacted by latency. This is because the cost of round-trip-times is amortized across more data as payload sizes grow. Embodiments of the present disclosure optimize data movement between SMC memory and the outbound NIC (such as NIC665inFIG.6A). Using Facebook as the canonical Memcached use case, it is expected that greater than ninety percent of all requests will be UDP-based "GET" requests. Research on Facebook's use of Memcached shows that greater than ninety percent of objects are five-hundred bytes or less in size with hit rates in the cache approaching ninety-eight percent. For example, embodiments of the present disclosure optimize data movement between the SMC memory and the outbound NIC when processing the GET requests, while limiting host controller involvement. Specifically, UDP response packets are prepared by the FPGA (of the SMC controllers140A-N), while the NIC receives DMA packets directly from device memory without using the host controller/processor. In general, after the FPGA initiates the transfer of data over a DMA channel in cooperation with the host controller/processor (e.g., the host controller is notified of the result from the command and/or request), the DMA controller handles the transfer of data from device memory to the NIC without involving the host controller. For instance, the DMA controller is configured to generate an interrupt that notifies the FPGA when the transfer is complete. This eliminates unnecessary copying from device memory to system memory prior to transmitting a packet because the involvement of the host controller is limited. In one embodiment, a memory appliance system comprises a plurality of SMCs, wherein each SMC includes a programmable SMC controller for independently controlling one or more groupings of memory within that SMC. A host controller communicates with the programmable SMC controllers combined to control management of data across the memory appliance system. Each SMC comprises memory and a corresponding programmable SMC controller, wherein the programmable SMC controller comprises a programmable interface for managing data within the memory. The programmable interface is used to accelerate functions performed on a corresponding memory or memories. In one embodiment, the host controller pushes a command to a corresponding SMC over an interconnect (e.g., network or direct interface) in the form of one or more primitive operations. In another embodiment, the host controller pushes a pointer to a command and its corresponding primitive operations that are stored in memory to a corresponding SMC. The corresponding SMC retrieves the command and/or the primitive operations from memory using the pointer. In still another embodiment, a corresponding SMC polls a host queue of a host controller to discover commands directed to that corresponding SMC. Upon discovery, the command and/or primitive operations are pulled and delivered to the corresponding SMC. Thereafter, the corresponding SMC handles the execution of the command and/or primitive operations. In one embodiment, a pointer to the data contained within memory is returned.
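A minimal C sketch of the device-memory-to-NIC hand-off described above follows; the descriptor layout, submission helper, and completion callback are hypothetical and only illustrate the pattern in which the FPGA prepares a UDP response in place and the DMA controller moves it to the NIC without a host-side copy.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical DMA descriptor: source in SMC device memory, destination a NIC TX buffer. */
struct dma_desc {
    uint64_t src_device_addr;   /* UDP response prepared in place by the FPGA */
    uint64_t dst_nic_addr;
    uint32_t length;
    bool     done;
};

extern void dma_submit(struct dma_desc *d);           /* starts the transfer; no host copy */
extern void notify_host_of_result(uint64_t token);    /* host only learns the outcome */

/* FPGA-side flow for a cache hit on a GET request: point the descriptor at the value
 * already residing in device memory and let the DMA controller move it to the NIC. */
void send_get_response(uint64_t value_addr, uint32_t value_len,
                       uint64_t nic_tx_slot, uint64_t token)
{
    static struct dma_desc d;
    d.src_device_addr = value_addr;
    d.dst_nic_addr    = nic_tx_slot;
    d.length          = value_len;
    d.done            = false;

    dma_submit(&d);                 /* DMA controller performs the device-memory-to-NIC copy */
    notify_host_of_result(token);   /* host is notified of the result but does not touch data */
}

/* Completion interrupt from the DMA controller tells the FPGA the transfer finished. */
void dma_complete_irq(struct dma_desc *d)
{
    d->done = true;                 /* descriptor and buffer may now be reused */
}
```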
Latency is reduced within the SMC by shrinking the data path between the device memory and the NIC supporting one or more SMCs within a memory appliance that is configured for external communication. The NIC is configured to provide external communication for the one or more SMCs. Specifically, latency is reduced by establishing communication directly between memory of a corresponding SMC and the network interface. For example, DMA is used to allow the NIC direct access to memory within a corresponding SMC (e.g., a pointer) to enable the delivery of data across the external network. In particular, communication is established directly between memory of a corresponding SMC and the NIC via a DMA controller for purposes of transferring data between the memory and the NIC over a DMA channel. For example, a DMA register/stack operates independently of the host controller's command stack to hand off DMA addressing thereby providing direct access to memory from the NIC, and vice versa. High Level Instructions Translated to Lower-Level Assembly Code Style Primitives within a Memory Appliance Architecture Embodiments of the present disclosure provide for a memory appliance that includes a processor and a plurality of SMCs, wherein each SMC includes a plurality of memory devices, and an SMC controller for independently controlling the management of one or more groupings of memory within a plurality of memory devices of a corresponding SMC. The memory appliance is configured to receive high level instructions from a client system (e.g., OSI layer 7 protocol or interface commands), and to translate the instructions into lower-level assembly code style primitive operations that are executable by the plurality of SMC controllers. That is, high-level, application layer commands are translated to primitive operations (e.g., low level operations corresponding to machine code instructions) for execution by the corresponding computing device (e.g., SMC controller). The methods and systems disclosing the translation of high level instructions to lower-level primitive operations in embodiments of the present disclosure are implementable within the systems and flow diagrams described inFIGS.1-5. For example, the memory appliance systems100A-B ofFIGS.1A-Bare configured to receive high level instructions from a client system and translate those instructions into lower-level primitive operations that are formatted for execution by a plurality of SMC controllers each configured to manage corresponding memory devices. FIG.6Ais a block diagram of a memory system600A including a client system615that is communicatively coupled with a memory appliance660, wherein the memory appliance660is configured to translate high level instructions into lower-level assembly code style primitive operations that are executable by a plurality of SMC controllers implemented as FPGAs and/or ASICs, in accordance with one embodiment of the present disclosure. The memory appliance system provides for a higher capacity and higher bandwidth scaling of memory and computation offloading to the memory within the memory appliance having the SMC controller manage the execution of the primitive operations. As shown inFIG.6A, memory system600A includes a client system610and a memory appliance system660, wherein the memory appliance system can be further partitioned into a host system620, an SMC interconnect/interface630, and a plurality of SMCs640.
The client system610is communicatively coupled with the memory appliance system660over an external network650. For example, the external network650allows two different computing systems to communicate using a communication protocol. In particular, client system610provides an interface to the memory appliance system660. The host system relays client side requests and commands used for accessing data stored within the memory appliance system. In particular, client system610is configured to deliver a high level command/instruction to the memory appliance system660for execution. For example, the command may be implemented within the highest layer of the OSI model—application layer 7. That is, the command is formatted as a protocol or interface used for computer systems across a communication network. Though one memory appliance system660is shown coupled to the client system610, it is understood that client system610may be coupled to one or more memory appliances providing distributed memory storage. For illustration purposes only, client system610may be a database system, managed by a social networking company, storing data about its members in distributed memory, and accessing data contained within the memory appliance system660. In the example, client system610may be accessing and managing data stored within the memory appliance660using high level commands. As an example, the memory appliance660may be structured as a Memcached memory system, wherein the client system610accesses data using Memcached application layer instructions. In another illustration, the client system610may be a computing resource associated with a user, wherein the computing resource is used for accessing information across an external network650that is stored on the memory appliance660. As shown, the host system620of the memory appliance system660includes a processor625and a communications or network interface665. The network interface665communicatively couples the memory appliance system660to the external network650, such that client system610is able to communicate with memory appliance system660using a communication protocol. In one implementation, the network interface665can be a NIC. In another implementation, the network interface665is internal to an NPU. For instance, client system610delivers a high level command through the external network650to the NIC665. Processor625is configured as a host controller that manages a plurality of memory devices distributed throughout a plurality of SMCs, as previously described. For example, processor625is able to provide memory services, such as, load balancing, quality of service, connection management, and traffic routing. As shown, processor625is configured to receive a high level command originating from the client system610via the NIC665, and translate the high level command into application specific primitive commands or operations that are formatted for execution by the plurality of SMCs640. For example, the high level command may be structured to access memory in a Memcached distributed memory caching database using a key value pair or key-value functions to access memory. For example, a key within a command is hashed using the appropriate algorithm in order to determine proper addressing within the memory. Typical key value functions include "GET", "SET", and "DELETE" operations. Further, the high level command is translated by processor625into one or more primitive operations executable by the SMCs to access memory.
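As a concrete illustration of this translation step, the sketch below shows how a host controller might expand a Memcached-style GET into a short sequence of primitive operations (hash the key, select the bucket list, search for a key match, update metadata, return a pointer to the value). The opcode names, structures, and hash function are hypothetical placeholders for illustration, not the disclosure's actual primitive set.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical primitive opcodes executed by an SMC controller. */
typedef enum { OP_SELECT_LIST, OP_FIND_KEY, OP_READ_META, OP_RETURN_PTR } prim_op_t;

typedef struct {
    prim_op_t op;
    uint64_t  arg0, arg1;
} primitive_t;

/* Trivial stand-in hash; a real implementation would use the service's own hash. */
static uint64_t hash_key(const char *key, size_t len) {
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < len; i++) { h ^= (uint8_t)key[i]; h *= 1099511628211ULL; }
    return h;
}

/* Translate "GET <key>" into a primitive sequence; returns number of primitives. */
size_t translate_get(const char *key, primitive_t out[4], uint64_t bucket_count) {
    uint64_t bucket = hash_key(key, strlen(key)) % bucket_count;
    out[0] = (primitive_t){ OP_SELECT_LIST, bucket, 0 };            /* pick bucket list   */
    out[1] = (primitive_t){ OP_FIND_KEY, (uint64_t)(uintptr_t)key,
                            strlen(key) };                          /* search key match   */
    out[2] = (primitive_t){ OP_READ_META, 0, 0 };                   /* update access time */
    out[3] = (primitive_t){ OP_RETURN_PTR, 0, 0 };                  /* pointer to value   */
    return 4;
}
```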
For instance, the primitive operations are function or application specific (e.g., search, sort, and other custom accelerations, such as, error recovery, data manipulation, data compression). In the example of a Memcached database, the primitive operations are tailored for accessing and manipulating data, and/or may be tailored for performing a specific operation (e.g., search, write, etc.) to memory in the Memcached database. For instance, "GET" is implemented with a set of primitive operations that search for a key match, retrieve a pointer to the value field, and update the key-value metadata. Processor625is coupled to one or more communication channels over the SMC interconnect630. For instance, interconnect630is a command interface635that allows for the primitive operations to be delivered from the processor625to the plurality of SMCs640over one or more communication channels, wherein the primitive operations are configured for accessing memory distributed throughout the SMCs. In one implementation, interface635includes communication channels configured as a network interface (e.g., TCP, UDP, Ethernet, Infiniband, etc.) using a network protocol. In another implementation, interface635includes communication channels configured as a direct interface (e.g., PCI, PCIe, XAUI, QuickPath, Infiniband, Serial RapidIO (SRIO), 1/10/40/100 Gigabit Ethernet, Interlaken, FiberChannel, FiberChannel over Ethernet (FCoE), SAS, iSCSI, SATA, other protocols using Ethernet as an underlying layer, etc.) that provides for communication over a point-to-point communication channel/link/connection between two ports. In one embodiment, the primitive operations and results are delivered to optional queue combinations645A-N, wherein each queue combination is associated with a selected SMC. Each queue combination includes an input queue (e.g., delivering commands to the SMC controller) and a response queue (e.g., returning results after executing commands). In other embodiments, each SMC can have a plurality of matched queue combinations, rather than a single queue combination per SMC. Each of the individual queues may be located on either side of interface635, such that they may be co-located on one side, or separately located on opposite sides of interface635. For example, queue combination645A is associated with SMC640A, and includes input queue646A and response queue647A. In that manner, primitive operations are asynchronously executed by the plurality of SMCs640. FIG.6Billustrates one embodiment of input/command queue pairs and response queue pairs located on either side of an interface635for the memory system600A shown inFIG.6A, in accordance with one embodiment of the present disclosure. That is, an input/command queue located at one of the plurality of SMCs640has a matching input/command queue located on the host system620. As shown inFIG.6B, a host system620includes a host CPU/processor625configured to execute a data storage application (e.g., Memcached). The host system sends primitive commands to the plurality of SMCs across an interface635, such as PCIe. As shown, matching queue-pairs are located on opposite sides of the interface635(e.g., PCIe), wherein each SMC command/response queue-combination has a matching pair or counterpart command/response queue-combination maintained by the host processor625.
For example, an SMC controller (not shown) in a corresponding SMC645A manages command/response queue-combination681(e.g., SMC command queue and response queue), which has a matching pair or counterpart command/response queue-combination682managed by processor625. In that manner, the host processor625is able to perform under a fire and forget model by loading commands into corresponding command queues, which are then delivered to corresponding SMCs for execution, with results returned back to the host processor625via matching command queue pairs and response queue pairs. As such, the overhead of executing the commands is transferred from the host processor625to the SMCs, thereby reducing processor latency. In general, processor625fills its command queue, the corresponding SMC controller reads it, and copies commands into its own queue. The SMC controller then places responses into its outgoing response queue before transferring them into the processor response queue across the interface635. A CCD655(command copy daemon) manages the processor queue combination, while an SMC controller manages a corresponding queue combination. For example, queue aggregate/management8loads a command into the command queue in queue combination682, which is then delivered over interface635and loaded into the matching command queue in queue combination681of the corresponding SMC645A. In one embodiment, SMC645A requests delivery of the commands between the matching command queue pairs. After processing, the response is loaded by SMC645A into the response queue in queue combination681, which is then delivered over interface635and loaded into the matching response queue in queue combination682. In addition, another SMC controller (not shown) of SMC645N manages command/response queue-combination683, which has a matching pair or counterpart command/response queue-combination684managed by host processor625. Returning toFIG.6A, each of the plurality of SMCs640includes an SMC controller and a plurality of memory devices. The SMC controller includes an interface for managing data or memory throughout corresponding memory devices. For example, the interface may be used to accelerate functions performed on a corresponding memory or memories. For example, SMC640A includes SMC controller641A and memory devices642A. An SMC controller may be programmable (e.g., FPGA) or statically configured (e.g., ASIC) to execute application specific commands and/or operations generated by an external client and/or application. As shown inFIG.6A, input queue646A is configured to receive a plurality of primitive operations from processor625and deliver those primitive operations to the SMC controller641A for execution on memory devices included in memory642A. The primitive operations are translated from a high level command that is directed to memory on SMC640A, and executed by SMC controller641A. A result of the primitive operations is delivered to the result queue647A for access by processor625or delivery to processor625. In one embodiment, the result comprises a pointer to a memory location, wherein the data stored in that memory location satisfies the query associated with the high level command and/or plurality of primitive operations. Further, in one embodiment the processor is notified of the result, and initiates a direct memory transfer (e.g., DMA) of the data stored in the memory location with the network interface665using the pointer.
That is, once the direct memory transfer is initiated by processor625, and the pointer is delivered to the network interface665, the processor625no longer controls the transfer of data across the external network650. In that manner, redundant and unnecessary copies of the data are not made within the local memory associated with the processor625. For example, a direct memory transfer may be initiated as a DMA operation, wherein a DMA controller (not shown) monitors and/or controls the movement of data from memory642A across the external network650via network interface665to the client system610. In that case, the DMA controller may send an interrupt to the processor indicating that the data has been delivered across the external network650. FIG.7is a flow diagram700illustrating steps in a method for accessing data within a memory appliance that is configured to translate (high level) instructions into lower-level assembly code style primitive operations that are executable by a plurality of SMCs and their SMC controllers, in accordance with one embodiment of the present disclosure. In still another embodiment, flow diagram700illustrates a computer implemented method for accessing data within a memory appliance that is configured to translate high level instructions into lower-level assembly code style primitive operations that are executable by a plurality of SMCs and their SMC controllers. In another embodiment, flow diagram700is implemented within a computer system including a processor and memory coupled to the processor and having stored therein instructions that, if executed by the computer system, cause the system to execute a method for accessing data within a memory appliance that is configured to translate high level instructions into lower-level assembly code style primitive operations that are executable by a plurality of SMCs and their SMC controllers. In still another embodiment, instructions for performing a method as outlined in flow diagram700are stored on a non-transitory computer-readable storage medium having computer-executable instructions for accessing data within a memory appliance that is configured to translate high level instructions into lower-level assembly code style primitive operations that are executable by a plurality of SMCs and their SMC controllers. The method outlined in flow diagram700is implementable by one or more components of the computer system1700(e.g., processor1714), storage system1800(e.g., server1845), and memory appliance systems100A-B (e.g., processor112, SMC controller141, etc.) and600A (e.g., processor625, SMC controller641A, etc.) ofFIGS.1A-B,6A,17, and18, respectively. Further, in one embodiment, some operations performed in flow diagram700are further described in relation to information flow diagram800illustrating the execution of high level instructions that are translated to lower-level primitive operations when performing data manipulation. Flow diagram700is implemented within a memory appliance that includes a processor acting as a host controller configured to manage a plurality of memory devices distributed throughout a plurality of SMCs. Each of the SMCs includes a processor and a plurality of memory devices, wherein the processor is configured to access memory in corresponding memory devices. For example, the plurality of memory devices includes memory devices (e.g., DRAM, EEPROM, FLASH, non-volatile memory, etc.) packaged in a DIMM. At710, the method includes receiving a high level command.
For example, the high level command is received over a network using a communication protocol. In one embodiment, the high level command can be a memory related command received from a client system that is in communication with a memory appliance via the communication protocol. The receiving can be performed by a processor, such as, a host controller that is configured to manage a plurality of memory devices distributed throughout a plurality of SMCs, as previously described. For example, the memory related command can be a high level command associated with application layer 7 of the OSI model. At720, the method includes translating the command into one or more primitive commands. For example, the memory related command is translated into a plurality of primitive commands that are formatted to perform data manipulation operations on data of or within the plurality of memory devices. The memory devices are configured in data structures. In particular, the translating is performed by the processor. In addition, the processor is configured to route the primitive commands to the proper SMC for data manipulation, such as over a command interface. The command interface can be configured as a network interface or direct interface (e.g., PCIe). In this manner, the processor is able to hand over control of the execution of the memory related command to the corresponding SMC, thereby reducing the amount of I/O traffic handled by the processor. That is, I/O traffic at the processor that would be associated with the transfer of data performed during the intermediate states of the primitive operations to the processor is reduced and/or eliminated, since control of all the primitive operations can be performed by the SMC controller of the SMC to which the primitive commands were directed, with only the result (e.g., a pointer) returned to the processor. At730, the method includes executing the plurality of primitive commands on the data to produce a result. In particular, the executing is performed transparently to the processor by the SMC controller, such that the execution of commands occurs without processor input. As previously described, the processor has handed over control of the execution of the primitive commands to the corresponding SMC controller, and only receives the result of the execution of the primitive commands. In one embodiment, the result comprises data that satisfies or is responsive to the high level command. In another embodiment, the result is associated with additional information that is used to access the data that satisfies or is responsive to the high level command. At740, the method includes establishing a direct memory transfer of the result over the communication protocol to a network. In particular, the establishing is performed responsive to receiving the result by the processor, and the direct memory transfer is performed transparently to the processor. That is, the direct memory transfer is controlled by another device, such as, the network interface or a controller. For example, a DMA controller may be used to control the transfer of the result without participation from the processor. In one embodiment, the result is associated with a pointer that is directed to a location of memory that stores data, wherein the data satisfies or is responsive to the original high-level command and/or the translated primitive operations. In particular, the pointer is stored in a buffer accessible by the processor and/or the network interface.
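A minimal host-side sketch of the four steps of flow diagram700is given below: receive the command (710), translate it into primitives (720), hand the primitives to the SMC for execution (730), and hand the returned pointer to the network interface for a direct memory transfer (740). All of the function names and stubbed behaviors here are hypothetical stand-ins for whatever driver and hardware interfaces an actual implementation would use.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct { char text[256]; }            high_level_cmd_t;  /* e.g. "GET somekey"   */
typedef struct { uint32_t op; uint64_t arg; } primitive_t;       /* one primitive op     */
typedef struct { uint64_t device_ptr; size_t length; } smc_result_t;

/* Hypothetical driver hooks, stubbed so the sketch is self-contained. */
static int receive_command(high_level_cmd_t *cmd) {               /* step 710 */
    strcpy(cmd->text, "GET somekey");
    return 1;
}
static size_t translate(const high_level_cmd_t *cmd, primitive_t *p, size_t max) { /* 720 */
    (void)cmd;
    size_t n = 0;
    if (n < max) p[n++] = (primitive_t){ 1 /* SELECT_LIST */, 42 };
    if (n < max) p[n++] = (primitive_t){ 2 /* FIND_KEY    */,  0 };
    return n;
}
static void enqueue_to_smc(int smc_id, const primitive_t *p, size_t n) {            /* 730 */
    (void)p;
    printf("SMC %d executes %zu primitives without host involvement\n", smc_id, n);
}
static smc_result_t wait_for_result(int smc_id) {                 /* result is a pointer  */
    (void)smc_id;
    return (smc_result_t){ 0x1000, 512 };
}
static void nic_start_dma(smc_result_t r) {                       /* step 740: NIC handoff */
    printf("NIC DMAs %zu bytes from device address 0x%llx\n",
           r.length, (unsigned long long)r.device_ptr);
}

int main(void) {
    high_level_cmd_t cmd;
    primitive_t prims[16];
    if (!receive_command(&cmd)) return 1;        /* 710: receive high level command       */
    size_t n = translate(&cmd, prims, 16);       /* 720: translate into primitive ops     */
    enqueue_to_smc(0, prims, n);                 /* 730: SMC executes transparently       */
    smc_result_t res = wait_for_result(0);       /* host only sees the final result       */
    nic_start_dma(res);                          /* 740: direct memory transfer to NIC    */
    return 0;
}
```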
Once the pointer, or notification of the pointer stored in the buffer, is received by the processor, the direct memory transfer of the data is initiated. That is, the processor hands over control of the transfer of data to a network interface providing communication over an external network. After initiation, the pointer is accessed by a network interface in the buffer, in one implementation. In another implementation, the processor delivers the pointer to the network interface. The pointer is used by the network interface to request and/or access the data at the previously described memory location, wherein the data is responsive to the high level command. Without further involving the processor, the data is returned to the network interface for delivery over the network, such as to a client device. Notification of the delivery may be delivered to the processor. FIG.8is an information flow diagram800illustrating the accessing of data within a memory appliance that is configured to translate high level instructions into lower-level assembly code style primitive operations that are executable by a plurality of SMCs and their SMC controllers, in accordance with one embodiment of the present disclosure. The information flow diagram800is implemented within a system including a client system810and a memory appliance, wherein the memory appliance includes a network interface (e.g., NIC)811, a host system812, an input queue898on the host side, an output queue899on the host side, a command interface813(e.g., PCIe), an input queue814on the SMC side, a result queue815on the SMC side, and a corresponding SMC816. The host system/processor812is configured to manage memory devices distributed throughout a plurality of SMCs, wherein each SMC includes an SMC controller and a plurality of memory devices. For example, SMC816includes an SMC controller and a plurality of memory devices, as previously described in relation toFIGS.1A-Band6A-B, wherein the SMC controller is a programmable logic device (e.g., FPGA) in one implementation, or a logic device with pre-determined functionality (e.g., ASIC). As shown inFIG.8, at820a high level command is delivered over a communication network from a client system810to the host system/processor812in the memory appliance via a network interface, such as, NIC811. The NIC enables communication between the memory appliance and the client system810using a communication protocol over an external network. At825, the host system/processor812translates the memory related command into a plurality of primitive operations/commands. In addition, the processor is able to route the primitive commands to the proper SMC within the memory appliance through interface813(e.g., PCIe). For example, in one implementation, the proper SMC controls the physical memory within which data to be manipulated is stored. In that manner, the primitive commands can be grouped into a chain of commands that is directed to a specific SMC. At825, the chain is placed into the output queue899of the host system/processor812that corresponds to the proper SMC816. At830, SMC816fetches the chain from output queue899and stores the primitive operations into its own input queue814through interface813. In another embodiment, the primitive operations are delivered to the input queue814without traversing interface813. At831, the primitive operations are fetched from the input queue814by the corresponding SMC816for execution.
In particular, at835the SMC controller in the SMC816reads the primitive operations from the input queue814, and executes the primitive commands on the corresponding memory devices in SMC816, wherein the execution is performed transparently to the host system/processor812. The commands in the chain can be executed sequentially by the SMC controller. For instance, the primitive operations are performed on data stored in the memory devices, and include data manipulation instructions formatted for operation on data stored in blocks of memory within the memory devices. In that manner, the host system/processor812is able to hand off management and control of the execution of the high level command to the SMC controller in SMC816, thereby reducing the number of I/O transactions handled by the host system/processor812. That is, the high level command and/or primitive operations can be accelerated via the execution by the SMC controller. At835, execution of the primitive operations produces a result, and the host system/processor is notified of the result. In particular, the result is stored in the result queue815. In one embodiment, the result includes data that is stored at a location in the memory devices, wherein the data satisfies or is responsive to the high level command and/or primitive operations. In another embodiment, the result is associated with information that leads to the data that satisfies or is responsive to the high level command and/or primitive operations. For instance, the information includes a pointer that identifies the location of memory that stores the data that satisfies or is responsive to the high level command and/or primitive operations. At840, the pointer is delivered across the interface813to the corresponding input queue898of the host system/processor812. In one embodiment, the pointer is stored in a buffer that is accessible by the host system/processor812. Upon notification, the host system/processor812is able to access the pointer stored in the buffer at841. At845, a direct memory transfer is established to transfer the result over the external network to a client system using a communication protocol. In particular, the host system/processor may initiate the direct memory transfer, but after initiation, is no longer involved in the transfer of the result over the network. That is, the direct memory transfer is performed transparently to the host system/processor. For example, the direct memory transfer may be a DMA process that includes a DMA controller that establishes and manages the transfer of the result without participation of the host system/processor812. As shown inFIG.8, upon initiation of the direct memory transfer, the pointer is delivered to the network interface811, or NIC. At850, the NIC811fetches the data from the location in memory of the SMC816as directed by the pointer, wherein the data satisfies or is responsive to the high level command and/or primitive operations. At855, the data is returned to the NIC811, and then delivered to the client system810over the external network.
Command Chain of Primitive Operations Executable within a Memory Appliance Architecture
In one embodiment, the primitive operations may be combined into a command chain that is executable with input parameters. The command chain includes a set of primitive operations/commands and their arguments/parameters that implement a high-level application command (e.g., Memcached Get, Set, and Delete operations).
All the commands in a chain are executed sequentially by a single processor engine or host controller of an SMC. In one implementation, the host system/processor812ofFIG.8(e.g., including a command copy daemon) places the command chain into input queue814acting as a circular PCQ in local memory. The CCD updates a PCQ queue tail register of the SMC controller (e.g., programmable FPGA), and the SMC controller fetches the command chain from the PCQ until reaching the Tail. Also, the SMC controller will update the head register after each transfer. For execution of the command chains, a fetch engine in the host system/processor812reads the command blocks continuously until it fills its local FIFO, or reaches the Tail address. A command chain dispatch engine parses the magic header/checksum (wherein the magic number identifies a protocol or file format, for example, and the checksum is used for debugging) and chain-size fields to confirm command block alignment and determine command chain size (may include checksum, magic number, and commands plus parameters). The dispatch engine then dispatches a complete command chain to the next available SMC controller. The magic header is also removed. The selected SMC controller runs a command interpreter that maps each command in the chain into a corresponding procedure call and executes it. The SMC controller executes the commands of each chain sequentially. In other embodiments, the commands may be executed out of order, as long as the results are guaranteed. The SMC controller generates a command response block which the SMC controller returns to the CCD on command chain completion. Since sequential command chains are executed independently by different SMC controllers, they can and will in general complete out-of-order. Therefore, the host CCD driver cannot assume that response blocks will match the command chain order in the input command queue. FIG.9is an illustration of a host system/processor812local buffer900used for storing a command chain, in accordance with one embodiment of the present disclosure. In one implementation, the host system/processor812allocates a fixed-size internal buffer (e.g., 1 KB) for each command chain. In one embodiment, the command chains are multiples of 64 byte blocks and are 64 byte aligned. The last block may need to be padded if the command chain does not fill the entire block. Each command chain includes various sections or frames. In one implementation, each frame is 8 byte aligned, wherein padding with 0s may be necessary if the frame is not full. Frame910includes a magic number/checksum. Frame920includes metadata (e.g., opaque values, time stamps, etc.). Frame930includes a command list, wherein each fixed-size list entry includes an operation code (opcode) associated with a command, and a set of associated parameter offsets that point into the parameters frame provided in section940. For example, the op-codes may specify primitive operations on blob metadata (e.g., increment, decrement fields), primitive list operations (e.g., select, unlink, append, prepend, etc.), and flow control operations, such as, explicit (e.g., "jump-by-offset if parameter is 0"), implicit (e.g., conditional execution based on a status bit: if a command fails, all subsequent commands in the chain are executed as NOPs), end of chain or return, and procedure calls (e.g., address of another command chain located in SMC memory).
The parameters/arguments frame940includes a contiguous region in the command blocks that stores parameters of all the commands, wherein parameters can be both inputs and outputs. Also, commands reference their parameters according to the offsets specified by the command fields. Inputs are passed as in-line values or as memory references via the command blocks. Command outputs are stored in the parameters frame at an offset corresponding to the result parameter. This allows subsequent commands to reference them during execution. In one implementation, parameters are 8-byte aligned. Some parameter types include memory references in global address space (MAS, or MSes); immediate values; and intermediate variable-size values, wherein the first byte of the value defines its size (e.g., valid values include 1-255). Parameter offsets are relative to the parameter frame base and specify location as a multiple of 8 bytes. Offsets can address up to 256 parameters, 8 bytes each, i.e., they can theoretically cover a 2 KB range in the parameter frame. In addition, the base (first word) of the last variable-size value can be within this range, but the value itself may overflow beyond the 2 KB boundary. Multiple variable-length parameters can be supported as long as they all fit within the buffer allocated for the processor (1 KB) and meet the 8-byte alignment requirements; otherwise zero-padding is required. The buffer space left over after loading the command chain is reserved for the temporary variables frame950. For example, values to be generated at run time and passed between commands, or values to be returned via the response block, are stored in frame950. In this manner, frame950expands the size of the commands' "register set" without bloating the command chain with dummy place-holders. In one implementation, the command chain interpreter maintains a 32-bit global command status variable that is updated by each command. Besides flagging execution errors, global status can provide a fast path for the current command to convey specific results to the next command in the chain. For example, an error code may be returned if any error was detected during execution of a command. In the typical use scenario, a non-zero error field will abort the command chain and return this error code and its associated command index via the response block to the host. An example for using the return value field can be a Select command which returns the number of matching items via the global status and a pointer to the list of matching items via the parameter frame. A conditional Jump following Select can test the number of matches to decide whether to continue execution with the next command or jump ahead in the chain. Each command chain returns a single response block to the CCD, in one implementation. The response blocks may have a fixed size of 64 bytes. A response block may include three frames, including a metadata frame (e.g., status, queue head pointer, opaque value, etc.); a completion status frame; and a return parameters frame. The sections are each 8 byte aligned in one implementation. The return parameters can be a data value or a memory reference. Multiple or variable size values are expected to be stored in the MS memory, and they are returned by reference. The arguments of the last command in chain (RET) specify the parameter(s) to be returned to the host system/processor.
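For illustration, the frame layout described above (magic number/checksum, metadata, command list, parameters, and temporaries within a 1 KB, 64-byte-aligned buffer) might be modeled by the C11 sketch below. The field widths, the per-chain command limit, and the exact packing are assumptions made only to give the frames a concrete shape; they are not the disclosure's encoding.

```c
#include <stdint.h>

#define CHAIN_BUF_SIZE   1024   /* fixed-size internal buffer per command chain (1 KB)  */
#define MAX_CHAIN_CMDS   32     /* hypothetical upper bound on commands per chain       */

/* One fixed-size command list entry: an opcode plus offsets into the parameter frame. */
typedef struct {
    uint8_t opcode;             /* blob/list/flow-control primitive                     */
    uint8_t param_offset[7];    /* offsets (in 8-byte units) into the parameters frame  */
} chain_cmd_t;

/* Command chain layout; in practice padded to a multiple of 64 bytes.                  */
typedef struct {
    uint64_t    magic_checksum;           /* frame 910: magic number / checksum         */
    uint16_t    chain_size;               /* chain size used for alignment checks       */
    uint8_t     cmd_count;
    uint8_t     reserved[5];
    uint64_t    metadata[2];              /* frame 920: opaque values, time stamps      */
    chain_cmd_t commands[MAX_CHAIN_CMDS]; /* frame 930: command list                    */
    uint64_t    params[48];               /* frame 940: inputs and outputs, 8B aligned  */
    uint64_t    temporaries[16];          /* frame 950: run-time values between cmds    */
} command_chain_t;

_Static_assert(sizeof(command_chain_t) <= CHAIN_BUF_SIZE,
               "chain layout must fit in the per-chain buffer");
```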
The RET command is the last command in the command chain, and waits for all asynchronous DMA transfers initiated by commands belonging to the same chain to complete before it executes. The RET can specify a variable number of return values (e.g., 0 to 54) to be placed in the command response block. The number of values to be returned can also be specified in the RET. This mechanism can be used to pass more opaque data values via the command chain, as follows: insert the opaque value(s) as a dummy parameter in the chain and specify it as one (or more) of the RET arguments. Flow control operations include commands such as conditional and unconditional jumps. For example, the target jump offset relative to the current command is directly specified by the first command argument as an immediate 8-bit 2's complement value, rather than as a pointer to a value stored in the parameter frame. Certain errors will cause a command chain to abort, and return an error code via the response block status. For example, an error code of "0" returns no error; an error code of "1" indicates an illegal chain size (e.g., size larger than 1 KB); an error code of "2" indicates an illegal opcode or opcode extension that is unsupported; an error code of "3" indicates an illegal parameter offset (e.g., exceeding the chain buffer size of 1 KB); and additional errors include, for example, command chain time out indicating the execution exceeds a preset time frame, DMA error (indicating illegal arguments, time outs, etc.), and illegal memory or register access (wherein the processor tries to access an address that is not mapped to a physical register or memory, or to a protected address). In one embodiment, the host system/processor is able to provide additional information to help with error recovery and debugging via the response block. For example, a list of commands that executed successfully can be returned (e.g., via a bitmap), or a core dump can be provided (e.g., saving a copy of relevant internal processor state to a DRAM buffer). Certain commands copy data from the host system/processor to the FPGA memory (e.g. SET), or vice versa. As part of the command execution, the host system/processor will program one of the SMC controller DMA engines to perform the data transfers. The DMA operation is allowed to proceed asynchronously while the remaining commands in the chain continue to execute, unless a fence command or a RET is encountered. That will force the chain to wait for the DMA transfer to complete before proceeding further. In one embodiment, the plurality of primitive operations are stored in a separate location as a command chain. As such, the command chain comprises a program operable for re-execution in response to another high level memory command from the client system. Each time a high level command is presented for execution, a corresponding set of parameters is also provided for re-execution thereof by the command chain. In various embodiments, command chains offer the opportunity for application developers to minimize queue and command round trips by combining multiple commands to be executed as a group before returning the results from the last command in the chain. For example, a single round-trip to the FPGA could combine multiple command primitives into compound operations on the FPGA. FIG.10illustrates a command chain1010that includes multiple commands1020and1030, wherein the commands in the command chain1010are executed by a corresponding FPGA in an SMC.
The command chain1010can be executed by one or more components of the computer system1700(e.g., processor1714), storage system1800(e.g., server1845), and memory appliance systems100A-B (e.g., processor112, SMC controller141, etc.) and600A (e.g., processor625, SMC controller641A, etc.) ofFIGS.1A-B,6A,17, and18, respectively. As shown inFIG.10, command1020in command chain1010includes one or more parameter indices, such as indices1022and1024. For example, parameter index1022is used to access parameter1062, and index1024is used to access parameter1064. In addition, command1030in command chain1010includes one or more parameter indices, such as indices1032,1034, and1036. For example, parameter index1032is used to access parameter1068. In addition, parameter index1034is used to access a return value1074resulting from a previous command (e.g., command1020) in the chain1010. Also, parameter index1036is used to access return value1078. More particularly,FIG.10is an illustration of command chain1010and its array of variants, in accordance with embodiments of the present disclosure. A significant aspect of command chains is how parameters are defined and passed between commands in the chain. The command chain execution begins in the context of a "parameter space" which can include the parameters passed in by the chain's author. Command chains can be accompanied by parameters for each command in the chain inclusive of a parameter type that supports binding of parameters to return values from previous commands in the chain. Parameters are passed as arrays of type variant_t. Variant types include a variant type known as a "REFERENCE", which contains an encoded reference to any variant in the execution context. Each command in the chain has a deterministic number of return values, so reference offsets into the execution context can be computed in advance of the actual execution of the chain. In this way, command chains can be constructed in a way that both immediate parameters supplied by the caller and values yielded by command execution can be used to parameterize subsequent commands. In embodiments, multiple commands are enqueued with embedded fields indicating where a chain begins and ends. In another embodiment, a single command is enqueued, which contains a pointer to a command-chain+parameters that should be executed prior to returning, which is similar to a procedure call. In embodiments, when creating command-chains, commands are accompanied by arrays of variants representing the command-chain execution context. For example, this is similar to a global stack. Command-chain input parameters can be pre-staged in the execution context. Each command contains an array of indices into the execution context corresponding to each of the required parameters for the command. Command execution yields a deterministic number of return values which are appended to the execution context as each command executes. This can allow for input parameters to a command to include the pre-staged parameters (e.g., 1062, 1064, 1066, and 1068) or the subsequent return values (e.g., 1072, 1074, 1076, and 1078). In some implementations, only the first command in the chain is limited to using pre-staged parameters in its execution. In embodiments, command chains are a variable-length array of commands+parameter indices. The indices represent offsets into the execution context.
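The parameter-binding scheme of FIG.10, in which a command's inputs may be either pre-staged values or references to return values of earlier commands, can be sketched with a small variant type as below. The variant encoding and the helper names are assumptions made for illustration; the disclosure only requires that reference offsets be computable before the chain executes.

```c
#include <stdint.h>
#include <stddef.h>

/* A variant is either an immediate value or a REFERENCE to another slot
 * in the execution context (pre-staged parameters followed by return values). */
typedef enum { VAR_IMMEDIATE, VAR_REFERENCE } variant_kind_t;

typedef struct {
    variant_kind_t kind;
    uint64_t       value;   /* immediate value, or index of the referenced slot  */
} variant_t;

typedef struct {
    variant_t slots[64];    /* execution context: parameters + return values     */
    size_t    count;        /* grows as each command appends its return values   */
} exec_context_t;

/* Resolve a command parameter: follow REFERENCE slots to the value they name.
 * Bounds checks are omitted for brevity. */
uint64_t resolve(const exec_context_t *ctx, size_t index) {
    const variant_t *v = &ctx->slots[index];
    while (v->kind == VAR_REFERENCE)
        v = &ctx->slots[v->value];
    return v->value;
}

/* Append a command's return value so later commands in the chain can bind to it. */
size_t append_return(exec_context_t *ctx, uint64_t value) {
    ctx->slots[ctx->count] = (variant_t){ VAR_IMMEDIATE, value };
    return ctx->count++;    /* deterministic index, computable before execution   */
}
```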
Decoupling command-chains from their execution context can allow for command chains to be pre-staged in device memory, and entire chains can be enqueued "by reference", meaning that rather than enqueuing the chain itself, a reference to a preconstructed chain in device memory can be enqueued. Furthermore, decoupling the execution context can allow for a single command chain to execute multiple times in parallel so long as the execution context per thread is unique. This capability allows for performing multi-object operations within the SMC because entire arrays of execution contexts can be constructed by the application and manipulated in parallel. In embodiments, command chains contain both the length of the execution context (size of (variant)*# of parameters), and also include information on the total space required during execution (e.g. size of (variant)*(parameter count+return value count)). An example of a command chain is illustrated in a SET operation for a hashtable, which involves selecting a hashbucket (i.e., a specific LIST), and then utilizing the following command chain of operations: ALLOC->INCR_REFCOUNT->BLOB_WRITE_DATA->LIST_APPEND->LIST_APPEND (the first LIST_APPEND adds it to the chosen hash bucket, while the second LIST_APPEND adds it to the LRU list).
Memory Packet, Data Structure and Hierarchy within a Memory Appliance Architecture
Embodiments of the present disclosure provide for a reconfigurable memory structure implemented within a memory appliance architecture including programmable memory interfaces for accessing memory. Implementation of the memory structure is achieved through a content-aware memory controller which comprehends the logical data structure and not raw memory bits. The reconfigurable memory structure in embodiments of the present disclosure is implementable within the systems and flow diagrams described inFIGS.1-10. For example, the memory appliances and systems100A-B,200,310,600A ofFIGS.1A-B,2, and6are configured to receive high level instructions from a client system and translate those instructions into lower-level primitive operations that are formatted for execution by a plurality of SMC controllers on the reconfigurable memory structure, wherein each SMC controller is configured to manage corresponding memory devices. Embodiments of the present disclosure provide for a memory appliance that includes a processor and a plurality of SMCs, wherein each SMC includes a plurality of memory devices, and an SMC controller for independently controlling the management of one or more groupings of memory within a plurality of memory devices of a corresponding SMC. The memory appliance is configured to receive high level instructions from a client system, and to translate the instructions into lower-level assembly code style primitive operations that are executable by the plurality of SMC controllers on the reconfigurable memory structure to produce a result. In particular, each of one or more SMCs includes a hardware based memory controller and memory. The memory controller may be programmable (e.g., FPGA) or include static functionality (e.g., ASIC) to control the management of a plurality of memory devices contained in the memory. The primitive commands include data manipulation instructions formatted for operation on the items of data accessed by the SMC controller through one or more data structures stored in the device memory.
In particular, the set of data structures is configurable to be comprehended by the SMC controller, upon which various primitive operations can be performed. That is, the controller is configured to respond to primitive commands configured to access content stored in one or more of the plurality of memory devices, and to perform data operations on content accessed from the plurality of memory devices. For example, the data structure organizes chunks of memory into discontinuous "collections" that are comprehended and operated on by the SMC controller. The memory controller is data structure aware such that the controller is configured to traverse the memory structure and perform operations on the memory structure based on metadata and relationship information. Specifically, the content-aware memory controller comprehends the logical data structure rather than the raw bits without taking the logical data structure into account. In particular, the command-set of primitive operations is configured to expose a set of functionality, higher-level than simple loads and stores, upon which much more sophisticated functionality is built. For example, the memory structure includes variably sized containers that are arranged in relational configurations. In one embodiment, the relationship is defined by lists, which provide a building block for many other data structures and functionality (e.g., heap managers, queues, trees, graphs, etc.). As such, supporting basic list operations can offer a basic capability onto which richer applications are built. For instance, a primitive command as executed by the controller is configured to perform a management operation on the plurality of containers defined within the memory structure. For example, a management operation may include adding a list, modifying a list, deleting a list, etc. In another instance, a primitive command is configured to operate on raw memory within the memory structure. In still another instance, the primitive command is configured to perform a management operation on the relationship information. FIG.11Ais an illustration of a data packet or container1100A used within a reconfigurable memory structure implemented within a memory appliance architecture including programmable memory interfaces for accessing memory, in accordance with one embodiment of the present disclosure. The container1100A includes data. As shown, the container1100A is configurable to be comprehended by a corresponding SMC controller, upon which various primitive operations can be performed, in accordance with one embodiment of the present disclosure. Container1100A is stored in device memory of the memory appliance, previously described (e.g., memory appliances and systems100A-B,200,310,600A ofFIGS.1A-B,2, and6), wherein the reconfigurable memory structure comprises multiple and variably sized containers. That is, within a reconfigurable memory structure, containers1100A are variably sized, such that one container may be of a different size than another container. As shown, the data packet1100A includes a payload1130of data (variably sized), metadata1110, and relationship information1120(variably sized). Metadata1110includes information specific to container1100A, wherein metadata1110is a fixed portion of container1100A.
For example, metadata1110includes information, such as: total length or length of the container; list count illustrating the number of lists the container is a member of; data length illustrating the length of the data portion; access time indicating when the container was last accessed; create-time indicating when the container was created; reference count; flags; etc. Relationship information1120provides information that associates a corresponding container1100A with one or more other containers that are stored in the memory structure. In that manner, the relationship information in a plurality of containers defines the memory structure. The memory structure is reconfigurable since any change in the relationship information in any of the containers will affect and change the overall memory structure. The relationship information allows the controller to traverse the memory structure. The payload1130contains data specific to the container1100A. Because the length of the data can be defined, the memory structure includes a plurality of variably sized containers. As such, a first container may include data of a first length, while a second container may include data of a second length. In one embodiment, memory management revolves around the concepts of "blobs" as containers, and "lists" providing relationship information. A "blob" is a fixed-size chunk of device memory that carries with it certain metadata (e.g., last access time, creation time, etc.) as well as a variable array of "list entries" which facilitate its membership in one or more "lists". Lists are traditional singly or doubly linked lists of blobs. In particular, the SMC controller is configured to walk and modify lists in a thread-safe way in response to the invocation by the processor of various list primitives. Each blob contains an array of "listentries" which represent a given blob's membership in various lists. Those lists may include additional blobs. Further, a blob can exist in multiple lists simultaneously. SMC controllers comprehending the list and blob structures can link, unlink, prepend, or append, as well as search and find items within a list based on very rudimentary selection criteria. The SMC controller will expose a set of list, blob, and raw memory primitives that can be invoked by enqueuing a command block (command+parameters) to a queue. In addition to enqueuing individual commands, command-chains can be enqueued. Command-chains are variable length arrays of command blocks for which the output of each command is passed to the subsequent command as a parameter. Command-chains facilitate the design goal of minimizing round-trips and queuing latency by allowing compound operations to be constructed and performed with a single command/response round trip to the SMC controller. In one embodiment, various primitive operations will increment and decrement reference counts associated with each blob. Some primitive operations are only valid for unreferenced blobs (e.g., free); such operations may logically "succeed" but are only committed once the reference count goes to "0". The specific case for this behavior is when a blob is in use for I/O but has been freed by the user-mode application. When the I/O completes and the reference count goes to zero, the blob can then be added back to the free list.
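A blob with its fixed metadata, reference count, and a variable array of list entries might be declared as in the sketch below. The field names mirror the metadata just described (total length, data length, list count, access and creation times, reference count, flags); the exact widths and the doubly linked listentry layout are assumptions for illustration only.

```c
#include <stdint.h>

struct blob;   /* forward declaration so a listentry can point at blobs */

/* Membership of a blob in one list: a doubly linked entry. */
typedef struct listentry {
    struct blob *prev;
    struct blob *next;
} listentry_t;

/* A fixed-size chunk of device memory plus its metadata and list memberships. */
typedef struct blob {
    uint32_t    total_length;    /* length of the whole container                  */
    uint32_t    data_length;     /* length of the payload                          */
    uint32_t    ref_count;       /* incremented/decremented by primitive ops       */
    uint32_t    list_count;      /* how many lists this blob belongs to            */
    uint64_t    access_time;     /* last access                                    */
    uint64_t    create_time;     /* creation                                       */
    uint32_t    flags;
    uint32_t    reserved;
    listentry_t lists[1];        /* variable array; entry 0 used by slab manager   */
    /* payload bytes follow the listentry array                                     */
} blob_t;
```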
FIG.11Bis an illustration of a data packet and/or container1100B used within a reconfigurable Memcached memory structure implemented within a memory appliance architecture including programmable memory interfaces for accessing memory, in accordance with one embodiment of the present disclosure. Container1100B is a specific implementation of the generic container1100A shown inFIG.11A, wherein container1100B is implemented within a Memcached memory structure. As shown, the container1100B is configurable to be comprehended by a corresponding SMC controller, upon which various primitive operations can be performed, in accordance with one embodiment of the present disclosure. Container1100B is stored in device memory of the memory appliance, previously described (e.g., memory appliances and systems100A-B,200,310,600A ofFIGS.1A-B,2, and6), wherein the reconfigurable memory structure comprises multiple and variably sized containers. In particular, container1100B includes metadata1140, relationship information1150, and a payload1160containing data. In the example of a blob container (for example as implemented within a Memcached memory structure), a blob is a contiguous memory region (e.g., allocated from a heap). A memory slab is a collection of blobs of equal size. As such, the reconfigurable memory structure includes containers (e.g., blobs) that are part of one or more lists, which is defined in the relationship information. That is, the relationship information1150may include one or more list entries, which provide membership of the data in the payload into one or more lists, and/or a link or pointer to the data. For example, a listentry exposes an item of data in a corresponding list. Free, LRU, and hash bucket are examples of lists. Container1100B may be part of a classification of containers, which is defined by a free list. The free list pointer1151points to a previous container in the same classification. The free list pointer1152points to the next container in the same classification. The LRU pointer1153points to the previous container in the LRU list, and LRU pointer1154points to the next container in the LRU list. The bucket list pointer1155points to the previous entry in a bucket list, such as one defining the first container in a list of related containers. The bucket list pointer1156points to the next entry in the bucket list. FIG.12is an illustration of a reconfigurable Memcached memory structure1200, in accordance with one embodiment of the present disclosure. The widespread use of distributed key/value stores as a way to exploit large pools of network attached memory makes Memcached suitable for implementation in the reconfigurable memory structure. The Memcached memory structure provides for a network-based service for storing and retrieving values associated with text-based keys, wherein keys can be up to 250 bytes in length, and their associated values can be up to 1 megabyte, in one implementation. For example, the Memcached memory structure1200may include a plurality of containers described inFIGS.11A-B, wherein each container includes relationship information relating a corresponding container to other containers. In addition, the containers and/or data included within the Memcached memory structure1200may be manipulated by the memory appliances and systems100A-B,200,310,600A ofFIGS.1A-B,2, and6.
In particular,FIG.12illustrates how the data for a Memcached implementation might be organized on top of the kind of command primitives using data structures previously described (e.g., such as data structures managed by memory appliances and systems100A-B,200,310,600A ofFIGS.1A-B,2, and6), wherein Memcached provides a network-based service for storing and retrieving values associated with text-based keys, in accordance with one embodiment of the present disclosure. On startup, an implementation of Memcached would compute a sea of pointers representing addresses in device memory that reflect the division of memory into smaller pools of varying sized objects along with space reserved for the list arrays needed for the requisite Memcached functionality. Objects in Memcached exist in one and sometimes two lists. These objects are taken from a pool1250, such as a pool of variably sized blobs or containers. Initially, all objects exist in an array of free lists1210, each free list holding all objects of a given size (e.g., a particular class). Free lists1210are used to satisfy allocation requests in response to SET operations in the cache. During processing of a SET, an object is plucked from the free list for the appropriately sized object, and inserted into two other lists. First, a hash for the key is computed and used to select a list from an array of lists1230, wherein each entry in the array is commonly referred to as a "bucket". The object is inserted into the list chosen for the given hash, and then inserted into a doubly-linked list called the LRU list1220. The LRU list1220is used very much like a queue (e.g., the oldest entry is the one returned to the allocation pool, i.e. FIFO). The list can be walked backwards from the tail to go from oldest to youngest or forward from the head to go from youngest to oldest. In satisfying new object allocation requests, Memcached walks a few nodes in the list from oldest to youngest to see if any objects in the cache have expired before abandoning the LRU list in favor of satisfying the allocation request from the appropriate free list. During Memcached initialization, the MWRITE primitive command would provide a way to initialize large numbers of empty blobs with a very small number of round-trips from host to device. The FILL command would facilitate array initialization for setting up the requisite list arrays. The host application would maintain pointers to device memory representing the various lists required to implement the needed functionality. Using pointers to lists and blobs in device memory (e.g., stored in the meta-fields ofFIGS.11A-B), the computed blob pointers would be added to the various free lists on startup while the head and tails of the bucket and LRU lists would be initialized to NULL. On processing a SET command, the host would enqueue an ALLOC command passing the LIST pointer for the pre-constructed list containing blobs of the appropriate size. Using the blob pointer returned by ALLOC, the host would enqueue a BLOB_WRITE_DATA command to initialize the allocated blob, and LINK commands for the relevant LRU and bucket lists. To minimize round-trips through the queue, the ability to enqueue command chains would allow the host to construct a chain of ALLOC->BLOB_WRITE_DATA->LINK->LINK with the BLOB returned by each command passed in as the input blob to the following command in the chain. Command chains allow for reduced queuing latency and simplify the implementation of operations encompassing multiple primitives.
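The SET flow just described, in which the host enqueues ALLOC, BLOB_WRITE_DATA, and two LINK commands as a single chain so that each command's output blob feeds the next, might be assembled as in the sketch below. The opcode values, the chain-building structure, and the list handles are hypothetical; only the order of operations follows the description above.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical opcodes matching the primitives discussed above. */
typedef enum { OP_ALLOC, OP_BLOB_WRITE_DATA, OP_LINK } opcode_t;

typedef struct {
    opcode_t op;
    uint64_t args[3];        /* list pointers, key/value buffer, lengths, ...       */
    int      use_prev_blob;  /* bind first input to the previous command's output   */
} chain_entry_t;

typedef struct {
    chain_entry_t cmds[8];
    size_t        count;
} set_chain_t;

/* Build ALLOC -> BLOB_WRITE_DATA -> LINK(bucket) -> LINK(LRU) for a SET. */
set_chain_t build_set_chain(uint64_t free_list, uint64_t bucket_list, uint64_t lru_list,
                            uint64_t kv_buffer, uint64_t kv_length) {
    set_chain_t c = { .count = 0 };
    c.cmds[c.count++] = (chain_entry_t){ OP_ALLOC,           { free_list },            0 };
    c.cmds[c.count++] = (chain_entry_t){ OP_BLOB_WRITE_DATA, { kv_buffer, kv_length }, 1 };
    c.cmds[c.count++] = (chain_entry_t){ OP_LINK,            { bucket_list },          1 };
    c.cmds[c.count++] = (chain_entry_t){ OP_LINK,            { lru_list },             1 };
    return c;                /* enqueued as one chain: one round trip to the SMC     */
}
```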
On processing a GET command, the host would compute a hash and enqueue a SELECT command, having constructed a CRITERIA that compares the requested key for equality. Alternatively, the SMC controller could implement the hash function and fully automate the selection of a bucket list and subsequent key comparisons. FIG.13is an illustration of the classifications of variably sized containers within free lists, in accordance with one embodiment of the present disclosure. For example, a memory structure may include two classes of containers (e.g., blobs). The first class (i) is defined in a free list1310that includes container1312and1314. A second class (p) is defined in free list1320, and includes containers1322,1324, and1326. As shown, containers in class (i) are of a first size, and containers in class (p) are of a second size, wherein the sizes are different to accommodate varying sizes of data. In order to manage the allocation of containers within a memory structure, containers can be ordered and listed in free lists (e.g.,1310and1320) within a classification so that each is available for inclusion within other linked lists of the memory structure. For example, an available container within free list1310may be allocated to a linked list of related containers, at which point that container is removed from the free list. The variably sized containers may be implemented within Memcached memory structure1200ofFIG.12. In addition, the containers included within the free lists1310and1320may be implemented by the memory appliances and systems100A-B,200,310,600A ofFIGS.1A-B,2, and6. The free list1310for class (i) can be walked backwards from the tail1317to the head1318. The containers may be listed from oldest to youngest, or youngest to oldest. For instance, when walking from tail1317along path1350towards the head1318, container1314is next. From container1314, the previous pointer1360points to container1312along path1351. Again, from container1312, the previous pointer1361points to head1318, along path1352. Similarly, the class (i) can be walked from head1318to tail by following path1353to container1312. The next pointer1362points to container1314. In container1314, the next pointer1363will point to the tail1317. Similarly, the free list1320for class (p) can be walked backwards from the tail1327to head1328. For instance, when walking from tail1327along path1370toward head1328, container1326is next. From container1326, the previous pointer points to container1324along path1371. From container1324, the previous pointer points to container1322along path1372. In container1322, the previous pointer will point to the head1328. FIG.14is an illustration of LRU container lists within classifications of variably sized containers within free lists (e.g., free lists ofFIG.13), in accordance with one embodiment of the present disclosure. For example, a memory structure may include two classes of containers (e.g., blobs). The first class (i) includes container1412and1414. A second class (p) includes container1422. As shown, containers in class (i) are of a first size, and containers in class (p) are of a second size, wherein the sizes are different. In order to manage the containers within a memory structure, containers in a free list of a classification may be ordered such that the least recently used container is known. In that manner, containers in a free list may be ordered by use over a period, such that the oldest containers may be allocated before newer containers in the free list.
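The tail-to-head traversal described for the free lists ofFIG.13and the LRU lists ofFIG.14amounts to following previous pointers until the head of the list is reached. A minimal sketch of such a walk is below; the container layout and the way head and tail are represented are assumptions for illustration only.

```c
#include <stdio.h>
#include <stddef.h>

/* A container in a doubly linked free/LRU list; prev/next mirror pointers 1360-1363. */
typedef struct container {
    struct container *prev;   /* toward the head of the list     */
    struct container *next;   /* toward the tail of the list     */
    int               id;     /* stand-in for container payload  */
} container_t;

typedef struct {
    container_t *head;        /* first container when walking forward          */
    container_t *tail;        /* starting point for the backwards walk         */
} list_t;

/* Walk from tail to head, visiting each container via its previous pointer. */
void walk_backwards(const list_t *list) {
    for (const container_t *c = list->tail; c != NULL; c = c->prev)
        printf("container %d\n", c->id);
}

int main(void) {
    container_t a = { .id = 1312 }, b = { .id = 1314 };   /* two class (i) containers */
    a.next = &b; b.prev = &a;                             /* a precedes b             */
    list_t free_list = { .head = &a, .tail = &b };
    walk_backwards(&free_list);                           /* prints 1314 then 1312    */
    return 0;
}
```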
The variably sized containers by class may be implemented within Memcached memory structure1200ofFIG.12. In addition, the containers included within the lists1410and1420may be implemented by the memory appliances and systems100A-B,200,310,600A ofFIGS.1A-B,2, and6. When walking the containers in class (i) from tail1415to head1416, container1412is next following path1450, then container1414along path1451from the previous pointer, and then to head1416along path1452from the previous pointer. Similarly, when walking the containers in class (p) from tail1425to head1426, container1422is next. Since there is only one container in class (p), the previous pointer in container1422will point to head1426. In addition, in the Memcached implementation of the memory structure, a key is hashed and matched to one of the values in the hash list1440. For example, a key (of a key-value pair stored in the data portion of a container) that is hashed may be represented by hash1441. That hash1441points to a bucket list (k). The hash value1441includes a pointer to the first entry in the bucket list (k), which is container1412. From the relationship information in container1412, the next bucket list pointer leads to container1422in class (p) along path1456. In that manner, the keys in the data portion of containers1412and1422can be matched with the original key (or their hashes can be matched) to determine which container, if any, belongs to the originally presented key. A similar process may be followed to determine if any containers belong to a key that hashes to hash1442in the list1440. FIG.15is an illustration of a combination of free lists and LRU lists within classifications of variably sized containers, in accordance with one embodiment of the present disclosure. In addition, the containers are organized within a Memcached memory structure. For example, a memory structure may include two classes of containers (e.g., blobs). The first class (i) is defined in a free list that includes four containers1521-1524. A second class (p) is defined in a free list that includes four containers1531-1534. As shown, containers in class (i) are of a first size, and containers in class (p) are of a second size, wherein the sizes are different. In order to manage the containers within a memory structure, containers in a classification list may be related such that the least recently used container is known, and free containers are known. The variably sized containers by class may be implemented within Memcached memory structure1200ofFIG.12, for example. In addition, the containers included within the lists1410and1420may be implemented by the memory appliances and systems100A-B,200,310,600A ofFIGS.1A-B,2, and6. In addition, the hash table1510allows for keys to be linked to a proper container, and one or more of its associated containers. This is accomplished by walking the bucket list to match keys in containers of the bucket list (e.g., list k) to the originally presented key. For example, bucket list k from hash value1515includes containers1521and1532. FIG.16is an illustration of two memory structures based on the same set of containers1600within a memory appliance architecture including programmable memory interfaces for accessing memory, in accordance with one embodiment of the present disclosure. That is, in one memory appliance, depending on how relationships are defined between containers stored in the memory appliance, there may be multiple data structures, such as data structure1and data structure2shown inFIG.16. 
For example, the set of containers includes containers N-1 through N-3. Depending on how these containers are arranged (e.g., as defined by their relationships) multiple memory structures can be defined. That is, by performing an operation on the relationship information of any of the containers in the set1600, the memory structure is reconfigured. Though the data structures are shown having three containers, it is understood that data structures1and2may contain any number of variably sized containers, and that the total number of containers may be different in each of the data structures1and2. In that manner, the memory appliance is reconfigurable depending on the defined relationships between containers N-1 through N-3, for example. The memory structures (e.g., data structures1and2) may be implemented by the memory appliances and systems100A-B,200,310,600A ofFIGS.1A-B,2, and6. For example, data structure1includes all three containers N-1, N-2, and N-3, but is defined as having a structure that has container N-1 preceding container N-2, and wherein container N-2 precedes container N-3. For example, the relationship information may define a list and the orders of containers within the list. In addition, data structure2includes all three containers N-1, N-2, and N-3, just as data structure1. However, data structure2is configured differently from data structure1, and is defined as having a structure with container N-1 preceding container N-3, and wherein container N-3 precedes container N-2. Data Structures, Types, and Commands As previously described, low-level memory primitives supporting read and write operations on absolute device memory addresses is supported by the SMC controller to allow the overall memory management required to facilitate the creation and manipulation of key global data structures. The SMC controller supports the allocation of variable-length blobs and their association with various device-based collections in the form of lists. Lists are an enabling vehicle for generalized slab management and free lists, hash tables, queues, command chains, etc. Applications that create blobs can be configured to explicitly anticipate the maximum number of lists that a blob will be a member of, concurrently, during its life time. Each blob contains a variable sized “listentry” array to accommodate list memberships. All blobs contain at least one listentry for use by the slab manager. In that manner, the primitive commands comprise data manipulation instructions formatted for operation on data stored in linked lists within the device memory. For example, the instructions may include operations configured for accessing data of a linked list; searching data of a linked list; modifying data of a linked list; adding data items to a linked list; and removing data items from a linked list. A list of commands used to facilitate discovery of SMC resources is provided. For example, an attributes structure containing application relevant SMC information (e.g., starting device address of available memory, size of available memory, etc.) is populated in response to the SMC ATTRS command. Various exemplary primitive commands are listed below. The “READ<SRC, DST, LENGTH>” primitive command copies an entry from device memory into system memory over a specified length. The “SRC” term defines the device source address. The “DST” term defines the system memory destination address. The “LENGTH” term defines the data length (e.g., in bytes) that are copied. 
The “READ” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “WRITE<SRC, DST, LENGTH>” primitive command copies from system memory to device memory over a specified length. Here, the “SRC” term defines the system memory source address. The “DST” term defines the device memory destination address. The “LENGTH” term defines the data length (e.g., in bytes) that is copied. The “WRITE” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “LREAD<LIST, OFFSET, LENGTH, DST>” command reads data from a list, wherein the list is a continuous or contiguous block of memory. For example, the memory controller walks the list to fulfill the request. The term “LIST” points to a list in the device memory. The “LREAD” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “LWRITE<SRC, LIST, OFFSET, LENGTH>” primitive command writes data to a list, wherein the list is a continuous or contiguous block of memory. For example, the memory controller walks the list to fulfill the write request. The term “SRC” defines the source address in system memory. The term “LIST” points to the list in device memory. The term “OFFSET” provides for seeking the location across the list of blobs. The term “LENGTH” defines the length of data to be copied. The “LWRITE” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “READFIELDS<BLOB, COUNT, FIELDID, DST>” primitive command reads a specific blob metadata field into a system memory destination. This command can be performed across multiple blob objects. For example, this command can be used when performing various operations related to cache invalidation, garbage collection, etc. The term “BLOB” defines a system memory pointer to an array of blob pointers. The individual blobs point to device memory. The term “COUNT” defines the number of blobs pointed to by the BLOB array. The term “FIELDID” defines an enumerated value representing a specific metadata field to read. The term “DST” defines a destination buffer in system memory large enough to hold COUNT entries of the data type represented by FIELDID. The “READFIELDS” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “LREADFIELD<LIST, COUNT, FIELDID, DST>” command reads a specific field from each of the blobs in a list, and places the values continuously and/or contiguously in the DST. The term “LIST” defines a list pointer in device memory of the list to traverse for reading fields. The term “COUNT” defines the maximum number of fields that can be held by the DST buffer. The term “FIELDID” defines the field from each BLOB structure to be read. The term “DST” defines the destination buffer for writing data fields. The “LREADFIELD” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “WRITEFIELDS<BLOB, COUNT, FIELDID, SRC>” command writes a specific blob metadata field from a system memory source into a device memory destination. 
This command is implementable across multiple blob objects. For example, this command can be used when performing various operations related to cache invalidation, garbage collection, etc. The term “BLOB” defines a system memory pointer to an array of blob pointers. The individual blobs point to device memory. The term “COUNT” defines the number of blobs pointed to by the BLOB array. The term “FIELDID” defines the enumerated value representing a specific metadata field to write. The term “SRC” defines the source buffer in system memory containing COUNT entries of the data type represented by FIELDID. This array is pre-populated with the values to be written to the BLOB(s) pointed to by the BLOB array, in one implementation. The “WRITEFIELDS” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “MREAD<COUNT, [SRC, DST, LENGTH]>” command is configured to perform multiple read operations, copying data from device memory to system memory. The term “COUNT” defines the number of read operations being requested. The term “SRC” defines an array of device memory addresses representing the source addresses for the read operation. The term “DST” defines an array of system memory addresses representing the destination addresses into which data is copied. The term “LENGTH” defines an array of respective lengths for each of the read operations being specified. The “MREAD” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “MWRITE<COUNT, [SRC, DST, LENGTH]*>” command performs multiple write operations, including copying data from system memory to device memory. The term “COUNT” defines the number of write operations being requested. The term “SRC” defines an array of system memory addresses representing the source addresses for the write operation. The term “DST” defines an array of device memory addresses representing the destination addresses into which data is copied. The term “LENGTH” defines an array of respective lengths for each of the write operations being specified. The “MWRITE” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “ALLOC<LIST>” command unlinks and returns the first blob in the list, increments the blob reference count, and touches the creation and access timestamps. The term “LIST” defines the list from which to allocate a blob. The term “COUNT” defines the number of items left in the list. The “ALLOC” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “PREPEND<LIST, BLOB, INDEX>” command inserts a blob at the beginning of a list. The term “LIST” is a pointer to a list in device memory into which the BLOB should be prepended. The term “BLOB” is a pointer to a blob in device memory to prepend into the LIST. The term “INDEX” is a listentry index in the BLOB to use for prepending. The “PREPEND” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “APPEND<LIST, BLOB, INDEX>” command appends a blob to the end of a list. The term “LIST” is a pointer to a list in device memory into which the BLOB should be appended. 
The term “BLOB” is a pointer to a blob in device memory to append into the list. The term “INDEX” is a listentry index in the BLOB to use for appending. The “APPEND” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “INSERT AFTER<LIST, BLOB1, BLOB2, INDEX>” command inserts BLOB1 after BLOB2 in a list LIST. The term “LIST” defines the list into which to insert BLOB1. The term “BLOB1” defines the blob to insert. The term “BLOB2” defines the blob after which to insert BLOB1. The term “INDEX” defines the listentry index to use for inserting. The “INSERT AFTER” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “INSERT BEFORE<LIST, BLOB1, BLOB2, INDEX>” command inserts BLOB1 before BLOB2 in LIST. The term “LIST” defines the list into which to insert BLOB1. The term “BLOB1” defines the blob to insert. The term “BLOB2” defines the blob before which to insert BLOB1. The term “INDEX” defines the listentry index to use for inserting. The “INSERT BEFORE” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “FREE<BLOB>” command will decrement a reference count and link a blob into its free list if ref==0. The command will return a reference count. The command uses the listentry (index 0) reserved for use by the slab manager. Using a reference counting model, it is possible that threads can hold references to blobs that have been “freed”. In such a case, when the reference count is not 0 when FREE is invoked, the BLOB will only be added to the free list for subsequent allocation when the outstanding references are decremented by reference holders. Note that DECR REFCOUNT can result in an implicit free operation. The term “BLOB” defines the blob to free. The “FREE” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “SELECT<LIST, CRITERIA, **BLOB, COUNT>” command returns all blobs from the LIST that meet the specified criteria, up to a maximum of COUNT. The fundamental idea is to facilitate multi-selection of blobs within a given list. Use-cases include rapidly identifying cache objects past their expiration date and key comparisons for exact matches in lists representing a specific hash bucket. The “SELECT” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “LINK<LIST, BLOB, INDEX>” command adds a BLOB to a LIST in device memory. The command uses the specific listentry in the BLOB represented by INDEX. The term “LIST” defines the list pointer, in device memory, into which to insert the BLOB. The term “BLOB” defines the blob pointer, in device memory, to insert into the LIST. The term “INDEX” defines the listentry index in the BLOB to use for this LIST. The “LINK” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “UNLINK<LIST, BLOB, INDEX>” command removes the BLOB from the LIST, clearing the next and previous pointers in listentry [INDEX]. The term “LIST” defines the pointer in device memory to the list containing the blob to unlink. 
The term “BLOB” defines the pointer to device memory for the BLOB being unlinked. The term “INDEX” defines the listentry index to clear. The “UNLINK” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “DEFINE_LIST<ID, HEAD, TAIL>” command will define various parameters for a list, including the identifier, head and tail. The “DEFINE_LIST” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “CONDITIONAL_UNLINK<LIST, CRITERIA, INDEX>” command defines an unlink operation on a particular list. The “CONDITIONAL” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “INCR_REFCOUNT<BLOB>” command increments the reference count associated with a blob. The “INCR REFCOUNT” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “DECR_REFCOUNT<BLOB>” command decrements the reference count for BLOB and links the BLOB back into its free list, if the reference count goes to 0. Otherwise, the command returns a decremented reference count. The “DECR REFCOUNT” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “MOVE_MEMBERSHIP<SRC LIST, DST LIST, BLOB, INDEX>” command moves membership of a blob between lists. The “MOVE” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “FILL<BYTE, DST, COUNT>” command fills device memory at the DST address with BYTE for a length of COUNT. The term “BYTE” defines the byte to fill the device memory with. The term “DST” defines the pointer to device memory where the FILL operation begins. The term “COUNT” defines the number of bytes from DST over which the value of BYTE is written. The “FILL” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “BLOB_FILL<BLOB, BYTE>” command fills blob data with BYTE. The term “BLOB” points to device memory for this blob. The term “BYTE” defines the value to fill in the BLOB's variable length data. The “BLOB FILL” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “BLOB_WRITE_DATA<BLOB, SRC, LENGTH>” command overwrites blob data. The term “BLOB” points to device memory for this blob. The term “SRC” defines a pointer to system memory where the data to be written resides. The term “LENGTH” defines the length of data to write. The “BLOB WRITE” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “BLOB AND<BLOB1, BLOB2, BLOB DST>” command performs a bitwise AND operation using BLOB1 and BLOB2 variable data, storing the result in BLOB DST. The term “BLOB1” defines the first blob operation for the bitwise AND operation. The term “BLOB2” defines the second blob operation for the bitwise AND operation. The term “BLOB DST” defines the blob resulting from the bitwise AND operation of BLOB1 and BLOB2. 
The “BLOB AND” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “BLOB OR<BLOB1, BLOB2, BLOB DST>” command performs bitwise OR operation using BLOB1 and BLOB2 variable data storing result in BLOB DST. The term “BLOB 1” defines the first blob operation for bitwise OR operation. The term “BLOB2” defines the second blob operation for bitwise OR operation. The term “BLOB DST” defines the blob resulting from bitwise OR operation of BLOB 1 and BLOB2. The “BLOB OR” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “BLOB XOR<BLOB 1, BLOB2, BLOB DST>” command performs bitwise XOR operation using BLOB 1 and BLOB2 variable data storing result in BLOB DST. The term “BLOB1” defines the first blob operation for bitwise XOR operation. The term “BLOB2” defines the second blob operation for bitwise XOR operation. The term “BLOB DST” defines the blob resulting from bitwise XOR operation of BLOB 1 and BLOB2. The “BLOB XOR” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. The “BLOB COMPLEMENT<BLOB SRC, BLOB DST>” command performs bitwise 2-s complement operation on BLOB1 storing result in BLOB DST. The term “BLOB1” defines the blob containing bits for NOT operation. The term “BLOB2” defines the resulting blob. The “BLOB COMPLEMENT” primitive command is implementable on containers1100A-B and within memory structure1200ofFIGS.11A-Band12, and on containers included within lists ofFIGS.13-15. Portions of the detailed descriptions are presented in terms of procedures, logic blocks, processing, and other symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. In the present application, a procedure, logic block, process, or the like, is conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those utilizing physical manipulations of physical quantities. Usually, although not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as transactions, bits, values, elements, symbols, characters, samples, pixels, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present disclosure, discussions utilizing terms such as “accessing,” “receiving,” “selecting,” “storing,” “loading,” “reprogramming,” “determining,” “searching,” “moving,” “copying,” “deleting,” “identifying,” “executing,” “compiling,” “providing,” or the like, refer to actions and processes (e.g., flowcharts described herein) of a computer system or similar electronic computing device or processor (e.g., system1710ofFIG.17). 
The computer system or similar electronic computing device manipulates and transforms data represented as physical (electronic) quantities within the computer system memories, registers or other such information storage, transmission or display devices. Embodiments described herein may be discussed in the general context of computer-executable instructions residing on some form of computer-readable storage medium, such as program modules, executed by one or more computers or other devices. By way of example, and not limitation, computer-readable storage media may comprise non-transitory computer storage media and communication media. Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or distributed as desired in various embodiments. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, FLASH memory, non-volatile memory or other memory technology, CD-ROM, DVDs or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can accessed to retrieve that information. Communication media can embody computer-executable instructions, data structures, and program modules, and includes any information delivery media. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above can also be included within the scope of computer-readable media. FIG.17is a block diagram of an example of a computing system1710capable of implementing embodiments of the present disclosure. Computing system1710broadly represents any single or multi-processor computing device or system capable of executing computer-readable instructions. Examples of computing system1710include, without limitation, workstations, laptops, client-side terminals, servers, distributed computing systems, handheld devices, or any other computing system or device. In its most basic configuration, computing system1710may include at least one processor1714and a system memory1716. Processor1714generally represents any type or form of processing unit capable of processing data or interpreting and executing instructions. In certain embodiments, processor1714may receive instructions from a software application or module. These instructions may cause processor1714to perform the functions of one or more of the example embodiments described and/or illustrated herein. For example, processor1714may perform and/or be a means for performing, either alone or in combination with other elements, one or more of the identifying, determining, using, implementing, translating, tracking, receiving, moving, and providing described herein. Processor1714may also perform and/or be a means for performing any other steps, methods, or processes described and/or illustrated herein. 
System memory1716generally represents any type or form of volatile or non-volatile storage device or medium capable of storing data and/or other computer-readable instructions. Examples of system memory1716include, without limitation, RAM, ROM, FLASH memory, or any other suitable memory device. Although not required, in certain embodiments computing system1710may include both a volatile memory unit (such as, for example, system memory1716) and a non-volatile storage device (such as, for example, primary storage device1732). Computing system1710may also include one or more components or elements in addition to processor1714and system memory1716. For example, in the embodiment ofFIG.17, computing system1710includes a memory controller1718, an I/O controller1720, and a communication interface1722, each of which may be interconnected via a communication infrastructure1712. Communication infrastructure1712generally represents any type or form of infrastructure capable of facilitating communication between one or more components of a computing device. Examples of communication infrastructure1712include, without limitation, a communication bus (such as an ISA, PCI, PCIe, or similar bus) and a network. In one embodiment, system memory1716communicates via a dedicated memory bus. Memory controller1718generally represents any type or form of device capable of handling memory or data or controlling communication between one or more components of computing system1710. For example, memory controller1718may control communication between processor1714, system memory1716, and I/O controller1720via communication infrastructure1712. Memory controller1718may perform and/or be a means for performing, either alone or in combination with other elements, one or more of the operations or features described herein. I/O controller1720generally represents any type or form of module capable of coordinating and/or controlling the input and output functions of a computing device. For example, I/O controller1720may control or facilitate transfer of data between one or more elements of computing system1710, such as processor1714, system memory1716, communication interface1722, display adapter1726, input interface1730, and storage interface1734. I/O controller1720may be used, for example, to perform and/or be a means for performing, either alone or in combination with other elements, one or more of the operations described herein. I/O controller1720may also be used to perform and/or be a means for performing other operations and features set forth in the instant disclosure. Communication interface1722broadly represents any type or form of communication device or adapter capable of facilitating communication between example computing system1710and one or more additional devices. For example, communication interface1722may facilitate communication between computing system1710and a private or public network including additional computing systems. Examples of communication interface1722include, without limitation, a wired network interface (such as a network interface card), a wireless network interface (such as a wireless network interface card), a modem, and any other suitable interface. In one embodiment, communication interface1722provides a direct connection to a remote server via a direct link to a network, such as the Internet. 
Communication interface1722may also indirectly provide such a connection through, for example, a local area network (such as an Ethernet network), a personal area network, a telephone or cable network, a cellular telephone connection, a satellite data connection, or any other suitable connection. Communication interface1722may also represent a host adapter configured to facilitate communication between computing system1710and one or more additional network or storage devices via an external bus or communications channel. Examples of host adapters include, without limitation, SCSI host adapters, USB host adapters, IEEE (Institute of Electrical and Electronics Engineers) 1394 host adapters, Serial Advanced Technology Attachment (SATA) and External SATA (eSATA) host adapters, Advanced Technology Attachment (ATA) and Parallel ATA (PATA) host adapters, Fibre Channel interface adapters, Ethernet adapters, or the like. Communication interface1722may also allow computing system1710to engage in distributed or remote computing. For example, communication interface1722may receive instructions from a remote device or send instructions to a remote device for execution. Communication interface1722may perform and/or be a means for performing, either alone or in combination with other elements, one or more of the operations disclosed herein. Communication interface1722may also be used to perform and/or be a means for performing other operations and features set forth in the instant disclosure. As illustrated inFIG.17, computing system1710may also include at least one display device1724coupled to communication infrastructure1712via a display adapter1726. Display device1724generally represents any type or form of device capable of visually displaying information forwarded by display adapter1726. Similarly, display adapter1726generally represents any type or form of device configured to forward graphics, text, and other data from communication infrastructure1712(or from a frame buffer, as known in the art) for display on display device1724. As illustrated inFIG.17, computing system1710may also include at least one input device1728coupled to communication infrastructure1712via an input interface1730. Input device1728generally represents any type or form of input device capable of providing input, either computer- or human-generated, to computing system1710. Examples of input device1728include, without limitation, a keyboard, a pointing device, a speech recognition device, or any other input device. In one embodiment, input device1728may perform and/or be a means for performing, either alone or in combination with other elements, one or more of the operations disclosed herein. Input device1728may also be used to perform and/or be a means for performing other operations and features set forth in the instant disclosure. As illustrated inFIG.17, computing system1710may also include a primary storage device1732and a backup storage device1733coupled to communication infrastructure1712via a storage interface1734. Storage devices1732and1733generally represent any type or form of storage device or medium capable of storing data and/or other computer-readable instructions. For example, storage devices1732and1733may be a magnetic disk drive (e.g., a so-called hard drive), a floppy disk drive, a magnetic tape drive, an optical disk drive, a FLASH drive, or the like. 
Storage interface1734generally represents any type or form of interface or device for transferring data between storage devices1732and1733and other components of computing system1710. In one example, databases1740may be stored in primary storage device1732. Databases1740may represent portions of a single database or computing device or a plurality of databases or computing devices. For example, databases1740may represent (be stored on) a portion of computing system1710and/or portions of example network architecture1800inFIG.18(below). Alternatively, databases1740may represent (be stored on) one or more physically separate devices capable of being accessed by a computing device, such as computing system1710and/or portions of network architecture1800. Continuing with reference toFIG.17, storage devices1732and1733may be configured to read from and/or write to a removable storage unit configured to store computer software, data, or other computer-readable information. Examples of suitable removable storage units include, without limitation, a floppy disk, a magnetic tape, an optical disk, a FLASH memory device, or the like. Storage devices1732and1733may also include other similar structures or devices for allowing computer software, data, or other computer-readable instructions to be loaded into computing system1710. For example, storage devices1732and1733may be configured to read and write software, data, or other computer-readable information. Storage devices1732and1733may also be a part of computing system1710or may be separate devices accessed through other interface systems. Storage devices1732and1733may be used to perform, and/or be a means for performing, either alone or in combination with other elements, one or more of the operations disclosed herein. Storage devices1732and1733may also be used to perform, and/or be a means for performing, other operations and features set forth in the instant disclosure. Many other devices or subsystems may be connected to computing system1710. Conversely, all of the components and devices illustrated inFIG.17need not be present to practice the embodiments described herein. The devices and subsystems referenced above may also be interconnected in different ways from that shown inFIG.17. Computing system1710may also employ any number of software, firmware, and/or hardware configurations. For example, the example embodiments disclosed herein may be encoded as a computer program (also referred to as computer software, software applications, computer-readable instructions, or computer control logic) on a computer-readable medium. The computer-readable medium containing the computer program may be loaded into computing system1710. All or a portion of the computer program stored on the computer-readable medium may then be stored in system memory1716and/or various portions of storage devices1732and1733. When executed by processor1714, a computer program loaded into computing system1710may cause processor1714to perform and/or be a means for performing the functions of the example embodiments described and/or illustrated herein. Additionally or alternatively, the example embodiments described and/or illustrated herein may be implemented in firmware and/or hardware. For example, computing system1710may be configured as an ASIC adapted to implement one or more of the embodiments disclosed herein. FIG.18is a block diagram of an example of a network architecture1800in which client systems1810,1820, and1830and servers1840and1845may be coupled to a network1850. 
Client systems1810,1820, and1830generally represent any type or form of computing device or system, such as computing system1710ofFIG.17. Similarly, servers1840and1845generally represent computing devices or systems, such as application servers or database servers, configured to provide various database services and/or run certain software applications. Network1850generally represents any telecommunication or computer network including, for example, an intranet, a WAN, a LAN, a PAN, or the Internet. As illustrated inFIG.18, one or more storage devices1860(1)-(L) may be directly attached to server1840. Similarly, one or more storage devices1870(1)-(N) may be directly attached to server1845. Storage devices1860(1)-(L) and storage devices1870(1)-(N) generally represent any type or form of storage device or medium capable of storing data and/or other computer-readable instructions. Storage devices1860(1)-(L) and storage devices1870(1)-(N) may represent NAS devices configured to communicate with servers1840and1845using various protocols, such as NFS, SMB, or GIFS. Servers1840and1845may also be connected to a SAN fabric1880. SAN fabric1880generally represents any type or form of computer network or architecture capable of facilitating communication between storage devices. SAN fabric1880may facilitate communication between servers1840and1845and storage devices1890(1)-(M) and/or an intelligent storage array1895. SAN fabric1880may also facilitate, via network1850and servers1840and1845, communication between client systems1810,1820, and1830and storage devices1890(1)-(M) and/or intelligent storage array1895in such a manner that devices1890(1)-(M) and array1895appear as locally attached devices to client systems1810,1820, and1830. As with storage devices1860(1)-(L) and storage devices1870(1)-(N), storage devices1890(1)-(M) and intelligent storage array1895generally represent any type or form of storage device or medium capable of storing data and/or other computer-readable instructions. With reference to computing system1710ofFIG.17, a communication interface, such as communication interface1722, may be used to provide connectivity between each client system1810,1820, and1830and network1850. Client systems1810,1820, and1830may be able to access information on server1840or1845using, for example, a Web browser or other client software. Such software may allow client systems1810,1820, and1830to access data hosted by server1840, server1845, storage devices1860(1)-(L), storage devices1870(1)-(N), storage devices1890(1)-(M), or intelligent storage array1895. AlthoughFIG.18depicts the use of a network (such as the Internet) for exchanging data, the embodiments described herein are not limited to the Internet or any particular network-based environment. Returning toFIG.18, in one embodiment, all or a portion of one or more of the example embodiments disclosed herein are encoded as a computer program and loaded onto and executed by server1840, server1845, storage devices1860(1)-(L), storage devices1870(1)-(N), storage devices1890(1)-(M), intelligent storage array1895, or any combination thereof. All or a portion of one or more of the example embodiments disclosed herein may also be encoded as a computer program, stored in server1840, run by server1845, and distributed to client systems1810,1820, and1830over network1850. Accordingly, network architecture1800may perform and/or be a means for performing, either alone or in combination with other elements, one or more of the operations disclosed herein. 
Network architecture1800may also be used to perform and/or be a means for performing other operations and features set forth in the instant disclosure. The above described embodiments may be used, in whole or in part, in systems that process large amounts of data and/or have tight latency constraints, and, in particular, with systems using one or more of the following protocols and formats: Key-Value (KV) Store, Memcached, Redis, Neo4J (Graph), Fast Block Storage, Swap Device, and Network RAMDisk. In addition, the above described embodiments may be used, in whole or in part, in systems employing virtualization, Virtual Desktop Infrastructure (VDI), distributed storage and distributed processing (e.g., Apache Hadoop), data analytics cluster computing (e.g., Apache Spark), Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and other cloud computing platforms (e.g., VMware vCloud, OpenStack, and Microsoft Azure). Further, the above described embodiments may be used, in whole or in part, in systems conducting various types of computing, including Scale Out, Disaggregation, Multi-Thread/Distributed Processing, RackScale, Data Center Scale Computing, Elastic Memory Provisioning, Memory as a Service, page migration and caching and Application Offloading/Acceleration and Integration, using various types of storage, such as Non-Volatile Memory Express, Flash, Multi-Tenancy, Internet Small Computer System Interface (iSCSI), Object Storage, Scale Out storage, and using various types of networking, such as 10/40/100GbE, Software-Defined Networking, Silicon Photonics, Rack TOR Networks, and Low-Latency networking. While the foregoing disclosure sets forth various embodiments using specific block diagrams, flowcharts, and examples, each block diagram component, flowchart step, operation, and/or component described and/or illustrated herein may be implemented, individually and/or collectively, using a wide range of hardware, software, or firmware (or any combination thereof) configurations. In addition, any disclosure of components contained within other components should be considered as examples because many other architectures can be implemented to achieve the same functionality. The process parameters and sequence of steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed. The various example methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or include additional steps in addition to those disclosed. While various embodiments have been described and/or illustrated herein in the context of fully functional computing systems, one or more of these example embodiments may be distributed as a program product in a variety of forms, regardless of the particular type of computer-readable media used to actually carry out the distribution. The embodiments disclosed herein may also be implemented using software modules that perform certain tasks. These software modules may include script, batch, or other executable files that may be stored on a computer-readable storage medium or in a computing system. These software modules may configure a computing system to perform one or more of the example embodiments disclosed herein. 
One or more of the software modules disclosed herein may be implemented in a cloud computing environment. Cloud computing environments may provide various services and applications via the Internet. These cloud-based services (e.g., software as a service, platform as a service, infrastructure as a service, etc.) may be accessible through a Web browser or other remote interface. Various functions described herein may be provided through a remote desktop environment or any other cloud-based computing environment. The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the disclosure and its practical applications, to thereby enable others skilled in the art to best utilize the disclosure and various embodiments with various modifications as may be suited to the particular use contemplated. Embodiments according to the present disclosure are thus described. While the present disclosure has been described in particular embodiments, it should be appreciated that the disclosure should not be construed as limited by such embodiments, but rather construed according to the below claims.
DETAILED DESCRIPTION OF THE INVENTION The present invention discloses systems and methods for deterministic concurrent communication between PEs either 1) implemented in a two dimensional grid (“2D-grid”) in a single die, or 2) implemented in a plurality of dies on a semiconductor wafer, or 3) implemented in a plurality of integrated circuits or chips; all the scenarios are collectively called a scalable distributed computing system or massively parallel system or multiprocessor system (“MPS”). In one embodiment the concurrent communication is each PE broadcasting data tokens to all the rest of the PEs concurrently in a deterministic number of time steps. In another embodiment the concurrent communication is each PE multicasting data tokens to one or more of the PEs concurrently in a deterministic number of time steps; if each PE concurrently transmits to another PE it is unicast and if each PE concurrently transmits to two or more other PEs it is multicast. A 2D-grid of PEs of size a×b, where a≥1, b≥1, a+b>2, and both a and b are integers, is disclosed. A scalable multi-stage hypercube-based interconnection network to connect one or more PEs using vertical and horizontal buses is disclosed. Accordingly, each two PEs with a connection between them are connected by a separate bus in each direction, where a bus is one or more wires. In one embodiment the buses are connected in pyramid network configuration, i.e., all the vertical buses and horizontal buses are connected between same corresponding switches of the PEs. At each PE, the interconnection network comprises one or more switches (collectively “interconnect”), with each switch concurrently capable of sending and receiving packets from one PE to another PE through the bus connected between them. (To be specific, the interconnection network is the combination of the interconnects of all PEs, i.e., including the switches and all buses connected to the switches of all PEs.) In one embodiment, each switch is implemented by one or more multiplexers. Each packet comprises a data token and routing information such as source and destination addresses of PEs. Each PE, in addition to the interconnect, comprises a processor and memory. In one embodiment the processor is a Central Processing Unit (“CPU”) comprising functional units that perform operations such as additions, multiplications, or logical operations, for executing computer programs. In another embodiment the processor comprises a domain specific architecture (“DSA”) based Deep Neural Network (“DNN”) processor comprising one or more multiply accumulate (“MAC”) units for matrix multiply operations. In one embodiment each PE comprises a processor, memory and an interconnect, each two of which are directly connected. A balanced MPS architecture between processor, memory and interconnect is disclosed. That is, the typical bottleneck in the interconnect is alleviated so that the overall throughput of the MPS is close to the peak throughput, especially for embarrassingly parallel applications, for example, today's popular DNNs such as Multi-Layer Perceptrons (“MLP”), Convolutional Neural Networks (“CNN”), Recurrent Neural Networks (“RNN”) and Sparse Neural Networks (“SNN”). A scalable MPS to implement DNN processing requires concurrent broadcast and multicast between PEs in a deterministic number of time steps. 
At each PE, matching the broadcast and multicast capability of interconnect, the capabilities for processor, memory and the bandwidth between each two of them will be provided for a balanced MPS architecture in accordance with the current invention. This is in contrast to providing maximum capabilities to processor, memory and the bandwidth between processor and memory but with a bottlenecked interconnect resulting in poor performance and throughput in the prior art solutions. The balanced MPS architecture disclosed in the current invention is power efficient with maximum performance at lower silicon area and also enables software simplicity. Methods for all the PEs of the 2D-grid of PEs concurrently broadcasting packets to all the other PEs in the 2D-grid in a non-blocking, collision-free and without requiring to queue in a deterministic number of time steps, in a fixed predetermined path between each two PEs are disclosed. Methods for all the PEs of the 2D-grid of PEs concurrently arbitrary fan-out multicasting and unicasting packets to the other PEs in the 2D-grid in a non-blocking, collision-free and without requiring to queue in a deterministic number of time steps, in a fixed predetermined path between each two PEs are also disclosed. Scalable multi-stage hypercube-based interconnection network with 4*4 2D-grid of PEs with buses connected in pyramid network configuration: Referring to diagram100A inFIG.1A, in one embodiment, an exemplary multi-stage hypercube-based interconnection network between 16 PEs arranged in 4*4 grid where the number of rows is four and the number of columns is four. The 16 PEs are represented in binary format namely PE0000, PE0001, PE0010, PE0011, PE0100, PE0101, PE0110, PE0111, PE1000, PE1001, PE1010, PE1011, PE1100, PE1101, PE1110, and PE1111, since 4 bits are needed to represent 16 numbers and the corresponding decimal format being PE0, PE1, PE2, PE3, PE4, PE5, PE6, PE7, PE8, PE9, PE10, PE11, PE12, PE13, PE14, and PE15respectively. Each PE comprises four switches in pyramid network configuration. For example PE0000comprises four switches S(0,0), S(0,1), S(0,2), and S(0,3). F(0,0) is a forward bus connected from S(0,0) to S(0,1). F(0,1) is a forward bus connected from S(0,1) to S(0,2). F(0,2) is a forward bus connected from S(0,2) to S(0,3). B(0,0) is a backward bus connected from S(0,1) to S(0,0). B(0,1) is a backward bus connected from S(0,2) to S(0,1). B(0,2) is a backward bus connected from S(0,3) to S(0,2). All the right going buses are referred as forward buses and are denoted by F(x,y) where x={{0−9}∪{A−F}} and y={0−3}. All the left going buses are referred as backward buses and denoted by B(x,y) where x={{0−9}∪{A−F}} and y={0−3}. Each of the four switches in each PE comprise one inlet bus and outlet bus as shown in diagram100B ofFIG.1B. For example in PE0000, switch S(0,0) comprises inlet bus I(0,0) and outlet bus O(0,0). Switch S(0,1) comprises inlet bus I(0,1) and outlet bus O(0,1). Switch S(0,2) comprises inlet bus I(0,2) and outlet bus O(0,2). Switch S(0,3) comprises inlet bus I(0,3) and outlet bus O(0,3). For simplicity of illustration the inlet buses and outlet buses to each switch of each PE are not shown in diagram100A ofFIG.1A. Accordingly with the addition of inlet buses and outlet buses illustrated in diagram100B ofFIG.1Bto diagram100A ofFIG.1Acompletes the multi-stage hypercube-based interconnection network between 16 PEs arranged in 4*4 grid. 
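The following sketch merely reproduces the naming scheme described above for a single PE, assuming four switches per PE in pyramid configuration with forward buses F(x,y) from S(x,y) to S(x,y+1), backward buses B(x,y) from S(x,y+1) to S(x,y), and one inlet and one outlet bus per switch. It models no routing behavior, and the helper function name is hypothetical.

def pe_interconnect(x: int, num_switches: int = 4) -> dict:
    # Build the switch and bus names for PE number x, mirroring diagrams 100A/100B.
    switches = [f"S({x},{y})" for y in range(num_switches)]
    inlets = [f"I({x},{y})" for y in range(num_switches)]
    outlets = [f"O({x},{y})" for y in range(num_switches)]
    forward = [(f"F({x},{y})", switches[y], switches[y + 1])
               for y in range(num_switches - 1)]
    backward = [(f"B({x},{y})", switches[y + 1], switches[y])
                for y in range(num_switches - 1)]
    return {"switches": switches, "inlets": inlets, "outlets": outlets,
            "forward_buses": forward, "backward_buses": backward}

# For PE0000 (x = 0) this reproduces F(0,0)..F(0,2), B(0,0)..B(0,2),
# I(0,0)..I(0,3) and O(0,0)..O(0,3) as recited above.
print(pe_interconnect(0))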
For example, the diagram100C inFIG.1Cillustrates the complete details of the interconnect of PE0000, i.e., combining the inlet buses and outlet buses shown in diagram100B ofFIG.1Bwith diagram100A ofFIG.1Aof the multi-stage hypercube-based interconnection network between 16 PEs arranged in 4*4 grid, in accordance with the current invention. As illustrated in diagram100A ofFIG.1A, the buses between PEs are either vertical buses or horizontal buses. A vertical bus is denoted by V(x,y) where x and y are decimal number representations of PEs and the vertical bus is connected from PE x to PE y. Similarly a horizontal bus is denoted by H(x,y) where x and y are decimal number representations of PEs and the horizontal bus is connected from PE x to PE y. In the multi-stage hypercube-based interconnection network diagram100A ofFIG.1A, since all the 16 PEs are represented by 4 bits each, each PE is connected, by buses in both the directions, to four other PEs where the PE number differs in only one bit. The number of PEs each PE is connected to is called the degree of the multi-stage hypercube-based interconnection network disclosed in diagram100A ofFIG.1A. Accordingly the degree of the multi-stage hypercube-based interconnection network disclosed in diagram100A ofFIG.1Ais four. For example, for PE0000, its number being 0000, the four PEs it is connected to are 1) PE0001where the least significant bit is different, 2) PE0010where the second least significant bit is different, 3) PE0100where the second most significant bit is different, and 4) PE1000where the most significant bit is different. Also switch S(0,0) in PE0000is connected to switch S(1,0) in PE0001by vertical bus V(0,1) and switch S(1,0) in PE0001is connected to switch S(0,0) in PE0000by vertical bus V(1,0). Switch S(0,1) in PE0000is connected to switch S(2,1) in PE0010by horizontal bus H(0,2) and switch S(2,1) in PE0010is connected to switch S(0,1) in PE0000by horizontal bus H(2,0). Switch S(0,2) in PE0000is connected to switch S(4,2) in PE0100by vertical bus V(0,4) and switch S(4,2) in PE0100is connected to switch S(0,2) in PE0000by vertical bus V(4,0). Switch S(0,3) in PE0000is connected to switch S(8,3) in PE1000by horizontal bus H(0,8) and switch S(8,3) in PE1000is connected to switch S(0,3) in PE0000by horizontal bus H(8,0). In one embodiment the buses are connected in pyramid network configuration, i.e., all the vertical buses and horizontal buses are connected between same corresponding switches of the PEs. For example, in diagram100A ofFIG.1A, switch S(0,m) in PE0000is connected to switch S(1,m) in PE0001by vertical bus V(0,1) where m=0; and switch S(1,m) in PE0001is connected to switch S(0,m) in PE0000by vertical bus V(1,0) where m=0. That is, vertical bus V(0,1) and vertical bus V(1,0) are connected between 0thswitches or same corresponding switches of PE0000and PE0001(specifically between switch S(0,0) of PE0000and switch S(1,0) of PE0001). Similarly switch S(0,m) in PE0000is connected to switch S(2,m) in PE0010by horizontal bus H(0,2) where m=1; and switch S(2,m) in PE0010is connected to switch S(0,m) in PE0000by horizontal bus H(2,0) where m=1. That is, horizontal bus H(0,2) and horizontal bus H(2,0) are connected between 1stswitches or same corresponding switches of PE0000and PE0010(specifically between switch S(0,1) of PE0000and switch S(2,1) of PE0010). 
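As an illustration of the degree-4 connectivity just described, the sketch below flips each bit of a PE index to enumerate its neighbors, assuming the convention recited above that even bit positions correspond to vertical buses, odd bit positions to horizontal buses, and that each bus attaches to the switch whose index equals the differing bit position. The function name and output format are hypothetical.

def neighbors(pe: int, n_bits: int = 4):
    result = []
    for bit in range(n_bits):
        other = pe ^ (1 << bit)                     # flip exactly one bit
        orientation = "V" if bit % 2 == 0 else "H"  # even bit -> vertical, odd -> horizontal
        bus_out = f"{orientation}({pe},{other})"    # bus from pe to other
        bus_in = f"{orientation}({other},{pe})"     # bus from other to pe
        result.append((other, bit, bus_out, bus_in))
    return result

# For PE0000 this lists PE0001, PE0010, PE0100 and PE1000 with buses
# V(0,1)/V(1,0), H(0,2)/H(2,0), V(0,4)/V(4,0) and H(0,8)/H(8,0),
# matching the connections recited above (switch index = bit position).
for other, switch_index, out_bus, in_bus in neighbors(0):
    print(f"PE{other:04b}: switch index {switch_index}, {out_bus}, {in_bus}")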
Similarly, in diagram 100A of FIG. 1A, all the vertical buses and horizontal buses are connected between the same corresponding switches of the PEs, and so all buses are referred to as connected in pyramid network configuration in diagram 100A of FIG. 1A. In general, a×b processing elements are arranged in a two dimensional grid so that a first processing element of the a×b processing elements is placed 2^k hops away, either vertically or horizontally, from a second processing element of the a×b processing elements if all n bits of the representation in binary format of the first processing element and the representation in binary format of the second processing element are the same in each bit except that either the (2×k+1)th least significant bit or the (2×k+2)th least significant bit differs, where k≥0. Also, in general, a switch of the one or more switches of a first processing element of the a×b processing elements is connected, by a 2^k hop length horizontal bus or a 2^k hop length vertical bus, to a switch of the one or more switches of a second processing element of the a×b processing elements if all n bits of the representation in binary format of the first processing element and the representation in binary format of the second processing element are the same in each bit except that either the (2×k+1)th least significant bit or the (2×k+2)th least significant bit differs, where k≥0; and also the switch of the one or more switches of the first processing element of the a×b processing elements is connected, by a 2^k hop length horizontal bus or a 2^k hop length vertical bus, from the switch of the one or more switches of the second processing element of the a×b processing elements under the same condition, so that the interconnect of each processing element of the a×b processing elements comprises one or more horizontal buses connecting to the interconnects of one or more processing elements of the a×b processing elements and the interconnect of each processing element of the a×b processing elements comprises one or more vertical buses connecting to the interconnects of one or more processing elements of the a×b processing elements. Applicant notes that the PEs are connected by horizontal buses and vertical buses as in a binary hypercube network. In diagram 100A of FIG. 1A with the 4*4 2D-grid of PEs, a=4 and b=4. In the embodiment of diagram 100A of FIG. 1A, for example, PE0001 is placed 2^0=1 hop away from PE0000 since the (2×0+1)th = 1st least significant bit of PE0000 is 0, which is different from the 1st least significant bit of PE0001, which is 1. Accordingly, PE0001 is placed one hop away vertically down from PE0000. Similarly, PE0010 is placed 2^0=1 hop away from PE0000 since the (2×0+2)th = 2nd least significant bit of PE0000 is 0, which is different from the 2nd least significant bit of PE0010, which is 1. Accordingly, PE0010 is placed one hop away horizontally to the right of PE0000. Similarly, switch S(0,0) in PE0000 is connected to switch S(1,0) in PE0001 by vertical bus V(0,1) and switch S(1,0) in PE0001 is connected to switch S(0,0) in PE0000 by vertical bus V(1,0). And switch S(0,1) in PE0000 is connected to switch S(2,1) in PE0010 by horizontal bus H(0,2) and switch S(2,1) in PE0010 is connected to switch S(0,1) in PE0000 by horizontal bus H(2,0).
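As an illustrative aid only, the placement rule just described can likewise be sketched in software; the function name grid_position and the zero-based row/column convention (counted from the top-left PE) are assumptions made for illustration.

```python
# Illustrative sketch: recover the 2D-grid placement of a PE from its binary number.
# Odd-numbered least significant bits (1st, 3rd, ...) place the PE 2**k hops away
# vertically; even-numbered least significant bits (2nd, 4th, ...) place it 2**k
# hops away horizontally, for k = 0, 1, 2, ...

def grid_position(pe: int, n_bits: int) -> tuple[int, int]:
    """Return (row, column) of a PE, both counted from 0 at the top-left PE."""
    row = col = 0
    for k in range((n_bits + 1) // 2):
        row += ((pe >> (2 * k)) & 1) << k        # (2k+1)th LSB -> vertical offset 2**k
        col += ((pe >> (2 * k + 1)) & 1) << k    # (2k+2)th LSB -> horizontal offset 2**k
    return row, col

# Examples matching FIG. 1A: PE0001 is one hop below PE0000, PE0010 is one hop
# to the right of PE0000, and PE1111 sits at the bottom-right corner.
assert grid_position(0b0001, 4) == (1, 0)
assert grid_position(0b0010, 4) == (0, 1)
assert grid_position(0b1111, 4) == (3, 3)
```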
Alternatively, in accordance with the current invention, in another embodiment PE0001 will be placed one hop away horizontally to the right of PE0000 and PE0010 will be placed one hop away vertically down from PE0000. Similarly, a switch in PE0000 will be connected to a switch in PE0001 by a horizontal bus and a switch in PE0001 will be connected to a switch in PE0000 by a horizontal bus. And a switch in PE0000 will be connected to a switch in PE0010 by a vertical bus and a switch in PE0010 will be connected to a switch in PE0000 by a vertical bus. More embodiments with modifications, adaptations and implementations described herein will be apparent to the skilled artisan. There are four quadrants in diagram 100A of FIG. 1A, namely the top-left, bottom-left, top-right and bottom-right quadrants. The top-left quadrant implements PE0000, PE0001, PE0010, and PE0011. The bottom-left quadrant implements PE0100, PE0101, PE0110, and PE0111. The top-right quadrant implements PE1000, PE1001, PE1010, and PE1011. The bottom-right quadrant implements PE1100, PE1101, PE1110, and PE1111. There are two halves in diagram 100A of FIG. 1A, namely the left-half and the right-half. The left-half consists of the top-left and bottom-left quadrants. The right-half consists of the top-right and bottom-right quadrants. Recursively, in each quadrant there are four sub-quadrants. For example, in the top-left quadrant there are four sub-quadrants, namely the top-left sub-quadrant, bottom-left sub-quadrant, top-right sub-quadrant and bottom-right sub-quadrant. The top-left sub-quadrant of the top-left quadrant implements PE0000. The bottom-left sub-quadrant of the top-left quadrant implements PE0001. The top-right sub-quadrant of the top-left quadrant implements PE0010. Finally, the bottom-right sub-quadrant of the top-left quadrant implements PE0011. Similarly there are two sub-halves in each quadrant. For example, in the top-left quadrant there are two sub-halves, namely the left-sub-half and the right-sub-half. The left-sub-half of the top-left quadrant implements PE0000 and PE0001. The right-sub-half of the top-left quadrant implements PE0010 and PE0011. Recursively, in a larger multi-stage hypercube-based interconnection network where the number of PEs is greater than 16, the diagram in this embodiment, in accordance with the current invention, will be such that the super-quadrants are also connected as in a binary hypercube network. Some of the key aspects of the multi-stage hypercube-based interconnection network are: 1) the buses for each PE are connected as alternating vertical and horizontal buses, and this scales recursively for a larger multi-stage interconnection network where the number of PEs is greater than 16, as will be illustrated later; 2) the hop lengths of the vertical buses and horizontal buses are 2^0=1 and 2^1=2, and the longest bus is the ceiling of half of the breadth (or width) of the complete 2D-grid. The hop length is measured as the number of hops between PEs; for example, the hop length between nearest neighbor PEs is one. The breadth and width being 3, the longest bus is of size 2, or the ceiling of 1.5. This also scales recursively for a larger multi-stage interconnection network where the number of PEs is greater than 16, as will be illustrated later. Diagrams 100A in FIG. 1A, 100B in FIG. 1B, and 100C in FIG. 1C can be recursively extended to any larger size multi-stage hypercube-based interconnection network with the sub-quadrants, quadrants, and super-quadrants arranged in binary hypercube manner and the vertical and horizontal buses accordingly connected in binary hypercube topology.
Referring to diagram 200 of FIG. 2, it illustrates the extension of the multi-stage hypercube-based interconnection network of diagram 100A of FIG. 1A, 100B in FIG. 1B, and 100C in FIG. 1C to a 2D-grid of size 8*8. There are four super-quadrants in diagram 200, namely the top-left super-quadrant, bottom-left super-quadrant, top-right super-quadrant, and bottom-right super-quadrant. The total number of PEs in diagram 200 is sixty four. The top-left super-quadrant implements PEs from PE000000 to PE001111. Each PE in all the super-quadrants has two more bits to represent the PE number in binary format representation and also has two more switches, namely switch S(x,4) and switch S(x,5), with each switch having one inlet bus and one outlet bus just as illustrated in diagram 100B of FIG. 1B, in addition to the four switches illustrated in diagram 100A of FIG. 1A, 100B in FIG. 1B, and 100C in FIG. 1C. The bus connection topology is exactly the same between the switches S(x,y) where x={{0-9}∪{A-F}} and y={0-3} as it is shown in diagram 100A of FIG. 1A. The degree of the multi-stage hypercube-based interconnection network disclosed in diagram 200 of FIG. 2 is six. The bottom-left super-quadrant implements the blocks from PE010000 to PE011111. The top-right super-quadrant implements the blocks from PE100000 to PE101111. And the bottom-right super-quadrant implements the blocks from PE110000 to PE111111. In all these three super-quadrants also, the bus connection topology is exactly the same between the switches S(x,y) where x={{0-9}∪{A-F}} and y={0-3} as it is shown in diagram 100A of FIG. 1A and just as that of the top-left super-quadrant. Recursively, in accordance with the current invention, the buses connecting between the switches S(*,4) are vertical buses in the top-left super-quadrant, bottom-left super-quadrant, top-right super-quadrant and bottom-right super-quadrant, and the buses connecting between the switches S(*,5) are horizontal buses in the top-left super-quadrant, bottom-left super-quadrant, top-right super-quadrant and bottom-right super-quadrant. For simplicity of illustration, only S(0,4) and S(0,5) are numbered in PE000000 and none of the buses connected between the switches S(*,4), nor the buses connected between the switches S(*,5), are shown in diagram 200 of FIG. 2. Now, multi-stage hypercube-based interconnection networks for 2D-grids where the number of PEs is less than 16 are illustrated. Referring to diagram 300 of FIG. 3, it is a multi-stage hypercube-based interconnection network of a 2*1 2D-grid where the number of PEs is 2. There are two PEs, i.e., PE0 and PE1, with PE0 comprising switch S(0,0) and PE1 comprising switch S(1,0). Switch S(0,0) in PE0 has one inlet bus I(0,0) and one outlet bus O(0,0). Switch S(1,0) in PE1 has one inlet bus I(1,0) and one outlet bus O(1,0). Since only one bit is needed to represent the number of PEs, there is only one switch in each PE. And there is only one vertical bus V(0,1) connected from switch S(0,0) in PE0 to switch S(1,0) in PE1 and only one vertical bus V(1,0) connected from switch S(1,0) in PE1 to switch S(0,0) in PE0. The degree of the multi-stage hypercube-based interconnection network disclosed in diagram 300 of FIG. 3 is one. Applicant notes that the buses between PE0 and PE1 are vertical buses in this embodiment. In another embodiment, PE0 and PE1 are placed horizontally and the buses between PE0 and PE1 are horizontal buses. Referring to diagram 400 of FIG. 4, it is a multi-stage hypercube-based interconnection network of a 2*2 2D-grid where the number of PEs is 4. There are four PEs namely PE00, PE01, PE10, and PE11.
Each PE comprises two switches and each switch comprises one inlet bus and one outlet bus. For example, PE00 has two switches S(0,0) and S(0,1). Switch S(0,0) comprises inlet bus I(0,0) and outlet bus O(0,0). Switch S(0,1) comprises inlet bus I(0,1) and outlet bus O(0,1). F(0,0) is a forward bus connected from S(0,0) to S(0,1). B(0,0) is a backward bus connected from S(0,1) to S(0,0). In accordance with the current invention, in the multi-stage hypercube-based interconnection network of diagram 400 of FIG. 4, since all the 4 PEs are represented by 2 bits each, each PE is connected, by buses in both directions, to two other PEs whose PE number differs in only one bit. For example, PE00, its number being 00, is connected to the two PEs 1) PE01 where the least significant bit is different and 2) PE10 where the most significant bit is different. Also switch S(0,0) in PE00 is connected to switch S(1,0) in PE01 by vertical bus V(0,1) and switch S(1,0) in PE01 is connected to switch S(0,0) in PE00 by vertical bus V(1,0). Switch S(0,1) in PE00 is connected to switch S(2,1) in PE10 by horizontal bus H(0,2) and switch S(2,1) in PE10 is connected to switch S(0,1) in PE00 by horizontal bus H(2,0). The degree of the multi-stage hypercube-based interconnection network disclosed in diagram 400 of FIG. 4 is two. Referring to diagram 500A in FIG. 5A, in one embodiment, an exemplary multi-stage hypercube-based interconnection network is illustrated between 8 PEs arranged in a 4*2 grid where the number of rows is four and the number of columns is two. The 8 PEs are represented in binary format namely PE000, PE001, PE010, PE011, PE100, PE101, PE110, and PE111, since 3 bits are needed to represent 8 numbers, with the corresponding decimal format being PE0, PE1, PE2, PE3, PE4, PE5, PE6, and PE7 respectively. Each PE comprises three switches and each switch comprises one inlet bus and one outlet bus. For example, PE000 has three switches namely S(0,0), S(0,1), and S(0,2). Each of the three switches in each PE comprises one inlet bus and one outlet bus as shown in diagram 500B of FIG. 5B. For example, in PE000, switch S(0,0) comprises inlet bus I(0,0) and outlet bus O(0,0). Switch S(0,1) comprises inlet bus I(0,1) and outlet bus O(0,1). Switch S(0,2) comprises inlet bus I(0,2) and outlet bus O(0,2). For simplicity of illustration, the inlet buses and outlet buses of each switch of each PE are not shown in diagram 500A of FIG. 5A. Accordingly, the addition of the inlet buses and outlet buses illustrated in diagram 500B of FIG. 5B to diagram 500A of FIG. 5A completes the multi-stage hypercube-based interconnection network between 8 PEs arranged in a 4*2 grid, in accordance with the current invention. F(0,0) is a forward bus connected from S(0,0) to S(0,1). F(0,1) is a forward bus connected from S(0,1) to S(0,2). B(0,0) is a backward bus connected from S(0,1) to S(0,0). B(0,1) is a backward bus connected from S(0,2) to S(0,1). Applicant notes that the PEs are connected as a binary hypercube network, in accordance with the current invention. The degree of the multi-stage hypercube-based interconnection network disclosed in diagram 500A of FIG. 5A is three. In the multi-stage hypercube-based interconnection network of diagram 500A of FIG. 5A, since all the 8 PEs are represented by 3 bits each, each PE is connected, by buses in both directions, to three other PEs whose PE number differs in only one bit.
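As an illustrative aid only, the relationship between the number of PEs, the number of bits used to number them, the number of switches per PE, and the degree of the network can be sketched as follows; the function name switches_per_pe is an assumption made for illustration.

```python
# Illustrative sketch: the number of switches per PE and the degree of the
# network both equal the number of bits used to number the PEs, matching
# FIG. 3 (2 PEs, 1 switch), FIG. 4 (4 PEs, 2), FIG. 5A (8 PEs, 3),
# FIG. 1A (16 PEs, 4) and FIG. 2 (64 PEs, 6).
import math

def switches_per_pe(num_pes: int) -> int:
    """Smallest n with 2**n >= num_pes; also the degree of the network."""
    return max(1, math.ceil(math.log2(num_pes)))

assert [switches_per_pe(p) for p in (2, 4, 8, 16, 64)] == [1, 2, 3, 4, 6]
```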
For example, PE000, its number being 000, is connected to the three PEs 1) PE001 where the least significant bit is different, 2) PE010 where the second least significant bit is different, and 3) PE100 where the most significant bit is different. Also switch S(0,0) in PE000 is connected to switch S(1,0) in PE001 by vertical bus V(0,1) and switch S(1,0) in PE001 is connected to switch S(0,0) in PE000 by vertical bus V(1,0). Switch S(0,1) in PE000 is connected to switch S(2,1) in PE010 by horizontal bus H(0,2) and switch S(2,1) in PE010 is connected to switch S(0,1) in PE000 by horizontal bus H(2,0). Switch S(0,2) in PE000 is connected to switch S(4,2) in PE100 by vertical bus V(0,4) and switch S(4,2) in PE100 is connected to switch S(0,2) in PE000 by vertical bus V(4,0).

Scalable multi-stage hypercube-based interconnection network with 2D-grid of PEs with buses connected in pyramid network configuration (total number of PEs is not a perfect power of 2):

Now, multi-stage hypercube-based interconnection networks for 2D-grids where the number of PEs is not a perfect power of 2 are disclosed. Referring to diagram 600A in FIG. 6A, in one embodiment, an exemplary multi-stage hypercube-based interconnection network is illustrated between 12 PEs arranged in a 4*3 grid where the number of rows is four and the number of columns is three. The number 12 is not a perfect power of 2. First, the network for the next biggest perfect power of 2 above 12 (or equivalently the smallest of all perfect powers of 2 greater than 12), which is 16 PEs, is built. As illustrated in diagram 100A of FIG. 1A, the multi-stage hypercube-based interconnection network between 16 PEs arranged in a 4*4 grid is built first. Then the PEs in the fourth column are removed with all the switches in them. All the vertical and horizontal buses connected to and connected from the PEs in the fourth column are also removed, as shown in diagram 600A of FIG. 6A. The degree of the multi-stage hypercube-based interconnection network disclosed in diagram 600A of FIG. 6A is four since it requires 4 bits to represent all the PEs. In general, a×b processing elements are numbered with a representation in binary format having n bits, where 2^(n-1) < a×b ≤ 2^n and where n is a positive number. In diagram 600A of FIG. 6A and diagram 600B of FIG. 6B, a=4, b=3 and, according to the current invention, it requires n=4 bits since 2^3 < 12 ≤ 2^4. Just as in diagram 100A of FIG. 1A and diagram 100B of FIG. 1B, for simplicity of illustration the inlet buses and outlet buses of each switch of each PE are not shown in diagram 600A of FIG. 6A. Accordingly, the addition of the inlet buses and outlet buses illustrated in diagram 600B of FIG. 6B to diagram 600A of FIG. 6A completes the multi-stage hypercube-based interconnection network between 12 PEs arranged in a 4*3 grid, in accordance with the current invention. Applicant notes that in this embodiment the key aspects of the multi-stage hypercube-based interconnection network between 12 PEs arranged in a 4*3 grid are: 1) the numbering of PEs in the 4*3 2D-grid is consistent with the numbering of PEs in the 4*4 2D-grid. That is, even though there are only 12 PEs in the 4*3 grid, the PE number in the third row and third column is PE1100 and the PE number in the fourth row and third column is PE1101, with their decimal equivalents being 12 and 13 respectively. They are not changed to 1010 and 1011, which are 10 and 11 respectively. This preserves the bus connecting pattern of the binary hypercube as disclosed earlier, which is that a PE is connected to another if there is only one bit different in their binary format.
2) Each PE in the 4*3 2D-grid still has 4 switches, in just the same way as the 4*4 2D-grid of PEs illustrated in diagram 100A of FIG. 1A. This preserves the same binary hypercube properties as the 4*4 2D-grid and also greatly benefits the software, keeping it simple to program; and 3) some of the PEs have 4 buses connected to and 4 buses connected from other PEs, for example PE0000, while some other PEs have 3 buses connected to and 3 buses connected from other PEs, for example PE0010. Now, multi-stage hypercube-based interconnection networks for 2D-grids where the number of PEs is not a perfect power of 2 and the 2D-grid is a square grid are disclosed. Referring to diagram 700A in FIG. 7A, in one embodiment, an exemplary multi-stage hypercube-based interconnection network is illustrated between 9 PEs arranged in a 3*3 grid where the number of rows is three and the number of columns is three. The number 9 is not a perfect power of 2. First, the network for the next biggest perfect power of 2 above 9 (or equivalently the smallest of all perfect powers of 2 greater than 9), which is 16 PEs, is built. As illustrated in diagram 100A of FIG. 1A, the multi-stage hypercube-based interconnection network between 16 PEs arranged in a 4*4 grid is built first. Then the PEs in the fourth column and fourth row are removed with all the switches in them. All the vertical and horizontal buses connected to and connected from the PEs in the fourth row and fourth column are also removed, as shown in diagram 700A of FIG. 7A. The degree of the multi-stage hypercube-based interconnection network disclosed in diagram 700A of FIG. 7A is four since it requires 4 bits to represent all the PEs. In general, a×b processing elements are numbered with a representation in binary format having n bits, where 2^(n-1) < a×b ≤ 2^n and where n is a positive number. In diagram 700A of FIG. 7A and diagram 700B of FIG. 7B, a=3, b=3 and, in accordance with the current invention, it requires n=4 bits since 2^3 < 9 ≤ 2^4. Just as in diagram 100A of FIG. 1A and diagram 100B of FIG. 1B, for simplicity of illustration the inlet buses and outlet buses of each switch of each PE are not shown in diagram 700A of FIG. 7A. Accordingly, the addition of the inlet buses and outlet buses illustrated in diagram 700B of FIG. 7B to diagram 700A of FIG. 7A completes the multi-stage hypercube-based interconnection network between 9 PEs arranged in a 3*3 grid, in accordance with the current invention. Applicant notes that in this embodiment the key aspects of the multi-stage hypercube-based interconnection network between 9 PEs arranged in a 3*3 grid are: 1) the numbering of PEs in the 3*3 2D-grid is consistent with the numbering of PEs in the 4*4 2D-grid. That is, even though there are only 9 PEs in the 3*3 grid, the PE number in the third row and second column is PE1001 and the PE number in the third row and third column is PE1100, with their decimal equivalents being 9 and 12 respectively. They are not changed to 0101 and 0111, which are 5 and 7 respectively. Again, this preserves the bus connecting pattern of the binary hypercube as disclosed earlier, which is that a PE is connected to another if there is only one bit different in their binary format. 2) Each PE in the 3*3 2D-grid still has 4 switches, in just the same way as the 4*4 2D-grid of PEs illustrated in diagram 100A of FIG. 1A. This preserves the same binary hypercube properties as the 4*4 2D-grid and greatly benefits the software, keeping it simple to program; and 3) some of the PEs have 4 buses connected to and 4 buses connected from other PEs, for example PE0000; some other PEs have 3 buses connected to and 3 buses connected from other PEs, for example PE0010; and some other PEs have 2 buses connected to and 2 buses connected from other PEs, for example PE1001.
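As an illustrative aid only, the bus counts just enumerated for the truncated grids can be checked with a short sketch that reuses the neighbor and placement computations sketched earlier; all function names are assumptions made for illustration.

```python
# Illustrative sketch: count how many buses a PE keeps when the 16-PE network of
# FIG. 1A is truncated to a rows*cols grid, as in the 4*3 grid of FIG. 6A and
# the 3*3 grid of FIG. 7A.
def neighbors(pe: int, n_bits: int) -> list[int]:
    return [pe ^ (1 << bit) for bit in range(n_bits)]

def position(pe: int, n_bits: int) -> tuple[int, int]:
    row = col = 0
    for k in range((n_bits + 1) // 2):
        row += ((pe >> (2 * k)) & 1) << k
        col += ((pe >> (2 * k + 1)) & 1) << k
    return row, col

def bus_count(pe: int, rows: int, cols: int, n_bits: int = 4) -> int:
    """Number of other PEs this PE remains connected to inside a rows*cols grid."""
    count = 0
    for nb in neighbors(pe, n_bits):
        r, c = position(nb, n_bits)
        if r < rows and c < cols:
            count += 1
    return count

# 3*3 grid of FIG. 7A: PE0000 keeps 4 buses, PE0010 keeps 3, PE1001 keeps 2.
assert bus_count(0b0000, 3, 3) == 4
assert bus_count(0b0010, 3, 3) == 3
assert bus_count(0b1001, 3, 3) == 2
```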
Deterministic concurrent broadcast by all PEs in one time step in an exemplary multi-stage hypercube-based interconnection network with 2*1 2D-grid of PEs:

Referring to diagram 800 of FIG. 8, it illustrates deterministic concurrent broadcast by all PEs in one time step in the exemplary multi-stage hypercube-based interconnection network with the 2*1 2D-grid of PEs shown in diagram 300 of FIG. 3. PE0 has packet P0 and PE1 has packet P1 to broadcast to the rest of the PEs. That is, each PE needs to transmit its packet to only one other PE. In time step 1, Packet P0 is unicasted from PE0 to PE1 via inlet bus I(0,0), switch S(0,0), vertical bus V(0,1), switch S(1,0), and outlet bus O(1,0). Concurrently in time step 1, Packet P1 is unicasted from PE1 to PE0 via inlet bus I(1,0), switch S(1,0), vertical bus V(1,0), switch S(0,0), and outlet bus O(0,0). A time step is a certain time duration determined by the length of the packet, the length of the bus, the number of wires in the inlet buses, outlet buses and vertical buses, the implemented non-transitory medium of each bus and the clock speed of operation. So in the multi-stage hypercube-based interconnection network with the 2*1 2D-grid of PEs shown in diagram 800 of FIG. 8, for concurrent broadcast in which each of the two PEs transmits a packet to the other PE, it takes one time step. Since the interconnection network is non-blocking, as illustrated in diagram 800 of FIG. 8, no queuing of packets is needed and no collisions will occur. Also all the vertical buses, i.e., the two vertical buses, are completely and concurrently utilized. To broadcast "n" number of packets by each PE to the rest of the PEs, it requires "n" number of time steps in the exemplary multi-stage hypercube-based interconnection network with the 2*1 2D-grid of 2 PEs shown in diagram 300 of FIG. 3. In one embodiment, applicant notes that all "n" packets from PE0 will be transmitted to PE1 in the same fixed path as packet P0 as illustrated in diagram 800 of FIG. 8. Similarly all "n" packets from PE1 will be transmitted to PE0 in the same path as packet P1 as illustrated in diagram 800 of FIG. 8. Applicant also notes that the "n" number of packets from PE0 will reach PE1 in the order they are transmitted and similarly the "n" number of packets from PE1 will reach PE0 in the order they are transmitted. Accordingly, to concurrently broadcast "n" number of packets by PE0 to PE1 and by PE1 to PE0 in the exemplary multi-stage hypercube-based interconnection network with the 2*1 2D-grid of 2 PEs shown in diagram 300 of FIG. 3, it requires "n" number of time steps and no out of order arrival of packets occurs. Diagrams 900A of FIG. 9A and 900B of FIG. 9B illustrate deterministic concurrent broadcast by all PEs in two time steps in the exemplary multi-stage hypercube-based interconnection network with the 2*2 2D-grid of 4 PEs shown in diagram 400 of FIG. 4. PE00 has packet P0, PE01 has packet P1, PE10 has packet P2, and PE11 has packet P3 to broadcast to the rest of the PEs. As shown in diagram 900A of FIG. 9A, in time step 1, Packet P0 is multicasted with fan out 2 from PE00 to PE01 and PE10. From PE00 to PE01 the path is via inlet bus I(0,0), switch S(0,0), vertical bus V(0,1), switch S(1,0), and outlet bus O(1,0). From PE00 to PE10 the path is via inlet bus I(0,0), switch S(0,0), forward bus F(0,0), switch S(0,1), horizontal bus H(0,2), switch S(2,1), and outlet bus O(2,1).
Concurrently in time step 1, Packet P1 is multicasted with fan out 2 from PE01 to PE00 and PE11. From PE01 to PE00 the path is via inlet bus I(1,0), switch S(1,0), vertical bus V(1,0), switch S(0,0), and outlet bus O(0,0). From PE01 to PE11 the path is via inlet bus I(1,0), switch S(1,0), forward bus F(1,0), switch S(1,1), horizontal bus H(1,3), switch S(3,1), and outlet bus O(3,1). As shown in diagram 900A of FIG. 9A, in time step 1, Packet P2 is multicasted with fan out 2 from PE10 to PE11 and PE00. From PE10 to PE11 the path is via inlet bus I(2,0), switch S(2,0), vertical bus V(2,3), switch S(3,0), and outlet bus O(3,0). From PE10 to PE00 the path is via inlet bus I(2,0), switch S(2,0), forward bus F(2,0), switch S(2,1), horizontal bus H(2,0), switch S(0,1), and outlet bus O(0,1). Concurrently in time step 1, Packet P3 is multicasted with fan out 2 from PE11 to PE10 and PE01. From PE11 to PE10 the path is via inlet bus I(3,0), switch S(3,0), vertical bus V(3,2), switch S(2,0), and outlet bus O(2,0). From PE11 to PE01 the path is via inlet bus I(3,0), switch S(3,0), forward bus F(3,0), switch S(3,1), horizontal bus H(3,1), switch S(1,1), and outlet bus O(1,1). Also in time step 1, the four vertical buses namely V(0,1), V(1,0), V(2,3) and V(3,2), and the four horizontal buses namely H(0,2), H(2,0), H(1,3) and H(3,1) are concurrently utilized. To summarize, in time step 1, PE00 received packets P1 and P2; PE01 received packets P0 and P3; PE10 received packets P0 and P3; and PE11 received packets P1 and P2. As shown in diagram 900B of FIG. 9B, in time step 2, Packet P2 is unicasted from PE00 to PE01. From PE00 to PE01 the path is via inlet bus I(0,1), switch S(0,1), backward bus B(0,0), switch S(0,0), vertical bus V(0,1), switch S(1,0), and outlet bus O(1,0). Concurrently in time step 2, Packet P3 is unicasted from PE01 to PE00. From PE01 to PE00 the path is via inlet bus I(1,1), switch S(1,1), backward bus B(1,0), switch S(1,0), vertical bus V(1,0), switch S(0,0), and outlet bus O(0,0). As shown in diagram 900B of FIG. 9B, in time step 2, Packet P0 is unicasted from PE10 to PE11. From PE10 to PE11 the path is via inlet bus I(2,1), switch S(2,1), backward bus B(2,0), switch S(2,0), vertical bus V(2,3), switch S(3,0), and outlet bus O(3,0). Concurrently in time step 2, Packet P1 is unicasted from PE11 to PE10. From PE11 to PE10 the path is via inlet bus I(3,1), switch S(3,1), backward bus B(3,0), switch S(3,0), vertical bus V(3,2), switch S(2,0), and outlet bus O(2,0). Also in time step 2, the four vertical buses namely V(0,1), V(1,0), V(2,3) and V(3,2) are concurrently utilized and the four horizontal buses namely H(0,2), H(2,0), H(1,3) and H(3,1) do not need to be utilized. (Alternatively, in another embodiment, instead of the vertical buses, the four horizontal buses namely H(0,2), H(2,0), H(1,3) and H(3,1) can be concurrently utilized without needing to utilize the four vertical buses namely V(0,1), V(1,0), V(2,3) and V(3,2).) To summarize, in time step 2, PE00 received packet P3; PE01 received packet P2; PE10 received packet P1; and PE11 received packet P0. As shown in diagram 900A of FIG. 9A and diagram 900B of FIG. 9B, the path for Packet P0 to transmit from PE00 to PE11 is via PE10. Specifically, as shown in diagram 900A of FIG. 9A, in time step 1, Packet P0 is multicasted with fan out 2 from PE00 to PE01 and PE10. From PE00 to PE01 the path is via inlet bus I(0,0), switch S(0,0), vertical bus V(0,1), switch S(1,0), and outlet bus O(1,0).
From PE00 to PE10 the path is via inlet bus I(0,0), switch S(0,0), forward bus F(0,0), switch S(0,1), horizontal bus H(0,2), switch S(2,1), and outlet bus O(2,1). Then, as shown in diagram 900B of FIG. 9B, in time step 2, Packet P0 is unicasted from PE10 to PE11. From PE10 to PE11 the path is via inlet bus I(2,1), switch S(2,1), backward bus B(2,0), switch S(2,0), vertical bus V(2,3), switch S(3,0), and outlet bus O(3,0). In this example, in the path for P0 to transmit from PE00 to PE11, PE00 is hereinafter referred to as the source processing element, PE11 is hereinafter referred to as the target processing element, and PE10 is hereinafter referred to as an intermediate processing element. In general, with a×b processing elements arranged in a two dimensional grid according to the current invention, in the path of a packet from a source processing element to a target processing element there will be one or more intermediate processing elements. So in the multi-stage hypercube-based interconnection network with the 2*2 2D-grid of PEs shown in diagram 900A of FIG. 9A and diagram 900B of FIG. 9B, for concurrent broadcast in which each of the four PEs transmits a packet to all the rest of the PEs, it takes two time steps. Since the interconnection network is non-blocking, as illustrated in diagram 900A of FIG. 9A and diagram 900B of FIG. 9B, no queuing of packets is needed and no collisions will occur. Also the four vertical buses and the four horizontal buses are concurrently utilized in time step 1, whereas in time step 2 only the four vertical buses are needed. A time step is a certain time duration determined by the length of the packet, the length of the bus, the number of wires in the inlet buses, outlet buses and vertical buses, the implemented non-transitory medium of each bus and the clock speed of operation. To broadcast "n" number of packets by each PE to the rest of the PEs, it requires 2*n number of time steps in the exemplary multi-stage hypercube-based interconnection network with the 2*2 2D-grid of 4 PEs shown in diagram 400 of FIG. 4. In one embodiment, applicant notes that all "n" packets from PE0 will be transmitted to PE1 in the same fixed path as packet P0 as illustrated in diagram 900A of FIG. 9A and diagram 900B of FIG. 9B. Similarly all "n" packets from PE0 will be transmitted to PE2 in the same fixed path as packet P0, and all "n" packets from PE0 will be transmitted to PE3 in the same fixed path as packet P0, as illustrated in diagram 900A of FIG. 9A and diagram 900B of FIG. 9B. Similarly all "n" packets from PE1 will be transmitted to PE0, PE2, and PE3 in the same path as packet P1; all "n" packets from PE2 will be transmitted to PE0, PE1, and PE3 in the same path as packet P2; and all "n" packets from PE3 will be transmitted to PE0, PE1, and PE2 in the same path as packet P3, as illustrated in diagram 900A of FIG. 9A and diagram 900B of FIG. 9B. Applicant also notes that the "n" number of packets from each PE will reach the rest of the PEs in the order they are transmitted, as they are transmitted in the same fixed path. For example, the "n" number of packets from PE0 will reach PE1, PE2 and PE3 in the order they are transmitted as they are transmitted in the same fixed path as packet P0 as illustrated in diagram 900A of FIG. 9A and diagram 900B of FIG. 9B. Accordingly, to concurrently broadcast "n" number of packets from each PE to the rest of the PEs in the exemplary multi-stage hypercube-based interconnection network with the 2*2 2D-grid of 4 PEs shown in diagram 400 of FIG. 4, it requires "2*n" number of time steps and no out of order arrival of packets occurs.
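For reference, the two-step schedule just described can be checked with a short software sketch. This is an illustrative aid only: the PEs are numbered 0-3 (the decimal equivalents of PE00-PE11) and the per-step forwarding rules are paraphrased from FIGS. 9A-9B; it is not part of the disclosed hardware.

```python
# Illustrative sketch: verify that the two-step schedule of FIGS. 9A-9B delivers
# every packet to every other PE of the 2*2 grid (PEs numbered 0-3).
received = {pe: set() for pe in range(4)}

# Time step 1: every PE multicasts its own packet to both hypercube neighbors.
for pe in range(4):
    for neighbor in (pe ^ 0b01, pe ^ 0b10):
        received[neighbor].add(pe)

# Time step 2: every PE forwards, over its vertical bus, the packet that
# originated at its horizontal neighbor (PE number pe ^ 0b10).
for pe in range(4):
    received[pe ^ 0b01].add(pe ^ 0b10)

# After two time steps each PE holds the packets of all three other PEs.
assert all(received[pe] == set(range(4)) - {pe} for pe in range(4))
```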
Referring to diagrams 1000A of FIG. 10A, 1000B of FIG. 10B, 1000C of FIG. 10C, and 1000D of FIG. 10D, they illustrate deterministic concurrent broadcast by all PEs in four time steps in the exemplary multi-stage hypercube-based interconnection network with the 4*2 2D-grid of 8 PEs shown in diagram 500A of FIG. 5A and diagram 500B of FIG. 5B. PE000 has packet P0, PE001 has packet P1, PE010 has packet P2, PE011 has packet P3, PE100 has packet P4, PE101 has packet P5, PE110 has packet P6, and PE111 has packet P7 to broadcast to the rest of the PEs. As shown in diagram 1000A of FIG. 10A, in time step 1, Packet P0 is multicasted with fan out 3 from PE000 to PE001, PE010, and PE100. From PE000 to PE001 the path is via inlet bus I(0,0), switch S(0,0), vertical bus V(0,1), switch S(1,0), and outlet bus O(1,0). From PE000 to PE010 the path is via inlet bus I(0,0), switch S(0,0), forward bus F(0,0), switch S(0,1), horizontal bus H(0,2), switch S(2,1), and outlet bus O(2,1). From PE000 to PE100 the path is via inlet bus I(0,0), switch S(0,0), forward bus F(0,0), switch S(0,1), forward bus F(0,1), switch S(0,2), vertical bus V(0,4), switch S(4,2), and outlet bus O(4,2). Concurrently in time step 1, Packet P1 is multicasted with fan out 3 from PE001 to PE000, PE011, and PE101. From PE001 to PE000 the path is via inlet bus I(1,0), switch S(1,0), vertical bus V(1,0), switch S(0,0), and outlet bus O(0,0). From PE001 to PE011 the path is via inlet bus I(1,0), switch S(1,0), forward bus F(1,0), switch S(1,1), horizontal bus H(1,3), switch S(3,1), and outlet bus O(3,1). From PE001 to PE101 the path is via inlet bus I(1,0), switch S(1,0), forward bus F(1,0), switch S(1,1), forward bus F(1,1), switch S(1,2), vertical bus V(1,5), switch S(5,2), and outlet bus O(5,2). As shown in diagram 1000A of FIG. 10A, in time step 1, Packet P2 is multicasted with fan out 3 from PE010 to PE011, PE000, and PE110. From PE010 to PE011 the path is via inlet bus I(2,0), switch S(2,0), vertical bus V(2,3), switch S(3,0), and outlet bus O(3,0). From PE010 to PE000 the path is via inlet bus I(2,0), switch S(2,0), forward bus F(2,0), switch S(2,1), horizontal bus H(2,0), switch S(0,1), and outlet bus O(0,1). From PE010 to PE110 the path is via inlet bus I(2,0), switch S(2,0), forward bus F(2,0), switch S(2,1), forward bus F(2,1), switch S(2,2), vertical bus V(2,6), switch S(6,2), and outlet bus O(6,2). Concurrently in time step 1, Packet P3 is multicasted with fan out 3 from PE011 to PE010, PE001, and PE111. From PE011 to PE010 the path is via inlet bus I(3,0), switch S(3,0), vertical bus V(3,2), switch S(2,0), and outlet bus O(2,0). From PE011 to PE001 the path is via inlet bus I(3,0), switch S(3,0), forward bus F(3,0), switch S(3,1), horizontal bus H(3,1), switch S(1,1), and outlet bus O(1,1). From PE011 to PE111 the path is via inlet bus I(3,0), switch S(3,0), forward bus F(3,0), switch S(3,1), forward bus F(3,1), switch S(3,2), vertical bus V(3,7), switch S(7,2), and outlet bus O(7,2). As shown in diagram 1000A of FIG. 10A, in time step 1, Packet P4 is multicasted with fan out 3 from PE100 to PE101, PE110, and PE000. From PE100 to PE101 the path is via inlet bus I(4,0), switch S(4,0), vertical bus V(4,5), switch S(5,0), and outlet bus O(5,0).
From PE100 to PE110 the path is via inlet bus I(4,0), switch S(4,0), forward bus F(4,0), switch S(4,1), horizontal bus H(4,6), switch S(6,1), and outlet bus O(6,1). From PE100 to PE000 the path is via inlet bus I(4,0), switch S(4,0), forward bus F(4,0), switch S(4,1), forward bus F(4,1), switch S(4,2), vertical bus V(4,0), switch S(0,2), and outlet bus O(0,2). Concurrently in time step 1, Packet P5 is multicasted with fan out 3 from PE101 to PE100, PE111, and PE001. From PE101 to PE100 the path is via inlet bus I(5,0), switch S(5,0), vertical bus V(5,4), switch S(4,0), and outlet bus O(4,0). From PE101 to PE111 the path is via inlet bus I(5,0), switch S(5,0), forward bus F(5,0), switch S(5,1), horizontal bus H(5,7), switch S(7,1), and outlet bus O(7,1). From PE101 to PE001 the path is via inlet bus I(5,0), switch S(5,0), forward bus F(5,0), switch S(5,1), forward bus F(5,1), switch S(5,2), vertical bus V(5,1), switch S(1,2), and outlet bus O(1,2). As shown in diagram 1000A of FIG. 10A, in time step 1, Packet P6 is multicasted with fan out 3 from PE110 to PE111, PE100, and PE010. From PE110 to PE111 the path is via inlet bus I(6,0), switch S(6,0), vertical bus V(6,7), switch S(7,0), and outlet bus O(7,0). From PE110 to PE100 the path is via inlet bus I(6,0), switch S(6,0), forward bus F(6,0), switch S(6,1), horizontal bus H(6,4), switch S(4,1), and outlet bus O(4,1). From PE110 to PE010 the path is via inlet bus I(6,0), switch S(6,0), forward bus F(6,0), switch S(6,1), forward bus F(6,1), switch S(6,2), vertical bus V(6,2), switch S(2,2), and outlet bus O(2,2). Concurrently in time step 1, Packet P7 is multicasted with fan out 3 from PE111 to PE110, PE101, and PE011. From PE111 to PE110 the path is via inlet bus I(7,0), switch S(7,0), vertical bus V(7,6), switch S(6,0), and outlet bus O(6,0). From PE111 to PE101 the path is via inlet bus I(7,0), switch S(7,0), forward bus F(7,0), switch S(7,1), horizontal bus H(7,5), switch S(5,1), and outlet bus O(5,1). From PE111 to PE011 the path is via inlet bus I(7,0), switch S(7,0), forward bus F(7,0), switch S(7,1), forward bus F(7,1), switch S(7,2), vertical bus V(7,3), switch S(3,2), and outlet bus O(3,2). Also in time step 1, the sixteen vertical buses namely V(0,1), V(1,0), V(2,3), V(3,2), V(4,5), V(5,4), V(6,7), V(7,6), V(0,4), V(4,0), V(1,5), V(5,1), V(2,6), V(6,2), V(3,7), and V(7,3) and the eight horizontal buses namely H(0,2), H(2,0), H(1,3), H(3,1), H(4,6), H(6,4), H(5,7) and H(7,5) are completely and concurrently utilized. To summarize, in time step 1, PE000 received packets P1, P2, and P4; PE001 received packets P0, P3 and P5; PE010 received packets P0, P3, and P6; PE011 received packets P1, P2 and P7; PE100 received packets P5, P6, and P0; PE101 received packets P4, P7 and P1; PE110 received packets P4, P7, and P2; and PE111 received packets P6, P5 and P3. As shown in diagram 1000B of FIG. 10B, in time step 2, Packet P2 is unicasted from PE000 to PE001. From PE000 to PE001 the path is via inlet bus I(0,1), switch S(0,1), backward bus B(0,0), switch S(0,0), vertical bus V(0,1), switch S(1,0), and outlet bus O(1,0). Concurrently in time step 2, Packet P3 is unicasted from PE001 to PE000. From PE001 to PE000 the path is via inlet bus I(1,1), switch S(1,1), backward bus B(1,0), switch S(1,0), vertical bus V(1,0), switch S(0,0), and outlet bus O(0,0). Concurrently in time step 2, Packet P0 is unicasted from PE010 to PE011. From PE010 to PE011 the path is via inlet bus I(2,1), switch S(2,1), backward bus B(2,0), switch S(2,0), vertical bus V(2,3), switch S(3,0), and outlet bus O(3,0).
Concurrently in time step 2, Packet P1 is unicasted from PE011 to PE010. From PE011 to PE010 the path is via inlet bus I(3,1), switch S(3,1), backward bus B(3,0), switch S(3,0), vertical bus V(3,2), switch S(2,0), and outlet bus O(2,0). As shown in diagram 1000B of FIG. 10B, in time step 2, Packet P6 is unicasted from PE100 to PE101. From PE100 to PE101 the path is via inlet bus I(4,1), switch S(4,1), backward bus B(4,0), switch S(4,0), vertical bus V(4,5), switch S(5,0), and outlet bus O(5,0). Concurrently in time step 2, Packet P7 is unicasted from PE101 to PE100. From PE101 to PE100 the path is via inlet bus I(5,1), switch S(5,1), backward bus B(5,0), switch S(5,0), vertical bus V(5,4), switch S(4,0), and outlet bus O(4,0). Concurrently in time step 2, Packet P4 is unicasted from PE110 to PE111. From PE110 to PE111 the path is via inlet bus I(6,1), switch S(6,1), backward bus B(6,0), switch S(6,0), vertical bus V(6,7), switch S(7,0), and outlet bus O(7,0). Concurrently in time step 2, Packet P5 is unicasted from PE111 to PE110. From PE111 to PE110 the path is via inlet bus I(7,1), switch S(7,1), backward bus B(7,0), switch S(7,0), vertical bus V(7,6), switch S(6,0), and outlet bus O(6,0). Also in time step 2, the eight vertical buses namely V(0,1), V(1,0), V(2,3), V(3,2), V(4,5), V(5,4), V(6,7), and V(7,6) are concurrently utilized. (Alternatively, in another embodiment, instead of the vertical buses, the eight horizontal buses namely H(0,2), H(2,0), H(1,3), H(3,1), H(4,6), H(6,4), H(5,7) and H(7,5) can be concurrently utilized.) To summarize, in time step 2, PE000 received packet P3; PE001 received packet P2; PE010 received packet P1; PE011 received packet P0; PE100 received packet P7; PE101 received packet P6; PE110 received packet P5; and PE111 received packet P4. As shown in diagram 1000C of FIG. 10C, in time step 3, Packet P4 is multicasted with fan out 2 from PE000 to PE001 and PE010. From PE000 to PE001 the path is via inlet bus I(0,2), switch S(0,2), backward bus B(0,1), switch S(0,1), backward bus B(0,0), switch S(0,0), vertical bus V(0,1), switch S(1,0), and outlet bus O(1,0). From PE000 to PE010 the path is via inlet bus I(0,2), switch S(0,2), backward bus B(0,1), switch S(0,1), horizontal bus H(0,2), switch S(2,1), and outlet bus O(2,1). Concurrently in time step 3, Packet P5 is multicasted with fan out 2 from PE001 to PE000 and PE011. From PE001 to PE000 the path is via inlet bus I(1,2), switch S(1,2), backward bus B(1,1), switch S(1,1), backward bus B(1,0), switch S(1,0), vertical bus V(1,0), switch S(0,0), and outlet bus O(0,0). From PE001 to PE011 the path is via inlet bus I(1,2), switch S(1,2), backward bus B(1,1), switch S(1,1), horizontal bus H(1,3), switch S(3,1), and outlet bus O(3,1). As shown in diagram 1000C of FIG. 10C, in time step 3, Packet P6 is multicasted with fan out 2 from PE010 to PE011 and PE000. From PE010 to PE011 the path is via inlet bus I(2,2), switch S(2,2), backward bus B(2,1), switch S(2,1), backward bus B(2,0), switch S(2,0), vertical bus V(2,3), switch S(3,0), and outlet bus O(3,0). From PE010 to PE000 the path is via inlet bus I(2,2), switch S(2,2), backward bus B(2,1), switch S(2,1), horizontal bus H(2,0), switch S(0,1), and outlet bus O(0,1). Concurrently in time step 3, Packet P7 is multicasted with fan out 2 from PE011 to PE010 and PE001. From PE011 to PE010 the path is via inlet bus I(3,2), switch S(3,2), backward bus B(3,1), switch S(3,1), backward bus B(3,0), switch S(3,0), vertical bus V(3,2), switch S(2,0), and outlet bus O(2,0).
From PE011 to PE001 the path is via inlet bus I(3,2), switch S(3,2), backward bus B(3,1), switch S(3,1), horizontal bus H(3,1), switch S(1,1), and outlet bus O(1,1). As shown in diagram 1000C of FIG. 10C, in time step 3, Packet P0 is multicasted with fan out 2 from PE100 to PE101 and PE110. From PE100 to PE101 the path is via inlet bus I(4,2), switch S(4,2), backward bus B(4,1), switch S(4,1), backward bus B(4,0), switch S(4,0), vertical bus V(4,5), switch S(5,0), and outlet bus O(5,0). From PE100 to PE110 the path is via inlet bus I(4,2), switch S(4,2), backward bus B(4,1), switch S(4,1), horizontal bus H(4,6), switch S(6,1), and outlet bus O(6,1). Concurrently in time step 3, Packet P1 is multicasted with fan out 2 from PE101 to PE100 and PE111. From PE101 to PE100 the path is via inlet bus I(5,2), switch S(5,2), backward bus B(5,1), switch S(5,1), backward bus B(5,0), switch S(5,0), vertical bus V(5,4), switch S(4,0), and outlet bus O(4,0). From PE101 to PE111 the path is via inlet bus I(5,2), switch S(5,2), backward bus B(5,1), switch S(5,1), horizontal bus H(5,7), switch S(7,1), and outlet bus O(7,1). As shown in diagram 1000C of FIG. 10C, in time step 3, Packet P2 is multicasted with fan out 2 from PE110 to PE111 and PE100. From PE110 to PE111 the path is via inlet bus I(6,2), switch S(6,2), backward bus B(6,1), switch S(6,1), backward bus B(6,0), switch S(6,0), vertical bus V(6,7), switch S(7,0), and outlet bus O(7,0). From PE110 to PE100 the path is via inlet bus I(6,2), switch S(6,2), backward bus B(6,1), switch S(6,1), horizontal bus H(6,4), switch S(4,1), and outlet bus O(4,1). Concurrently in time step 3, Packet P3 is multicasted with fan out 2 from PE111 to PE110 and PE101. From PE111 to PE110 the path is via inlet bus I(7,2), switch S(7,2), backward bus B(7,1), switch S(7,1), backward bus B(7,0), switch S(7,0), vertical bus V(7,6), switch S(6,0), and outlet bus O(6,0). From PE111 to PE101 the path is via inlet bus I(7,2), switch S(7,2), backward bus B(7,1), switch S(7,1), horizontal bus H(7,5), switch S(5,1), and outlet bus O(5,1). Also in time step 3, the eight vertical buses namely V(0,1), V(1,0), V(2,3), V(3,2), V(4,5), V(5,4), V(6,7), and V(7,6), and the eight horizontal buses namely H(0,2), H(2,0), H(1,3), H(3,1), H(4,6), H(6,4), H(5,7) and H(7,5) are completely and concurrently utilized. To summarize, in time step 3, PE000 received packets P5 and P6; PE001 received packets P4 and P7; PE010 received packets P4 and P7; PE011 received packets P5 and P6; PE100 received packets P1 and P2; PE101 received packets P0 and P3; PE110 received packets P0 and P3; and PE111 received packets P1 and P2. As shown in diagram 1000D of FIG. 10D, in time step 4, Packet P6 is unicasted from PE000 to PE001. From PE000 to PE001 the path is via inlet bus I(0,1), switch S(0,1), backward bus B(0,0), switch S(0,0), vertical bus V(0,1), switch S(1,0), and outlet bus O(1,0). Concurrently in time step 4, Packet P7 is unicasted from PE001 to PE000. From PE001 to PE000 the path is via inlet bus I(1,1), switch S(1,1), backward bus B(1,0), switch S(1,0), vertical bus V(1,0), switch S(0,0), and outlet bus O(0,0). Concurrently in time step 4, Packet P4 is unicasted from PE010 to PE011. From PE010 to PE011 the path is via inlet bus I(2,1), switch S(2,1), backward bus B(2,0), switch S(2,0), vertical bus V(2,3), switch S(3,0), and outlet bus O(3,0). Concurrently in time step 4, Packet P5 is unicasted from PE011 to PE010. From PE011 to PE010 the path is via inlet bus I(3,1), switch S(3,1), backward bus B(3,0), switch S(3,0), vertical bus V(3,2), switch S(2,0), and outlet bus O(2,0).
As shown in diagram 1000D of FIG. 10D, in time step 4, Packet P2 is unicasted from PE100 to PE101. From PE100 to PE101 the path is via inlet bus I(4,1), switch S(4,1), backward bus B(4,0), switch S(4,0), vertical bus V(4,5), switch S(5,0), and outlet bus O(5,0). Concurrently in time step 4, Packet P3 is unicasted from PE101 to PE100. From PE101 to PE100 the path is via inlet bus I(5,1), switch S(5,1), backward bus B(5,0), switch S(5,0), vertical bus V(5,4), switch S(4,0), and outlet bus O(4,0). Concurrently in time step 4, Packet P0 is unicasted from PE110 to PE111. From PE110 to PE111 the path is via inlet bus I(6,1), switch S(6,1), backward bus B(6,0), switch S(6,0), vertical bus V(6,7), switch S(7,0), and outlet bus O(7,0). Concurrently in time step 4, Packet P1 is unicasted from PE111 to PE110. From PE111 to PE110 the path is via inlet bus I(7,1), switch S(7,1), backward bus B(7,0), switch S(7,0), vertical bus V(7,6), switch S(6,0), and outlet bus O(6,0). Also in time step 4, the eight vertical buses namely V(0,1), V(1,0), V(2,3), V(3,2), V(4,5), V(5,4), V(6,7), and V(7,6) are concurrently utilized. (Alternatively, in another embodiment, instead of the vertical buses, the eight horizontal buses namely H(0,2), H(2,0), H(1,3), H(3,1), H(4,6), H(6,4), H(5,7) and H(7,5) can be concurrently utilized.) To summarize, in time step 4, PE000 received packet P7; PE001 received packet P6; PE010 received packet P5; PE011 received packet P4; PE100 received packet P3; PE101 received packet P2; PE110 received packet P1; and PE111 received packet P0. In general, with a×b processing elements arranged in a two dimensional grid according to the current invention, in the path of a packet from a source processing element to a target processing element there will be one or more intermediate processing elements. Applicant notes that, for example, in diagram 1000A of FIG. 10A, diagram 1000B of FIG. 10B, diagram 1000C of FIG. 10C and diagram 1000D of FIG. 10D, in the path of Packet P0 from PE000 to PE111, packet P0 traverses from PE000 to PE100 to PE110 to PE111. In this example, in the path for packet P0, PE000 is the source processing element, PE111 is the target processing element, and PE100 and PE110 are intermediate processing elements. So in the multi-stage hypercube-based interconnection network with the 4*2 2D-grid of PEs shown in diagram 1000A of FIG. 10A, diagram 1000B of FIG. 10B, diagram 1000C of FIG. 10C and diagram 1000D of FIG. 10D, for concurrent broadcast in which each of the eight PEs transmits a packet to all the rest of the PEs, it takes four time steps. A time step is a certain time duration determined by the length of the packet, the length of the bus, the number of wires in the inlet buses, outlet buses and vertical buses, the implemented non-transitory medium of each bus and the clock speed of operation. Since the interconnection network is non-blocking, as illustrated in diagram 1000A of FIG. 10A, diagram 1000B of FIG. 10B, diagram 1000C of FIG. 10C and diagram 1000D of FIG. 10D, no queuing of packets is needed and no collisions will occur. Also the sixteen vertical buses and the eight horizontal buses are completely and concurrently utilized in time step 1. In time step 2, only eight vertical buses are needed. Only eight vertical buses and eight horizontal buses are concurrently needed in time step 3. In time step 4, only eight vertical buses are needed.
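Similarly, the four-step schedule of FIGS. 10A-10D can be checked with a short sketch. This is an illustrative aid only: the PEs are numbered 0-7 (the decimal equivalents of PE000-PE111) and the per-step forwarding rules are paraphrased from the paths above; it also records the per-PE arrival order, which corresponds to the dataflow execution order discussed later.

```python
# Illustrative sketch: verify that the four-step schedule of FIGS. 10A-10D
# delivers every packet of the 8-PE (4*2) grid to every other PE, and record
# the order in which packets arrive at each PE.
arrivals = {pe: [] for pe in range(8)}

def deliver(dst: int, src: int) -> None:
    if src not in arrivals[dst]:
        arrivals[dst].append(src)

for pe in range(8):                        # time step 1: multicast own packet, fan out 3
    for bit in (0b001, 0b010, 0b100):
        deliver(pe ^ bit, pe)
for pe in range(8):                        # time step 2: forward packet of PE (pe^010) vertically
    deliver(pe ^ 0b001, pe ^ 0b010)
for pe in range(8):                        # time step 3: multicast packet of PE (pe^100), fan out 2
    deliver(pe ^ 0b001, pe ^ 0b100)
    deliver(pe ^ 0b010, pe ^ 0b100)
for pe in range(8):                        # time step 4: forward packet of PE (pe^110) vertically
    deliver(pe ^ 0b001, pe ^ 0b110)

# Every PE ends up with the packets of all seven other PEs.
assert all(set(order) == set(range(8)) - {pe} for pe, order in arrivals.items())
# PE000 receives P1, P2, P4, P3, P5, P6, P7 in that order, as stated for the schedule.
assert arrivals[0b000] == [1, 2, 4, 3, 5, 6, 7]
```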
To broadcast "n" number of packets by each PE to the rest of the PEs, it requires 4*n number of time steps in the exemplary multi-stage hypercube-based interconnection network with the 4*2 2D-grid of 8 PEs shown in diagram 500A of FIG. 5A and diagram 500B of FIG. 5B. In one embodiment, applicant notes that all "n" packets from PE0 will be transmitted to each of PE1, PE2, PE3, PE4, PE5, PE6 and PE7 in the same fixed path as packet P0 as illustrated in diagram 1000A of FIG. 10A, diagram 1000B of FIG. 10B, diagram 1000C of FIG. 10C and diagram 1000D of FIG. 10D. Similarly all "n" packets from PE1 will be transmitted to PE0, PE2, PE3, PE4, PE5, PE6 and PE7 in the same path as packet P1; all "n" packets from PE2 will be transmitted to PE0, PE1, PE3, PE4, PE5, PE6 and PE7 in the same path as packet P2; all "n" packets from PE3 will be transmitted to PE0, PE1, PE2, PE4, PE5, PE6 and PE7 in the same path as packet P3; all "n" packets from PE4 will be transmitted to PE0, PE1, PE2, PE3, PE5, PE6 and PE7 in the same path as packet P4; all "n" packets from PE5 will be transmitted to PE0, PE1, PE2, PE3, PE4, PE6 and PE7 in the same path as packet P5; all "n" packets from PE6 will be transmitted to PE0, PE1, PE2, PE3, PE4, PE5 and PE7 in the same path as packet P6; and all "n" packets from PE7 will be transmitted to PE0, PE1, PE2, PE3, PE4, PE5 and PE6 in the same path as packet P7, all as illustrated in diagram 1000A of FIG. 10A, diagram 1000B of FIG. 10B, diagram 1000C of FIG. 10C and diagram 1000D of FIG. 10D. Applicant also notes that the "n" number of packets from each PE will reach the rest of the PEs in the order they are transmitted, as they are transmitted in the same fixed path.
For example, the "n" number of packets from PE0 will reach PE1, PE2, PE3, PE4, PE5, PE6 and PE7 in the order they are transmitted as they are transmitted in the same fixed path as packet P0 as illustrated in diagram 1000A of FIG. 10A, diagram 1000B of FIG. 10B, diagram 1000C of FIG. 10C and diagram 1000D of FIG. 10D. Accordingly, to concurrently broadcast "n" number of packets from each PE to the rest of the PEs, as shown in the exemplary multi-stage hypercube-based interconnection network with the 4*2 2D-grid of 8 PEs in diagram 500A of FIG. 5A and diagram 500B of FIG. 5B, it requires "4*n" number of time steps and no out of order arrival of packets occurs. Applicant also notes that in each PE the packets arrive in a different order, as can be observed in the foregoing disclosure, particularly in diagram 900A of FIG. 9A, diagram 900B of FIG. 9B, diagram 1000A of FIG. 10A, diagram 1000B of FIG. 10B, diagram 1000C of FIG. 10C and diagram 1000D of FIG. 10D. So each processor will be enabled to execute instructions in the order of arrival of packets in each PE, which means the processor in each PE is based on dataflow architecture. For example, PE000 in the multi-stage hypercube-based interconnection network of diagram 1000A of FIG. 10A, diagram 1000B of FIG. 10B, diagram 1000C of FIG. 10C and diagram 1000D of FIG. 10D executes instructions requiring P1, P2, P4, P3, P5, P6, and P7 in that order, whereas, for example, PE001 executes instructions requiring P0, P3, P5, P2, P4, P7, and P6 in that order, and so both PEs execute the program instructions in different order, which is dataflow architecture. Applicant notes that each time step is not necessarily of the same duration as the other time steps, as the time duration of a time step is determined by the length of the packet, the length of the bus, the number of wires in the inlet buses, outlet buses and vertical buses, the implemented non-transitory medium of each bus and the clock speed of operation. However, the packet received sooner is transmitted through the interconnection network so that time steps are interleaved without changing the order of transmission of the packets, according to the current invention. Also, applicant notes that, in one embodiment, multicast by each PE with one or more fan-outs of a packet to one or more of the rest of the PEs in the multi-stage hypercube-based interconnection network, in accordance with the current invention, is performed just as the concurrent broadcast by each PE to all the rest of the PEs in the multi-stage hypercube-based interconnection network as disclosed in diagram 800 of FIG. 8, diagram 900A of FIG. 9A, diagram 900B of FIG. 9B, diagram 1000A of FIG. 10A, diagram 1000B of FIG. 10B, diagram 1000C of FIG. 10C, and diagram 1000D of FIG. 10D. In one embodiment, in diagram 100A of FIG. 1A, diagram 100B of FIG. 1B, diagram 100C of FIG. 1C, diagram 200 of FIG. 2, diagram 300 of FIG. 3, diagram 400 of FIG. 4, diagram 500A of FIG. 5A, diagram 500B of FIG. 5B, diagram 600A of FIG. 6A, diagram 600B of FIG. 6B, diagram 700A of FIG. 7A, diagram 700B of FIG. 7B, diagram 800 of FIG. 8, diagram 900A of FIG. 9A, diagram 900B of FIG. 9B, diagram 1000A of FIG. 10A, diagram 1000B of FIG. 10B, diagram 1000C of FIG. 10C, and diagram 1000D of FIG. 10D, each PE comprises, in addition to the interconnect, a processor and memory as shown in diagram 1100 of FIG. 11. Referring to diagram 1100 of FIG. 11, an exemplary processing element PE0000 comprises a processor 1110, memory or computer memory 1120, and interconnect 1130. Processor 1110 and memory 1120 are connected by a bus 1140. Processor 1110 and interconnect 1130 are connected by a bus 1150. Interconnect 1130 and memory 1120 are connected by a bus 1160. Interconnect 1130 is connected to the rest of the processing elements through buses 1170 and so to the interconnection network. In general, in accordance with the current invention, each of the a×b processing elements arranged in a two dimensional grid comprises a processor, memory and interconnect as shown in diagram 1100 of FIG. 11.
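As an illustrative aid only, the composition of a processing element per diagram 1100 of FIG. 11 can be modeled in software as follows; the class and field names are assumptions made for illustration and do not limit the hardware embodiments.

```python
# Illustrative sketch: a software model of one processing element per FIG. 11.
# Buses 1140/1150/1160 are internal connections; the `links` of the interconnect
# stand in for the inter-PE buses 1170.
from dataclasses import dataclass, field

@dataclass
class Interconnect:
    pe_id: int
    num_switches: int                                  # one switch per address bit
    links: list[int] = field(default_factory=list)     # PE ids one bus away

@dataclass
class ProcessingElement:
    pe_id: int
    memory: bytearray                                  # memory 1120
    interconnect: Interconnect                         # interconnect 1130

# Example: PE0000 of the 16-PE network of FIG. 1A, linked to PE1, PE2, PE4 and PE8.
pe0 = ProcessingElement(0, bytearray(1024), Interconnect(0, 4, [1, 2, 4, 8]))
```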
In one embodiment, in diagram 100A of FIG. 1A through diagram 1000D of FIG. 10D and in diagram 1100 of FIG. 11, the processor of each processing element is a Central Processing Unit ("CPU") comprising functional units that perform operations such as additions, multiplications, or logical operations, for executing computer programs. In another embodiment, in diagram 100A of FIG. 1A through diagram 1000D of FIG. 10D and in diagram 1100 of FIG. 11, the processor of each processing element comprises a domain specific architecture ("DSA") based Deep Neural Network ("DNN") processor comprising one or more multiply accumulate ("MAC") units for matrix multiply operations. In diagram 100A of FIG. 1A through diagram 1000D of FIG. 10D and in diagram 1100 of FIG. 11, the a×b processing elements are 1) implemented in a two dimensional grid in a single die in one embodiment, or 2) implemented in a plurality of dies on a semiconductor wafer in another embodiment, or 3) implemented in a plurality of integrated circuit chips in yet another embodiment. Numerous modifications and adaptations of the embodiments, implementations, and examples described herein will be apparent to the skilled artisan in view of the disclosure.